| column | type | min length | max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | list | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | list | 0 | 25 |
| languages | list | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | list | 0 | 352 |
| processed_texts | list | 1 | 353 |
| tokens_length | list | 1 | 353 |
| input_texts | list | 1 | 40 |
---

**sha:** `a73882aa542139fe9f6c210b59e579e80821b9da`
**id:** `alexrs/alpaca-cleaned-10-clusters`
**tags:** `region:us`
**created_at:** 2023-10-16T13:36:55+00:00
**last_modified:** 2023-10-16T13:36:58+00:00
**arxiv:** []
**languages:** []
**tokens_length:** [6, 19]
**text:**

# Dataset Card for "alpaca-cleaned-10-clusters"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

**metadata:**

{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "cluster", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 40490946, "num_examples": 51760}], "download_size": 24184864, "dataset_size": 40490946}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
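The `metadata` column stores each dataset's `dataset_info` as a JSON string, so the split statistics can be recovered with the standard library alone. A minimal sketch, using the metadata shown above for `alexrs/alpaca-cleaned-10-clusters`:

```python
import json

# dataset_info JSON exactly as it appears in the metadata column above.
metadata = '{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "cluster", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 40490946, "num_examples": 51760}], "download_size": 24184864, "dataset_size": 40490946}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}'

info = json.loads(metadata)["dataset_info"]
train = info["splits"][0]

print(train["num_examples"])                    # 51760
# Average serialized size per example, in bytes.
avg_bytes = info["dataset_size"] / train["num_examples"]
print(round(avg_bytes))                         # 782
```

The same parsing works for every record in this dump, since the `metadata` column always carries a `dataset_info` object with `features`, `splits`, and size fields.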
---

**sha:** `e61081d22ea8d537ac114a645ed362325e8dd8dd`
**id:** `alexrs/alpaca-cleaned-5-clusters`
**tags:** `region:us`
**created_at:** 2023-10-16T13:42:06+00:00
**last_modified:** 2023-10-16T13:42:10+00:00
**arxiv:** []
**languages:** []
**tokens_length:** [6, 20]
**text:**

# Dataset Card for "alpaca-cleaned-5-clusters"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

**metadata:**

{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "cluster", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 40490946, "num_examples": 51760}], "download_size": 24177437, "dataset_size": 40490946}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
---

**sha:** `a567815a74d9391de1ec438f14b2f1ec850934c9`
**id:** `alexrs/alpaca-cleaned-15-clusters`
**tags:** `region:us`
**created_at:** 2023-10-16T13:43:02+00:00
**last_modified:** 2023-10-16T13:43:05+00:00
**arxiv:** []
**languages:** []
**tokens_length:** [6, 20]
**text:**

# Dataset Card for "alpaca-cleaned-15-clusters"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

**metadata:**

{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "cluster", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 40490946, "num_examples": 51760}], "download_size": 24185910, "dataset_size": 40490946}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
---

**sha:** `f2cf292030a598ff4b40efebb9cd288b7a778baa`
**id:** `alexrs/alpaca-cleaned-30-clusters`
**tags:** `region:us`
**created_at:** 2023-10-16T13:44:30+00:00
**last_modified:** 2023-10-16T13:44:34+00:00
**arxiv:** []
**languages:** []
**tokens_length:** [6, 20]
**text:**

# Dataset Card for "alpaca-cleaned-30-clusters"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

**metadata:**

{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "cluster", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 40490946, "num_examples": 51760}], "download_size": 24195677, "dataset_size": 40490946}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
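All four `alpaca-cleaned-*-clusters` variants above share the same 51,760 rows and differ only in how many values the int32 `cluster` column takes (5, 10, 15, or 30). A hypothetical sketch, using toy rows rather than the real data, of grouping such records by their cluster label:

```python
from collections import defaultdict

# Toy stand-ins for alpaca-cleaned-*-clusters rows: each record carries
# instruction/output/input plus an int32 "cluster" label (values invented).
rows = [
    {"instruction": "Add 2 and 3", "output": "5", "input": "", "cluster": 0},
    {"instruction": "Translate 'hola'", "output": "hello", "input": "", "cluster": 1},
    {"instruction": "Sum 1 through 10", "output": "55", "input": "", "cluster": 0},
]

# Bucket rows by their cluster id.
by_cluster = defaultdict(list)
for row in rows:
    by_cluster[row["cluster"]].append(row)

print({k: len(v) for k, v in sorted(by_cluster.items())})  # {0: 2, 1: 1}
```

With the real datasets the same grouping would be done after loading the `train` split; the point is only that the cluster assignment is a plain integer column on otherwise identical rows.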
---

**sha:** `dd39a2b561b96a5f4b70708a70e3ff4371195288`
**id:** `orgcatorg/megawika`
**tags:** `region:us`
**created_at:** 2023-10-16T13:56:20+00:00
**text:**

# Dataset Card for "megawika"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

**metadata:**
{"dataset_info": [{"config_name": "my", "features": [{"name": "article_title", "dtype": "string"}, {"name": "article_text", "dtype": "string"}, {"name": "entries", "list": [{"name": "id", "dtype": "string"}, {"name": "original", "dtype": "string"}, {"name": "original_sents", "sequence": "string"}, {"name": "parse_tokens", "sequence": {"sequence": "string"}}, {"name": "passage", "struct": [{"name": "en_lang_token_map", "struct": [{"name": "0", "dtype": "int64"}, {"name": "1", "dtype": "int64"}, {"name": "10", "dtype": "int64"}, {"name": "100", "dtype": "int64"}, {"name": "101", "dtype": "int64"}, {"name": "102", "dtype": "int64"}, {"name": "103", "dtype": "int64"}, {"name": "104", "dtype": "int64"}, {"name": "105", "dtype": "int64"}, {"name": "106", "dtype": "int64"}, {"name": "107", "dtype": "int64"}, {"name": "108", "dtype": "int64"}, {"name": "109", "dtype": "int64"}, {"name": "11", "dtype": "int64"}, {"name": "110", "dtype": "int64"}, {"name": "111", "dtype": "int64"}, {"name": "112", "dtype": "int64"}, {"name": "113", "dtype": "int64"}, {"name": "114", "dtype": "int64"}, {"name": "115", "dtype": "int64"}, {"name": "116", "dtype": "int64"}, {"name": "117", "dtype": "int64"}, {"name": "118", "dtype": "int64"}, {"name": "119", "dtype": "int64"}, {"name": "12", "dtype": "int64"}, {"name": "120", "dtype": "int64"}, {"name": "121", "dtype": "int64"}, {"name": "122", "dtype": "int64"}, {"name": "123", "dtype": "int64"}, {"name": "124", "dtype": "int64"}, {"name": "125", "dtype": "int64"}, {"name": "126", "dtype": "int64"}, {"name": "127", "dtype": "int64"}, {"name": "128", "dtype": "int64"}, {"name": "129", "dtype": "int64"}, {"name": "13", "dtype": "int64"}, {"name": "130", "dtype": "int64"}, {"name": "131", "dtype": "int64"}, {"name": "132", "dtype": "int64"}, {"name": "133", "dtype": "int64"}, {"name": "134", "dtype": "int64"}, {"name": "135", "dtype": "int64"}, {"name": "136", "dtype": "int64"}, {"name": "137", "dtype": "int64"}, {"name": "138", "dtype": "int64"}, 
{"name": "139", "dtype": "int64"}, {"name": "14", "dtype": "int64"}, {"name": "140", "dtype": "int64"}, {"name": "141", "dtype": "int64"}, {"name": "142", "dtype": "int64"}, {"name": "143", "dtype": "int64"}, {"name": "144", "dtype": "int64"}, {"name": "145", "dtype": "int64"}, {"name": "146", "dtype": "int64"}, {"name": "147", "dtype": "int64"}, {"name": "148", "dtype": "int64"}, {"name": "149", "dtype": "int64"}, {"name": "15", "dtype": "int64"}, {"name": "150", "dtype": "int64"}, {"name": "151", "dtype": "int64"}, {"name": "152", "dtype": "int64"}, {"name": "153", "dtype": "int64"}, {"name": "154", "dtype": "int64"}, {"name": "155", "dtype": "int64"}, {"name": "156", "dtype": "int64"}, {"name": "157", "dtype": "int64"}, {"name": "158", "dtype": "int64"}, {"name": "159", "dtype": "int64"}, {"name": "16", "dtype": "int64"}, {"name": "160", "dtype": "int64"}, {"name": "161", "dtype": "int64"}, {"name": "162", "dtype": "int64"}, {"name": "163", "dtype": "int64"}, {"name": "164", "dtype": "int64"}, {"name": "165", "dtype": "int64"}, {"name": "166", "dtype": "int64"}, {"name": "167", "dtype": "int64"}, {"name": "168", "dtype": "int64"}, {"name": "169", "dtype": "int64"}, {"name": "17", "dtype": "int64"}, {"name": "170", "dtype": "int64"}, {"name": "171", "dtype": "int64"}, {"name": "172", "dtype": "int64"}, {"name": "173", "dtype": "int64"}, {"name": "174", "dtype": "int64"}, {"name": "175", "dtype": "int64"}, {"name": "176", "dtype": "int64"}, {"name": "177", "dtype": "int64"}, {"name": "178", "dtype": "int64"}, {"name": "179", "dtype": "int64"}, {"name": "18", "dtype": "int64"}, {"name": "180", "dtype": "int64"}, {"name": "181", "dtype": "int64"}, {"name": "182", "dtype": "int64"}, {"name": "183", "dtype": "int64"}, {"name": "184", "dtype": "int64"}, {"name": "185", "dtype": "int64"}, {"name": "186", "dtype": "int64"}, {"name": "187", "dtype": "int64"}, {"name": "188", "dtype": "int64"}, {"name": "189", "dtype": "int64"}, {"name": "19", "dtype": "int64"}, {"name": 
"190", "dtype": "int64"}, {"name": "191", "dtype": "int64"}, {"name": "192", "dtype": "int64"}, {"name": "193", "dtype": "int64"}, {"name": "194", "dtype": "int64"}, {"name": "195", "dtype": "int64"}, {"name": "196", "dtype": "int64"}, {"name": "197", "dtype": "int64"}, {"name": "198", "dtype": "int64"}, {"name": "199", "dtype": "int64"}, {"name": "2", "dtype": "int64"}, {"name": "20", "dtype": "int64"}, {"name": "200", "dtype": "int64"}, {"name": "201", "dtype": "int64"}, {"name": "202", "dtype": "int64"}, {"name": "203", "dtype": "int64"}, {"name": "204", "dtype": "int64"}, {"name": "205", "dtype": "int64"}, {"name": "206", "dtype": "int64"}, {"name": "207", "dtype": "int64"}, {"name": "208", "dtype": "int64"}, {"name": "209", "dtype": "int64"}, {"name": "21", "dtype": "int64"}, {"name": "210", "dtype": "int64"}, {"name": "211", "dtype": "int64"}, {"name": "212", "dtype": "int64"}, {"name": "213", "dtype": "int64"}, {"name": "214", "dtype": "int64"}, {"name": "215", "dtype": "int64"}, {"name": "216", "dtype": "int64"}, {"name": "217", "dtype": "int64"}, {"name": "218", "dtype": "int64"}, {"name": "219", "dtype": "int64"}, {"name": "22", "dtype": "int64"}, {"name": "220", "dtype": "int64"}, {"name": "221", "dtype": "int64"}, {"name": "222", "dtype": "int64"}, {"name": "223", "dtype": "int64"}, {"name": "224", "dtype": "int64"}, {"name": "225", "dtype": "int64"}, {"name": "226", "dtype": "int64"}, {"name": "227", "dtype": "int64"}, {"name": "228", "dtype": "int64"}, {"name": "229", "dtype": "int64"}, {"name": "23", "dtype": "int64"}, {"name": "230", "dtype": "int64"}, {"name": "231", "dtype": "int64"}, {"name": "232", "dtype": "int64"}, {"name": "233", "dtype": "int64"}, {"name": "234", "dtype": "int64"}, {"name": "235", "dtype": "int64"}, {"name": "236", "dtype": "int64"}, {"name": "237", "dtype": "int64"}, {"name": "238", "dtype": "int64"}, {"name": "239", "dtype": "int64"}, {"name": "24", "dtype": "int64"}, {"name": "240", "dtype": "int64"}, {"name": "241", 
"dtype": "int64"}, {"name": "242", "dtype": "int64"}, {"name": "243", "dtype": "int64"}, {"name": "244", "dtype": "int64"}, {"name": "245", "dtype": "int64"}, {"name": "246", "dtype": "int64"}, {"name": "247", "dtype": "int64"}, {"name": "248", "dtype": "int64"}, {"name": "249", "dtype": "int64"}, {"name": "25", "dtype": "int64"}, {"name": "250", "dtype": "int64"}, {"name": "251", "dtype": "int64"}, {"name": "252", "dtype": "int64"}, {"name": "253", "dtype": "int64"}, {"name": "254", "dtype": "int64"}, {"name": "255", "dtype": "int64"}, {"name": "256", "dtype": "int64"}, {"name": "257", "dtype": "int64"}, {"name": "258", "dtype": "int64"}, {"name": "259", "dtype": "int64"}, {"name": "26", "dtype": "int64"}, {"name": "260", "dtype": "int64"}, {"name": "261", "dtype": "int64"}, {"name": "262", "dtype": "int64"}, {"name": "263", "dtype": "int64"}, {"name": "264", "dtype": "int64"}, {"name": "265", "dtype": "int64"}, {"name": "266", "dtype": "int64"}, {"name": "267", "dtype": "int64"}, {"name": "268", "dtype": "int64"}, {"name": "269", "dtype": "int64"}, {"name": "27", "dtype": "int64"}, {"name": "270", "dtype": "int64"}, {"name": "271", "dtype": "int64"}, {"name": "272", "dtype": "int64"}, {"name": "273", "dtype": "int64"}, {"name": "274", "dtype": "int64"}, {"name": "275", "dtype": "int64"}, {"name": "276", "dtype": "int64"}, {"name": "277", "dtype": "int64"}, {"name": "278", "dtype": "int64"}, {"name": "279", "dtype": "int64"}, {"name": "28", "dtype": "int64"}, {"name": "280", "dtype": "int64"}, {"name": "281", "dtype": "int64"}, {"name": "282", "dtype": "int64"}, {"name": "283", "dtype": "int64"}, {"name": "284", "dtype": "int64"}, {"name": "285", "dtype": "int64"}, {"name": "286", "dtype": "int64"}, {"name": "287", "dtype": "int64"}, {"name": "288", "dtype": "int64"}, {"name": "289", "dtype": "int64"}, {"name": "29", "dtype": "int64"}, {"name": "290", "dtype": "int64"}, {"name": "291", "dtype": "int64"}, {"name": "292", "dtype": "int64"}, {"name": "293", "dtype": 
"int64"}, {"name": "294", "dtype": "int64"}, {"name": "295", "dtype": "int64"}, {"name": "296", "dtype": "int64"}, {"name": "297", "dtype": "int64"}, {"name": "298", "dtype": "int64"}, {"name": "299", "dtype": "int64"}, {"name": "3", "dtype": "int64"}, {"name": "30", "dtype": "int64"}, {"name": "300", "dtype": "int64"}, {"name": "301", "dtype": "int64"}, {"name": "302", "dtype": "int64"}, {"name": "303", "dtype": "int64"}, {"name": "304", "dtype": "int64"}, {"name": "305", "dtype": "int64"}, {"name": "306", "dtype": "int64"}, {"name": "307", "dtype": "int64"}, {"name": "308", "dtype": "int64"}, {"name": "309", "dtype": "int64"}, {"name": "31", "dtype": "int64"}, {"name": "310", "dtype": "int64"}, {"name": "311", "dtype": "int64"}, {"name": "312", "dtype": "int64"}, {"name": "313", "dtype": "int64"}, {"name": "314", "dtype": "int64"}, {"name": "315", "dtype": "int64"}, {"name": "316", "dtype": "int64"}, {"name": "317", "dtype": "int64"}, {"name": "318", "dtype": "int64"}, {"name": "319", "dtype": "int64"}, {"name": "32", "dtype": "int64"}, {"name": "320", "dtype": "int64"}, {"name": "321", "dtype": "int64"}, {"name": "322", "dtype": "int64"}, {"name": "323", "dtype": "int64"}, {"name": "324", "dtype": "int64"}, {"name": "325", "dtype": "int64"}, {"name": "326", "dtype": "int64"}, {"name": "327", "dtype": "int64"}, {"name": "328", "dtype": "int64"}, {"name": "329", "dtype": "int64"}, {"name": "33", "dtype": "int64"}, {"name": "330", "dtype": "int64"}, {"name": "331", "dtype": "int64"}, {"name": "332", "dtype": "int64"}, {"name": "333", "dtype": "int64"}, {"name": "334", "dtype": "int64"}, {"name": "335", "dtype": "int64"}, {"name": "336", "dtype": "int64"}, {"name": "337", "dtype": "int64"}, {"name": "338", "dtype": "int64"}, {"name": "339", "dtype": "int64"}, {"name": "34", "dtype": "int64"}, {"name": "340", "dtype": "int64"}, {"name": "341", "dtype": "int64"}, {"name": "342", "dtype": "int64"}, {"name": "343", "dtype": "int64"}, {"name": "344", "dtype": "int64"}, 
{"name": "345", "dtype": "int64"}, {"name": "346", "dtype": "int64"}, {"name": "347", "dtype": "int64"}, {"name": "348", "dtype": "int64"}, {"name": "349", "dtype": "int64"}, {"name": "35", "dtype": "int64"}, {"name": "350", "dtype": "int64"}, {"name": "351", "dtype": "int64"}, {"name": "352", "dtype": "int64"}, {"name": "353", "dtype": "int64"}, {"name": "354", "dtype": "int64"}, {"name": "355", "dtype": "int64"}, {"name": "356", "dtype": "int64"}, {"name": "357", "dtype": "int64"}, {"name": "358", "dtype": "int64"}, {"name": "359", "dtype": "int64"}, {"name": "36", "dtype": "int64"}, {"name": "360", "dtype": "int64"}, {"name": "361", "dtype": "int64"}, {"name": "362", "dtype": "int64"}, {"name": "363", "dtype": "int64"}, {"name": "364", "dtype": "int64"}, {"name": "365", "dtype": "int64"}, {"name": "366", "dtype": "int64"}, {"name": "367", "dtype": "int64"}, {"name": "368", "dtype": "int64"}, {"name": "369", "dtype": "int64"}, {"name": "37", "dtype": "int64"}, {"name": "370", "dtype": "int64"}, {"name": "371", "dtype": "int64"}, {"name": "372", "dtype": "int64"}, {"name": "373", "dtype": "int64"}, {"name": "374", "dtype": "int64"}, {"name": "375", "dtype": "int64"}, {"name": "376", "dtype": "int64"}, {"name": "377", "dtype": "int64"}, {"name": "378", "dtype": "int64"}, {"name": "379", "dtype": "int64"}, {"name": "38", "dtype": "int64"}, {"name": "380", "dtype": "int64"}, {"name": "381", "dtype": "int64"}, {"name": "382", "dtype": "int64"}, {"name": "383", "dtype": "int64"}, {"name": "384", "dtype": "int64"}, {"name": "385", "dtype": "int64"}, {"name": "386", "dtype": "int64"}, {"name": "387", "dtype": "int64"}, {"name": "388", "dtype": "int64"}, {"name": "389", "dtype": "int64"}, {"name": "39", "dtype": "int64"}, {"name": "390", "dtype": "int64"}, {"name": "391", "dtype": "int64"}, {"name": "392", "dtype": "int64"}, {"name": "393", "dtype": "int64"}, {"name": "394", "dtype": "int64"}, {"name": "395", "dtype": "int64"}, {"name": "396", "dtype": "int64"}, {"name": 
"397", "dtype": "int64"}, {"name": "4", "dtype": "int64"}, {"name": "40", "dtype": "int64"}, {"name": "41", "dtype": "int64"}, {"name": "42", "dtype": "int64"}, {"name": "43", "dtype": "int64"}, {"name": "44", "dtype": "int64"}, {"name": "45", "dtype": "int64"}, {"name": "46", "dtype": "int64"}, {"name": "47", "dtype": "int64"}, {"name": "48", "dtype": "int64"}, {"name": "49", "dtype": "int64"}, {"name": "5", "dtype": "int64"}, {"name": "50", "dtype": "int64"}, {"name": "51", "dtype": "int64"}, {"name": "52", "dtype": "int64"}, {"name": "53", "dtype": "int64"}, {"name": "54", "dtype": "int64"}, {"name": "55", "dtype": "int64"}, {"name": "56", "dtype": "int64"}, {"name": "57", "dtype": "int64"}, {"name": "58", "dtype": "int64"}, {"name": "59", "dtype": "int64"}, {"name": "6", "dtype": "int64"}, {"name": "60", "dtype": "int64"}, {"name": "61", "dtype": "int64"}, {"name": "62", "dtype": "int64"}, {"name": "63", "dtype": "int64"}, {"name": "64", "dtype": "int64"}, {"name": "65", "dtype": "int64"}, {"name": "66", "dtype": "int64"}, {"name": "67", "dtype": "int64"}, {"name": "68", "dtype": "int64"}, {"name": "69", "dtype": "int64"}, {"name": "7", "dtype": "int64"}, {"name": "70", "dtype": "int64"}, {"name": "71", "dtype": "int64"}, {"name": "72", "dtype": "int64"}, {"name": "73", "dtype": "int64"}, {"name": "74", "dtype": "int64"}, {"name": "75", "dtype": "int64"}, {"name": "76", "dtype": "int64"}, {"name": "77", "dtype": "int64"}, {"name": "78", "dtype": "int64"}, {"name": "79", "dtype": "int64"}, {"name": "8", "dtype": "int64"}, {"name": "80", "dtype": "int64"}, {"name": "81", "dtype": "int64"}, {"name": "82", "dtype": "int64"}, {"name": "83", "dtype": "int64"}, {"name": "84", "dtype": "int64"}, {"name": "85", "dtype": "int64"}, {"name": "86", "dtype": "int64"}, {"name": "87", "dtype": "int64"}, {"name": "88", "dtype": "int64"}, {"name": "89", "dtype": "int64"}, {"name": "9", "dtype": "int64"}, {"name": "90", "dtype": "int64"}, {"name": "91", "dtype": "int64"}, 
{"name": "92", "dtype": "int64"}, {"name": "93", "dtype": "int64"}, {"name": "94", "dtype": "int64"}, {"name": "95", "dtype": "int64"}, {"name": "96", "dtype": "int64"}, {"name": "97", "dtype": "int64"}, {"name": "98", "dtype": "int64"}, {"name": "99", "dtype": "int64"}]}, {"name": "en_tokens", "struct": [{"name": "0", "dtype": "string"}, {"name": "1", "dtype": "string"}, {"name": "10", "dtype": "string"}, {"name": "100", "dtype": "string"}, {"name": "101", "dtype": "string"}, {"name": "102", "dtype": "string"}, {"name": "103", "dtype": "string"}, {"name": "104", "dtype": "string"}, {"name": "105", "dtype": "string"}, {"name": "106", "dtype": "string"}, {"name": "107", "dtype": "string"}, {"name": "108", "dtype": "string"}, {"name": "109", "dtype": "string"}, {"name": "11", "dtype": "string"}, {"name": "110", "dtype": "string"}, {"name": "111", "dtype": "string"}, {"name": "112", "dtype": "string"}, {"name": "113", "dtype": "string"}, {"name": "114", "dtype": "string"}, {"name": "115", "dtype": "string"}, {"name": "116", "dtype": "string"}, {"name": "117", "dtype": "string"}, {"name": "118", "dtype": "string"}, {"name": "119", "dtype": "string"}, {"name": "12", "dtype": "string"}, {"name": "120", "dtype": "string"}, {"name": "121", "dtype": "string"}, {"name": "122", "dtype": "string"}, {"name": "123", "dtype": "string"}, {"name": "124", "dtype": "string"}, {"name": "125", "dtype": "string"}, {"name": "126", "dtype": "string"}, {"name": "127", "dtype": "string"}, {"name": "128", "dtype": "string"}, {"name": "129", "dtype": "string"}, {"name": "13", "dtype": "string"}, {"name": "130", "dtype": "string"}, {"name": "131", "dtype": "string"}, {"name": "132", "dtype": "string"}, {"name": "133", "dtype": "string"}, {"name": "134", "dtype": "string"}, {"name": "135", "dtype": "string"}, {"name": "136", "dtype": "string"}, {"name": "137", "dtype": "string"}, {"name": "138", "dtype": "string"}, {"name": "139", "dtype": "string"}, {"name": "14", "dtype": "string"}, {"name": 
"140", "dtype": "string"}, {"name": "141", "dtype": "string"}, {"name": "142", "dtype": "string"}, {"name": "143", "dtype": "string"}, {"name": "144", "dtype": "string"}, {"name": "145", "dtype": "string"}, {"name": "146", "dtype": "string"}, {"name": "147", "dtype": "string"}, {"name": "148", "dtype": "string"}, {"name": "149", "dtype": "string"}, {"name": "15", "dtype": "string"}, {"name": "150", "dtype": "string"}, {"name": "151", "dtype": "string"}, {"name": "152", "dtype": "string"}, {"name": "153", "dtype": "string"}, {"name": "154", "dtype": "string"}, {"name": "155", "dtype": "string"}, {"name": "156", "dtype": "string"}, {"name": "157", "dtype": "string"}, {"name": "158", "dtype": "string"}, {"name": "159", "dtype": "string"}, {"name": "16", "dtype": "string"}, {"name": "160", "dtype": "string"}, {"name": "161", "dtype": "string"}, {"name": "162", "dtype": "string"}, {"name": "163", "dtype": "string"}, {"name": "164", "dtype": "string"}, {"name": "165", "dtype": "string"}, {"name": "166", "dtype": "string"}, {"name": "167", "dtype": "string"}, {"name": "168", "dtype": "string"}, {"name": "169", "dtype": "string"}, {"name": "17", "dtype": "string"}, {"name": "170", "dtype": "string"}, {"name": "171", "dtype": "string"}, {"name": "172", "dtype": "string"}, {"name": "173", "dtype": "string"}, {"name": "174", "dtype": "string"}, {"name": "175", "dtype": "string"}, {"name": "176", "dtype": "string"}, {"name": "177", "dtype": "string"}, {"name": "178", "dtype": "string"}, {"name": "179", "dtype": "string"}, {"name": "18", "dtype": "string"}, {"name": "180", "dtype": "string"}, {"name": "181", "dtype": "string"}, {"name": "182", "dtype": "string"}, {"name": "183", "dtype": "string"}, {"name": "184", "dtype": "string"}, {"name": "185", "dtype": "string"}, {"name": "186", "dtype": "string"}, {"name": "187", "dtype": "string"}, {"name": "188", "dtype": "string"}, {"name": "189", "dtype": "string"}, {"name": "19", "dtype": "string"}, {"name": "190", "dtype": 
"string"}, {"name": "191", "dtype": "string"}, {"name": "192", "dtype": "string"}, {"name": "193", "dtype": "string"}, {"name": "194", "dtype": "string"}, {"name": "195", "dtype": "string"}, {"name": "196", "dtype": "string"}, {"name": "197", "dtype": "string"}, {"name": "198", "dtype": "string"}, {"name": "199", "dtype": "string"}, {"name": "2", "dtype": "string"}, {"name": "20", "dtype": "string"}, {"name": "200", "dtype": "string"}, {"name": "201", "dtype": "string"}, {"name": "202", "dtype": "string"}, {"name": "203", "dtype": "string"}, {"name": "204", "dtype": "string"}, {"name": "205", "dtype": "string"}, {"name": "206", "dtype": "string"}, {"name": "207", "dtype": "string"}, {"name": "208", "dtype": "string"}, {"name": "209", "dtype": "string"}, {"name": "21", "dtype": "string"}, {"name": "210", "dtype": "string"}, {"name": "211", "dtype": "string"}, {"name": "212", "dtype": "string"}, {"name": "213", "dtype": "string"}, {"name": "214", "dtype": "string"}, {"name": "215", "dtype": "string"}, {"name": "216", "dtype": "string"}, {"name": "217", "dtype": "string"}, {"name": "218", "dtype": "string"}, {"name": "219", "dtype": "string"}, {"name": "22", "dtype": "string"}, {"name": "220", "dtype": "string"}, {"name": "221", "dtype": "string"}, {"name": "222", "dtype": "string"}, {"name": "223", "dtype": "string"}, {"name": "224", "dtype": "string"}, {"name": "225", "dtype": "string"}, {"name": "226", "dtype": "string"}, {"name": "227", "dtype": "string"}, {"name": "228", "dtype": "string"}, {"name": "229", "dtype": "string"}, {"name": "23", "dtype": "string"}, {"name": "230", "dtype": "string"}, {"name": "231", "dtype": "string"}, {"name": "232", "dtype": "string"}, {"name": "233", "dtype": "string"}, {"name": "234", "dtype": "string"}, {"name": "235", "dtype": "string"}, {"name": "236", "dtype": "string"}, {"name": "237", "dtype": "string"}, {"name": "238", "dtype": "string"}, {"name": "239", "dtype": "string"}, {"name": "24", "dtype": "string"}, {"name": "240", 
"dtype": "string"}, {"name": "241", "dtype": "string"}, {"name": "242", "dtype": "string"}, {"name": "243", "dtype": "string"}, {"name": "244", "dtype": "string"}, {"name": "245", "dtype": "string"}, {"name": "246", "dtype": "string"}, {"name": "247", "dtype": "string"}, {"name": "248", "dtype": "string"}, {"name": "249", "dtype": "string"}, {"name": "25", "dtype": "string"}, {"name": "250", "dtype": "string"}, {"name": "251", "dtype": "string"}, {"name": "252", "dtype": "string"}, {"name": "253", "dtype": "string"}, {"name": "254", "dtype": "string"}, {"name": "255", "dtype": "string"}, {"name": "256", "dtype": "string"}, {"name": "257", "dtype": "string"}, {"name": "258", "dtype": "string"}, {"name": "259", "dtype": "string"}, {"name": "26", "dtype": "string"}, {"name": "260", "dtype": "string"}, {"name": "261", "dtype": "string"}, {"name": "262", "dtype": "string"}, {"name": "263", "dtype": "string"}, {"name": "264", "dtype": "string"}, {"name": "265", "dtype": "string"}, {"name": "266", "dtype": "string"}, {"name": "267", "dtype": "string"}, {"name": "268", "dtype": "string"}, {"name": "269", "dtype": "string"}, {"name": "27", "dtype": "string"}, {"name": "270", "dtype": "string"}, {"name": "271", "dtype": "string"}, {"name": "272", "dtype": "string"}, {"name": "273", "dtype": "string"}, {"name": "274", "dtype": "string"}, {"name": "275", "dtype": "string"}, {"name": "276", "dtype": "string"}, {"name": "277", "dtype": "string"}, {"name": "278", "dtype": "string"}, {"name": "279", "dtype": "string"}, {"name": "28", "dtype": "string"}, {"name": "280", "dtype": "string"}, {"name": "281", "dtype": "string"}, {"name": "282", "dtype": "string"}, {"name": "283", "dtype": "string"}, {"name": "284", "dtype": "string"}, {"name": "285", "dtype": "string"}, {"name": "286", "dtype": "string"}, {"name": "287", "dtype": "string"}, {"name": "288", "dtype": "string"}, {"name": "289", "dtype": "string"}, {"name": "29", "dtype": "string"}, {"name": "290", "dtype": "string"}, 
{"name": "291", "dtype": "string"}, {"name": "292", "dtype": "string"}, {"name": "293", "dtype": "string"}, {"name": "294", "dtype": "string"}, {"name": "295", "dtype": "string"}, {"name": "296", "dtype": "string"}, {"name": "297", "dtype": "string"}, {"name": "298", "dtype": "string"}, {"name": "299", "dtype": "string"}, {"name": "3", "dtype": "string"}, {"name": "30", "dtype": "string"}, {"name": "300", "dtype": "string"}, {"name": "301", "dtype": "string"}, {"name": "302", "dtype": "string"}, {"name": "303", "dtype": "string"}, {"name": "304", "dtype": "string"}, {"name": "305", "dtype": "string"}, {"name": "306", "dtype": "string"}, {"name": "307", "dtype": "string"}, {"name": "308", "dtype": "string"}, {"name": "309", "dtype": "string"}, {"name": "31", "dtype": "string"}, {"name": "310", "dtype": "string"}, {"name": "311", "dtype": "string"}, {"name": "312", "dtype": "string"}, {"name": "313", "dtype": "string"}, {"name": "314", "dtype": "string"}, {"name": "315", "dtype": "string"}, {"name": "316", "dtype": "string"}, {"name": "317", "dtype": "string"}, {"name": "318", "dtype": "string"}, {"name": "319", "dtype": "string"}, {"name": "32", "dtype": "string"}, {"name": "320", "dtype": "string"}, {"name": "321", "dtype": "string"}, {"name": "322", "dtype": "string"}, {"name": "323", "dtype": "string"}, {"name": "324", "dtype": "string"}, {"name": "325", "dtype": "string"}, {"name": "326", "dtype": "string"}, {"name": "327", "dtype": "string"}, {"name": "328", "dtype": "string"}, {"name": "329", "dtype": "string"}, {"name": "33", "dtype": "string"}, {"name": "330", "dtype": "string"}, {"name": "331", "dtype": "string"}, {"name": "332", "dtype": "string"}, {"name": "333", "dtype": "string"}, {"name": "334", "dtype": "string"}, {"name": "335", "dtype": "string"}, {"name": "336", "dtype": "string"}, {"name": "337", "dtype": "string"}, {"name": "338", "dtype": "string"}, {"name": "339", "dtype": "string"}, {"name": "34", "dtype": "string"}, {"name": "340", "dtype": 
"string"}, {"name": "341", "dtype": "string"}, {"name": "342", "dtype": "string"}, {"name": "343", "dtype": "string"}, {"name": "344", "dtype": "string"}, {"name": "345", "dtype": "string"}, {"name": "346", "dtype": "string"}, {"name": "347", "dtype": "string"}, {"name": "348", "dtype": "string"}, {"name": "349", "dtype": "string"}, {"name": "35", "dtype": "string"}, {"name": "350", "dtype": "string"}, {"name": "351", "dtype": "string"}, {"name": "352", "dtype": "string"}, {"name": "353", "dtype": "string"}, {"name": "354", "dtype": "string"}, {"name": "355", "dtype": "string"}, {"name": "356", "dtype": "string"}, {"name": "357", "dtype": "string"}, {"name": "358", "dtype": "string"}, {"name": "359", "dtype": "string"}, {"name": "36", "dtype": "string"}, {"name": "360", "dtype": "string"}, {"name": "361", "dtype": "string"}, {"name": "362", "dtype": "string"}, {"name": "363", "dtype": "string"}, {"name": "364", "dtype": "string"}, {"name": "365", "dtype": "string"}, {"name": "366", "dtype": "string"}, {"name": "367", "dtype": "string"}, {"name": "368", "dtype": "string"}, {"name": "369", "dtype": "string"}, {"name": "37", "dtype": "string"}, {"name": "370", "dtype": "string"}, {"name": "371", "dtype": "string"}, {"name": "372", "dtype": "string"}, {"name": "373", "dtype": "string"}, {"name": "374", "dtype": "string"}, {"name": "375", "dtype": "string"}, {"name": "376", "dtype": "string"}, {"name": "377", "dtype": "string"}, {"name": "378", "dtype": "string"}, {"name": "379", "dtype": "string"}, {"name": "38", "dtype": "string"}, {"name": "380", "dtype": "string"}, {"name": "381", "dtype": "string"}, {"name": "382", "dtype": "string"}, {"name": "383", "dtype": "string"}, {"name": "384", "dtype": "string"}, {"name": "385", "dtype": "string"}, {"name": "386", "dtype": "string"}, {"name": "387", "dtype": "string"}, {"name": "388", "dtype": "string"}, {"name": "389", "dtype": "string"}, {"name": "39", "dtype": "string"}, {"name": "390", "dtype": "string"}, {"name": 
"391", "dtype": "string"}, {"name": "392", "dtype": "string"}, {"name": "393", "dtype": "string"}, {"name": "394", "dtype": "string"}, {"name": "395", "dtype": "string"}, {"name": "396", "dtype": "string"}, {"name": "397", "dtype": "string"}, {"name": "4", "dtype": "string"}, {"name": "40", "dtype": "string"}, {"name": "41", "dtype": "string"}, {"name": "42", "dtype": "string"}, {"name": "43", "dtype": "string"}, {"name": "44", "dtype": "string"}, {"name": "45", "dtype": "string"}, {"name": "46", "dtype": "string"}, {"name": "47", "dtype": "string"}, {"name": "48", "dtype": "string"}, {"name": "49", "dtype": "string"}, {"name": "5", "dtype": "string"}, {"name": "50", "dtype": "string"}, {"name": "51", "dtype": "string"}, {"name": "52", "dtype": "string"}, {"name": "53", "dtype": "string"}, {"name": "54", "dtype": "string"}, {"name": "55", "dtype": "string"}, {"name": "56", "dtype": "string"}, {"name": "57", "dtype": "string"}, {"name": "58", "dtype": "string"}, {"name": "59", "dtype": "string"}, {"name": "6", "dtype": "string"}, {"name": "60", "dtype": "string"}, {"name": "61", "dtype": "string"}, {"name": "62", "dtype": "string"}, {"name": "63", "dtype": "string"}, {"name": "64", "dtype": "string"}, {"name": "65", "dtype": "string"}, {"name": "66", "dtype": "string"}, {"name": "67", "dtype": "string"}, {"name": "68", "dtype": "string"}, {"name": "69", "dtype": "string"}, {"name": "7", "dtype": "string"}, {"name": "70", "dtype": "string"}, {"name": "71", "dtype": "string"}, {"name": "72", "dtype": "string"}, {"name": "73", "dtype": "string"}, {"name": "74", "dtype": "string"}, {"name": "75", "dtype": "string"}, {"name": "76", "dtype": "string"}, {"name": "77", "dtype": "string"}, {"name": "78", "dtype": "string"}, {"name": "79", "dtype": "string"}, {"name": "8", "dtype": "string"}, {"name": "80", "dtype": "string"}, {"name": "81", "dtype": "string"}, {"name": "82", "dtype": "string"}, {"name": "83", "dtype": "string"}, {"name": "84", "dtype": "string"}, {"name": 
"85", "dtype": "string"}, {"name": "86", "dtype": "string"}, {"name": "87", "dtype": "string"}, {"name": "88", "dtype": "string"}, {"name": "89", "dtype": "string"}, {"name": "9", "dtype": "string"}, {"name": "90", "dtype": "string"}, {"name": "91", "dtype": "string"}, {"name": "92", "dtype": "string"}, {"name": "93", "dtype": "string"}, {"name": "94", "dtype": "string"}, {"name": "95", "dtype": "string"}, {"name": "96", "dtype": "string"}, {"name": "97", "dtype": "string"}, {"name": "98", "dtype": "string"}, {"name": "99", "dtype": "string"}]}, {"name": "lang_tokens", "struct": [{"name": "0", "dtype": "string"}, {"name": "1", "dtype": "string"}, {"name": "10", "dtype": "string"}, {"name": "100", "dtype": "string"}, {"name": "101", "dtype": "string"}, {"name": "102", "dtype": "string"}, {"name": "103", "dtype": "string"}, {"name": "104", "dtype": "string"}, {"name": "105", "dtype": "string"}, {"name": "106", "dtype": "string"}, {"name": "107", "dtype": "string"}, {"name": "108", "dtype": "string"}, {"name": "109", "dtype": "string"}, {"name": "11", "dtype": "string"}, {"name": "110", "dtype": "string"}, {"name": "111", "dtype": "string"}, {"name": "112", "dtype": "string"}, {"name": "113", "dtype": "string"}, {"name": "114", "dtype": "string"}, {"name": "115", "dtype": "string"}, {"name": "116", "dtype": "string"}, {"name": "117", "dtype": "string"}, {"name": "118", "dtype": "string"}, {"name": "119", "dtype": "string"}, {"name": "12", "dtype": "string"}, {"name": "120", "dtype": "string"}, {"name": "121", "dtype": "string"}, {"name": "122", "dtype": "string"}, {"name": "123", "dtype": "string"}, {"name": "124", "dtype": "string"}, {"name": "125", "dtype": "string"}, {"name": "126", "dtype": "string"}, {"name": "127", "dtype": "string"}, {"name": "128", "dtype": "string"}, {"name": "129", "dtype": "string"}, {"name": "13", "dtype": "string"}, {"name": "130", "dtype": "string"}, {"name": "131", "dtype": "string"}, {"name": "132", "dtype": "string"}, {"name": "133", 
"dtype": "string"}, {"name": "134", "dtype": "string"}, {"name": "135", "dtype": "string"}, {"name": "136", "dtype": "string"}, {"name": "137", "dtype": "string"}, {"name": "138", "dtype": "string"}, {"name": "139", "dtype": "string"}, {"name": "14", "dtype": "string"}, {"name": "140", "dtype": "string"}, {"name": "141", "dtype": "string"}, {"name": "142", "dtype": "string"}, {"name": "143", "dtype": "string"}, {"name": "144", "dtype": "string"}, {"name": "145", "dtype": "string"}, {"name": "146", "dtype": "string"}, {"name": "147", "dtype": "string"}, {"name": "148", "dtype": "string"}, {"name": "149", "dtype": "string"}, {"name": "15", "dtype": "string"}, {"name": "150", "dtype": "string"}, {"name": "151", "dtype": "string"}, {"name": "152", "dtype": "string"}, {"name": "153", "dtype": "string"}, {"name": "154", "dtype": "string"}, {"name": "155", "dtype": "string"}, {"name": "156", "dtype": "string"}, {"name": "157", "dtype": "string"}, {"name": "158", "dtype": "string"}, {"name": "159", "dtype": "string"}, {"name": "16", "dtype": "string"}, {"name": "160", "dtype": "string"}, {"name": "161", "dtype": "string"}, {"name": "162", "dtype": "string"}, {"name": "163", "dtype": "string"}, {"name": "164", "dtype": "string"}, {"name": "165", "dtype": "string"}, {"name": "166", "dtype": "string"}, {"name": "167", "dtype": "string"}, {"name": "168", "dtype": "string"}, {"name": "169", "dtype": "string"}, {"name": "17", "dtype": "string"}, {"name": "170", "dtype": "string"}, {"name": "171", "dtype": "string"}, {"name": "172", "dtype": "string"}, {"name": "173", "dtype": "string"}, {"name": "174", "dtype": "string"}, {"name": "175", "dtype": "string"}, {"name": "176", "dtype": "string"}, {"name": "177", "dtype": "string"}, {"name": "178", "dtype": "string"}, {"name": "179", "dtype": "string"}, {"name": "18", "dtype": "string"}, {"name": "180", "dtype": "string"}, {"name": "181", "dtype": "string"}, {"name": "182", "dtype": "string"}, {"name": "183", "dtype": "string"}, 
{"name": "184", "dtype": "string"}, {"name": "185", "dtype": "string"}, {"name": "186", "dtype": "string"}, {"name": "187", "dtype": "string"}, {"name": "188", "dtype": "string"}, {"name": "189", "dtype": "string"}, {"name": "19", "dtype": "string"}, {"name": "190", "dtype": "string"}, {"name": "191", "dtype": "string"}, {"name": "192", "dtype": "string"}, {"name": "193", "dtype": "string"}, {"name": "194", "dtype": "string"}, {"name": "195", "dtype": "string"}, {"name": "196", "dtype": "string"}, {"name": "197", "dtype": "string"}, {"name": "198", "dtype": "string"}, {"name": "199", "dtype": "string"}, {"name": "2", "dtype": "string"}, {"name": "20", "dtype": "string"}, {"name": "200", "dtype": "string"}, {"name": "201", "dtype": "string"}, {"name": "202", "dtype": "string"}, {"name": "203", "dtype": "string"}, {"name": "204", "dtype": "string"}, {"name": "205", "dtype": "string"}, {"name": "206", "dtype": "string"}, {"name": "207", "dtype": "string"}, {"name": "208", "dtype": "string"}, {"name": "209", "dtype": "string"}, {"name": "21", "dtype": "string"}, {"name": "210", "dtype": "string"}, {"name": "211", "dtype": "string"}, {"name": "212", "dtype": "string"}, {"name": "213", "dtype": "string"}, {"name": "214", "dtype": "string"}, {"name": "215", "dtype": "string"}, {"name": "216", "dtype": "string"}, {"name": "217", "dtype": "string"}, {"name": "218", "dtype": "string"}, {"name": "219", "dtype": "string"}, {"name": "22", "dtype": "string"}, {"name": "220", "dtype": "string"}, {"name": "221", "dtype": "string"}, {"name": "222", "dtype": "string"}, {"name": "223", "dtype": "string"}, {"name": "224", "dtype": "string"}, {"name": "225", "dtype": "string"}, {"name": "226", "dtype": "string"}, {"name": "227", "dtype": "string"}, {"name": "228", "dtype": "string"}, {"name": "229", "dtype": "string"}, {"name": "23", "dtype": "string"}, {"name": "230", "dtype": "string"}, {"name": "231", "dtype": "string"}, {"name": "232", "dtype": "string"}, {"name": "233", "dtype": 
"string"}, {"name": "234", "dtype": "string"}, {"name": "235", "dtype": "string"}, {"name": "236", "dtype": "string"}, {"name": "237", "dtype": "string"}, {"name": "238", "dtype": "string"}, {"name": "239", "dtype": "string"}, {"name": "24", "dtype": "string"}, {"name": "240", "dtype": "string"}, {"name": "241", "dtype": "string"}, {"name": "242", "dtype": "string"}, {"name": "243", "dtype": "string"}, {"name": "244", "dtype": "string"}, {"name": "245", "dtype": "string"}, {"name": "246", "dtype": "string"}, {"name": "247", "dtype": "string"}, {"name": "248", "dtype": "string"}, {"name": "249", "dtype": "string"}, {"name": "25", "dtype": "string"}, {"name": "250", "dtype": "string"}, {"name": "251", "dtype": "string"}, {"name": "252", "dtype": "string"}, {"name": "253", "dtype": "string"}, {"name": "254", "dtype": "string"}, {"name": "255", "dtype": "string"}, {"name": "256", "dtype": "string"}, {"name": "257", "dtype": "string"}, {"name": "258", "dtype": "string"}, {"name": "259", "dtype": "string"}, {"name": "26", "dtype": "string"}, {"name": "260", "dtype": "string"}, {"name": "261", "dtype": "string"}, {"name": "262", "dtype": "string"}, {"name": "263", "dtype": "string"}, {"name": "264", "dtype": "string"}, {"name": "265", "dtype": "string"}, {"name": "266", "dtype": "string"}, {"name": "267", "dtype": "string"}, {"name": "268", "dtype": "string"}, {"name": "269", "dtype": "string"}, {"name": "27", "dtype": "string"}, {"name": "270", "dtype": "string"}, {"name": "271", "dtype": "string"}, {"name": "272", "dtype": "string"}, {"name": "273", "dtype": "string"}, {"name": "274", "dtype": "string"}, {"name": "275", "dtype": "string"}, {"name": "276", "dtype": "string"}, {"name": "277", "dtype": "string"}, {"name": "278", "dtype": "string"}, {"name": "279", "dtype": "string"}, {"name": "28", "dtype": "string"}, {"name": "280", "dtype": "string"}, {"name": "281", "dtype": "string"}, {"name": "282", "dtype": "string"}, {"name": "283", "dtype": "string"}, {"name": 
"284", "dtype": "string"}, {"name": "285", "dtype": "string"}, {"name": "286", "dtype": "string"}, {"name": "287", "dtype": "string"}, {"name": "288", "dtype": "string"}, {"name": "289", "dtype": "string"}, {"name": "29", "dtype": "string"}, {"name": "290", "dtype": "string"}, {"name": "291", "dtype": "string"}, {"name": "292", "dtype": "string"}, {"name": "293", "dtype": "string"}, {"name": "294", "dtype": "string"}, {"name": "295", "dtype": "string"}, {"name": "296", "dtype": "string"}, {"name": "297", "dtype": "string"}, {"name": "298", "dtype": "string"}, {"name": "299", "dtype": "string"}, {"name": "3", "dtype": "string"}, {"name": "30", "dtype": "string"}, {"name": "300", "dtype": "string"}, {"name": "301", "dtype": "string"}, {"name": "302", "dtype": "string"}, {"name": "303", "dtype": "string"}, {"name": "304", "dtype": "string"}, {"name": "305", "dtype": "string"}, {"name": "306", "dtype": "string"}, {"name": "307", "dtype": "string"}, {"name": "308", "dtype": "string"}, {"name": "309", "dtype": "string"}, {"name": "31", "dtype": "string"}, {"name": "310", "dtype": "string"}, {"name": "311", "dtype": "string"}, {"name": "312", "dtype": "string"}, {"name": "313", "dtype": "string"}, {"name": "314", "dtype": "string"}, {"name": "315", "dtype": "string"}, {"name": "316", "dtype": "string"}, {"name": "317", "dtype": "string"}, {"name": "318", "dtype": "string"}, {"name": "319", "dtype": "string"}, {"name": "32", "dtype": "string"}, {"name": "320", "dtype": "string"}, {"name": "321", "dtype": "string"}, {"name": "322", "dtype": "string"}, {"name": "323", "dtype": "string"}, {"name": "324", "dtype": "string"}, {"name": "325", "dtype": "string"}, {"name": "326", "dtype": "string"}, {"name": "327", "dtype": "string"}, {"name": "328", "dtype": "string"}, {"name": "329", "dtype": "string"}, {"name": "33", "dtype": "string"}, {"name": "330", "dtype": "string"}, {"name": "331", "dtype": "string"}, {"name": "332", "dtype": "string"}, {"name": "333", "dtype": "string"}, 
{"name": "334", "dtype": "string"}, {"name": "335", "dtype": "string"}, {"name": "336", "dtype": "string"}, {"name": "337", "dtype": "string"}, {"name": "338", "dtype": "string"}, {"name": "339", "dtype": "string"}, {"name": "34", "dtype": "string"}, {"name": "340", "dtype": "string"}, {"name": "341", "dtype": "string"}, {"name": "342", "dtype": "string"}, {"name": "343", "dtype": "string"}, {"name": "344", "dtype": "string"}, {"name": "345", "dtype": "string"}, {"name": "346", "dtype": "string"}, {"name": "347", "dtype": "string"}, {"name": "348", "dtype": "string"}, {"name": "349", "dtype": "string"}, {"name": "35", "dtype": "string"}, {"name": "350", "dtype": "string"}, {"name": "351", "dtype": "string"}, {"name": "352", "dtype": "string"}, {"name": "353", "dtype": "string"}, {"name": "354", "dtype": "string"}, {"name": "355", "dtype": "string"}, {"name": "356", "dtype": "string"}, {"name": "357", "dtype": "string"}, {"name": "358", "dtype": "string"}, {"name": "359", "dtype": "string"}, {"name": "36", "dtype": "string"}, {"name": "360", "dtype": "string"}, {"name": "361", "dtype": "string"}, {"name": "362", "dtype": "string"}, {"name": "363", "dtype": "string"}, {"name": "364", "dtype": "string"}, {"name": "365", "dtype": "string"}, {"name": "366", "dtype": "string"}, {"name": "367", "dtype": "string"}, {"name": "368", "dtype": "string"}, {"name": "369", "dtype": "string"}, {"name": "37", "dtype": "string"}, {"name": "370", "dtype": "string"}, {"name": "371", "dtype": "string"}, {"name": "372", "dtype": "string"}, {"name": "373", "dtype": "string"}, {"name": "374", "dtype": "string"}, {"name": "375", "dtype": "string"}, {"name": "376", "dtype": "string"}, {"name": "38", "dtype": "string"}, {"name": "39", "dtype": "string"}, {"name": "4", "dtype": "string"}, {"name": "40", "dtype": "string"}, {"name": "41", "dtype": "string"}, {"name": "42", "dtype": "string"}, {"name": "43", "dtype": "string"}, {"name": "44", "dtype": "string"}, {"name": "45", "dtype": 
"string"}, {"name": "46", "dtype": "string"}, {"name": "47", "dtype": "string"}, {"name": "48", "dtype": "string"}, {"name": "49", "dtype": "string"}, {"name": "5", "dtype": "string"}, {"name": "50", "dtype": "string"}, {"name": "51", "dtype": "string"}, {"name": "52", "dtype": "string"}, {"name": "53", "dtype": "string"}, {"name": "54", "dtype": "string"}, {"name": "55", "dtype": "string"}, {"name": "56", "dtype": "string"}, {"name": "57", "dtype": "string"}, {"name": "58", "dtype": "string"}, {"name": "59", "dtype": "string"}, {"name": "6", "dtype": "string"}, {"name": "60", "dtype": "string"}, {"name": "61", "dtype": "string"}, {"name": "62", "dtype": "string"}, {"name": "63", "dtype": "string"}, {"name": "64", "dtype": "string"}, {"name": "65", "dtype": "string"}, {"name": "66", "dtype": "string"}, {"name": "67", "dtype": "string"}, {"name": "68", "dtype": "string"}, {"name": "69", "dtype": "string"}, {"name": "7", "dtype": "string"}, {"name": "70", "dtype": "string"}, {"name": "71", "dtype": "string"}, {"name": "72", "dtype": "string"}, {"name": "73", "dtype": "string"}, {"name": "74", "dtype": "string"}, {"name": "75", "dtype": "string"}, {"name": "76", "dtype": "string"}, {"name": "77", "dtype": "string"}, {"name": "78", "dtype": "string"}, {"name": "79", "dtype": "string"}, {"name": "8", "dtype": "string"}, {"name": "80", "dtype": "string"}, {"name": "81", "dtype": "string"}, {"name": "82", "dtype": "string"}, {"name": "83", "dtype": "string"}, {"name": "84", "dtype": "string"}, {"name": "85", "dtype": "string"}, {"name": "86", "dtype": "string"}, {"name": "87", "dtype": "string"}, {"name": "88", "dtype": "string"}, {"name": "89", "dtype": "string"}, {"name": "9", "dtype": "string"}, {"name": "90", "dtype": "string"}, {"name": "91", "dtype": "string"}, {"name": "92", "dtype": "string"}, {"name": "93", "dtype": "string"}, {"name": "94", "dtype": "string"}, {"name": "95", "dtype": "string"}, {"name": "96", "dtype": "string"}, {"name": "97", "dtype": 
"string"}, {"name": "98", "dtype": "string"}, {"name": "99", "dtype": "string"}]}, {"name": "parse", "list": [{"name": "children", "list": [{"name": "children", "list": [{"name": "children", "sequence": "null"}, {"name": "confidence", "dtype": "float64"}, {"name": "label", "dtype": "string"}, {"name": "span", "sequence": "int64"}]}, {"name": "confidence", "dtype": "float64"}, {"name": "label", "dtype": "string"}, {"name": "span", "sequence": "int64"}]}, {"name": "confidence", "dtype": "float64"}, {"name": "label", "dtype": "string"}, {"name": "span", "sequence": "int64"}]}, {"name": "text", "sequence": "string"}]}, {"name": "qa_pairs", "list": [{"name": "en_answer", "dtype": "string"}, {"name": "en_answer_tokens", "sequence": "string"}, {"name": "en_match_in_passage", "sequence": "int64"}, {"name": "en_matches_in_source", "sequence": {"sequence": "int64"}}, {"name": "frames", "list": [{"name": "argument", "dtype": "string"}, {"name": "frame", "dtype": "string"}]}, {"name": "lang_answer", "dtype": "string"}, {"name": "lang_match_in_passage", "sequence": "int64"}, {"name": "lang_matches_in_source", "sequence": {"sequence": "int64"}}, {"name": "match_disambiguated_question", "dtype": "string"}, {"name": "passage", "sequence": "string"}, {"name": "passage_id", "dtype": "string"}, {"name": "question", "dtype": "string"}]}, {"name": "repetitious_translation", "dtype": "bool"}, {"name": "source_lang", "dtype": "string"}, {"name": "source_text", "dtype": "string"}, {"name": "source_url", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "translation_probs", "sequence": "string"}, {"name": "translation_sents", "sequence": "string"}]}], "splits": [{"name": "my", "num_bytes": 1457184817, "num_examples": 58619}], "download_size": 0, "dataset_size": 1457184817}, {"config_name": "my_refined", "features": [{"name": "article_title", "dtype": "string"}, {"name": "article_text", "dtype": "string"}, {"name": "entries", "list": [{"name": "id", "dtype": 
"string"}, {"name": "original", "dtype": "string"}, {"name": "original_sents", "sequence": "string"}, {"name": "parse_tokens", "sequence": {"sequence": "string"}}, {"name": "passage", "struct": [{"name": "en_lang_token_map", "struct": [{"name": "0", "dtype": "int64"}, {"name": "1", "dtype": "int64"}, {"name": "10", "dtype": "int64"}, {"name": "100", "dtype": "int64"}, {"name": "101", "dtype": "int64"}, {"name": "102", "dtype": "int64"}, {"name": "103", "dtype": "int64"}, {"name": "104", "dtype": "int64"}, {"name": "105", "dtype": "int64"}, {"name": "106", "dtype": "int64"}, {"name": "107", "dtype": "int64"}, {"name": "108", "dtype": "int64"}, {"name": "109", "dtype": "int64"}, {"name": "11", "dtype": "int64"}, {"name": "110", "dtype": "int64"}, {"name": "111", "dtype": "int64"}, {"name": "112", "dtype": "int64"}, {"name": "113", "dtype": "int64"}, {"name": "114", "dtype": "int64"}, {"name": "115", "dtype": "int64"}, {"name": "116", "dtype": "int64"}, {"name": "117", "dtype": "int64"}, {"name": "118", "dtype": "int64"}, {"name": "119", "dtype": "int64"}, {"name": "12", "dtype": "int64"}, {"name": "120", "dtype": "int64"}, {"name": "121", "dtype": "int64"}, {"name": "122", "dtype": "int64"}, {"name": "123", "dtype": "int64"}, {"name": "124", "dtype": "int64"}, {"name": "125", "dtype": "int64"}, {"name": "126", "dtype": "int64"}, {"name": "127", "dtype": "int64"}, {"name": "128", "dtype": "int64"}, {"name": "129", "dtype": "int64"}, {"name": "13", "dtype": "int64"}, {"name": "130", "dtype": "int64"}, {"name": "131", "dtype": "int64"}, {"name": "132", "dtype": "int64"}, {"name": "133", "dtype": "int64"}, {"name": "134", "dtype": "int64"}, {"name": "135", "dtype": "int64"}, {"name": "136", "dtype": "int64"}, {"name": "137", "dtype": "int64"}, {"name": "138", "dtype": "int64"}, {"name": "139", "dtype": "int64"}, {"name": "14", "dtype": "int64"}, {"name": "140", "dtype": "null"}, {"name": "141", "dtype": "int64"}, {"name": "142", "dtype": "int64"}, {"name": "143", 
"dtype": "int64"}, {"name": "144", "dtype": "int64"}, {"name": "145", "dtype": "int64"}, {"name": "146", "dtype": "int64"}, {"name": "147", "dtype": "int64"}, {"name": "148", "dtype": "int64"}, {"name": "149", "dtype": "int64"}, {"name": "15", "dtype": "int64"}, {"name": "150", "dtype": "int64"}, {"name": "151", "dtype": "int64"}, {"name": "152", "dtype": "int64"}, {"name": "153", "dtype": "int64"}, {"name": "154", "dtype": "int64"}, {"name": "155", "dtype": "null"}, {"name": "156", "dtype": "int64"}, {"name": "157", "dtype": "int64"}, {"name": "158", "dtype": "int64"}, {"name": "159", "dtype": "int64"}, {"name": "16", "dtype": "int64"}, {"name": "160", "dtype": "int64"}, {"name": "161", "dtype": "int64"}, {"name": "162", "dtype": "int64"}, {"name": "163", "dtype": "int64"}, {"name": "164", "dtype": "int64"}, {"name": "165", "dtype": "int64"}, {"name": "166", "dtype": "null"}, {"name": "167", "dtype": "int64"}, {"name": "168", "dtype": "int64"}, {"name": "169", "dtype": "null"}, {"name": "17", "dtype": "int64"}, {"name": "170", "dtype": "int64"}, {"name": "171", "dtype": "int64"}, {"name": "172", "dtype": "int64"}, {"name": "173", "dtype": "int64"}, {"name": "174", "dtype": "int64"}, {"name": "175", "dtype": "int64"}, {"name": "176", "dtype": "int64"}, {"name": "177", "dtype": "int64"}, {"name": "178", "dtype": "int64"}, {"name": "179", "dtype": "int64"}, {"name": "18", "dtype": "int64"}, {"name": "180", "dtype": "int64"}, {"name": "181", "dtype": "int64"}, {"name": "182", "dtype": "null"}, {"name": "183", "dtype": "int64"}, {"name": "184", "dtype": "int64"}, {"name": "185", "dtype": "int64"}, {"name": "186", "dtype": "null"}, {"name": "187", "dtype": "null"}, {"name": "188", "dtype": "int64"}, {"name": "189", "dtype": "int64"}, {"name": "19", "dtype": "int64"}, {"name": "190", "dtype": "int64"}, {"name": "191", "dtype": "int64"}, {"name": "192", "dtype": "int64"}, {"name": "193", "dtype": "int64"}, {"name": "194", "dtype": "int64"}, {"name": "195", "dtype": 
"int64"}, {"name": "196", "dtype": "int64"}, {"name": "197", "dtype": "int64"}, {"name": "198", "dtype": "int64"}, {"name": "199", "dtype": "int64"}, {"name": "2", "dtype": "int64"}, {"name": "20", "dtype": "int64"}, {"name": "200", "dtype": "int64"}, {"name": "201", "dtype": "int64"}, {"name": "202", "dtype": "null"}, {"name": "203", "dtype": "null"}, {"name": "204", "dtype": "int64"}, {"name": "205", "dtype": "int64"}, {"name": "206", "dtype": "null"}, {"name": "207", "dtype": "int64"}, {"name": "208", "dtype": "null"}, {"name": "209", "dtype": "int64"}, {"name": "21", "dtype": "int64"}, {"name": "210", "dtype": "int64"}, {"name": "211", "dtype": "int64"}, {"name": "212", "dtype": "null"}, {"name": "213", "dtype": "int64"}, {"name": "214", "dtype": "int64"}, {"name": "215", "dtype": "int64"}, {"name": "216", "dtype": "null"}, {"name": "217", "dtype": "int64"}, {"name": "218", "dtype": "int64"}, {"name": "219", "dtype": "null"}, {"name": "22", "dtype": "int64"}, {"name": "220", "dtype": "int64"}, {"name": "221", "dtype": "int64"}, {"name": "222", "dtype": "int64"}, {"name": "223", "dtype": "int64"}, {"name": "224", "dtype": "null"}, {"name": "225", "dtype": "int64"}, {"name": "226", "dtype": "null"}, {"name": "227", "dtype": "null"}, {"name": "228", "dtype": "int64"}, {"name": "229", "dtype": "int64"}, {"name": "23", "dtype": "int64"}, {"name": "230", "dtype": "null"}, {"name": "231", "dtype": "int64"}, {"name": "232", "dtype": "int64"}, {"name": "233", "dtype": "int64"}, {"name": "234", "dtype": "null"}, {"name": "235", "dtype": "int64"}, {"name": "236", "dtype": "int64"}, {"name": "237", "dtype": "null"}, {"name": "238", "dtype": "int64"}, {"name": "239", "dtype": "int64"}, {"name": "24", "dtype": "int64"}, {"name": "240", "dtype": "int64"}, {"name": "241", "dtype": "int64"}, {"name": "242", "dtype": "int64"}, {"name": "243", "dtype": "int64"}, {"name": "244", "dtype": "int64"}, {"name": "245", "dtype": "int64"}, {"name": "246", "dtype": "null"}, {"name": "247", 
"dtype": "null"}, {"name": "248", "dtype": "null"}, {"name": "249", "dtype": "int64"}, {"name": "25", "dtype": "int64"}, {"name": "250", "dtype": "int64"}, {"name": "251", "dtype": "null"}, {"name": "252", "dtype": "null"}, {"name": "253", "dtype": "int64"}, {"name": "254", "dtype": "int64"}, {"name": "255", "dtype": "null"}, {"name": "256", "dtype": "int64"}, {"name": "257", "dtype": "null"}, {"name": "258", "dtype": "null"}, {"name": "259", "dtype": "int64"}, {"name": "26", "dtype": "int64"}, {"name": "260", "dtype": "null"}, {"name": "261", "dtype": "null"}, {"name": "262", "dtype": "int64"}, {"name": "263", "dtype": "int64"}, {"name": "264", "dtype": "null"}, {"name": "265", "dtype": "null"}, {"name": "266", "dtype": "null"}, {"name": "267", "dtype": "int64"}, {"name": "268", "dtype": "null"}, {"name": "269", "dtype": "int64"}, {"name": "27", "dtype": "int64"}, {"name": "270", "dtype": "int64"}, {"name": "271", "dtype": "null"}, {"name": "272", "dtype": "null"}, {"name": "273", "dtype": "null"}, {"name": "274", "dtype": "int64"}, {"name": "275", "dtype": "int64"}, {"name": "276", "dtype": "int64"}, {"name": "277", "dtype": "int64"}, {"name": "278", "dtype": "null"}, {"name": "279", "dtype": "int64"}, {"name": "28", "dtype": "int64"}, {"name": "280", "dtype": "int64"}, {"name": "281", "dtype": "int64"}, {"name": "282", "dtype": "int64"}, {"name": "283", "dtype": "int64"}, {"name": "284", "dtype": "int64"}, {"name": "285", "dtype": "int64"}, {"name": "286", "dtype": "null"}, {"name": "287", "dtype": "null"}, {"name": "288", "dtype": "null"}, {"name": "289", "dtype": "null"}, {"name": "29", "dtype": "int64"}, {"name": "290", "dtype": "int64"}, {"name": "291", "dtype": "null"}, {"name": "292", "dtype": "null"}, {"name": "293", "dtype": "null"}, {"name": "294", "dtype": "null"}, {"name": "295", "dtype": "null"}, {"name": "296", "dtype": "null"}, {"name": "297", "dtype": "null"}, {"name": "298", "dtype": "null"}, {"name": "299", "dtype": "null"}, {"name": "3", 
"dtype": "int64"}, {"name": "30", "dtype": "int64"}, {"name": "300", "dtype": "null"}, {"name": "301", "dtype": "null"}, {"name": "302", "dtype": "null"}, {"name": "303", "dtype": "null"}, {"name": "304", "dtype": "null"}, {"name": "305", "dtype": "null"}, {"name": "306", "dtype": "null"}, {"name": "307", "dtype": "null"}, {"name": "308", "dtype": "null"}, {"name": "309", "dtype": "null"}, {"name": "31", "dtype": "int64"}, {"name": "310", "dtype": "null"}, {"name": "311", "dtype": "null"}, {"name": "312", "dtype": "null"}, {"name": "313", "dtype": "null"}, {"name": "314", "dtype": "null"}, {"name": "315", "dtype": "null"}, {"name": "316", "dtype": "null"}, {"name": "317", "dtype": "null"}, {"name": "318", "dtype": "null"}, {"name": "319", "dtype": "null"}, {"name": "32", "dtype": "int64"}, {"name": "320", "dtype": "null"}, {"name": "321", "dtype": "null"}, {"name": "322", "dtype": "null"}, {"name": "323", "dtype": "null"}, {"name": "324", "dtype": "null"}, {"name": "325", "dtype": "null"}, {"name": "326", "dtype": "null"}, {"name": "327", "dtype": "null"}, {"name": "328", "dtype": "null"}, {"name": "329", "dtype": "null"}, {"name": "33", "dtype": "int64"}, {"name": "330", "dtype": "null"}, {"name": "331", "dtype": "null"}, {"name": "332", "dtype": "null"}, {"name": "333", "dtype": "null"}, {"name": "334", "dtype": "null"}, {"name": "335", "dtype": "null"}, {"name": "336", "dtype": "null"}, {"name": "337", "dtype": "null"}, {"name": "338", "dtype": "null"}, {"name": "339", "dtype": "null"}, {"name": "34", "dtype": "int64"}, {"name": "340", "dtype": "null"}, {"name": "341", "dtype": "null"}, {"name": "342", "dtype": "null"}, {"name": "343", "dtype": "null"}, {"name": "344", "dtype": "null"}, {"name": "345", "dtype": "null"}, {"name": "346", "dtype": "null"}, {"name": "347", "dtype": "null"}, {"name": "348", "dtype": "null"}, {"name": "349", "dtype": "null"}, {"name": "35", "dtype": "int64"}, {"name": "350", "dtype": "null"}, {"name": "351", "dtype": "null"}, {"name": 
"352", "dtype": "null"}, {"name": "353", "dtype": "null"}, {"name": "354", "dtype": "null"}, {"name": "355", "dtype": "null"}, {"name": "356", "dtype": "null"}, {"name": "357", "dtype": "null"}, {"name": "358", "dtype": "null"}, {"name": "359", "dtype": "null"}, {"name": "36", "dtype": "int64"}, {"name": "360", "dtype": "null"}, {"name": "361", "dtype": "null"}, {"name": "362", "dtype": "null"}, {"name": "363", "dtype": "null"}, {"name": "364", "dtype": "null"}, {"name": "365", "dtype": "null"}, {"name": "366", "dtype": "null"}, {"name": "367", "dtype": "null"}, {"name": "368", "dtype": "null"}, {"name": "369", "dtype": "null"}, {"name": "37", "dtype": "int64"}, {"name": "370", "dtype": "null"}, {"name": "371", "dtype": "null"}, {"name": "372", "dtype": "null"}, {"name": "373", "dtype": "null"}, {"name": "374", "dtype": "null"}, {"name": "375", "dtype": "null"}, {"name": "376", "dtype": "null"}, {"name": "377", "dtype": "null"}, {"name": "378", "dtype": "null"}, {"name": "379", "dtype": "null"}, {"name": "38", "dtype": "int64"}, {"name": "380", "dtype": "null"}, {"name": "381", "dtype": "null"}, {"name": "382", "dtype": "null"}, {"name": "383", "dtype": "null"}, {"name": "384", "dtype": "null"}, {"name": "385", "dtype": "null"}, {"name": "386", "dtype": "null"}, {"name": "387", "dtype": "null"}, {"name": "388", "dtype": "null"}, {"name": "389", "dtype": "null"}, {"name": "39", "dtype": "int64"}, {"name": "390", "dtype": "null"}, {"name": "391", "dtype": "null"}, {"name": "392", "dtype": "null"}, {"name": "393", "dtype": "null"}, {"name": "394", "dtype": "null"}, {"name": "395", "dtype": "null"}, {"name": "396", "dtype": "null"}, {"name": "397", "dtype": "null"}, {"name": "4", "dtype": "int64"}, {"name": "40", "dtype": "int64"}, {"name": "41", "dtype": "int64"}, {"name": "42", "dtype": "int64"}, {"name": "43", "dtype": "int64"}, {"name": "44", "dtype": "int64"}, {"name": "45", "dtype": "int64"}, {"name": "46", "dtype": "int64"}, {"name": "47", "dtype": "int64"}, 
{"name": "48", "dtype": "int64"}, {"name": "49", "dtype": "int64"}, {"name": "5", "dtype": "int64"}, {"name": "50", "dtype": "int64"}, {"name": "51", "dtype": "int64"}, {"name": "52", "dtype": "int64"}, {"name": "53", "dtype": "int64"}, {"name": "54", "dtype": "int64"}, {"name": "55", "dtype": "int64"}, {"name": "56", "dtype": "int64"}, {"name": "57", "dtype": "int64"}, {"name": "58", "dtype": "int64"}, {"name": "59", "dtype": "int64"}, {"name": "6", "dtype": "int64"}, {"name": "60", "dtype": "int64"}, {"name": "61", "dtype": "int64"}, {"name": "62", "dtype": "int64"}, {"name": "63", "dtype": "int64"}, {"name": "64", "dtype": "int64"}, {"name": "65", "dtype": "int64"}, {"name": "66", "dtype": "int64"}, {"name": "67", "dtype": "int64"}, {"name": "68", "dtype": "int64"}, {"name": "69", "dtype": "int64"}, {"name": "7", "dtype": "int64"}, {"name": "70", "dtype": "int64"}, {"name": "71", "dtype": "int64"}, {"name": "72", "dtype": "int64"}, {"name": "73", "dtype": "int64"}, {"name": "74", "dtype": "int64"}, {"name": "75", "dtype": "int64"}, {"name": "76", "dtype": "int64"}, {"name": "77", "dtype": "int64"}, {"name": "78", "dtype": "int64"}, {"name": "79", "dtype": "int64"}, {"name": "8", "dtype": "int64"}, {"name": "80", "dtype": "int64"}, {"name": "81", "dtype": "int64"}, {"name": "82", "dtype": "int64"}, {"name": "83", "dtype": "int64"}, {"name": "84", "dtype": "int64"}, {"name": "85", "dtype": "int64"}, {"name": "86", "dtype": "int64"}, {"name": "87", "dtype": "int64"}, {"name": "88", "dtype": "int64"}, {"name": "89", "dtype": "int64"}, {"name": "9", "dtype": "int64"}, {"name": "90", "dtype": "int64"}, {"name": "91", "dtype": "int64"}, {"name": "92", "dtype": "int64"}, {"name": "93", "dtype": "int64"}, {"name": "94", "dtype": "int64"}, {"name": "95", "dtype": "int64"}, {"name": "96", "dtype": "int64"}, {"name": "97", "dtype": "int64"}, {"name": "98", "dtype": "int64"}, {"name": "99", "dtype": "int64"}]}, {"name": "en_tokens", "struct": [{"name": "0", "dtype": 
"string"}, {"name": "1", "dtype": "string"}, {"name": "10", "dtype": "string"}, {"name": "100", "dtype": "string"}, {"name": "101", "dtype": "string"}, {"name": "102", "dtype": "string"}, {"name": "103", "dtype": "string"}, {"name": "104", "dtype": "string"}, {"name": "105", "dtype": "string"}, {"name": "106", "dtype": "string"}, {"name": "107", "dtype": "string"}, {"name": "108", "dtype": "string"}, {"name": "109", "dtype": "string"}, {"name": "11", "dtype": "string"}, {"name": "110", "dtype": "string"}, {"name": "111", "dtype": "string"}, {"name": "112", "dtype": "string"}, {"name": "113", "dtype": "string"}, {"name": "114", "dtype": "string"}, {"name": "115", "dtype": "string"}, {"name": "116", "dtype": "string"}, {"name": "117", "dtype": "string"}, {"name": "118", "dtype": "string"}, {"name": "119", "dtype": "string"}, {"name": "12", "dtype": "string"}, {"name": "120", "dtype": "string"}, {"name": "121", "dtype": "string"}, {"name": "122", "dtype": "string"}, {"name": "123", "dtype": "string"}, {"name": "124", "dtype": "string"}, {"name": "125", "dtype": "string"}, {"name": "126", "dtype": "string"}, {"name": "127", "dtype": "string"}, {"name": "128", "dtype": "string"}, {"name": "129", "dtype": "string"}, {"name": "13", "dtype": "string"}, {"name": "130", "dtype": "string"}, {"name": "131", "dtype": "string"}, {"name": "132", "dtype": "string"}, {"name": "133", "dtype": "string"}, {"name": "134", "dtype": "string"}, {"name": "135", "dtype": "string"}, {"name": "136", "dtype": "string"}, {"name": "137", "dtype": "string"}, {"name": "138", "dtype": "string"}, {"name": "139", "dtype": "string"}, {"name": "14", "dtype": "string"}, {"name": "140", "dtype": "string"}, {"name": "141", "dtype": "string"}, {"name": "142", "dtype": "string"}, {"name": "143", "dtype": "string"}, {"name": "144", "dtype": "string"}, {"name": "145", "dtype": "string"}, {"name": "146", "dtype": "string"}, {"name": "147", "dtype": "string"}, {"name": "148", "dtype": "string"}, {"name": "149", 
"dtype": "string"}, {"name": "15", "dtype": "string"}, {"name": "150", "dtype": "string"}, {"name": "151", "dtype": "string"}, {"name": "152", "dtype": "string"}, {"name": "153", "dtype": "string"}, {"name": "154", "dtype": "string"}, {"name": "155", "dtype": "string"}, {"name": "156", "dtype": "string"}, {"name": "157", "dtype": "string"}, {"name": "158", "dtype": "string"}, {"name": "159", "dtype": "string"}, {"name": "16", "dtype": "string"}, {"name": "160", "dtype": "string"}, {"name": "161", "dtype": "string"}, {"name": "162", "dtype": "string"}, {"name": "163", "dtype": "string"}, {"name": "164", "dtype": "string"}, {"name": "165", "dtype": "string"}, {"name": "166", "dtype": "string"}, {"name": "167", "dtype": "string"}, {"name": "168", "dtype": "string"}, {"name": "169", "dtype": "string"}, {"name": "17", "dtype": "string"}, {"name": "170", "dtype": "string"}, {"name": "171", "dtype": "string"}, {"name": "172", "dtype": "string"}, {"name": "173", "dtype": "string"}, {"name": "174", "dtype": "string"}, {"name": "175", "dtype": "string"}, {"name": "176", "dtype": "string"}, {"name": "177", "dtype": "string"}, {"name": "178", "dtype": "string"}, {"name": "179", "dtype": "string"}, {"name": "18", "dtype": "string"}, {"name": "180", "dtype": "string"}, {"name": "181", "dtype": "string"}, {"name": "182", "dtype": "string"}, {"name": "183", "dtype": "string"}, {"name": "184", "dtype": "string"}, {"name": "185", "dtype": "string"}, {"name": "186", "dtype": "string"}, {"name": "187", "dtype": "string"}, {"name": "188", "dtype": "string"}, {"name": "189", "dtype": "string"}, {"name": "19", "dtype": "string"}, {"name": "190", "dtype": "string"}, {"name": "191", "dtype": "string"}, {"name": "192", "dtype": "string"}, {"name": "193", "dtype": "string"}, {"name": "194", "dtype": "string"}, {"name": "195", "dtype": "string"}, {"name": "196", "dtype": "string"}, {"name": "197", "dtype": "string"}, {"name": "198", "dtype": "string"}, {"name": "199", "dtype": "string"}, 
{"name": "2", "dtype": "string"}, {"name": "20", "dtype": "string"}, {"name": "200", "dtype": "string"}, {"name": "201", "dtype": "string"}, {"name": "202", "dtype": "string"}, {"name": "203", "dtype": "string"}, {"name": "204", "dtype": "string"}, {"name": "205", "dtype": "string"}, {"name": "206", "dtype": "string"}, {"name": "207", "dtype": "string"}, {"name": "208", "dtype": "string"}, {"name": "209", "dtype": "string"}, {"name": "21", "dtype": "string"}, {"name": "210", "dtype": "string"}, {"name": "211", "dtype": "string"}, {"name": "212", "dtype": "string"}, {"name": "213", "dtype": "string"}, {"name": "214", "dtype": "string"}, {"name": "215", "dtype": "string"}, {"name": "216", "dtype": "string"}, {"name": "217", "dtype": "string"}, {"name": "218", "dtype": "string"}, {"name": "219", "dtype": "string"}, {"name": "22", "dtype": "string"}, {"name": "220", "dtype": "string"}, {"name": "221", "dtype": "string"}, {"name": "222", "dtype": "string"}, {"name": "223", "dtype": "string"}, {"name": "224", "dtype": "string"}, {"name": "225", "dtype": "string"}, {"name": "226", "dtype": "string"}, {"name": "227", "dtype": "string"}, {"name": "228", "dtype": "string"}, {"name": "229", "dtype": "string"}, {"name": "23", "dtype": "string"}, {"name": "230", "dtype": "string"}, {"name": "231", "dtype": "string"}, {"name": "232", "dtype": "string"}, {"name": "233", "dtype": "string"}, {"name": "234", "dtype": "string"}, {"name": "235", "dtype": "string"}, {"name": "236", "dtype": "string"}, {"name": "237", "dtype": "string"}, {"name": "238", "dtype": "string"}, {"name": "239", "dtype": "string"}, {"name": "24", "dtype": "string"}, {"name": "240", "dtype": "string"}, {"name": "241", "dtype": "string"}, {"name": "242", "dtype": "string"}, {"name": "243", "dtype": "string"}, {"name": "244", "dtype": "string"}, {"name": "245", "dtype": "string"}, {"name": "246", "dtype": "string"}, {"name": "247", "dtype": "string"}, {"name": "248", "dtype": "string"}, {"name": "249", "dtype": 
"string"}, {"name": "25", "dtype": "string"}, {"name": "250", "dtype": "string"}, {"name": "251", "dtype": "string"}, {"name": "252", "dtype": "string"}, {"name": "253", "dtype": "string"}, {"name": "254", "dtype": "string"}, {"name": "255", "dtype": "string"}, {"name": "256", "dtype": "string"}, {"name": "257", "dtype": "string"}, {"name": "258", "dtype": "string"}, {"name": "259", "dtype": "string"}, {"name": "26", "dtype": "string"}, {"name": "260", "dtype": "string"}, {"name": "261", "dtype": "string"}, {"name": "262", "dtype": "string"}, {"name": "263", "dtype": "string"}, {"name": "264", "dtype": "string"}, {"name": "265", "dtype": "string"}, {"name": "266", "dtype": "string"}, {"name": "267", "dtype": "string"}, {"name": "268", "dtype": "string"}, {"name": "269", "dtype": "string"}, {"name": "27", "dtype": "string"}, {"name": "270", "dtype": "string"}, {"name": "271", "dtype": "string"}, {"name": "272", "dtype": "string"}, {"name": "273", "dtype": "string"}, {"name": "274", "dtype": "string"}, {"name": "275", "dtype": "string"}, {"name": "276", "dtype": "string"}, {"name": "277", "dtype": "string"}, {"name": "278", "dtype": "string"}, {"name": "279", "dtype": "string"}, {"name": "28", "dtype": "string"}, {"name": "280", "dtype": "string"}, {"name": "281", "dtype": "string"}, {"name": "282", "dtype": "string"}, {"name": "283", "dtype": "string"}, {"name": "284", "dtype": "string"}, {"name": "285", "dtype": "string"}, {"name": "286", "dtype": "string"}, {"name": "287", "dtype": "string"}, {"name": "288", "dtype": "string"}, {"name": "289", "dtype": "string"}, {"name": "29", "dtype": "string"}, {"name": "290", "dtype": "string"}, {"name": "291", "dtype": "null"}, {"name": "292", "dtype": "null"}, {"name": "293", "dtype": "null"}, {"name": "294", "dtype": "null"}, {"name": "295", "dtype": "null"}, {"name": "296", "dtype": "null"}, {"name": "297", "dtype": "null"}, {"name": "298", "dtype": "null"}, {"name": "299", "dtype": "null"}, {"name": "3", "dtype": 
"string"}, {"name": "30", "dtype": "string"}, {"name": "300", "dtype": "null"}, {"name": "301", "dtype": "null"}, {"name": "302", "dtype": "null"}, {"name": "303", "dtype": "null"}, {"name": "304", "dtype": "null"}, {"name": "305", "dtype": "null"}, {"name": "306", "dtype": "null"}, {"name": "307", "dtype": "null"}, {"name": "308", "dtype": "null"}, {"name": "309", "dtype": "null"}, {"name": "31", "dtype": "string"}, {"name": "310", "dtype": "null"}, {"name": "311", "dtype": "null"}, {"name": "312", "dtype": "null"}, {"name": "313", "dtype": "null"}, {"name": "314", "dtype": "null"}, {"name": "315", "dtype": "null"}, {"name": "316", "dtype": "null"}, {"name": "317", "dtype": "null"}, {"name": "318", "dtype": "null"}, {"name": "319", "dtype": "null"}, {"name": "32", "dtype": "string"}, {"name": "320", "dtype": "null"}, {"name": "321", "dtype": "null"}, {"name": "322", "dtype": "null"}, {"name": "323", "dtype": "null"}, {"name": "324", "dtype": "null"}, {"name": "325", "dtype": "null"}, {"name": "326", "dtype": "null"}, {"name": "327", "dtype": "null"}, {"name": "328", "dtype": "null"}, {"name": "329", "dtype": "null"}, {"name": "33", "dtype": "string"}, {"name": "330", "dtype": "null"}, {"name": "331", "dtype": "null"}, {"name": "332", "dtype": "null"}, {"name": "333", "dtype": "null"}, {"name": "334", "dtype": "null"}, {"name": "335", "dtype": "null"}, {"name": "336", "dtype": "null"}, {"name": "337", "dtype": "null"}, {"name": "338", "dtype": "null"}, {"name": "339", "dtype": "null"}, {"name": "34", "dtype": "string"}, {"name": "340", "dtype": "null"}, {"name": "341", "dtype": "null"}, {"name": "342", "dtype": "null"}, {"name": "343", "dtype": "null"}, {"name": "344", "dtype": "null"}, {"name": "345", "dtype": "null"}, {"name": "346", "dtype": "null"}, {"name": "347", "dtype": "null"}, {"name": "348", "dtype": "null"}, {"name": "349", "dtype": "null"}, {"name": "35", "dtype": "string"}, {"name": "350", "dtype": "null"}, {"name": "351", "dtype": "null"}, {"name": 
"352", "dtype": "null"}, {"name": "353", "dtype": "null"}, {"name": "354", "dtype": "null"}, {"name": "355", "dtype": "null"}, {"name": "356", "dtype": "null"}, {"name": "357", "dtype": "null"}, {"name": "358", "dtype": "null"}, {"name": "359", "dtype": "null"}, {"name": "36", "dtype": "string"}, {"name": "360", "dtype": "null"}, {"name": "361", "dtype": "null"}, {"name": "362", "dtype": "null"}, {"name": "363", "dtype": "null"}, {"name": "364", "dtype": "null"}, {"name": "365", "dtype": "null"}, {"name": "366", "dtype": "null"}, {"name": "367", "dtype": "null"}, {"name": "368", "dtype": "null"}, {"name": "369", "dtype": "null"}, {"name": "37", "dtype": "string"}, {"name": "370", "dtype": "null"}, {"name": "371", "dtype": "null"}, {"name": "372", "dtype": "null"}, {"name": "373", "dtype": "null"}, {"name": "374", "dtype": "null"}, {"name": "375", "dtype": "null"}, {"name": "376", "dtype": "null"}, {"name": "377", "dtype": "null"}, {"name": "378", "dtype": "null"}, {"name": "379", "dtype": "null"}, {"name": "38", "dtype": "string"}, {"name": "380", "dtype": "null"}, {"name": "381", "dtype": "null"}, {"name": "382", "dtype": "null"}, {"name": "383", "dtype": "null"}, {"name": "384", "dtype": "null"}, {"name": "385", "dtype": "null"}, {"name": "386", "dtype": "null"}, {"name": "387", "dtype": "null"}, {"name": "388", "dtype": "null"}, {"name": "389", "dtype": "null"}, {"name": "39", "dtype": "string"}, {"name": "390", "dtype": "null"}, {"name": "391", "dtype": "null"}, {"name": "392", "dtype": "null"}, {"name": "393", "dtype": "null"}, {"name": "394", "dtype": "null"}, {"name": "395", "dtype": "null"}, {"name": "396", "dtype": "null"}, {"name": "397", "dtype": "null"}, {"name": "4", "dtype": "string"}, {"name": "40", "dtype": "string"}, {"name": "41", "dtype": "string"}, {"name": "42", "dtype": "string"}, {"name": "43", "dtype": "string"}, {"name": "44", "dtype": "string"}, {"name": "45", "dtype": "string"}, {"name": "46", "dtype": "string"}, {"name": "47", "dtype": 
"string"}, {"name": "48", "dtype": "string"}, {"name": "49", "dtype": "string"}, {"name": "5", "dtype": "string"}, {"name": "50", "dtype": "string"}, {"name": "51", "dtype": "string"}, {"name": "52", "dtype": "string"}, {"name": "53", "dtype": "string"}, {"name": "54", "dtype": "string"}, {"name": "55", "dtype": "string"}, {"name": "56", "dtype": "string"}, {"name": "57", "dtype": "string"}, {"name": "58", "dtype": "string"}, {"name": "59", "dtype": "string"}, {"name": "6", "dtype": "string"}, {"name": "60", "dtype": "string"}, {"name": "61", "dtype": "string"}, {"name": "62", "dtype": "string"}, {"name": "63", "dtype": "string"}, {"name": "64", "dtype": "string"}, {"name": "65", "dtype": "string"}, {"name": "66", "dtype": "string"}, {"name": "67", "dtype": "string"}, {"name": "68", "dtype": "string"}, {"name": "69", "dtype": "string"}, {"name": "7", "dtype": "string"}, {"name": "70", "dtype": "string"}, {"name": "71", "dtype": "string"}, {"name": "72", "dtype": "string"}, {"name": "73", "dtype": "string"}, {"name": "74", "dtype": "string"}, {"name": "75", "dtype": "string"}, {"name": "76", "dtype": "string"}, {"name": "77", "dtype": "string"}, {"name": "78", "dtype": "string"}, {"name": "79", "dtype": "string"}, {"name": "8", "dtype": "string"}, {"name": "80", "dtype": "string"}, {"name": "81", "dtype": "string"}, {"name": "82", "dtype": "string"}, {"name": "83", "dtype": "string"}, {"name": "84", "dtype": "string"}, {"name": "85", "dtype": "string"}, {"name": "86", "dtype": "string"}, {"name": "87", "dtype": "string"}, {"name": "88", "dtype": "string"}, {"name": "89", "dtype": "string"}, {"name": "9", "dtype": "string"}, {"name": "90", "dtype": "string"}, {"name": "91", "dtype": "string"}, {"name": "92", "dtype": "string"}, {"name": "93", "dtype": "string"}, {"name": "94", "dtype": "string"}, {"name": "95", "dtype": "string"}, {"name": "96", "dtype": "string"}, {"name": "97", "dtype": "string"}, {"name": "98", "dtype": "string"}, {"name": "99", "dtype": 
"string"}]}, {"name": "lang_tokens", "struct": [{"name": "0", "dtype": "string"}, {"name": "1", "dtype": "string"}, {"name": "10", "dtype": "string"}, {"name": "100", "dtype": "string"}, {"name": "101", "dtype": "string"}, {"name": "102", "dtype": "string"}, {"name": "103", "dtype": "string"}, {"name": "104", "dtype": "string"}, {"name": "105", "dtype": "string"}, {"name": "106", "dtype": "string"}, {"name": "107", "dtype": "string"}, {"name": "108", "dtype": "string"}, {"name": "109", "dtype": "string"}, {"name": "11", "dtype": "string"}, {"name": "110", "dtype": "string"}, {"name": "111", "dtype": "string"}, {"name": "112", "dtype": "string"}, {"name": "113", "dtype": "string"}, {"name": "114", "dtype": "string"}, {"name": "115", "dtype": "string"}, {"name": "116", "dtype": "string"}, {"name": "117", "dtype": "string"}, {"name": "118", "dtype": "string"}, {"name": "119", "dtype": "string"}, {"name": "12", "dtype": "string"}, {"name": "120", "dtype": "string"}, {"name": "121", "dtype": "string"}, {"name": "122", "dtype": "string"}, {"name": "123", "dtype": "string"}, {"name": "124", "dtype": "string"}, {"name": "125", "dtype": "string"}, {"name": "126", "dtype": "string"}, {"name": "127", "dtype": "string"}, {"name": "128", "dtype": "string"}, {"name": "129", "dtype": "string"}, {"name": "13", "dtype": "string"}, {"name": "130", "dtype": "string"}, {"name": "131", "dtype": "string"}, {"name": "132", "dtype": "string"}, {"name": "133", "dtype": "string"}, {"name": "134", "dtype": "string"}, {"name": "135", "dtype": "string"}, {"name": "136", "dtype": "string"}, {"name": "137", "dtype": "string"}, {"name": "138", "dtype": "string"}, {"name": "139", "dtype": "string"}, {"name": "14", "dtype": "string"}, {"name": "140", "dtype": "string"}, {"name": "141", "dtype": "string"}, {"name": "142", "dtype": "string"}, {"name": "143", "dtype": "string"}, {"name": "144", "dtype": "string"}, {"name": "145", "dtype": "string"}, {"name": "146", "dtype": "string"}, {"name": "147", 
"dtype": "string"}, {"name": "148", "dtype": "string"}, {"name": "149", "dtype": "string"}, {"name": "15", "dtype": "string"}, {"name": "150", "dtype": "string"}, {"name": "151", "dtype": "string"}, {"name": "152", "dtype": "string"}, {"name": "153", "dtype": "string"}, {"name": "154", "dtype": "string"}, {"name": "155", "dtype": "string"}, {"name": "156", "dtype": "string"}, {"name": "157", "dtype": "string"}, {"name": "158", "dtype": "string"}, {"name": "159", "dtype": "string"}, {"name": "16", "dtype": "string"}, {"name": "160", "dtype": "string"}, {"name": "161", "dtype": "string"}, {"name": "162", "dtype": "string"}, {"name": "163", "dtype": "string"}, {"name": "164", "dtype": "string"}, {"name": "165", "dtype": "string"}, {"name": "166", "dtype": "string"}, {"name": "167", "dtype": "string"}, {"name": "168", "dtype": "string"}, {"name": "169", "dtype": "string"}, {"name": "17", "dtype": "string"}, {"name": "170", "dtype": "string"}, {"name": "171", "dtype": "string"}, {"name": "172", "dtype": "string"}, {"name": "173", "dtype": "string"}, {"name": "174", "dtype": "string"}, {"name": "175", "dtype": "string"}, {"name": "176", "dtype": "string"}, {"name": "177", "dtype": "string"}, {"name": "178", "dtype": "string"}, {"name": "179", "dtype": "string"}, {"name": "18", "dtype": "string"}, {"name": "180", "dtype": "string"}, {"name": "181", "dtype": "string"}, {"name": "182", "dtype": "string"}, {"name": "183", "dtype": "string"}, {"name": "184", "dtype": "string"}, {"name": "185", "dtype": "string"}, {"name": "186", "dtype": "string"}, {"name": "187", "dtype": "string"}, {"name": "188", "dtype": "string"}, {"name": "189", "dtype": "string"}, {"name": "19", "dtype": "string"}, {"name": "190", "dtype": "string"}, {"name": "191", "dtype": "string"}, {"name": "192", "dtype": "string"}, {"name": "193", "dtype": "string"}, {"name": "194", "dtype": "string"}, {"name": "195", "dtype": "string"}, {"name": "196", "dtype": "string"}, {"name": "197", "dtype": "string"}, 
{"name": "198", "dtype": "string"}, {"name": "199", "dtype": "string"}, {"name": "2", "dtype": "string"}, {"name": "20", "dtype": "string"}, {"name": "200", "dtype": "string"}, {"name": "201", "dtype": "string"}, {"name": "202", "dtype": "string"}, {"name": "203", "dtype": "string"}, {"name": "204", "dtype": "string"}, {"name": "205", "dtype": "string"}, {"name": "206", "dtype": "string"}, {"name": "207", "dtype": "string"}, {"name": "208", "dtype": "string"}, {"name": "209", "dtype": "string"}, {"name": "21", "dtype": "string"}, {"name": "210", "dtype": "string"}, {"name": "211", "dtype": "string"}, {"name": "212", "dtype": "string"}, {"name": "213", "dtype": "string"}, {"name": "214", "dtype": "string"}, {"name": "215", "dtype": "string"}, {"name": "216", "dtype": "string"}, {"name": "217", "dtype": "string"}, {"name": "218", "dtype": "string"}, {"name": "219", "dtype": "string"}, {"name": "22", "dtype": "string"}, {"name": "220", "dtype": "string"}, {"name": "221", "dtype": "string"}, {"name": "222", "dtype": "string"}, {"name": "223", "dtype": "string"}, {"name": "224", "dtype": "string"}, {"name": "225", "dtype": "string"}, {"name": "226", "dtype": "string"}, {"name": "227", "dtype": "string"}, {"name": "228", "dtype": "string"}, {"name": "229", "dtype": "string"}, {"name": "23", "dtype": "string"}, {"name": "230", "dtype": "string"}, {"name": "231", "dtype": "string"}, {"name": "232", "dtype": "string"}, {"name": "233", "dtype": "string"}, {"name": "234", "dtype": "string"}, {"name": "235", "dtype": "string"}, {"name": "236", "dtype": "string"}, {"name": "237", "dtype": "string"}, {"name": "238", "dtype": "string"}, {"name": "239", "dtype": "string"}, {"name": "24", "dtype": "string"}, {"name": "240", "dtype": "string"}, {"name": "241", "dtype": "string"}, {"name": "242", "dtype": "string"}, {"name": "243", "dtype": "string"}, {"name": "244", "dtype": "string"}, {"name": "245", "dtype": "string"}, {"name": "246", "dtype": "string"}, {"name": "247", "dtype": 
"string"}, {"name": "248", "dtype": "string"}, {"name": "249", "dtype": "string"}, {"name": "25", "dtype": "string"}, {"name": "250", "dtype": "string"}, {"name": "251", "dtype": "string"}, {"name": "252", "dtype": "string"}, {"name": "253", "dtype": "string"}, {"name": "254", "dtype": "string"}, {"name": "255", "dtype": "string"}, {"name": "256", "dtype": "string"}, {"name": "257", "dtype": "string"}, {"name": "258", "dtype": "string"}, {"name": "259", "dtype": "string"}, {"name": "26", "dtype": "string"}, {"name": "260", "dtype": "string"}, {"name": "261", "dtype": "string"}, {"name": "262", "dtype": "string"}, {"name": "263", "dtype": "string"}, {"name": "264", "dtype": "string"}, {"name": "265", "dtype": "string"}, {"name": "266", "dtype": "string"}, {"name": "267", "dtype": "string"}, {"name": "268", "dtype": "string"}, {"name": "269", "dtype": "string"}, {"name": "27", "dtype": "string"}, {"name": "270", "dtype": "string"}, {"name": "271", "dtype": "string"}, {"name": "272", "dtype": "string"}, {"name": "273", "dtype": "string"}, {"name": "274", "dtype": "string"}, {"name": "275", "dtype": "string"}, {"name": "276", "dtype": "string"}, {"name": "277", "dtype": "string"}, {"name": "278", "dtype": "string"}, {"name": "279", "dtype": "string"}, {"name": "28", "dtype": "string"}, {"name": "280", "dtype": "string"}, {"name": "281", "dtype": "null"}, {"name": "282", "dtype": "null"}, {"name": "283", "dtype": "null"}, {"name": "284", "dtype": "null"}, {"name": "285", "dtype": "null"}, {"name": "286", "dtype": "null"}, {"name": "287", "dtype": "null"}, {"name": "288", "dtype": "null"}, {"name": "289", "dtype": "null"}, {"name": "29", "dtype": "string"}, {"name": "290", "dtype": "null"}, {"name": "291", "dtype": "null"}, {"name": "292", "dtype": "null"}, {"name": "293", "dtype": "null"}, {"name": "294", "dtype": "null"}, {"name": "295", "dtype": "null"}, {"name": "296", "dtype": "null"}, {"name": "297", "dtype": "null"}, {"name": "298", "dtype": "null"}, {"name": 
"299", "dtype": "null"}, {"name": "3", "dtype": "string"}, {"name": "30", "dtype": "string"}, {"name": "300", "dtype": "null"}, {"name": "301", "dtype": "null"}, {"name": "302", "dtype": "null"}, {"name": "303", "dtype": "null"}, {"name": "304", "dtype": "null"}, {"name": "305", "dtype": "null"}, {"name": "306", "dtype": "null"}, {"name": "307", "dtype": "null"}, {"name": "308", "dtype": "null"}, {"name": "309", "dtype": "null"}, {"name": "31", "dtype": "string"}, {"name": "310", "dtype": "null"}, {"name": "311", "dtype": "null"}, {"name": "312", "dtype": "null"}, {"name": "313", "dtype": "null"}, {"name": "314", "dtype": "null"}, {"name": "315", "dtype": "null"}, {"name": "316", "dtype": "null"}, {"name": "317", "dtype": "null"}, {"name": "318", "dtype": "null"}, {"name": "319", "dtype": "null"}, {"name": "32", "dtype": "string"}, {"name": "320", "dtype": "null"}, {"name": "321", "dtype": "null"}, {"name": "322", "dtype": "null"}, {"name": "323", "dtype": "null"}, {"name": "324", "dtype": "null"}, {"name": "325", "dtype": "null"}, {"name": "326", "dtype": "null"}, {"name": "327", "dtype": "null"}, {"name": "328", "dtype": "null"}, {"name": "329", "dtype": "null"}, {"name": "33", "dtype": "string"}, {"name": "330", "dtype": "null"}, {"name": "331", "dtype": "null"}, {"name": "332", "dtype": "null"}, {"name": "333", "dtype": "null"}, {"name": "334", "dtype": "null"}, {"name": "335", "dtype": "null"}, {"name": "336", "dtype": "null"}, {"name": "337", "dtype": "null"}, {"name": "338", "dtype": "null"}, {"name": "339", "dtype": "null"}, {"name": "34", "dtype": "string"}, {"name": "340", "dtype": "null"}, {"name": "341", "dtype": "null"}, {"name": "342", "dtype": "null"}, {"name": "343", "dtype": "null"}, {"name": "344", "dtype": "null"}, {"name": "345", "dtype": "null"}, {"name": "346", "dtype": "null"}, {"name": "347", "dtype": "null"}, {"name": "348", "dtype": "null"}, {"name": "349", "dtype": "null"}, {"name": "35", "dtype": "string"}, {"name": "350", "dtype": 
"null"}, {"name": "351", "dtype": "null"}, {"name": "352", "dtype": "null"}, {"name": "353", "dtype": "null"}, {"name": "354", "dtype": "null"}, {"name": "355", "dtype": "null"}, {"name": "356", "dtype": "null"}, {"name": "357", "dtype": "null"}, {"name": "358", "dtype": "null"}, {"name": "359", "dtype": "null"}, {"name": "36", "dtype": "string"}, {"name": "360", "dtype": "null"}, {"name": "361", "dtype": "null"}, {"name": "362", "dtype": "null"}, {"name": "363", "dtype": "null"}, {"name": "364", "dtype": "null"}, {"name": "365", "dtype": "null"}, {"name": "366", "dtype": "null"}, {"name": "367", "dtype": "null"}, {"name": "368", "dtype": "null"}, {"name": "369", "dtype": "null"}, {"name": "37", "dtype": "string"}, {"name": "370", "dtype": "null"}, {"name": "371", "dtype": "null"}, {"name": "372", "dtype": "null"}, {"name": "373", "dtype": "null"}, {"name": "374", "dtype": "null"}, {"name": "375", "dtype": "null"}, {"name": "376", "dtype": "null"}, {"name": "38", "dtype": "string"}, {"name": "39", "dtype": "string"}, {"name": "4", "dtype": "string"}, {"name": "40", "dtype": "string"}, {"name": "41", "dtype": "string"}, {"name": "42", "dtype": "string"}, {"name": "43", "dtype": "string"}, {"name": "44", "dtype": "string"}, {"name": "45", "dtype": "string"}, {"name": "46", "dtype": "string"}, {"name": "47", "dtype": "string"}, {"name": "48", "dtype": "string"}, {"name": "49", "dtype": "string"}, {"name": "5", "dtype": "string"}, {"name": "50", "dtype": "string"}, {"name": "51", "dtype": "string"}, {"name": "52", "dtype": "string"}, {"name": "53", "dtype": "string"}, {"name": "54", "dtype": "string"}, {"name": "55", "dtype": "string"}, {"name": "56", "dtype": "string"}, {"name": "57", "dtype": "string"}, {"name": "58", "dtype": "string"}, {"name": "59", "dtype": "string"}, {"name": "6", "dtype": "string"}, {"name": "60", "dtype": "string"}, {"name": "61", "dtype": "string"}, {"name": "62", "dtype": "string"}, {"name": "63", "dtype": "string"}, {"name": "64", "dtype": 
"string"}, {"name": "65", "dtype": "string"}, {"name": "66", "dtype": "string"}, {"name": "67", "dtype": "string"}, {"name": "68", "dtype": "string"}, {"name": "69", "dtype": "string"}, {"name": "7", "dtype": "string"}, {"name": "70", "dtype": "string"}, {"name": "71", "dtype": "string"}, {"name": "72", "dtype": "string"}, {"name": "73", "dtype": "string"}, {"name": "74", "dtype": "string"}, {"name": "75", "dtype": "string"}, {"name": "76", "dtype": "string"}, {"name": "77", "dtype": "string"}, {"name": "78", "dtype": "string"}, {"name": "79", "dtype": "string"}, {"name": "8", "dtype": "string"}, {"name": "80", "dtype": "string"}, {"name": "81", "dtype": "string"}, {"name": "82", "dtype": "string"}, {"name": "83", "dtype": "string"}, {"name": "84", "dtype": "string"}, {"name": "85", "dtype": "string"}, {"name": "86", "dtype": "string"}, {"name": "87", "dtype": "string"}, {"name": "88", "dtype": "string"}, {"name": "89", "dtype": "string"}, {"name": "9", "dtype": "string"}, {"name": "90", "dtype": "string"}, {"name": "91", "dtype": "string"}, {"name": "92", "dtype": "string"}, {"name": "93", "dtype": "string"}, {"name": "94", "dtype": "string"}, {"name": "95", "dtype": "string"}, {"name": "96", "dtype": "string"}, {"name": "97", "dtype": "string"}, {"name": "98", "dtype": "string"}, {"name": "99", "dtype": "string"}]}, {"name": "parse", "list": [{"name": "children", "list": [{"name": "children", "list": [{"name": "children", "sequence": "null"}, {"name": "confidence", "dtype": "float64"}, {"name": "label", "dtype": "string"}, {"name": "span", "sequence": "int64"}]}, {"name": "confidence", "dtype": "float64"}, {"name": "label", "dtype": "string"}, {"name": "span", "sequence": "int64"}]}, {"name": "confidence", "dtype": "float64"}, {"name": "label", "dtype": "string"}, {"name": "span", "sequence": "int64"}]}, {"name": "text", "sequence": "string"}]}, {"name": "qa_pairs", "list": [{"name": "en_answer", "dtype": "string"}, {"name": "en_answer_tokens", "sequence": 
"string"}, {"name": "en_match_in_passage", "sequence": "int64"}, {"name": "en_matches_in_source", "sequence": {"sequence": "int64"}}, {"name": "frames", "list": [{"name": "argument", "dtype": "string"}, {"name": "frame", "dtype": "string"}]}, {"name": "lang_answer", "dtype": "string"}, {"name": "lang_match_in_passage", "sequence": "int64"}, {"name": "lang_matches_in_source", "sequence": {"sequence": "int64"}}, {"name": "match_disambiguated_question", "dtype": "string"}, {"name": "passage", "sequence": "string"}, {"name": "passage_id", "dtype": "string"}, {"name": "question", "dtype": "string"}]}, {"name": "repetitious_translation", "dtype": "bool"}, {"name": "source_lang", "dtype": "string"}, {"name": "source_text", "dtype": "string"}, {"name": "source_url", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "translation_probs", "sequence": "string"}, {"name": "translation_sents", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 50253843, "num_examples": 842}], "download_size": 12279811, "dataset_size": 50253843}], "configs": [{"config_name": "my", "data_files": [{"split": "my", "path": "my/my-*"}]}, {"config_name": "my_refined", "data_files": [{"split": "train", "path": "my_refined/train-*"}]}]}
|
2023-10-16T15:35:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "megawika"
More Information needed
|
[
"# Dataset Card for \"megawika\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"megawika\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"megawika\"\n\nMore Information needed"
] |
bc236dfd526e8fb6349377e406b69dc41f1cc92b
|
# Dataset Card for "msmarco-corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
namespace-Pt/msmarco-corpus
|
[
"region:us"
] |
2023-10-16T14:00:23+00:00
|
{"dataset_info": {"features": [{"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3243246889, "num_examples": 8841823}], "download_size": 1720789558, "dataset_size": 3243246889}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-16T14:07:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "msmarco-corpus"
More Information needed
|
[
"# Dataset Card for \"msmarco-corpus\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"msmarco-corpus\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"msmarco-corpus\"\n\nMore Information needed"
] |
38b44fa97cdc774e606e5b20e6e917667682443d
|
# Dataset Card for "data-kalapa-medical-chunked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thangvip/data-kalapa-medical-chunked
|
[
"region:us"
] |
2023-10-16T14:02:39+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 9804125, "num_examples": 4399}], "download_size": 4338224, "dataset_size": 9804125}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-16T14:02:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "data-kalapa-medical-chunked"
More Information needed
|
[
"# Dataset Card for \"data-kalapa-medical-chunked\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"data-kalapa-medical-chunked\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"data-kalapa-medical-chunked\"\n\nMore Information needed"
] |
c0352791135106755909230a762ab475800b8bce
|
# Dataset Card for "msmarco"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
namespace-Pt/msmarco
|
[
"region:us"
] |
2023-10-16T14:10:02+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "dev", "path": "data/dev-*"}]}], "dataset_info": {"features": [{"name": "query", "dtype": "string"}, {"name": "positive", "sequence": "string"}], "splits": [{"name": "dev", "num_bytes": 2962960, "num_examples": 6980}], "download_size": 1925216, "dataset_size": 2962960}}
|
2023-10-16T14:10:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "msmarco"
More Information needed
|
[
"# Dataset Card for \"msmarco\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"msmarco\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"msmarco\"\n\nMore Information needed"
] |
bec55f4bb43695688a381e4b7ba4941a8ff9caa7
|
265718 parallel sentences between the Russian and Qarachay-Malqar languages.
Because of the dialects of the Qarachay-Malqar language and its diphthongs, some letters are transcribed in Latin as follows:
b - б/п/ф
w - ў
q - къ
g - гъ
n - нг
Taken from: the Alan Nart epos, a Qarachay-Malqar folklore collection, films, Kuliev's poems, a phrasebook, the Uzden codex of the Qarachay-Malqar, the Koran, the Gospel, the Psalter, the books of the prophets Jonah and Daniel, Ruth, Esther, and a Qarachay-Malqar dictionary.
|
TSjB/qm_ru_265718
|
[
"language:krc",
"language:ru",
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2023-10-16T14:13:12+00:00
|
{"language": ["krc", "ru"], "license": "cc-by-nc-sa-4.0"}
|
2023-10-16T14:14:39+00:00
|
[] |
[
"krc",
"ru"
] |
TAGS
#language-Karachay-Balkar #language-Russian #license-cc-by-nc-sa-4.0 #region-us
|
265718 parallel sentences between the Russian and Qarachay-Malqar languages.
Because of the dialects of the Qarachay-Malqar language and its diphthongs, some letters are transcribed in Latin as follows:
b - б/п/ф
w - ў
q - къ
g - гъ
n - нг
Taken from: the Alan Nart epos, a Qarachay-Malqar folklore collection, films, Kuliev's poems, a phrasebook, the Uzden codex of the Qarachay-Malqar, the Koran, the Gospel, the Psalter, the books of the prophets Jonah and Daniel, Ruth, Esther, and a Qarachay-Malqar dictionary.
|
[] |
[
"TAGS\n#language-Karachay-Balkar #language-Russian #license-cc-by-nc-sa-4.0 #region-us \n"
] |
[
33
] |
[
"passage: TAGS\n#language-Karachay-Balkar #language-Russian #license-cc-by-nc-sa-4.0 #region-us \n"
] |
b10bc144f44bb4c94c75ecbd7579901a263beb9c
|
# Dataset Card for "sample_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ailearningcorner/sample_data
|
[
"region:us"
] |
2023-10-16T14:16:04+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "ID", "dtype": "int64"}, {"name": " Student", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 128.1, "num_examples": 7}, {"name": "test", "num_bytes": 54.9, "num_examples": 3}], "download_size": 2655, "dataset_size": 183.0}}
|
2023-10-16T14:19:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "sample_data"
More Information needed
|
[
"# Dataset Card for \"sample_data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"sample_data\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"sample_data\"\n\nMore Information needed"
] |
9e26180a24989faff97b945d4e0dedfd55162ed6
|
Faceset of the YouTuber MrBeast, 5252 images (JPG)
|
Pampkinus/Mr-Beast
|
[
"license:openrail",
"region:us"
] |
2023-10-16T14:16:33+00:00
|
{"license": "openrail"}
|
2023-10-16T14:18:56+00:00
|
[] |
[] |
TAGS
#license-openrail #region-us
|
Faceset of the YouTuber MrBeast, 5252 images (JPG)
|
[] |
[
"TAGS\n#license-openrail #region-us \n"
] |
[
12
] |
[
"passage: TAGS\n#license-openrail #region-us \n"
] |
15791e6964500ccb604a8862a4d72361090b3694
|
# BigEarthNet - HDF5 version
This repository contains an export of the existing BigEarthNet dataset in HDF5 format. All Sentinel-2 acquisitions are exported following TorchGeo's dataset layout (120x120 pixel resolution).
Sentinel-1 data is not included in this repository for the moment.
For each satellite acquisition, the CSV files record the corresponding HDF5 file and the index within it.
A PyTorch dataset class which can be used to iterate over this dataset can be found [here](https://github.com/lccol/bigearthnet-conversion), as well as the script used to convert it into HDF5 format.
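As a rough sketch of how such a CSV index can be consumed, the snippet below builds an in-memory lookup from acquisition name to its (HDF5 file, index) pair. The column names and values here are illustrative assumptions, not the repository's actual schema; see the linked conversion repository for the real layout.

```python
import csv
from io import StringIO

# Hypothetical CSV layout: one row per Sentinel-2 acquisition, mapping the
# acquisition name to the HDF5 file that stores it and its row index inside
# that file. Column names are illustrative, not the repository's.
SAMPLE_CSV = """patch_name,h5_file,index
S2A_MSIL2A_20170613T101031_0_45,shard_000.h5,0
S2A_MSIL2A_20170613T101031_0_46,shard_000.h5,1
S2A_MSIL2A_20170617T113321_4_12,shard_001.h5,0
"""

def build_index(csv_text):
    """Map each acquisition name to (HDF5 file, row index) for lazy lookup."""
    lookup = {}
    for row in csv.DictReader(StringIO(csv_text)):
        lookup[row["patch_name"]] = (row["h5_file"], int(row["index"]))
    return lookup

index = build_index(SAMPLE_CSV)
print(index["S2A_MSIL2A_20170617T113321_4_12"])  # ('shard_001.h5', 0)
```

A dataset class would then open the referenced HDF5 file lazily (e.g. with h5py) and read the patch at the stored index.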
|
lc-col/bigearthnet
|
[
"task_categories:image-classification",
"size_categories:100K<n<1M",
"region:us"
] |
2023-10-16T14:18:25+00:00
|
{"size_categories": ["100K<n<1M"], "task_categories": ["image-classification"], "pretty_name": "BigEarthNet"}
|
2023-10-23T11:36:06+00:00
|
[] |
[] |
TAGS
#task_categories-image-classification #size_categories-100K<n<1M #region-us
|
# BigEarthNet - HDF5 version
This repository contains an export of the existing BigEarthNet dataset in HDF5 format. All Sentinel-2 acquisitions are exported following TorchGeo's dataset layout (120x120 pixel resolution).
Sentinel-1 data is not included in this repository for the moment.
For each satellite acquisition, the CSV files record the corresponding HDF5 file and the index within it.
A PyTorch dataset class which can be used to iterate over this dataset can be found here, as well as the script used to convert it into HDF5 format.
|
[
"# BigEarthNet - HDF5 version\nThis repository contains an export of the existing BigEarthNet dataset in HDF5 format. All Sentinel-2 acquisitions are exported according to TorchGeo's dataset (120x120 pixels resolution).\nSentinel-1 is not contained in this repository for the moment.\n\nCSV files contain for each satellite acquisition the corresponding HDF5 file and the index.\nA PyTorch dataset class which can be used to iterate over this dataset can be found here, as well as the script used to convert it into HDF5 format."
] |
[
"TAGS\n#task_categories-image-classification #size_categories-100K<n<1M #region-us \n",
"# BigEarthNet - HDF5 version\nThis repository contains an export of the existing BigEarthNet dataset in HDF5 format. All Sentinel-2 acquisitions are exported according to TorchGeo's dataset (120x120 pixels resolution).\nSentinel-1 is not contained in this repository for the moment.\n\nCSV files contain for each satellite acquisition the corresponding HDF5 file and the index.\nA PyTorch dataset class which can be used to iterate over this dataset can be found here, as well as the script used to convert it into HDF5 format."
] |
[
29,
137
] |
[
"passage: TAGS\n#task_categories-image-classification #size_categories-100K<n<1M #region-us \n# BigEarthNet - HDF5 version\nThis repository contains an export of the existing BigEarthNet dataset in HDF5 format. All Sentinel-2 acquisitions are exported according to TorchGeo's dataset (120x120 pixels resolution).\nSentinel-1 is not contained in this repository for the moment.\n\nCSV files contain for each satellite acquisition the corresponding HDF5 file and the index.\nA PyTorch dataset class which can be used to iterate over this dataset can be found here, as well as the script used to convert it into HDF5 format."
] |
1a9e2dd05c392c4920bd6a5725a093ee78bf4fa2
|
# Dataset Card for "ML4SE23_G8_CodeSearchNet-Python"
Dataset used to finetune [WizardCoder-1B-V1.0](https://huggingface.co/WizardLM/WizardCoder-1B-V1.0) on the Code Summarization task.
The dataset is a cleaned version of the Python subset from the [CodeXGLUE CodeSearchNet code-to-text dataset](https://huggingface.co/datasets/code_x_glue_ct_code_to_text).
The original Python subset included the docstring in the `code` column. This dataset has a cleaned `code` column, which contains the original code with the docstring removed.
See https://github.com/ML4SE2023/G8-Codex for more details.
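As a hedged illustration of the kind of cleaning involved, the sketch below strips a leading docstring from a single Python function using the standard `ast` module (Python 3.9+ for `ast.unparse`). It is not the exact script used to build this dataset; see the linked repository for that.

```python
import ast

def strip_docstring(source: str) -> str:
    """Remove the leading docstring from a single Python function definition.

    Illustrative sketch only: assumes `source` contains one function at
    module level, as in the CodeSearchNet `code` column.
    """
    tree = ast.parse(source)
    func = tree.body[0]
    # A docstring is a bare string expression as the first statement.
    if (func.body
            and isinstance(func.body[0], ast.Expr)
            and isinstance(func.body[0].value, ast.Constant)
            and isinstance(func.body[0].value.value, str)):
        func.body = func.body[1:] or [ast.Pass()]  # keep the body non-empty
    return ast.unparse(tree)

example = '''def add(a, b):
    """Return the sum of a and b."""
    return a + b
'''
print(strip_docstring(example))
```

Note that `ast.unparse` also normalizes whitespace and quoting, so a round-trip like this is not byte-identical to the original source.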
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AISE-TUDelft/ML4SE23_G8_CodeSearchNet-Python
|
[
"license:c-uda",
"region:us"
] |
2023-10-16T14:27:53+00:00
|
{"license": "c-uda", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "repo", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "original_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "code_tokens", "sequence": "string"}, {"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}, {"name": "sha", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 752373428, "num_examples": 251820}, {"name": "validation", "num_bytes": 43293612, "num_examples": 13914}, {"name": "test", "num_bytes": 46733051, "num_examples": 14918}], "download_size": 297684501, "dataset_size": 842400091}}
|
2023-11-06T14:36:36+00:00
|
[] |
[] |
TAGS
#license-c-uda #region-us
|
# Dataset Card for "ML4SE23_G8_CodeSearchNet-Python"
Dataset used to finetune WizardCoder-1B-V1.0 on the Code Summarization task.
The dataset is a cleaned version of the Python subset from the CodeXGLUE CodeSearchNet code-to-text dataset.
The original Python subset included the docstring in the 'code' column. This dataset has a cleaned 'code' column, which contains the original code with the docstring removed.
See URL for more details.
More Information needed
|
[
"# Dataset Card for \"ML4SE23_G8_CodeSearchNet-Python\"\n\nDataset used to finetune WizardCoder-1B-V1.0 on the Code Summarization task.\n\nThe dataset is a cleaned version of the Python subset from the CodeXGLUE CodeSearchNet code-to-text dataset.\nThe original Python subset included the docstring in the 'code' column. This dataset has a cleaned 'code' column, which contains the original code with the docstring removed.\n\nSee URL for more details.\n\nMore Information needed"
] |
[
"TAGS\n#license-c-uda #region-us \n",
"# Dataset Card for \"ML4SE23_G8_CodeSearchNet-Python\"\n\nDataset used to finetune WizardCoder-1B-V1.0 on the Code Summarization task.\n\nThe dataset is a cleaned version of the Python subset from the CodeXGLUE CodeSearchNet code-to-text dataset.\nThe original Python subset included the docstring in the 'code' column. This dataset has a cleaned 'code' column, which contains the original code with the docstring removed.\n\nSee URL for more details.\n\nMore Information needed"
] |
[
13,
127
] |
[
"passage: TAGS\n#license-c-uda #region-us \n# Dataset Card for \"ML4SE23_G8_CodeSearchNet-Python\"\n\nDataset used to finetune WizardCoder-1B-V1.0 on the Code Summarization task.\n\nThe dataset is a cleaned version of the Python subset from the CodeXGLUE CodeSearchNet code-to-text dataset.\nThe original Python subset included the docstring in the 'code' column. This dataset has a cleaned 'code' column, which contains the original code with the docstring removed.\n\nSee URL for more details.\n\nMore Information needed"
] |
06c4ac193bb6657dacd2ab7391038da66a6ede2a
|
# Dataset Card for Evaluation run of ajibawa-2023/carl-33b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/ajibawa-2023/carl-33b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [ajibawa-2023/carl-33b](https://huggingface.co/ajibawa-2023/carl-33b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ajibawa-2023__carl-33b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-25T06:29:50.391928](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__carl-33b/blob/main/results_2023-10-25T06-29-50.391928.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.4434773489932886,
"em_stderr": 0.005087644945149476,
"f1": 0.48920616610738366,
"f1_stderr": 0.004915552047694347,
"acc": 0.4130577743896054,
"acc_stderr": 0.009343755992304432
},
"harness|drop|3": {
"em": 0.4434773489932886,
"em_stderr": 0.005087644945149476,
"f1": 0.48920616610738366,
"f1_stderr": 0.004915552047694347
},
"harness|gsm8k|5": {
"acc": 0.06368460955269144,
"acc_stderr": 0.006726213078805715
},
"harness|winogrande|5": {
"acc": 0.7624309392265194,
"acc_stderr": 0.011961298905803146
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_ajibawa-2023__carl-33b
|
[
"region:us"
] |
2023-10-16T14:30:46+00:00
|
{"pretty_name": "Evaluation run of ajibawa-2023/carl-33b", "dataset_summary": "Dataset automatically created during the evaluation run of model [ajibawa-2023/carl-33b](https://huggingface.co/ajibawa-2023/carl-33b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ajibawa-2023__carl-33b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-25T06:29:50.391928](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__carl-33b/blob/main/results_2023-10-25T06-29-50.391928.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.4434773489932886,\n \"em_stderr\": 0.005087644945149476,\n \"f1\": 0.48920616610738366,\n \"f1_stderr\": 0.004915552047694347,\n \"acc\": 0.4130577743896054,\n \"acc_stderr\": 0.009343755992304432\n },\n \"harness|drop|3\": {\n \"em\": 0.4434773489932886,\n \"em_stderr\": 0.005087644945149476,\n \"f1\": 0.48920616610738366,\n \"f1_stderr\": 0.004915552047694347\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.06368460955269144,\n \"acc_stderr\": 0.006726213078805715\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7624309392265194,\n \"acc_stderr\": 0.011961298905803146\n }\n}\n```", "repo_url": "https://huggingface.co/ajibawa-2023/carl-33b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_16T15_30_43.173459", "path": ["**/details_harness|drop|3_2023-10-16T15-30-43.173459.parquet"]}, {"split": "2023_10_25T06_29_50.391928", "path": ["**/details_harness|drop|3_2023-10-25T06-29-50.391928.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-25T06-29-50.391928.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_16T15_30_43.173459", "path": ["**/details_harness|gsm8k|5_2023-10-16T15-30-43.173459.parquet"]}, {"split": "2023_10_25T06_29_50.391928", "path": ["**/details_harness|gsm8k|5_2023-10-25T06-29-50.391928.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-25T06-29-50.391928.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_16T15_30_43.173459", "path": ["**/details_harness|winogrande|5_2023-10-16T15-30-43.173459.parquet"]}, {"split": "2023_10_25T06_29_50.391928", "path": ["**/details_harness|winogrande|5_2023-10-25T06-29-50.391928.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2023-10-25T06-29-50.391928.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_16T15_30_43.173459", "path": ["results_2023-10-16T15-30-43.173459.parquet"]}, {"split": "2023_10_25T06_29_50.391928", "path": ["results_2023-10-25T06-29-50.391928.parquet"]}, {"split": "latest", "path": ["results_2023-10-25T06-29-50.391928.parquet"]}]}]}
|
2023-10-25T05:29:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of ajibawa-2023/carl-33b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model ajibawa-2023/carl-33b on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-25T06:29:50.391928 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of ajibawa-2023/carl-33b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model ajibawa-2023/carl-33b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T06:29:50.391928(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of ajibawa-2023/carl-33b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model ajibawa-2023/carl-33b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T06:29:50.391928(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
19,
31,
167,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of ajibawa-2023/carl-33b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model ajibawa-2023/carl-33b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-25T06:29:50.391928(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
27011e270c83b50ae979c2a10a5fe8fdab1972cc
|
# Dataset Card for "new_sft_summarize"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sayan1101/new_sft_summarize
|
[
"region:us"
] |
2023-10-16T15:11:20+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1264287802, "num_examples": 287113}, {"name": "validation", "num_bytes": 57852724, "num_examples": 13368}, {"name": "test", "num_bytes": 50029142, "num_examples": 11490}], "download_size": 801958229, "dataset_size": 1372169668}}
|
2023-10-16T15:16:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "new_sft_summarize"
More Information needed
|
[
"# Dataset Card for \"new_sft_summarize\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"new_sft_summarize\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"new_sft_summarize\"\n\nMore Information needed"
] |
ec1886e13be69b4d45caa6a277f28da1642172c9
|
# Dataset Card for Evaluation run of lmsys/longchat-7b-v1.5-32k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/lmsys/longchat-7b-v1.5-32k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [lmsys/longchat-7b-v1.5-32k](https://huggingface.co/lmsys/longchat-7b-v1.5-32k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_lmsys__longchat-7b-v1.5-32k",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-16T16:20:33.188247](https://huggingface.co/datasets/open-llm-leaderboard/details_lmsys__longchat-7b-v1.5-32k/blob/main/results_2023-10-16T16-20-33.188247.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.08252936241610738,
"em_stderr": 0.0028179934761829416,
"f1": 0.1372829278523486,
"f1_stderr": 0.0030245592633561815,
"acc": 0.3672124310289838,
"acc_stderr": 0.009455449816488642
},
"harness|drop|3": {
"em": 0.08252936241610738,
"em_stderr": 0.0028179934761829416,
"f1": 0.1372829278523486,
"f1_stderr": 0.0030245592633561815
},
"harness|gsm8k|5": {
"acc": 0.047763457164518575,
"acc_stderr": 0.005874387536229305
},
"harness|winogrande|5": {
"acc": 0.6866614048934491,
"acc_stderr": 0.01303651209674798
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_lmsys__longchat-7b-v1.5-32k
|
[
"region:us"
] |
2023-10-16T15:20:37+00:00
|
{"pretty_name": "Evaluation run of lmsys/longchat-7b-v1.5-32k", "dataset_summary": "Dataset automatically created during the evaluation run of model [lmsys/longchat-7b-v1.5-32k](https://huggingface.co/lmsys/longchat-7b-v1.5-32k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lmsys__longchat-7b-v1.5-32k\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-16T16:20:33.188247](https://huggingface.co/datasets/open-llm-leaderboard/details_lmsys__longchat-7b-v1.5-32k/blob/main/results_2023-10-16T16-20-33.188247.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.08252936241610738,\n \"em_stderr\": 0.0028179934761829416,\n \"f1\": 0.1372829278523486,\n \"f1_stderr\": 0.0030245592633561815,\n \"acc\": 0.3672124310289838,\n \"acc_stderr\": 0.009455449816488642\n },\n \"harness|drop|3\": {\n \"em\": 0.08252936241610738,\n \"em_stderr\": 0.0028179934761829416,\n \"f1\": 0.1372829278523486,\n \"f1_stderr\": 0.0030245592633561815\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.047763457164518575,\n \"acc_stderr\": 0.005874387536229305\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6866614048934491,\n \"acc_stderr\": 0.01303651209674798\n }\n}\n```", "repo_url": "https://huggingface.co/lmsys/longchat-7b-v1.5-32k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_16T16_20_33.188247", "path": ["**/details_harness|drop|3_2023-10-16T16-20-33.188247.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-16T16-20-33.188247.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_16T16_20_33.188247", "path": ["**/details_harness|gsm8k|5_2023-10-16T16-20-33.188247.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-16T16-20-33.188247.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_16T16_20_33.188247", "path": ["**/details_harness|winogrande|5_2023-10-16T16-20-33.188247.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-16T16-20-33.188247.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_16T16_20_33.188247", "path": ["results_2023-10-16T16-20-33.188247.parquet"]}, {"split": "latest", "path": ["results_2023-10-16T16-20-33.188247.parquet"]}]}]}
|
2023-10-16T15:20:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of lmsys/longchat-7b-v1.5-32k
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model lmsys/longchat-7b-v1.5-32k on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-16T16:20:33.188247 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of lmsys/longchat-7b-v1.5-32k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model lmsys/longchat-7b-v1.5-32k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-16T16:20:33.188247(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of lmsys/longchat-7b-v1.5-32k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model lmsys/longchat-7b-v1.5-32k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-16T16:20:33.188247(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
22,
31,
170,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of lmsys/longchat-7b-v1.5-32k## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model lmsys/longchat-7b-v1.5-32k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-16T16:20:33.188247(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
5e8955d3c14249b2394b5856c7d235091349de5e
|
# Dataset Card for Evaluation run of porkorbeef/Llama-2-13b-12_153950
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/porkorbeef/Llama-2-13b-12_153950
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [porkorbeef/Llama-2-13b-12_153950](https://huggingface.co/porkorbeef/Llama-2-13b-12_153950) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_porkorbeef__Llama-2-13b-12_153950",
"harness_winogrande_5",
split="train")
```
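The split names listed in each config follow the run timestamp, with `-` and `:` replaced by `_` (for example, the run above maps to the split `2023_10_25T19_51_17.489031`). A minimal helper sketching this mapping is shown below; `run_split_name` and `load_run` are illustrative names, not part of the `datasets` API:

```python
def run_split_name(timestamp: str) -> str:
    """Map a run timestamp, as shown under "Latest results", to the split
    name used by this dataset's configs:
    "2023-10-25T19:51:17.489031" -> "2023_10_25T19_51_17.489031"."""
    return timestamp.replace("-", "_").replace(":", "_")


def load_run(repo: str, config: str, timestamp: str):
    """Load the details for one specific run (requires network access to the Hub)."""
    from datasets import load_dataset

    return load_dataset(repo, config, split=run_split_name(timestamp))
```

For instance, `load_run("open-llm-leaderboard/details_porkorbeef__Llama-2-13b-12_153950", "harness_winogrande_5", "2023-10-25T19:51:17.489031")` fetches the winogrande details for that run instead of the "latest" split.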
## Latest results
These are the [latest results from run 2023-10-25T19:51:17.489031](https://huggingface.co/datasets/open-llm-leaderboard/details_porkorbeef__Llama-2-13b-12_153950/blob/main/results_2023-10-25T19-51-17.489031.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 5.76761744966443e-05,
"f1_stderr": 1.4707528558078046e-05,
"acc": 0.26558800315706393,
"acc_stderr": 0.007012571320319756
},
"harness|drop|3": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 5.76761744966443e-05,
"f1_stderr": 1.4707528558078046e-05
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5311760063141279,
"acc_stderr": 0.014025142640639513
}
}
```
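When comparing metrics across runs, the nested results dict shown above can be flattened into `task/metric` keys. This is only a convenience sketch (abridged to a few metrics), not part of the leaderboard tooling:

```python
# Mirrors the "Latest results" JSON shown above (abridged).
latest = {
    "all": {"em": 0.0, "acc": 0.26558800315706393},
    "harness|gsm8k|5": {"acc": 0.0},
    "harness|winogrande|5": {"acc": 0.5311760063141279},
}


def flatten_metrics(results: dict) -> dict:
    """Flatten {task: {metric: value}} into {"task/metric": value}."""
    return {
        f"{task}/{metric}": value
        for task, metrics in results.items()
        for metric, value in metrics.items()
    }


flat = flatten_metrics(latest)
# flat["harness|winogrande|5/acc"] == 0.5311760063141279
```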
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_porkorbeef__Llama-2-13b-12_153950
|
[
"region:us"
] |
2023-10-16T15:23:07+00:00
|
{"pretty_name": "Evaluation run of porkorbeef/Llama-2-13b-12_153950", "dataset_summary": "Dataset automatically created during the evaluation run of model [porkorbeef/Llama-2-13b-12_153950](https://huggingface.co/porkorbeef/Llama-2-13b-12_153950) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_porkorbeef__Llama-2-13b-12_153950\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-25T19:51:17.489031](https://huggingface.co/datasets/open-llm-leaderboard/details_porkorbeef__Llama-2-13b-12_153950/blob/main/results_2023-10-25T19-51-17.489031.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0,\n \"em_stderr\": 0.0,\n \"f1\": 5.76761744966443e-05,\n \"f1_stderr\": 1.4707528558078046e-05,\n \"acc\": 0.26558800315706393,\n \"acc_stderr\": 0.007012571320319756\n },\n \"harness|drop|3\": {\n \"em\": 0.0,\n \"em_stderr\": 0.0,\n \"f1\": 5.76761744966443e-05,\n \"f1_stderr\": 1.4707528558078046e-05\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5311760063141279,\n \"acc_stderr\": 0.014025142640639513\n }\n}\n```", "repo_url": "https://huggingface.co/porkorbeef/Llama-2-13b-12_153950", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_16T16_23_02.920553", "path": ["**/details_harness|drop|3_2023-10-16T16-23-02.920553.parquet"]}, {"split": "2023_10_25T19_51_17.489031", "path": ["**/details_harness|drop|3_2023-10-25T19-51-17.489031.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-25T19-51-17.489031.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_16T16_23_02.920553", "path": ["**/details_harness|gsm8k|5_2023-10-16T16-23-02.920553.parquet"]}, {"split": "2023_10_25T19_51_17.489031", "path": ["**/details_harness|gsm8k|5_2023-10-25T19-51-17.489031.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-25T19-51-17.489031.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_16T16_23_02.920553", "path": ["**/details_harness|winogrande|5_2023-10-16T16-23-02.920553.parquet"]}, {"split": "2023_10_25T19_51_17.489031", "path": ["**/details_harness|winogrande|5_2023-10-25T19-51-17.489031.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-25T19-51-17.489031.parquet"]}]}, {"config_name": "results", 
"data_files": [{"split": "2023_10_16T16_23_02.920553", "path": ["results_2023-10-16T16-23-02.920553.parquet"]}, {"split": "2023_10_25T19_51_17.489031", "path": ["results_2023-10-25T19-51-17.489031.parquet"]}, {"split": "latest", "path": ["results_2023-10-25T19-51-17.489031.parquet"]}]}]}
|
2023-10-25T18:51:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of porkorbeef/Llama-2-13b-12_153950
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model porkorbeef/Llama-2-13b-12_153950 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-25T19:51:17.489031 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of porkorbeef/Llama-2-13b-12_153950",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model porkorbeef/Llama-2-13b-12_153950 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T19:51:17.489031(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of porkorbeef/Llama-2-13b-12_153950",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model porkorbeef/Llama-2-13b-12_153950 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T19:51:17.489031(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of porkorbeef/Llama-2-13b-12_153950## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model porkorbeef/Llama-2-13b-12_153950 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-25T19:51:17.489031(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
823f346149584429522a5bc1f0bb859837ee7c37
|
# Dataset Card for Evaluation run of bigscience/bloom-1b7
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/bigscience/bloom-1b7
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_bigscience__bloom-1b7",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-04T13:06:13.491181](https://huggingface.co/datasets/open-llm-leaderboard/details_bigscience__bloom-1b7/blob/main/results_2023-12-04T13-06-13.491181.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.008339651250947688,
"acc_stderr": 0.0025049422268605148
},
"harness|gsm8k|5": {
"acc": 0.008339651250947688,
"acc_stderr": 0.0025049422268605148
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_bigscience__bloom-1b7
|
[
"region:us"
] |
2023-10-16T15:35:31+00:00
|
{"pretty_name": "Evaluation run of bigscience/bloom-1b7", "dataset_summary": "Dataset automatically created during the evaluation run of model [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bigscience__bloom-1b7\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-04T13:06:13.491181](https://huggingface.co/datasets/open-llm-leaderboard/details_bigscience__bloom-1b7/blob/main/results_2023-12-04T13-06-13.491181.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.008339651250947688,\n \"acc_stderr\": 0.0025049422268605148\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.008339651250947688,\n \"acc_stderr\": 0.0025049422268605148\n }\n}\n```", "repo_url": "https://huggingface.co/bigscience/bloom-1b7", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_16T16_35_28.358737", "path": ["**/details_harness|drop|3_2023-10-16T16-35-28.358737.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-16T16-35-28.358737.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_16T16_35_28.358737", "path": ["**/details_harness|gsm8k|5_2023-10-16T16-35-28.358737.parquet"]}, {"split": "2023_12_03T16_04_08.979472", "path": ["**/details_harness|gsm8k|5_2023-12-03T16-04-08.979472.parquet"]}, {"split": "2023_12_04T09_54_54.675804", "path": ["**/details_harness|gsm8k|5_2023-12-04T09-54-54.675804.parquet"]}, {"split": "2023_12_04T13_06_13.491181", "path": ["**/details_harness|gsm8k|5_2023-12-04T13-06-13.491181.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-04T13-06-13.491181.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_16T16_35_28.358737", "path": ["**/details_harness|winogrande|5_2023-10-16T16-35-28.358737.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-16T16-35-28.358737.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_16T16_35_28.358737", "path": ["results_2023-10-16T16-35-28.358737.parquet"]}, {"split": "2023_12_03T16_04_08.979472", "path": ["results_2023-12-03T16-04-08.979472.parquet"]}, {"split": "2023_12_04T09_54_54.675804", "path": ["results_2023-12-04T09-54-54.675804.parquet"]}, {"split": 
"2023_12_04T13_06_13.491181", "path": ["results_2023-12-04T13-06-13.491181.parquet"]}, {"split": "latest", "path": ["results_2023-12-04T13-06-13.491181.parquet"]}]}]}
|
2023-12-04T13:06:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of bigscience/bloom-1b7
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model bigscience/bloom-1b7 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-04T13:06:13.491181 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of bigscience/bloom-1b7",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model bigscience/bloom-1b7 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-04T13:06:13.491181(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of bigscience/bloom-1b7",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model bigscience/bloom-1b7 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-04T13:06:13.491181(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
17,
31,
166,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of bigscience/bloom-1b7## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model bigscience/bloom-1b7 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-04T13:06:13.491181(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
416c59f5cfec93b4293d2338403a547d94b79f18
|
# Dataset Card for "flan2021_submix_original"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DataProvenanceInitiative/flan2021_submix_original
|
[
"region:us"
] |
2023-10-16T16:28:22+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "task_source", "dtype": "string"}, {"name": "task_name", "dtype": "string"}, {"name": "template_type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8988026240, "num_examples": 5362361}], "download_size": 5486287486, "dataset_size": 8988026240}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-16T16:30:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "flan2021_submix_original"
More Information needed
|
[
"# Dataset Card for \"flan2021_submix_original\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"flan2021_submix_original\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"flan2021_submix_original\"\n\nMore Information needed"
] |
87744174e77ee27cac01245aed544ff4dad0ad0f
|
# Dataset Card for "cot_submix_original"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DataProvenanceInitiative/cot_submix_original
|
[
"region:us"
] |
2023-10-16T16:31:52+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "task_source", "dtype": "string"}, {"name": "task_name", "dtype": "string"}, {"name": "template_type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 209004809, "num_examples": 183848}], "download_size": 100293074, "dataset_size": 209004809}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-16T16:31:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cot_submix_original"
More Information needed
|
[
"# Dataset Card for \"cot_submix_original\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cot_submix_original\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cot_submix_original\"\n\nMore Information needed"
] |
482f3415752146874c7298acc8d938549a0c5344
|
# Dataset Card for "niv2_submix_original"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DataProvenanceInitiative/niv2_submix_original
|
[
"region:us"
] |
2023-10-16T16:32:45+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "task_source", "dtype": "string"}, {"name": "task_name", "dtype": "string"}, {"name": "template_type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13104211362, "num_examples": 10066896}], "download_size": 7612945130, "dataset_size": 13104211362}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-16T16:35:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "niv2_submix_original"
More Information needed
|
[
"# Dataset Card for \"niv2_submix_original\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"niv2_submix_original\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"niv2_submix_original\"\n\nMore Information needed"
] |
7c989c9ecb6d84cd9308aa571378e95a59d95b05
|
# Dataset Card for "dialog_submix_original"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DataProvenanceInitiative/dialog_submix_original
|
[
"region:us"
] |
2023-10-16T16:37:44+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "task_source", "dtype": "string"}, {"name": "task_name", "dtype": "string"}, {"name": "template_type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1024507265, "num_examples": 553869}], "download_size": 583008075, "dataset_size": 1024507265}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-16T16:38:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dialog_submix_original"
More Information needed
|
[
"# Dataset Card for \"dialog_submix_original\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dialog_submix_original\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dialog_submix_original\"\n\nMore Information needed"
] |
7dbb22584f15a7a5ef9d0138a8cc52e673c0fd7a
|
# Dataset Card for "t0_submix_original"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DataProvenanceInitiative/t0_submix_original
|
[
"region:us"
] |
2023-10-16T16:39:08+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "task_source", "dtype": "string"}, {"name": "task_name", "dtype": "string"}, {"name": "template_type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4602180562, "num_examples": 1650308}], "download_size": 2734694485, "dataset_size": 4602180562}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-16T16:40:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "t0_submix_original"
More Information needed
|
[
"# Dataset Card for \"t0_submix_original\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"t0_submix_original\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"t0_submix_original\"\n\nMore Information needed"
] |
db53c18e9c06c8c3813e940ae79f34bb328900d8
|
# Dataset Card for "cai-conversation-new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vwxyzjn/cai-conversation-new
|
[
"region:us"
] |
2023-10-16T16:40:26+00:00
|
{"dataset_info": {"features": [{"name": "init_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "init_response", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "critic_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "critic_response", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "revision_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "revision_response", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 129500, "num_examples": 100}], "download_size": 67693, "dataset_size": 129500}}
|
2023-10-20T13:53:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cai-conversation-new"
More Information needed
|
[
"# Dataset Card for \"cai-conversation-new\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cai-conversation-new\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cai-conversation-new\"\n\nMore Information needed"
] |
dad00e718f128c1740c28f47df0e81fabab37799
|
# ABOUT
Wanted to train a model to classify questions by whether they are open or boolean, so I merged SQuAD with BoolQ. The dataset contains 5000 questions from each source, labeled "true" (boolean questions) and "false" (open questions). Questions that don't fall into these categories were not added; may be a flaw, we'll see:).
For some reason the dataset viewer isn't working, sorry for that one, but here's a snippet of the JSON structure:
{
"question": "are there fiber optic cables under the ocean",
"type": "true"
},
{
"question": "are dollar general and dollar tree owned by the same company",
"type": "true"
},
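As a rough illustration (the file layout here is an assumption based on the snippet above; only the field names are taken from it), the questions can be split by type with plain `json`:

```python
import json

# Hypothetical excerpt mirroring the structure shown above;
# "true" marks boolean (yes/no) questions, "false" marks open ones.
raw = """[
  {"question": "are there fiber optic cables under the ocean", "type": "true"},
  {"question": "what is the capital of botswana", "type": "false"}
]"""

records = json.loads(raw)
boolean_qs = [r["question"] for r in records if r["type"] == "true"]
open_qs = [r["question"] for r in records if r["type"] == "false"]

print(len(boolean_qs), len(open_qs))  # → 1 1
```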
|
FranzderPapst/squad_x_boolq
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] |
2023-10-16T16:58:05+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "warrgalbhalble"}
|
2023-10-16T18:05:42+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-mit #region-us
|
# ABOUT
Wanted to train a model to classify questions by whether they are open or boolean, so I merged SQuAD with BoolQ. The dataset contains 5000 questions from each source, labeled "true" (boolean questions) and "false" (open questions). Questions that don't fall into these categories were not added; may be a flaw, we'll see:).
For some reason the dataset viewer isn't working, sorry for that one, but here's a snippet of the JSON structure:
{
"question": "are there fiber optic cables under the ocean",
"type": "true"
},
{
"question": "are dollar general and dollar tree owned by the same company",
"type": "true"
},
|
[
"# ABOUT\nWanted to train a model to classify question, if they are open ore boolean. So I merged SQuAD with BoolQ, the dataset contains 5000 question of each dataset, labeled with \"true\" (the boolean question) and with \"false\" (the open questions). Didn't add questions that don't fall into these categories. May be a flaw, we'll see:).\nFor some reason the dataset viewer isn't working, sorry for that one, but here's a snippet of the json structure:\n\n {\n \"question\": \"are there fiber optic cables under the ocean\",\n \"type\": \"true\"\n },\n\n {\n \"question\": \"are dollar general and dollar tree owned by the same company\",\n \"type\": \"true\"\n },"
] |
[
"TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-mit #region-us \n",
"# ABOUT\nWanted to train a model to classify question, if they are open ore boolean. So I merged SQuAD with BoolQ, the dataset contains 5000 question of each dataset, labeled with \"true\" (the boolean question) and with \"false\" (the open questions). Didn't add questions that don't fall into these categories. May be a flaw, we'll see:).\nFor some reason the dataset viewer isn't working, sorry for that one, but here's a snippet of the json structure:\n\n {\n \"question\": \"are there fiber optic cables under the ocean\",\n \"type\": \"true\"\n },\n\n {\n \"question\": \"are dollar general and dollar tree owned by the same company\",\n \"type\": \"true\"\n },"
] |
[
38,
190
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-mit #region-us \n# ABOUT\nWanted to train a model to classify question, if they are open ore boolean. So I merged SQuAD with BoolQ, the dataset contains 5000 question of each dataset, labeled with \"true\" (the boolean question) and with \"false\" (the open questions). Didn't add questions that don't fall into these categories. May be a flaw, we'll see:).\nFor some reason the dataset viewer isn't working, sorry for that one, but here's a snippet of the json structure:\n\n {\n \"question\": \"are there fiber optic cables under the ocean\",\n \"type\": \"true\"\n },\n\n {\n \"question\": \"are dollar general and dollar tree owned by the same company\",\n \"type\": \"true\"\n },"
] |
836fc0ca1995661d3190fe4d24659a4019bde593
|
# Dataset Card for "dreambooth_full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kewu93/dreambooth_full
|
[
"region:us"
] |
2023-10-16T17:27:18+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "subject_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 111680598.0, "num_examples": 158}], "download_size": 111587177, "dataset_size": 111680598.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-16T17:27:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dreambooth_full"
More Information needed
|
[
"# Dataset Card for \"dreambooth_full\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dreambooth_full\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dreambooth_full\"\n\nMore Information needed"
] |
4a786909bf552182571d3334d59eb40a005d2859
|
# Dataset Card for "eli5_dataset_title_text_20k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Safeer143/eli5_dataset_title_text_20k
|
[
"region:us"
] |
2023-10-16T17:38:41+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 78426671, "num_examples": 20000}], "download_size": 84756340, "dataset_size": 78426671}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-16T17:40:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "eli5_dataset_title_text_20k"
More Information needed
|
[
"# Dataset Card for \"eli5_dataset_title_text_20k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"eli5_dataset_title_text_20k\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"eli5_dataset_title_text_20k\"\n\nMore Information needed"
] |
6b5766018214c71533ce0c8cc4c91b8d6844824d
|
This dataset contains chunked extracts (of ~300 tokens) from papers related to (and including) the [Llama 2 research paper](https://arxiv.org/abs/2307.09288). Related papers were identified by following a trail of references, extracting those papers with the [`arxiv-bot`](https://github.com/aurelio-labs/arxiv-bot) package, and repeating.
|
c-aero/test
|
[
"language:en",
"arxiv:2307.09288",
"region:us"
] |
2023-10-16T17:53:23+00:00
|
{"language": ["en"], "pretty_name": "Chunked Arxiv Papers for Llama 2"}
|
2023-10-16T19:03:06+00:00
|
[
"2307.09288"
] |
[
"en"
] |
TAGS
#language-English #arxiv-2307.09288 #region-us
|
This dataset contains chunked extracts (of ~300 tokens) from papers related to (and including) the Llama 2 research paper. Related papers were identified by following a trail of references, extracting those papers with the 'arxiv-bot' package, and repeating.
|
[] |
[
"TAGS\n#language-English #arxiv-2307.09288 #region-us \n"
] |
[
18
] |
[
"passage: TAGS\n#language-English #arxiv-2307.09288 #region-us \n"
] |
d4f767191007e2d2c28eccab4976d4e31bb8359b
|
# Dataset Card for "merged-pad-16384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
shossain/merged-pad-16384
|
[
"region:us"
] |
2023-10-16T18:14:35+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2084670148, "num_examples": 9787}], "download_size": 484608278, "dataset_size": 2084670148}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-16T18:15:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "merged-pad-16384"
More Information needed
|
[
"# Dataset Card for \"merged-pad-16384\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"merged-pad-16384\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"merged-pad-16384\"\n\nMore Information needed"
] |
aec3af1210a8e6936c249eb33262aedc00be5872
|
This is the table used to evaluate RAGAS with 50 questions from the whole dataset.
|
stepkurniawan/knowledge_base_experiment_results
|
[
"language:en",
"license:mit",
"climate",
"region:us"
] |
2023-10-16T18:17:32+00:00
|
{"language": ["en"], "license": "mit", "tags": ["climate"], "dataset_info": [{"config_name": "KB_suswiki", "features": [{"name": "question", "dtype": "string"}, {"name": "ground_truths", "dtype": "string"}, {"name": "contexts", "dtype": "string"}, {"name": "context_precision", "dtype": "float64"}, {"name": "context_recall", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 157861, "num_examples": 50}], "download_size": 106080, "dataset_size": 157861}, {"config_name": "KB_wikipedia", "features": [{"name": "question", "dtype": "string"}, {"name": "ground_truths", "dtype": "string"}, {"name": "contexts", "dtype": "string"}, {"name": "context_precision", "dtype": "float64"}, {"name": "context_recall", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 132283, "num_examples": 50}], "download_size": 95621, "dataset_size": 132283}], "configs": [{"config_name": "KB_suswiki", "data_files": [{"split": "train", "path": "KB_suswiki/train-*"}]}, {"config_name": "KB_wikipedia", "data_files": [{"split": "train", "path": "KB_wikipedia/train-*"}]}]}
|
2024-02-11T22:46:50+00:00
|
[] |
[
"en"
] |
TAGS
#language-English #license-mit #climate #region-us
|
This is the table used to evaluate RAGAS with 50 questions from the whole dataset.
|
[] |
[
"TAGS\n#language-English #license-mit #climate #region-us \n"
] |
[
19
] |
[
"passage: TAGS\n#language-English #license-mit #climate #region-us \n"
] |
130d5f700c7d275b690e7b782723b44336bcced7
|
# Dataset Card for "russia-ukraine-aljazeera"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
orgcatorg/russia-ukraine-aljazeera
|
[
"region:us"
] |
2023-10-16T18:31:51+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "headline", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "articleBody", "dtype": "string"}, {"name": "imageCaption", "dtype": "string"}, {"name": "datePublished", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37433318.0, "num_examples": 464}], "download_size": 36575736, "dataset_size": 37433318.0}}
|
2023-10-16T18:47:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "russia-ukraine-aljazeera"
More Information needed
|
[
"# Dataset Card for \"russia-ukraine-aljazeera\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"russia-ukraine-aljazeera\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"russia-ukraine-aljazeera\"\n\nMore Information needed"
] |
061ca1525717eebaaa9bada240f6cbb31eb3aa87
|
# Daily News Dikgang
[](https://arxiv.org/abs/2310.09141)
Give Feedback 📑: [DSFSI Resource Feedback Form](https://docs.google.com/forms/d/e/1FAIpQLSf7S36dyAUPx2egmXbFpnTBuzoRulhL5Elu-N1eoMhaO7v10w/formResponse)
## About dataset
The dataset contains annotated, categorised data from Dikgang - Daily News [https://dailynews.gov.bw/news-list/srccategory/10](https://dailynews.gov.bw/news-list/srccategory/10). The data is in Setswana.
See the [Data Statement](DataStatementPuoBERTaDailyNewsDikgang.pdf) for full details.
Disclaimer
-------
This dataset contains machine-readable data extracted from online news articles, from [https://dailynews.gov.bw/news-list/srccategory/10](https://dailynews.gov.bw/news-list/srccategory/10), provided by the Botswana Government. While efforts were made to ensure the accuracy and completeness of this data, there may be errors or discrepancies between the original publications and this dataset. No warranties, guarantees or representations are given in relation to the information contained in the dataset. The members of the Data Science for Societal Impact Research Group bear no responsibility and/or liability for any such errors or discrepancies in this dataset. The Botswana Government bears no responsibility and/or liability for any such errors or discrepancies in this dataset. It is recommended that users verify all information contained herein before making decisions based upon this information.
Authors
-------
- Vukosi Marivate - [@vukosi](https://twitter.com/vukosi)
- Valencia Wagner
Citation
--------
Bibtex Reference
```
@inproceedings{marivate2023puoberta,
title = {PuoBERTa: Training and evaluation of a curated language model for Setswana},
author = {Vukosi Marivate and Moseli Mots'Oehli and Valencia Wagner and Richard Lastrucci and Isheanesu Dzingirai},
year = {2023},
booktitle= {SACAIR 2023 (To Appear)},
keywords = {NLP},
preprint_url = {https://arxiv.org/abs/2310.09141},
dataset_url = {https://github.com/dsfsi/PuoBERTa},
software_url = {https://huggingface.co/dsfsi/PuoBERTa}
}
```
Licences
-------
The News Categorisation dataset is licensed under CC-BY-SA-4.0. The monolingual data have different licenses depending on each news website's license.
* License for Data - [CC-BY-SA-4.0](LICENSE.data.md)
|
dsfsi/daily-news-dikgang
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:tn",
"license:cc-by-sa-4.0",
"arxiv:2310.09141",
"region:us"
] |
2023-10-16T18:32:56+00:00
|
{"language": ["tn"], "license": "cc-by-sa-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"]}
|
2023-10-26T06:21:04+00:00
|
[
"2310.09141"
] |
[
"tn"
] |
TAGS
#task_categories-text-classification #size_categories-1K<n<10K #language-Tswana #license-cc-by-sa-4.0 #arxiv-2310.09141 #region-us
|
# Daily News Dikgang
 to discuss your requirements, learn about the price and buy the dataset.
# Content
For each question, we extracted:
- **id**: number of the question,
- **subject**: SAT subject (**World History or US History**),
- **prompt**: text of the question,
- **A**: answer A,
- **B**: answer B,
- **C**: answer C,
- **D**: answer D,
- **E**: answer E,
- **answer**: letter of the correct answer to the question
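As a sketch only (the row values below are invented for illustration, and the prompt template is not part of the dataset; only the field names follow the list above), one row could be rendered into a multiple-choice prompt like this:

```python
# Hypothetical row using the field names listed above.
row = {
    "id": 1,
    "subject": "World History",
    "prompt": "Which empire built Machu Picchu?",
    "A": "Aztec", "B": "Inca", "C": "Maya", "D": "Olmec", "E": "Toltec",
    "answer": "B",
}

def to_mcq_prompt(row: dict) -> str:
    # Join the question with its five lettered options and an answer cue.
    options = "\n".join(f"{letter}. {row[letter]}" for letter in "ABCDE")
    return f"[{row['subject']}] {row['prompt']}\n{options}\nAnswer:"

print(to_mcq_prompt(row))
```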
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=sat-history-questions-and-answers)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro**
|
TrainingDataPro/sat-questions-and-answers-for-llm
|
[
"task_categories:text-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"region:us"
] |
2023-10-16T18:33:58+00:00
|
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["text-classification"], "tags": ["code"]}
|
2023-10-16T18:37:51+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #language-English #license-cc-by-nc-nd-4.0 #code #region-us
|
# SAT History Questions and Answers
This dataset contains a collection of questions and answers for the SAT Subject Test in World History and US History. Each question is accompanied by its answer options and the correct response.
The dataset includes questions from various *topics, time periods, and regions* on both World History and US History.
# Get the dataset
### This is just an example of the data
Leave a request on URL to discuss your requirements, learn about the price and buy the dataset.
# Content
For each question, we extracted:
- id: number of the question,
- subject: SAT subject (World History or US History),
- prompt: text of the question,
- A: answer A,
- B: answer B,
- C: answer C,
- D: answer D,
- E: answer E,
- answer: letter of the correct answer to the question
## TrainingData provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: URL
TrainingData's GitHub: URL
|
[
"# SAT History Questions and Answers ️\n\nThis dataset contains a collection of questions and answers for the SAT Subject Test in World History and US History. Each question is accompanied by a corresponding answers and the correct response.\n\nThe dataset includes questions from various *topics, time periods, and regions* on both World History and US History.",
"# Get the dataset",
"### This is just an example of the data\n\nLeave a request on URL to discuss your requirements, learn about the price and buy the dataset.",
"# Content\nFor each question, we extracted:\n- id: number of the question,\n- subject: SAT subject (World History or US History),\n- prompt: text of the question,\n- A: answer A,\n- B: answer B,\n- C: answer C,\n- D: answer D,\n- E: answer E,\n- answer: letter of the correct answer to the question",
"## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL"
] |
[
"TAGS\n#task_categories-text-classification #language-English #license-cc-by-nc-nd-4.0 #code #region-us \n",
"# SAT History Questions and Answers ️\n\nThis dataset contains a collection of questions and answers for the SAT Subject Test in World History and US History. Each question is accompanied by a corresponding answers and the correct response.\n\nThe dataset includes questions from various *topics, time periods, and regions* on both World History and US History.",
"# Get the dataset",
"### This is just an example of the data\n\nLeave a request on URL to discuss your requirements, learn about the price and buy the dataset.",
"# Content\nFor each question, we extracted:\n- id: number of the question,\n- subject: SAT subject (World History or US History),\n- prompt: text of the question,\n- A: answer A,\n- B: answer B,\n- C: answer C,\n- D: answer D,\n- E: answer E,\n- answer: letter of the correct answer to the question",
"## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL"
] |
[
36,
79,
5,
30,
79,
39
] |
[
"passage: TAGS\n#task_categories-text-classification #language-English #license-cc-by-nc-nd-4.0 #code #region-us \n# SAT History Questions and Answers ️\n\nThis dataset contains a collection of questions and answers for the SAT Subject Test in World History and US History. Each question is accompanied by a corresponding answers and the correct response.\n\nThe dataset includes questions from various *topics, time periods, and regions* on both World History and US History.# Get the dataset### This is just an example of the data\n\nLeave a request on URL to discuss your requirements, learn about the price and buy the dataset.# Content\nFor each question, we extracted:\n- id: number of the question,\n- subject: SAT subject (World History or US History),\n- prompt: text of the question,\n- A: answer A,\n- B: answer B,\n- C: answer C,\n- D: answer D,\n- E: answer E,\n- answer: letter of the correct answer to the question## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL"
] |
b1827e1192d0a43923aebcbbe69258fc5a0edf7c
|
# Dataset Card for Evaluation run of dvruette/oasst-llama-13b-2-epochs
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/dvruette/oasst-llama-13b-2-epochs
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [dvruette/oasst-llama-13b-2-epochs](https://huggingface.co/dvruette/oasst-llama-13b-2-epochs) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_dvruette__oasst-llama-13b-2-epochs",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-16T19:35:51.495118](https://huggingface.co/datasets/open-llm-leaderboard/details_dvruette__oasst-llama-13b-2-epochs/blob/main/results_2023-10-16T19-35-51.495118.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.04299496644295302,
"em_stderr": 0.002077330365557692,
"f1": 0.10806732382550314,
"f1_stderr": 0.0024113571936826787,
"acc": 0.42569171474168144,
"acc_stderr": 0.00971706467249931
},
"harness|drop|3": {
"em": 0.04299496644295302,
"em_stderr": 0.002077330365557692,
"f1": 0.10806732382550314,
"f1_stderr": 0.0024113571936826787
},
"harness|gsm8k|5": {
"acc": 0.08263836239575435,
"acc_stderr": 0.007584089220148114
},
"harness|winogrande|5": {
"acc": 0.7687450670876085,
"acc_stderr": 0.01185004012485051
}
}
```
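The top-level metrics above can also be read programmatically; the snippet below is a small illustration over an inlined excerpt of those results (not a live download):

```python
# Excerpt of the results JSON shown above, inlined as a dict.
results = {
    "all": {"acc": 0.42569171474168144},
    "harness|winogrande|5": {"acc": 0.7687450670876085,
                             "acc_stderr": 0.01185004012485051},
}

wino = results["harness|winogrande|5"]
print(round(wino["acc"], 4))  # → 0.7687
```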
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_dvruette__oasst-llama-13b-2-epochs
|
[
"region:us"
] |
2023-10-16T18:35:55+00:00
|
{"pretty_name": "Evaluation run of dvruette/oasst-llama-13b-2-epochs", "dataset_summary": "Dataset automatically created during the evaluation run of model [dvruette/oasst-llama-13b-2-epochs](https://huggingface.co/dvruette/oasst-llama-13b-2-epochs) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_dvruette__oasst-llama-13b-2-epochs\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-16T19:35:51.495118](https://huggingface.co/datasets/open-llm-leaderboard/details_dvruette__oasst-llama-13b-2-epochs/blob/main/results_2023-10-16T19-35-51.495118.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.04299496644295302,\n \"em_stderr\": 0.002077330365557692,\n \"f1\": 0.10806732382550314,\n \"f1_stderr\": 0.0024113571936826787,\n \"acc\": 0.42569171474168144,\n \"acc_stderr\": 0.00971706467249931\n },\n \"harness|drop|3\": {\n \"em\": 0.04299496644295302,\n \"em_stderr\": 0.002077330365557692,\n \"f1\": 0.10806732382550314,\n \"f1_stderr\": 0.0024113571936826787\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.08263836239575435,\n \"acc_stderr\": 0.007584089220148114\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7687450670876085,\n \"acc_stderr\": 0.01185004012485051\n }\n}\n```", "repo_url": "https://huggingface.co/dvruette/oasst-llama-13b-2-epochs", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_16T19_35_51.495118", "path": ["**/details_harness|drop|3_2023-10-16T19-35-51.495118.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-16T19-35-51.495118.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_16T19_35_51.495118", "path": ["**/details_harness|gsm8k|5_2023-10-16T19-35-51.495118.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-16T19-35-51.495118.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_16T19_35_51.495118", "path": ["**/details_harness|winogrande|5_2023-10-16T19-35-51.495118.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-16T19-35-51.495118.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_16T19_35_51.495118", "path": ["results_2023-10-16T19-35-51.495118.parquet"]}, {"split": "latest", "path": ["results_2023-10-16T19-35-51.495118.parquet"]}]}]}
|
2023-10-16T18:36:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of dvruette/oasst-llama-13b-2-epochs
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model dvruette/oasst-llama-13b-2-epochs on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
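A minimal sketch of that loading call, reconstructed from the config names listed in this card's metadata (downloading requires network access to the Hugging Face Hub, so the actual fetch is left commented out):

```python
repo_id = "open-llm-leaderboard/details_dvruette__oasst-llama-13b-2-epochs"
config_name = "harness_winogrande_5"  # also: harness_drop_3, harness_gsm8k_5, results

def load_details(repo_id: str, config_name: str, split: str = "train"):
    """Fetch one task's per-sample details from the Hub (network required)."""
    from datasets import load_dataset  # pip install datasets
    return load_dataset(repo_id, config_name, split=split)

# data = load_details(repo_id, config_name)  # "train" points at the latest run
```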
## Latest results
These are the latest results from run 2023-10-16T19:35:51.495118 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of dvruette/oasst-llama-13b-2-epochs",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model dvruette/oasst-llama-13b-2-epochs on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-16T19:35:51.495118(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of dvruette/oasst-llama-13b-2-epochs",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model dvruette/oasst-llama-13b-2-epochs on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-16T19:35:51.495118(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
27,
31,
175,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of dvruette/oasst-llama-13b-2-epochs## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model dvruette/oasst-llama-13b-2-epochs on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-16T19:35:51.495118(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
4460e93ca94bfb96840b706cb85682f91da86e14
|
# Dataset Card for "MixAtis_for_DecoderOnly"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chirunder/MixAtis_for_DecoderOnly
|
[
"region:us"
] |
2023-10-16T18:43:51+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14859636, "num_examples": 20003}], "download_size": 3586352, "dataset_size": 14859636}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-18T05:10:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "MixAtis_for_DecoderOnly"
More Information needed
|
[
"# Dataset Card for \"MixAtis_for_DecoderOnly\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"MixAtis_for_DecoderOnly\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"MixAtis_for_DecoderOnly\"\n\nMore Information needed"
] |
0707cd3468a177abd9e11b9ec6e77c6ff56db948
|
# Dataset Card for "MixSnips_for_DecoderOnly"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chirunder/MixSnips_for_DecoderOnly
|
[
"region:us"
] |
2023-10-16T18:44:19+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39494364, "num_examples": 50003}], "download_size": 13938820, "dataset_size": 39494364}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-18T05:10:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "MixSnips_for_DecoderOnly"
More Information needed
|
[
"# Dataset Card for \"MixSnips_for_DecoderOnly\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"MixSnips_for_DecoderOnly\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"MixSnips_for_DecoderOnly\"\n\nMore Information needed"
] |
ece0ca1c07b9df234a438d607db08f2e395fcd29
|
# AstroClassification and Redshifts Datasets
<!-- Provide a quick summary of the dataset. -->
This dataset was used for the AstroClassification and Redshifts tasks introduced in [Connect Later: Improving Fine-tuning for Robustness with Targeted Augmentations](). This is a dataset of simulated astronomical time-series (e.g., supernovae, active galactic nuclei), and the task is to classify the object type (AstroClassification) or predict the object's redshift (Redshifts).
- **Repository:** https://github.com/helenqu/connect-later
- **Paper:** will be updated
- **Point of Contact:** Helen Qu (<[email protected]>)
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
- **object_id**: unique object identifier
- **times_wv**: 2D array of shape (N, 2) containing the observation times (modified Julian days, MJD) and filter (wavelength in nm) for each observation, N=number of observations
- **lightcurve**: 2D array of shape (N, 2) containing the flux (arbitrary units) and flux error for each observation
- **label**: integer representing the class of the object (see below for details)
- **redshift**: redshift of the object
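As a sketch of what one record looks like, here is a synthetic example mirroring the schema above (all values are invented for illustration; real records come from loading the dataset itself):

```python
# A synthetic record mirroring the schema above (values are made up):
record = {
    "object_id": 615,
    # (observation time in MJD, filter wavelength in nm), one row per observation
    "times_wv": [[59580.0, 480.0], [59582.0, 620.0], [59584.0, 750.0]],
    # (flux in arbitrary units, flux error), one row per observation
    "lightcurve": [[12.3, 0.8], [15.1, 0.9], [9.7, 1.1]],
    "label": 11,     # integer class, e.g. 11 = SNIa (see "Object Types" below)
    "redshift": 0.42,
}

# Both 2D arrays share N = number of observations, and each row has 2 entries.
assert len(record["times_wv"]) == len(record["lightcurve"])
assert all(len(row) == 2 for row in record["times_wv"] + record["lightcurve"])
```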
## Dataset Creation
### Source Data
This is a modified version of the dataset from the 2018 Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC) Kaggle competition.
The original Kaggle competition can be found [here](https://www.kaggle.com/c/PLAsTiCC-2018). [This note](https://arxiv.org/abs/1810.00001) from the competition describes the dataset in detail. Astronomers may be interested in [this paper](https://arxiv.org/abs/1903.11756) describing the simulations used to generate the data.
- **Train**: 80% of the original PLAsTiCC training set augmented using the redshifting targeted augmentation described in the Connect Later paper
- **Validation**: Remaining 20% of the original PLAsTiCC training set, *not* augmented or modified
- **Test**: Subset of 10,000 objects randomly selected from the PLAsTiCC test set
### Object Types
```
0: microlens-single
1: tidal disruption event (TDE)
2: eclipsing binary (EB)
3: type II supernova (SNII)
4: peculiar type Ia supernova (SNIax)
5: Mira variable
6: type Ibc supernova (SNIbc)
7: kilonova (KN)
8: M-dwarf
9: peculiar type Ia supernova (SNIa-91bg)
10: active galactic nuclei (AGN)
11: type Ia supernova (SNIa)
12: RR-Lyrae (RRL)
13: superluminous supernova (SLSN-I)
14: 5 "anomalous" types that are not present in training set: microlens-binary, intermediate luminosity optical transient (ILOT), calcium-rich transient (CaRT), pair instability supernova (PISN), microlens-string
```
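The class indices above can be captured as a small lookup table (a sketch; the names below are abbreviations of the entries in the list above):

```python
# Label index -> class abbreviation, transcribed from the list above.
PLASTICC_CLASSES = {
    0: "microlens-single",
    1: "TDE",
    2: "EB",
    3: "SNII",
    4: "SNIax",
    5: "Mira",
    6: "SNIbc",
    7: "KN",
    8: "M-dwarf",
    9: "SNIa-91bg",
    10: "AGN",
    11: "SNIa",
    12: "RRL",
    13: "SLSN-I",
    14: "anomalous",  # 5 types absent from the training set (see above)
}

def class_name(label: int) -> str:
    """Map an integer label from this dataset to its class abbreviation."""
    return PLASTICC_CLASSES[label]
```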
## Citation
will be updated
|
helenqu/astro-classification-redshifts
|
[
"size_categories:100K<n<1M",
"license:mit",
"time series",
"astrophysics",
"pretraining",
"connect-later",
"arxiv:1810.00001",
"arxiv:1903.11756",
"region:us"
] |
2023-10-16T19:33:04+00:00
|
{"license": "mit", "size_categories": ["100K<n<1M"], "tags": ["time series", "astrophysics", "pretraining", "connect-later"]}
|
2023-10-16T20:33:53+00:00
|
[
"1810.00001",
"1903.11756"
] |
[] |
TAGS
#size_categories-100K<n<1M #license-mit #time series #astrophysics #pretraining #connect-later #arxiv-1810.00001 #arxiv-1903.11756 #region-us
|
# AstroClassification and Redshifts Datasets
This dataset was used for the AstroClassification and Redshifts tasks introduced in [Connect Later: Improving Fine-tuning for Robustness with Targeted Augmentations](). This is a dataset of simulated astronomical time-series (e.g., supernovae, active galactic nuclei), and the task is to classify the object type (AstroClassification) or predict the object's redshift (Redshifts).
- Repository: URL
- Paper: will be updated
- Point of Contact: Helen Qu (<helenqu@URL>)
## Dataset Structure
- object_id: unique object identifier
- times_wv: 2D array of shape (N, 2) containing the observation times (modified Julian days, MJD) and filter (wavelength in nm) for each observation, N=number of observations
- lightcurve: 2D array of shape (N, 2) containing the flux (arbitrary units) and flux error for each observation
- label: integer representing the class of the object (see below for details)
- redshift: redshift of the object
## Dataset Creation
### Source Data
This is a modified version of the dataset from the 2018 Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC) Kaggle competition.
The original Kaggle competition can be found here. This note from the competition describes the dataset in detail. Astronomers may be interested in this paper describing the simulations used to generate the data.
- Train: 80% of the original PLAsTiCC training set augmented using the redshifting targeted augmentation described in the Connect Later paper
- Validation: Remaining 20% of the original PLAsTiCC training set, *not* augmented or modified
- Test: Subset of 10,000 objects randomly selected from the PLAsTiCC test set
### Object Types
will be updated
|
[
"# AstroClassification and Redshifts Datasets\n\n\n\nThis dataset was used for the AstroClassification and Redshifts introduced in [Connect Later: Improving Fine-tuning for Robustness with Targeted Augmentations](). This is a dataset of simulated astronomical time-series (e.g., supernovae, active galactic nuclei), and the task is to classify the object type (AstroClassification) or predict the object's redshift (Redshifts).\n\n- Repository: URL\n- Paper: will be updated\n- Point of Contact: Helen Qu (<helenqu@URL>)",
"## Dataset Structure\n\n\n- object_id: unique object identifier\n- times_wv: 2D array of shape (N, 2) containing the observation times (modified Julian days, MJD) and filter (wavelength in nm) for each observation, N=number of observations\n- lightcurve: 2D array of shape (N, 2) containing the flux (arbitrary units) and flux error for each observation\n- label: integer representing the class of the object (see below for details)\n- redshift: redshift of the object",
"## Dataset Creation",
"### Source Data\n\nThis is a modified version of the dataset from the 2018 Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC) Kaggle competition\nThe original Kaggle competition can be found here. This note from the competition describes the dataset in detail. Astronomers may be interested in this paper describing the simulations used to generate the data.\n\n- Train: 80% of the original PLAsTiCC training set augmented using the redshifting targeted augmentation described in the Connect Later paper\n- Validation: Remaining 20% of the original PLAsTiCC training set, *not* augmented or modified\n- Test: Subset of 10,000 objects randomly selected from the PLAsTiCC test set",
"### Object Types\n\n\nwill be updated"
] |
[
"TAGS\n#size_categories-100K<n<1M #license-mit #time series #astrophysics #pretraining #connect-later #arxiv-1810.00001 #arxiv-1903.11756 #region-us \n",
"# AstroClassification and Redshifts Datasets\n\n\n\nThis dataset was used for the AstroClassification and Redshifts introduced in [Connect Later: Improving Fine-tuning for Robustness with Targeted Augmentations](). This is a dataset of simulated astronomical time-series (e.g., supernovae, active galactic nuclei), and the task is to classify the object type (AstroClassification) or predict the object's redshift (Redshifts).\n\n- Repository: URL\n- Paper: will be updated\n- Point of Contact: Helen Qu (<helenqu@URL>)",
"## Dataset Structure\n\n\n- object_id: unique object identifier\n- times_wv: 2D array of shape (N, 2) containing the observation times (modified Julian days, MJD) and filter (wavelength in nm) for each observation, N=number of observations\n- lightcurve: 2D array of shape (N, 2) containing the flux (arbitrary units) and flux error for each observation\n- label: integer representing the class of the object (see below for details)\n- redshift: redshift of the object",
"## Dataset Creation",
"### Source Data\n\nThis is a modified version of the dataset from the 2018 Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC) Kaggle competition\nThe original Kaggle competition can be found here. This note from the competition describes the dataset in detail. Astronomers may be interested in this paper describing the simulations used to generate the data.\n\n- Train: 80% of the original PLAsTiCC training set augmented using the redshifting targeted augmentation described in the Connect Later paper\n- Validation: Remaining 20% of the original PLAsTiCC training set, *not* augmented or modified\n- Test: Subset of 10,000 objects randomly selected from the PLAsTiCC test set",
"### Object Types\n\n\nwill be updated"
] |
[
54,
144,
129,
5,
163,
8
] |
[
"passage: TAGS\n#size_categories-100K<n<1M #license-mit #time series #astrophysics #pretraining #connect-later #arxiv-1810.00001 #arxiv-1903.11756 #region-us \n# AstroClassification and Redshifts Datasets\n\n\n\nThis dataset was used for the AstroClassification and Redshifts introduced in [Connect Later: Improving Fine-tuning for Robustness with Targeted Augmentations](). This is a dataset of simulated astronomical time-series (e.g., supernovae, active galactic nuclei), and the task is to classify the object type (AstroClassification) or predict the object's redshift (Redshifts).\n\n- Repository: URL\n- Paper: will be updated\n- Point of Contact: Helen Qu (<helenqu@URL>)## Dataset Structure\n\n\n- object_id: unique object identifier\n- times_wv: 2D array of shape (N, 2) containing the observation times (modified Julian days, MJD) and filter (wavelength in nm) for each observation, N=number of observations\n- lightcurve: 2D array of shape (N, 2) containing the flux (arbitrary units) and flux error for each observation\n- label: integer representing the class of the object (see below for details)\n- redshift: redshift of the object## Dataset Creation### Source Data\n\nThis is a modified version of the dataset from the 2018 Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC) Kaggle competition\nThe original Kaggle competition can be found here. This note from the competition describes the dataset in detail. Astronomers may be interested in this paper describing the simulations used to generate the data.\n\n- Train: 80% of the original PLAsTiCC training set augmented using the redshifting targeted augmentation described in the Connect Later paper\n- Validation: Remaining 20% of the original PLAsTiCC training set, *not* augmented or modified\n- Test: Subset of 10,000 objects randomly selected from the PLAsTiCC test set### Object Types\n\n\nwill be updated"
] |
c60d4913c8c9dab875113f42f28fc4cbff611ff3
|
# Dataset Card for "asr_xh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lucas-meyer/asr_xh
|
[
"region:us"
] |
2023-10-16T20:07:38+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3767023248.632, "num_examples": 2506}, {"name": "validation", "num_bytes": 287475823.0, "num_examples": 338}, {"name": "test", "num_bytes": 596246711.0, "num_examples": 627}], "download_size": 2040812826, "dataset_size": 4650745782.632}}
|
2023-10-16T20:54:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "asr_xh"
More Information needed
|
[
"# Dataset Card for \"asr_xh\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"asr_xh\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"asr_xh\"\n\nMore Information needed"
] |
bead1e4a2b6f11a920647f9aebae8a35701729c1
|
# Dataset Card for "flores_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
khalidalt/flores_text
|
[
"region:us"
] |
2023-10-16T20:45:54+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "URL", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "topic", "dtype": "string"}, {"name": "has_image", "dtype": "int32"}, {"name": "has_hyperlink", "dtype": "int32"}, {"name": "sentence_arb_Arab", "dtype": "string"}, {"name": "sentence_eng_Latn", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "dev", "num_bytes": 816795, "num_examples": 997}], "download_size": 435355, "dataset_size": 816795}, "configs": [{"config_name": "default", "data_files": [{"split": "dev", "path": "data/dev-*"}]}]}
|
2023-10-16T20:45:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "flores_text"
More Information needed
|
[
"# Dataset Card for \"flores_text\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"flores_text\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"flores_text\"\n\nMore Information needed"
] |
e83ff38e336441c83a1848c2ea4e46ab271e70fc
|
# Dataset Card for "seizure_eeg_train"
```python
from datasets import load_dataset
dataset_name = "JLB-JLB/seizure_eeg_train"
dataset = load_dataset(
    dataset_name,
    split="train",
)
display(dataset)
# create train and test/val split
train_testvalid = dataset.train_test_split(test_size=0.1, shuffle=True, seed=12071998)
display(train_testvalid)
# get the number of different labels in the train, test and validation set
display(train_testvalid["train"].features["label"])
display(train_testvalid["test"].features["label"].num_classes)
# check how many labels/number of classes
num_classes = len(set(train_testvalid["train"]['label']))
labels = train_testvalid["train"].features['label']
print(num_classes, labels)
display(train_testvalid["train"][0]['image'])
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
JLB-JLB/seizure_eeg_train
|
[
"region:us"
] |
2023-10-16T21:30:04+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "epoch", "dtype": "int64"}, {"name": "label_str", "dtype": {"class_label": {"names": {"0": "No Event", "1": "bckg", "2": "seiz"}}}}, {"name": "label", "dtype": {"class_label": {"names": {"0": "No Event", "1": "bckg", "2": "seiz"}}}}], "splits": [{"name": "train", "num_bytes": 23742147634.792, "num_examples": 814568}], "download_size": 24165936927, "dataset_size": 23742147634.792}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-17T12:32:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "seizure_eeg_train"
More Information needed
|
[
"# Dataset Card for \"seizure_eeg_train\"\n\n\n\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"seizure_eeg_train\"\n\n\n\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"seizure_eeg_train\"\n\n\n\n\nMore Information needed"
] |
0c7215e1ef6939ad8c737762d41cec5ce51be86a
|
# Dataset Card for Evaluation run of ajibawa-2023/scarlett-33b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/ajibawa-2023/scarlett-33b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [ajibawa-2023/scarlett-33b](https://huggingface.co/ajibawa-2023/scarlett-33b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ajibawa-2023__scarlett-33b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-16T22:35:33.432949](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__scarlett-33b/blob/main/results_2023-10-16T22-35-33.432949.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.3665058724832215,
"em_stderr": 0.004934593891762348,
"f1": 0.43883598993288797,
"f1_stderr": 0.004751167980569885,
"acc": 0.39800367765635275,
"acc_stderr": 0.008206189612832142
},
"harness|drop|3": {
"em": 0.3665058724832215,
"em_stderr": 0.004934593891762348,
"f1": 0.43883598993288797,
"f1_stderr": 0.004751167980569885
},
"harness|gsm8k|5": {
"acc": 0.028051554207733132,
"acc_stderr": 0.00454822953383635
},
"harness|winogrande|5": {
"acc": 0.7679558011049724,
"acc_stderr": 0.011864149691827933
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_ajibawa-2023__scarlett-33b
|
[
"region:us"
] |
2023-10-16T21:35:37+00:00
|
{"pretty_name": "Evaluation run of ajibawa-2023/scarlett-33b", "dataset_summary": "Dataset automatically created during the evaluation run of model [ajibawa-2023/scarlett-33b](https://huggingface.co/ajibawa-2023/scarlett-33b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ajibawa-2023__scarlett-33b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-16T22:35:33.432949](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__scarlett-33b/blob/main/results_2023-10-16T22-35-33.432949.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.3665058724832215,\n \"em_stderr\": 0.004934593891762348,\n \"f1\": 0.43883598993288797,\n \"f1_stderr\": 0.004751167980569885,\n \"acc\": 0.39800367765635275,\n \"acc_stderr\": 0.008206189612832142\n },\n \"harness|drop|3\": {\n \"em\": 0.3665058724832215,\n \"em_stderr\": 0.004934593891762348,\n \"f1\": 0.43883598993288797,\n \"f1_stderr\": 0.004751167980569885\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.028051554207733132,\n \"acc_stderr\": 0.00454822953383635\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7679558011049724,\n \"acc_stderr\": 0.011864149691827933\n }\n}\n```", "repo_url": "https://huggingface.co/ajibawa-2023/scarlett-33b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_16T22_35_33.432949", "path": ["**/details_harness|drop|3_2023-10-16T22-35-33.432949.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-16T22-35-33.432949.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_16T22_35_33.432949", "path": ["**/details_harness|gsm8k|5_2023-10-16T22-35-33.432949.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-16T22-35-33.432949.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_16T22_35_33.432949", "path": ["**/details_harness|winogrande|5_2023-10-16T22-35-33.432949.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-16T22-35-33.432949.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_16T22_35_33.432949", "path": ["results_2023-10-16T22-35-33.432949.parquet"]}, {"split": "latest", "path": ["results_2023-10-16T22-35-33.432949.parquet"]}]}]}
|
2023-10-16T21:35:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of ajibawa-2023/scarlett-33b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model ajibawa-2023/scarlett-33b on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-16T22:35:33.432949 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of ajibawa-2023/scarlett-33b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model ajibawa-2023/scarlett-33b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-16T22:35:33.432949(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of ajibawa-2023/scarlett-33b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model ajibawa-2023/scarlett-33b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-16T22:35:33.432949(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
19,
31,
167,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of ajibawa-2023/scarlett-33b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model ajibawa-2023/scarlett-33b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-16T22:35:33.432949(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
149869cf3369c1d6eba85ab4f25ab265af248578
|
# Dataset Card for "science_qa_input_label_prep"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
KonstantyM/science_qa_input_label_prep
|
[
"region:us"
] |
2023-10-16T21:49:09+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14836491177, "num_examples": 4281664}], "download_size": 8551603528, "dataset_size": 14836491177}}
|
2023-10-16T22:09:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "science_qa_input_label_prep"
More Information needed
|
[
"# Dataset Card for \"science_qa_input_label_prep\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"science_qa_input_label_prep\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"science_qa_input_label_prep\"\n\nMore Information needed"
] |
3a110062534879bacba25ce710ea065f2d0d851a
|
# Dataset Card for "8f19fe4c"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/8f19fe4c
|
[
"region:us"
] |
2023-10-16T21:58:42+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 198, "num_examples": 10}], "download_size": 1374, "dataset_size": 198}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-16T21:58:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "8f19fe4c"
More Information needed
|
[
"# Dataset Card for \"8f19fe4c\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"8f19fe4c\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"8f19fe4c\"\n\nMore Information needed"
] |
0e7e5df41884d3feb97720779b79afe3666b5355
|
# Dataset Card for "c3d9b753"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/c3d9b753
|
[
"region:us"
] |
2023-10-16T22:04:27+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 202, "num_examples": 10}], "download_size": 1389, "dataset_size": 202}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-16T22:04:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "c3d9b753"
More Information needed
|
[
"# Dataset Card for \"c3d9b753\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"c3d9b753\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"c3d9b753\"\n\nMore Information needed"
] |
fc67c3faa80507a3d3ba42360e6cf9824ba748ce
|
# Dataset Card for Evaluation run of Gryphe/MythoMax-L2-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Gryphe/MythoMax-L2-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Gryphe/MythoMax-L2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Gryphe__MythoMax-L2-13b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-16T23:19:17.622542](https://huggingface.co/datasets/open-llm-leaderboard/details_Gryphe__MythoMax-L2-13b/blob/main/results_2023-10-16T23-19-17.622542.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.13433305369127516,
"em_stderr": 0.00349225954139751,
"f1": 0.20734689597315364,
"f1_stderr": 0.003631918882586114,
"acc": 0.42119517249261446,
"acc_stderr": 0.010012961564157645
},
"harness|drop|3": {
"em": 0.13433305369127516,
"em_stderr": 0.00349225954139751,
"f1": 0.20734689597315364,
"f1_stderr": 0.003631918882586114
},
"harness|gsm8k|5": {
"acc": 0.09021986353297953,
"acc_stderr": 0.00789153710844994
},
"harness|winogrande|5": {
"acc": 0.7521704814522494,
"acc_stderr": 0.01213438601986535
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Gryphe__MythoMax-L2-13b
|
[
"region:us"
] |
2023-10-16T22:19:21+00:00
|
{"pretty_name": "Evaluation run of Gryphe/MythoMax-L2-13b", "dataset_summary": "Dataset automatically created during the evaluation run of model [Gryphe/MythoMax-L2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Gryphe__MythoMax-L2-13b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-16T23:19:17.622542](https://huggingface.co/datasets/open-llm-leaderboard/details_Gryphe__MythoMax-L2-13b/blob/main/results_2023-10-16T23-19-17.622542.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.13433305369127516,\n \"em_stderr\": 0.00349225954139751,\n \"f1\": 0.20734689597315364,\n \"f1_stderr\": 0.003631918882586114,\n \"acc\": 0.42119517249261446,\n \"acc_stderr\": 0.010012961564157645\n },\n \"harness|drop|3\": {\n \"em\": 0.13433305369127516,\n \"em_stderr\": 0.00349225954139751,\n \"f1\": 0.20734689597315364,\n \"f1_stderr\": 0.003631918882586114\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09021986353297953,\n \"acc_stderr\": 0.00789153710844994\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7521704814522494,\n \"acc_stderr\": 0.01213438601986535\n }\n}\n```", "repo_url": "https://huggingface.co/Gryphe/MythoMax-L2-13b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_16T23_19_17.622542", "path": ["**/details_harness|drop|3_2023-10-16T23-19-17.622542.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-16T23-19-17.622542.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_16T23_19_17.622542", "path": ["**/details_harness|gsm8k|5_2023-10-16T23-19-17.622542.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-16T23-19-17.622542.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_16T23_19_17.622542", "path": ["**/details_harness|winogrande|5_2023-10-16T23-19-17.622542.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-16T23-19-17.622542.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_16T23_19_17.622542", "path": ["results_2023-10-16T23-19-17.622542.parquet"]}, {"split": "latest", "path": ["results_2023-10-16T23-19-17.622542.parquet"]}]}]}
|
2023-10-16T22:19:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Gryphe/MythoMax-L2-13b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Gryphe/MythoMax-L2-13b on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-16T23:19:17.622542 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Gryphe/MythoMax-L2-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Gryphe/MythoMax-L2-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-16T23:19:17.622542(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Gryphe/MythoMax-L2-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Gryphe/MythoMax-L2-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-16T23:19:17.622542(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
20,
31,
168,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Gryphe/MythoMax-L2-13b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Gryphe/MythoMax-L2-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-16T23:19:17.622542(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
f762eb7960b546f694ecf8a5f774a0e4879d0832
|
# Dataset Card for "mmlu_all_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liyucheng/mmlu_test
|
[
"region:us"
] |
2023-10-16T22:28:24+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "A", "dtype": "string"}, {"name": "B", "dtype": "string"}, {"name": "C", "dtype": "string"}, {"name": "D", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "in-context examples", "dtype": "string"}, {"name": "testing input", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "task", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 90449824, "num_examples": 13987}], "download_size": 14673865, "dataset_size": 90449824}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-12-13T07:41:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "mmlu_all_test"
More Information needed
|
[
"# Dataset Card for \"mmlu_all_test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"mmlu_all_test\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"mmlu_all_test\"\n\nMore Information needed"
] |
5d85ccc5bca67b0c4f6718e47140e64144a7e35c
|
# Dataset Card for "6k-longform-summ"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vgoldberg/6k-longform-summ
|
[
"region:us"
] |
2023-10-16T22:33:39+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 331108572.999906, "num_examples": 6711}], "download_size": 108573322, "dataset_size": 331108572.999906}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-16T22:33:47+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "6k-longform-summ"
More Information needed
|
[
"# Dataset Card for \"6k-longform-summ\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"6k-longform-summ\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"6k-longform-summ\"\n\nMore Information needed"
] |
7240386ac4a12b57b9e3583bd750405e7dbec483
|
# Dataset Card for "longtest_benchmark"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nuprl-staging/longtest_benchmark
|
[
"region:us"
] |
2023-10-16T22:36:15+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "target_tests", "dtype": "string"}, {"name": "canonical_prompt", "dtype": "string"}, {"name": "canonical_solution", "dtype": "string"}, {"name": "size", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 623100, "num_examples": 24}], "download_size": 0, "dataset_size": 623100}}
|
2023-10-16T22:39:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "longtest_benchmark"
More Information needed
|
[
"# Dataset Card for \"longtest_benchmark\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"longtest_benchmark\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"longtest_benchmark\"\n\nMore Information needed"
] |
0aaeca10baf848264e7a01a79af85983042c0c1a
|
# Dataset Card for "chai-dpo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chargoddard/chai-dpo
|
[
"region:us"
] |
2023-10-16T22:50:30+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}, {"config_name": "unrolled", "data_files": [{"split": "train", "path": "unrolled/train-*"}]}], "dataset_info": [{"config_name": "default", "features": [{"name": "history", "list": [{"name": "sender", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "rejected", "sequence": "string"}, {"name": "accepted", "dtype": "string"}, {"name": "thumbs_up", "dtype": "bool"}, {"name": "submission_id", "dtype": "string"}, {"name": "model_name", "dtype": "string"}, {"name": "bot_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 223007429, "num_examples": 113263}], "download_size": 60868294, "dataset_size": 223007429}, {"config_name": "unrolled", "features": [{"name": "history", "list": [{"name": "sender", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "rejected", "dtype": "string"}, {"name": "accepted", "dtype": "string"}, {"name": "thumbs_up", "dtype": "bool"}, {"name": "submission_id", "dtype": "string"}, {"name": "model_name", "dtype": "string"}, {"name": "bot_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 361172645, "num_examples": 198719}], "download_size": 61083616, "dataset_size": 361172645}]}
|
2023-10-16T22:54:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chai-dpo"
More Information needed
|
[
"# Dataset Card for \"chai-dpo\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chai-dpo\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chai-dpo\"\n\nMore Information needed"
] |
9e3d805747e1d8d02f1aafe26093610bf0baf3a0
|
# Dataset Card for "hdbscan_generated_sample_64"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
iara-project/hdbscan_generated_sample_64
|
[
"region:us"
] |
2023-10-16T23:15:51+00:00
|
{"dataset_info": {"features": [{"name": "news_id", "dtype": "string"}, {"name": "embeddings", "sequence": "float64"}, {"name": "sentence", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "labels", "dtype": "int64"}, {"name": "probs", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 11420251.188968917, "num_examples": 1216}], "download_size": 8611519, "dataset_size": 11420251.188968917}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-16T23:15:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "hdbscan_generated_sample_64"
More Information needed
|
[
"# Dataset Card for \"hdbscan_generated_sample_64\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"hdbscan_generated_sample_64\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"hdbscan_generated_sample_64\"\n\nMore Information needed"
] |
d6cd52ac8417a05241a4a76fe3e5a60bbc5d5311
|
### Description
- **medical_pretrain_tw.json**: This dataset contains a total of 360,000 entries of medical encyclopedia data sourced from FreedomIntelligence/huatuo_encyclopedia_qa. These entries are a combination of questions and answers, forming text fields with coherent sentences. The dataset is intended for pre-training purposes to inject medical knowledge.
- **medical_book_zh.json**: This dataset includes 8,475 entries sourced from text data in medical textbooks. The data source is [here](https://github.com/jind11/MedQA), and the original dataset was obtained from Google Drive. It has been processed to split long paragraphs into small sections, each containing a maximum of 2048 characters.
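The splitting step described above (long paragraphs broken into sections of at most 2048 characters) can be sketched as follows. This is a minimal illustration that assumes splitting on blank-line paragraph boundaries; the actual preprocessing script is not included in this card, so the function name and exact strategy are hypothetical:

```python
def chunk_paragraphs(text: str, max_len: int = 2048) -> list[str]:
    """Split text into sections of at most max_len characters,
    preferring to break on paragraph boundaries (blank lines)."""
    sections: list[str] = []
    current = ""
    for para in text.split("\n\n"):
        # Paragraphs longer than max_len on their own are hard-split.
        while len(para) > max_len:
            sections.append(para[:max_len])
            para = para[max_len:]
        candidate = f"{current}\n\n{para}" if current else para
        if len(candidate) <= max_len:
            current = candidate          # keep accumulating paragraphs
        else:
            sections.append(current)     # flush the full section
            current = para
    if current:
        sections.append(current)
    return sections
```

Any comparable chunker works; the only property the dataset relies on is that every resulting section stays within the 2048-character limit.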
### Data Format
- **medical_pretrain_tw.json**: JSON format with text fields.
- **medical_book_zh.json**: JSON format with text fields.
### License
Please refer to the respective data sources for licensing information.
### Dataset Citation
If you use this dataset in your research or work, please consider citing the original data sources as specified above.
|
DavidLanz/medical_pretrain
|
[
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:zh",
"language:en",
"license:apache-2.0",
"text-generation",
"region:us"
] |
2023-10-16T23:46:34+00:00
|
{"language": ["zh", "en"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "pretty_name": "medical", "tags": ["text-generation"]}
|
2023-10-16T23:49:44+00:00
|
[] |
[
"zh",
"en"
] |
TAGS
#task_categories-text-generation #size_categories-1M<n<10M #language-Chinese #language-English #license-apache-2.0 #text-generation #region-us
|
### Description
- medical_pretrain_tw.json: This dataset contains a total of 360,000 entries sourced from medical encyclopedia data from FreedomIntelligence/huatuo_encyclopedia_qa. These entries are a combination of questions and answers, forming text fields with coherent sentences. The dataset is intended for pre-training purposes to inject medical knowledge.
- medical_book_zh.json: This dataset includes 8,475 entries sourced from text data in medical textbooks. The data source is here, and the original dataset was obtained from Google Drive. It has been processed to split long paragraphs into small sections, each containing a maximum of 2048 characters.
### Data Format
- medical_pretrain_tw.URL: JSON format with text fields.
- medical_book_zh.json: JSON format with text fields.
### License
Please refer to the respective data sources for licensing information.
### Dataset Citation
If you use this dataset in your research or work, please consider citing the original data sources as specified above.
|
[
"### Description\n\n- medical_pretrain_tw.json: This dataset contains a total of 360,000 entries sourced from medical encyclopedia data from FreedomIntelligence/huatuo_encyclopedia_qa. These entries are a combination of questions and answers, forming text fields with coherent sentences. The dataset is intended for pre-training purposes to inject medical knowledge.\n\n- medical_book_zh.json: This dataset includes 8,475 entries sourced from text data in medical textbooks. The data source is here, and the original dataset was obtained from Google Drive. It has been processed to split long paragraphs into small sections, each containing a maximum of 2048 characters.",
"### Data Format\n\n- medical_pretrain_tw.URL: JSON format with text fields.\n\n- medical_book_zh.json: JSON format with text fields.",
"### License\n\nPlease refer to the respective data sources for licensing information.",
"### Dataset Citation\n\nIf you use this dataset in your research or work, please consider citing the original data sources as specified above."
] |
[
"TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-Chinese #language-English #license-apache-2.0 #text-generation #region-us \n",
"### Description\n\n- medical_pretrain_tw.json: This dataset contains a total of 360,000 entries sourced from medical encyclopedia data from FreedomIntelligence/huatuo_encyclopedia_qa. These entries are a combination of questions and answers, forming text fields with coherent sentences. The dataset is intended for pre-training purposes to inject medical knowledge.\n\n- medical_book_zh.json: This dataset includes 8,475 entries sourced from text data in medical textbooks. The data source is here, and the original dataset was obtained from Google Drive. It has been processed to split long paragraphs into small sections, each containing a maximum of 2048 characters.",
"### Data Format\n\n- medical_pretrain_tw.URL: JSON format with text fields.\n\n- medical_book_zh.json: JSON format with text fields.",
"### License\n\nPlease refer to the respective data sources for licensing information.",
"### Dataset Citation\n\nIf you use this dataset in your research or work, please consider citing the original data sources as specified above."
] |
[
51,
158,
40,
16,
31
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-Chinese #language-English #license-apache-2.0 #text-generation #region-us \n### Description\n\n- medical_pretrain_tw.json: This dataset contains a total of 360,000 entries sourced from medical encyclopedia data from FreedomIntelligence/huatuo_encyclopedia_qa. These entries are a combination of questions and answers, forming text fields with coherent sentences. The dataset is intended for pre-training purposes to inject medical knowledge.\n\n- medical_book_zh.json: This dataset includes 8,475 entries sourced from text data in medical textbooks. The data source is here, and the original dataset was obtained from Google Drive. It has been processed to split long paragraphs into small sections, each containing a maximum of 2048 characters.### Data Format\n\n- medical_pretrain_tw.URL: JSON format with text fields.\n\n- medical_book_zh.json: JSON format with text fields.### License\n\nPlease refer to the respective data sources for licensing information.### Dataset Citation\n\nIf you use this dataset in your research or work, please consider citing the original data sources as specified above."
] |
bae2b13b808bff213bcb56c50fb3405699ff2f00
|
# Dataset Card for "biomedqa_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pbaoo2705/biomedqa_processed
|
[
"region:us"
] |
2023-10-16T23:49:20+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "answer", "dtype": "string"}, {"name": "start_positions", "dtype": "int64"}, {"name": "end_positions", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 7624914, "num_examples": 2992}], "download_size": 1320802, "dataset_size": 7624914}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-17T08:59:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biomedqa_processed"
More Information needed
|
[
"# Dataset Card for \"biomedqa_processed\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biomedqa_processed\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biomedqa_processed\"\n\nMore Information needed"
] |
96ebe3372bea4588167276f12578fb531ba19ba2
|
# Dataset Card for "biomedqa_processed_eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pbaoo2705/biomedqa_processed_eval
|
[
"region:us"
] |
2023-10-16T23:49:22+00:00
|
{"dataset_info": {"features": [{"name": "Unnamed: 0.1", "dtype": "int64"}, {"name": "Unnamed: 0", "dtype": "int64"}, {"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "answer", "dtype": "string"}, {"name": "start_positions", "dtype": "int64"}, {"name": "end_positions", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 347583, "num_examples": 100}], "download_size": 124060, "dataset_size": 347583}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-17T08:59:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "biomedqa_processed_eval"
More Information needed
|
[
"# Dataset Card for \"biomedqa_processed_eval\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"biomedqa_processed_eval\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"biomedqa_processed_eval\"\n\nMore Information needed"
] |
fe6c15b28119cc9247b64398b361d0dc74db80a8
|
## Chinese Medical Dialogue Dataset
### Description
- **medical_reward_tw.json**: This dataset consists of 4,000 entries derived from the Chinese Medical Dialogue dataset (Toyhom/Chinese-medical-dialogue-data). The questions in this dataset are randomly selected from the Chinese Medical Dialogue dataset. The "response_chosen" field contains responses from medical professionals in the Chinese Medical Dialogue dataset, while the "response_rejected" field contains responses from the herbal medicine model SCIR-HI/Huatuo-Llama-Med-Chinese.
### Data Format
- **medical_reward_tw.json**: JSON format with fields including "question," "response_chosen," and "response_rejected."
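A record with these three fields maps directly onto the (prompt, preferred response, rejected response) triples that reward-model training consumes. The sketch below uses an invented example record for illustration; it is not taken from the dataset itself.

```python
import json

# Hypothetical record mirroring the three fields described above;
# the content is invented for demonstration purposes.
record = {
    "question": "What should I do about a mild fever?",
    "response_chosen": "Rest, stay hydrated, and monitor your temperature.",
    "response_rejected": "Ignore it; fevers always resolve on their own.",
}

# Reward-model training typically consumes (prompt, better, worse) triples.
prompt, better, worse = (
    record["question"],
    record["response_chosen"],
    record["response_rejected"],
)
print(json.dumps(record, ensure_ascii=False, indent=2))
```

Here `response_chosen` plays the role of the professional answer and `response_rejected` the model-generated one, matching the field semantics given in the description.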
### License
Please refer to the respective data sources for licensing information.
### Dataset Citation
If you use this dataset in your research or work, please consider citing the original data sources as specified above.
|
DavidLanz/medical_reward
|
[
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:zh",
"language:en",
"license:apache-2.0",
"text-generation",
"region:us"
] |
2023-10-16T23:54:02+00:00
|
{"language": ["zh", "en"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "pretty_name": "medical", "tags": ["text-generation"]}
|
2023-10-16T23:55:06+00:00
|
[] |
[
"zh",
"en"
] |
TAGS
#task_categories-text-generation #size_categories-1M<n<10M #language-Chinese #language-English #license-apache-2.0 #text-generation #region-us
|
## Chinese Medical Dialogue Dataset
### Description
- medical_reward_tw.json: This dataset consists of 4,000 entries derived from the Chinese Medical Dialogue dataset (Toyhom/Chinese-medical-dialogue-data). The questions in this dataset are randomly selected from the Chinese Medical Dialogue dataset. The "response_chosen" field contains responses from medical professionals in the Chinese Medical Dialogue dataset, while the "response_rejected" field contains responses from the herbal medicine model SCIR-HI/Huatuo-Llama-Med-Chinese.
### Data Format
- medical_reward_tw.json: JSON format with fields including "question," "response_chosen," and "response_rejected."
### License
Please refer to the respective data sources for licensing information.
### Dataset Citation
If you use this dataset in your research or work, please consider citing the original data sources as specified above.
|
[
"## Chinese Medical Dialogue Dataset",
"### Description\n\n- medical_reward_tw.json: This dataset consists of 4,000 entries derived from the Chinese Medical Dialogue dataset (Toyhom/Chinese-medical-dialogue-data). The questions in this dataset are randomly selected from the Chinese Medical Dialogue dataset. The \"response_chosen\" field contains responses from medical professionals in the Chinese Medical Dialogue dataset, while the \"response_rejected\" field contains responses from the herbal medicine model SCIR-HI/Huatuo-Llama-Med-Chinese.",
"### Data Format\n\n- medical_reward_tw.json: JSON format with fields including \"question,\" \"response_chosen,\" and \"response_rejected.\"",
"### License\n\nPlease refer to the respective data sources for licensing information.",
"### Dataset Citation\n\nIf you use this dataset in your research or work, please consider citing the original data sources as specified above."
] |
[
"TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-Chinese #language-English #license-apache-2.0 #text-generation #region-us \n",
"## Chinese Medical Dialogue Dataset",
"### Description\n\n- medical_reward_tw.json: This dataset consists of 4,000 entries derived from the Chinese Medical Dialogue dataset (Toyhom/Chinese-medical-dialogue-data). The questions in this dataset are randomly selected from the Chinese Medical Dialogue dataset. The \"response_chosen\" field contains responses from medical professionals in the Chinese Medical Dialogue dataset, while the \"response_rejected\" field contains responses from the herbal medicine model SCIR-HI/Huatuo-Llama-Med-Chinese.",
"### Data Format\n\n- medical_reward_tw.json: JSON format with fields including \"question,\" \"response_chosen,\" and \"response_rejected.\"",
"### License\n\nPlease refer to the respective data sources for licensing information.",
"### Dataset Citation\n\nIf you use this dataset in your research or work, please consider citing the original data sources as specified above."
] |
[
51,
7,
131,
43,
16,
31
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-Chinese #language-English #license-apache-2.0 #text-generation #region-us \n## Chinese Medical Dialogue Dataset### Description\n\n- medical_reward_tw.json: This dataset consists of 4,000 entries derived from the Chinese Medical Dialogue dataset (Toyhom/Chinese-medical-dialogue-data). The questions in this dataset are randomly selected from the Chinese Medical Dialogue dataset. The \"response_chosen\" field contains responses from medical professionals in the Chinese Medical Dialogue dataset, while the \"response_rejected\" field contains responses from the herbal medicine model SCIR-HI/Huatuo-Llama-Med-Chinese.### Data Format\n\n- medical_reward_tw.json: JSON format with fields including \"question,\" \"response_chosen,\" and \"response_rejected.\"### License\n\nPlease refer to the respective data sources for licensing information.### Dataset Citation\n\nIf you use this dataset in your research or work, please consider citing the original data sources as specified above."
] |
aa63b3e49c21894e1a361582820b063e6f96fd24
|
# Dataset Card for Evaluation run of MBZUAI/LaMini-GPT-774M
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/MBZUAI/LaMini-GPT-774M
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [MBZUAI/LaMini-GPT-774M](https://huggingface.co/MBZUAI/LaMini-GPT-774M) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_MBZUAI__LaMini-GPT-774M",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-17T01:05:23.378180](https://huggingface.co/datasets/open-llm-leaderboard/details_MBZUAI__LaMini-GPT-774M/blob/main/results_2023-10-17T01-05-23.378180.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.03544463087248322,
"em_stderr": 0.0018935573437954087,
"f1": 0.12509857382550346,
"f1_stderr": 0.0025549304231766066,
"acc": 0.2829518547750592,
"acc_stderr": 0.006964941277847027
},
"harness|drop|3": {
"em": 0.03544463087248322,
"em_stderr": 0.0018935573437954087,
"f1": 0.12509857382550346,
"f1_stderr": 0.0025549304231766066
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5659037095501184,
"acc_stderr": 0.013929882555694054
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_MBZUAI__LaMini-GPT-774M
|
[
"region:us"
] |
2023-10-17T00:05:26+00:00
|
{"pretty_name": "Evaluation run of MBZUAI/LaMini-GPT-774M", "dataset_summary": "Dataset automatically created during the evaluation run of model [MBZUAI/LaMini-GPT-774M](https://huggingface.co/MBZUAI/LaMini-GPT-774M) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_MBZUAI__LaMini-GPT-774M\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-17T01:05:23.378180](https://huggingface.co/datasets/open-llm-leaderboard/details_MBZUAI__LaMini-GPT-774M/blob/main/results_2023-10-17T01-05-23.378180.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.03544463087248322,\n \"em_stderr\": 0.0018935573437954087,\n \"f1\": 0.12509857382550346,\n \"f1_stderr\": 0.0025549304231766066,\n \"acc\": 0.2829518547750592,\n \"acc_stderr\": 0.006964941277847027\n },\n \"harness|drop|3\": {\n \"em\": 0.03544463087248322,\n \"em_stderr\": 0.0018935573437954087,\n \"f1\": 0.12509857382550346,\n \"f1_stderr\": 0.0025549304231766066\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5659037095501184,\n \"acc_stderr\": 0.013929882555694054\n }\n}\n```", "repo_url": "https://huggingface.co/MBZUAI/LaMini-GPT-774M", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_17T01_05_23.378180", "path": ["**/details_harness|drop|3_2023-10-17T01-05-23.378180.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-17T01-05-23.378180.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_17T01_05_23.378180", "path": ["**/details_harness|gsm8k|5_2023-10-17T01-05-23.378180.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-17T01-05-23.378180.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_17T01_05_23.378180", "path": ["**/details_harness|winogrande|5_2023-10-17T01-05-23.378180.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-17T01-05-23.378180.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_17T01_05_23.378180", "path": ["results_2023-10-17T01-05-23.378180.parquet"]}, {"split": "latest", "path": ["results_2023-10-17T01-05-23.378180.parquet"]}]}]}
|
2023-10-17T00:05:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of MBZUAI/LaMini-GPT-774M
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model MBZUAI/LaMini-GPT-774M on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-17T01:05:23.378180 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of MBZUAI/LaMini-GPT-774M",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model MBZUAI/LaMini-GPT-774M on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T01:05:23.378180(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of MBZUAI/LaMini-GPT-774M",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model MBZUAI/LaMini-GPT-774M on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T01:05:23.378180(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
21,
31,
169,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of MBZUAI/LaMini-GPT-774M## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model MBZUAI/LaMini-GPT-774M on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-17T01:05:23.378180(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
6e530db1df398d8b40030176367f5646bfb89296
|
# Dataset Card for "hdbscan_generated_sample_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
iara-project/hdbscan_generated_sample_8
|
[
"region:us"
] |
2023-10-17T00:14:03+00:00
|
{"dataset_info": {"features": [{"name": "news_id", "dtype": "string"}, {"name": "embeddings", "sequence": "float64"}, {"name": "sentence", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "labels", "dtype": "int64"}, {"name": "probs", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 1429357.7779391108, "num_examples": 152}], "download_size": 1226866, "dataset_size": 1429357.7779391108}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-17T00:14:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "hdbscan_generated_sample_8"
More Information needed
|
[
"# Dataset Card for \"hdbscan_generated_sample_8\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"hdbscan_generated_sample_8\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"hdbscan_generated_sample_8\"\n\nMore Information needed"
] |
0239794850e984f8904d6582ce28aecb3714b9d2
|
# Dataset Card for "A_QthenA_4096"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Minglii/A_QthenA_4096
|
[
"region:us"
] |
2023-10-17T00:24:48+00:00
|
{"dataset_info": {"features": [{"name": "data", "struct": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "id", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 359881748, "num_examples": 52002}], "download_size": 119164182, "dataset_size": 359881748}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-17T00:31:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "A_QthenA_4096"
More Information needed
|
[
"# Dataset Card for \"A_QthenA_4096\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"A_QthenA_4096\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"A_QthenA_4096\"\n\nMore Information needed"
] |
9476ec842e9b07c8b468543bef1da22bf74e43e9
|
# Dataset Card for "SECOND_KOWIKI_RETRIEVE_300"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jjonhwa/SECOND_KOWIKI_RETRIEVE_300
|
[
"region:us"
] |
2023-10-17T00:46:30+00:00
|
{"dataset_info": {"features": [{"name": "ctxs", "list": [{"name": "score", "dtype": "float64"}, {"name": "text", "dtype": "string"}]}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 181303604, "num_examples": 15504}], "download_size": 95733044, "dataset_size": 181303604}}
|
2023-10-17T00:46:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "SECOND_KOWIKI_RETRIEVE_300"
More Information needed
|
[
"# Dataset Card for \"SECOND_KOWIKI_RETRIEVE_300\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"SECOND_KOWIKI_RETRIEVE_300\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"SECOND_KOWIKI_RETRIEVE_300\"\n\nMore Information needed"
] |
11de7ccdcc96c5cdc86d0c33afeaf6a975d97a18
|
# Dataset Card for "textbooks_grounded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
open-phi/textbooks_grounded
|
[
"region:us"
] |
2023-10-17T01:35:07+00:00
|
{"dataset_info": {"features": [{"name": "topic", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "concepts", "sequence": "string"}, {"name": "outline", "sequence": "string"}, {"name": "markdown", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9661917, "num_examples": 85}], "download_size": 3742034, "dataset_size": 9661917}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-17T01:35:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "textbooks_grounded"
More Information needed
|
[
"# Dataset Card for \"textbooks_grounded\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"textbooks_grounded\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"textbooks_grounded\"\n\nMore Information needed"
] |
0f33f3a1a227147d2ef60c1579f821a390dffa2d
|
# Dataset Card for Evaluation run of Aeala/Enterredaas-33b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Aeala/Enterredaas-33b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Aeala/Enterredaas-33b](https://huggingface.co/Aeala/Enterredaas-33b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Aeala__Enterredaas-33b",
"harness_winogrande_5",
split="train")
```
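The timestamped split names follow a simple convention: the run timestamp with the characters that are not allowed in split names replaced by underscores. A minimal sketch of that mapping (an illustrative helper, not part of any official tooling):

```python
def timestamp_to_split(ts: str) -> str:
    """Map a run timestamp such as '2023-10-17T04:24:16.762833' to the
    split name used inside each configuration (the config listing names
    the corresponding split '2023_10_17T04_24_16.762833')."""
    # '-' and ':' are simply replaced with '_'
    return ts.replace("-", "_").replace(":", "_")

print(timestamp_to_split("2023-10-17T04:24:16.762833"))
# -> 2023_10_17T04_24_16.762833
```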
## Latest results
These are the [latest results from run 2023-10-17T04:24:16.762833](https://huggingface.co/datasets/open-llm-leaderboard/details_Aeala__Enterredaas-33b/blob/main/results_2023-10-17T04-24-16.762833.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.001572986577181208,
"em_stderr": 0.00040584511324177344,
"f1": 0.06232487416107388,
"f1_stderr": 0.0013590473373823627,
"acc": 0.47496578750374735,
"acc_stderr": 0.010824257783821654
},
"harness|drop|3": {
"em": 0.001572986577181208,
"em_stderr": 0.00040584511324177344,
"f1": 0.06232487416107388,
"f1_stderr": 0.0013590473373823627
},
"harness|gsm8k|5": {
"acc": 0.16224412433661864,
"acc_stderr": 0.010155130880393526
},
"harness|winogrande|5": {
"acc": 0.7876874506708761,
"acc_stderr": 0.011493384687249784
}
}
```
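As a quick sanity check, the top-level "all" accuracy appears to be the plain mean of the two per-task accuracies. A minimal sketch of that post-processing (the dict literal below simply mirrors the figures shown above):

```python
# Aggregate results copied from the JSON above
results = {
    "harness|drop|3": {"em": 0.001572986577181208, "f1": 0.06232487416107388},
    "harness|gsm8k|5": {"acc": 0.16224412433661864},
    "harness|winogrande|5": {"acc": 0.7876874506708761},
}

# Mean accuracy over the tasks that report an `acc` metric
accs = [metrics["acc"] for metrics in results.values() if "acc" in metrics]
mean_acc = sum(accs) / len(accs)

print(mean_acc)  # matches the reported "all" acc of 0.47496578750374735
```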
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Aeala__Enterredaas-33b
|
[
"region:us"
] |
2023-10-17T03:24:21+00:00
|
{"pretty_name": "Evaluation run of Aeala/Enterredaas-33b", "dataset_summary": "Dataset automatically created during the evaluation run of model [Aeala/Enterredaas-33b](https://huggingface.co/Aeala/Enterredaas-33b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Aeala__Enterredaas-33b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-17T04:24:16.762833](https://huggingface.co/datasets/open-llm-leaderboard/details_Aeala__Enterredaas-33b/blob/main/results_2023-10-17T04-24-16.762833.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.001572986577181208,\n \"em_stderr\": 0.00040584511324177344,\n \"f1\": 0.06232487416107388,\n \"f1_stderr\": 0.0013590473373823627,\n \"acc\": 0.47496578750374735,\n \"acc_stderr\": 0.010824257783821654\n },\n \"harness|drop|3\": {\n \"em\": 0.001572986577181208,\n \"em_stderr\": 0.00040584511324177344,\n \"f1\": 0.06232487416107388,\n \"f1_stderr\": 0.0013590473373823627\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.16224412433661864,\n \"acc_stderr\": 0.010155130880393526\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7876874506708761,\n \"acc_stderr\": 0.011493384687249784\n }\n}\n```", "repo_url": "https://huggingface.co/Aeala/Enterredaas-33b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_17T04_24_16.762833", "path": ["**/details_harness|drop|3_2023-10-17T04-24-16.762833.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-17T04-24-16.762833.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_17T04_24_16.762833", "path": ["**/details_harness|gsm8k|5_2023-10-17T04-24-16.762833.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-17T04-24-16.762833.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_17T04_24_16.762833", "path": ["**/details_harness|winogrande|5_2023-10-17T04-24-16.762833.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-17T04-24-16.762833.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_17T04_24_16.762833", "path": ["results_2023-10-17T04-24-16.762833.parquet"]}, {"split": "latest", "path": ["results_2023-10-17T04-24-16.762833.parquet"]}]}]}
|
2023-10-17T03:24:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Aeala/Enterredaas-33b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Aeala/Enterredaas-33b on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-17T04:24:16.762833 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Aeala/Enterredaas-33b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Aeala/Enterredaas-33b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T04:24:16.762833(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Aeala/Enterredaas-33b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Aeala/Enterredaas-33b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T04:24:16.762833(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
19,
31,
167,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Aeala/Enterredaas-33b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Aeala/Enterredaas-33b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-17T04:24:16.762833(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
143355a4f9a79d4ab7f0cd4a30e3631ad1f3231d
|
# Dataset Card for "6a8bc094"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/6a8bc094
|
[
"region:us"
] |
2023-10-17T03:30:55+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 208, "num_examples": 10}], "download_size": 1383, "dataset_size": 208}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-17T03:30:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "6a8bc094"
More Information needed
|
[
"# Dataset Card for \"6a8bc094\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"6a8bc094\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"6a8bc094\"\n\nMore Information needed"
] |
62fb7e94688f7eb33827558c63a2bc4a1d05b5bf
|
**➢** **Product Name – [Matrix Portable Heater](https://www.facebook.com/people/Matrix-Portable-Heater/61552593236365/)**
**➢ Rating -** ★★★★★ (4.9)
**➢ Where to Buy (Sale Live) – [Click Here](https://www.glitco.com/matrix-portable-heater/)**
### [Matrix Portable Heater: Your Compact Solution for Efficient Heating](https://www.glitco.com/matrix-portable-heater/)
**Brand:** [Matrix Industrial Products](https://www.facebook.com/people/Matrix-Portable-Heater/61552593236365/)
**Special feature:** Portable
**Power source:** Battery Powered
**Heating method:** Convection
**Mounting type:** Floor Mount
**Burner type:** Radiant
**Item weight:** 3.24 Pounds
**Heat output:** 5200 British Thermal Units
[Matrix Portable Heater image](https://www.glitco.com/matrix-portable-heater/)
### **[Matrix Portable Heater Reviews](https://www.glitco.com/matrix-portable-heater/)**
The Matrix Portable Heater is revolutionizing the way we think about personal heating solutions. In an age where portability and energy efficiency are paramount, this sleek and compact heater offers a reliable, energy-efficient, and convenient way to stay warm in any room of your home. In this article, we'll explore the many advantages of the **[Matrix Portable Heater](https://www.facebook.com/people/Matrix-Portable-Heater/61552593236365/)** and why it's becoming a popular choice for those seeking warmth and comfort without the need for a bulky, traditional space heater.
### **Features & Details**
* **Fast Heating-** This handy heater comes with a ceramic heating element that creates energy-efficient warmth quickly, in 3 seconds. The SaiEllin heater room warmer is excellent for close-range warming.
* **Overheat protection-** The PTC ceramic element is self-regulating, with an over-heat protection design for thermal control. It's an air heater with a built-in fan to circulate air.
* **Compact Design-** It's compact enough to take anywhere; great for a trailer, or give it to the kids for their college dorm apartments. The mini room heater has adjustable temperature and speed.
* A small room heater for both warm rooms and cold winter days, it comes with an LED screen and buttons to help set the temperature in both Celsius and Fahrenheit and to adjust the fan speed.
### **Benefits Of [Matrix Portable Heater](https://www.glitco.com/matrix-portable-heater/)**
Adjustable temperature: These heaters frequently come with temperature controls, allowing you to set the desired level of warmth.
Safety features: Numerous portable heaters have safety features like overheat protection and tip-over switches to help prevent accidents.
Energy efficiency: Some models are designed to be energy-efficient, which can help reduce heating costs.
Portability: These heaters are easy to move around, thanks to their compact size and sometimes built-in handles.
Different heating methods: Portable heaters can use various heating methods, including ceramic, radiant, or convection heating.
[Matrix Portable Heater image](https://www.glitco.com/matrix-portable-heater/)
### **Compact Design**
One of the standout features of the [**Matrix Portable Heater**](https://www.glitco.com/matrix-portable-heater/) is its compact design. Measuring just a few inches in height and width, this heater is easily transportable, making it ideal for use in various rooms within your home. Whether you're looking to heat your bedroom, living room, or even a home office, the Matrix Portable Heater can seamlessly fit into your space without being obtrusive. Its sleek design allows it to blend into any decor, making it an unobtrusive addition to your home.
### **Efficient Heating**
Matrix Portable Heater doesn't compromise on heating efficiency. Despite its small size, it packs a punch when it comes to producing warmth. With multiple heating settings and an adjustable thermostat, you can customize the temperature to your liking. This heater employs innovative heating technology to distribute warmth evenly, ensuring that every corner of your room is heated effectively. You'll no longer have to huddle near a traditional, inefficient space heater to stay warm.
### **Energy Efficiency**
Energy efficiency is a significant concern for today's environmentally-conscious consumers. The **[Matrix Portable Heater](https://www.glitco.com/matrix-portable-heater/)** is designed with this in mind. It utilizes the latest energy-saving technology, ensuring that you can enjoy a warm and comfortable space without the guilt of high energy bills. By efficiently heating the area you need, it eliminates the waste associated with heating an entire room. The heater also features a built-in timer, which allows you to schedule heating periods, further reducing energy consumption.
### **Safety Features**
Safety is a top priority when it comes to any heating device, and the **[Matrix Portable Heater](https://www.facebook.com/people/Matrix-Portable-Heater/61552593236365/)** doesn't disappoint. It comes equipped with various safety features, including overheat protection and a tip-over switch. These mechanisms ensure that the heater automatically shuts off if it overheats or gets accidentally knocked over. This not only prevents accidents but also provides peace of mind for users.
### **Noise-Free Operation**
Traditional space heaters can be noisy and disruptive, making it difficult to concentrate, relax, or sleep in a quiet environment. The **[Matrix Portable Heater](https://www.glitco.com/matrix-portable-heater/)** is designed with silent operation in mind. It operates virtually noise-free, allowing you to enjoy the warmth it provides without any distractions. Whether you're working, reading, or watching TV, this heater won't disrupt your peace and quiet.
[Matrix Portable Heater image](https://www.glitco.com/matrix-portable-heater/)
### **Conclusion**
The **[Matrix Portable Heater](https://www.glitco.com/matrix-portable-heater/)** offers a modern and efficient solution for personal heating needs. Its compact design, energy efficiency, and safety features make it a top choice for anyone seeking a convenient and effective way to stay warm during the colder months. Whether you want to heat your bedroom, living room, or workspace, this portable heater offers a versatile and unobtrusive solution. Say goodbye to bulky space heaters and hello to the future of heating with the Matrix Portable Heater. Stay warm, stay comfortable, and stay energy-efficient.
|
matrixportable/matrixportableheaterbenefit
|
[
"region:us"
] |
2023-10-17T04:07:20+00:00
|
{}
|
2023-10-17T04:08:33+00:00
|
[] |
[] |
TAGS
#region-us
|
Product Name – Matrix Portable Heater
Rating - (4.9)
Where to Buy (Sale Live) – Click Here
### Matrix Portable Heater: Your Compact Solution for Efficient Heating
Brand: Matrix Industrial Products
Special feature: Portable
Power source: Battery Powered
Heating method: Convection
Mounting type: Floor Mount
Burner type: Radiant
Item weight: 3.24 Pounds
Heat output: 5200 British Thermal Units
](URL
### Compact Design
One of the standout features of the Matrix Portable Heater is its compact design. Measuring just a few inches in height and width, this heater is easily transportable, making it ideal for use in various rooms within your home. Whether you're looking to heat your bedroom, living room, or even a home office, the Matrix Portable Heater can seamlessly fit into your space without being obtrusive. Its sleek design allows it to blend into any decor, making it an unobtrusive addition to your home.
### Efficient Heating
Matrix Portable Heater doesn't compromise on heating efficiency. Despite its small size, it packs a punch when it comes to producing warmth. With multiple heating settings and an adjustable thermostat, you can customize the temperature to your liking. This heater employs innovative heating technology to distribute warmth evenly, ensuring that every corner of your room is heated effectively. You'll no longer have to huddle near a traditional, inefficient space heater to stay warm.
### Energy Efficiency
Energy efficiency is a significant concern for today's environmentally-conscious consumers. The Matrix Portable Heater is designed with this in mind. It utilizes the latest energy-saving technology, ensuring that you can enjoy a warm and comfortable space without the guilt of high energy bills. By efficiently heating the area you need, it eliminates the waste associated with heating an entire room. The heater also features a built-in timer, which allows you to schedule heating periods, further reducing energy consumption.
### Safety Features
Safety is a top priority when it comes to any heating device, and the Matrix Portable Heater doesn't disappoint. It comes equipped with various safety features, including overheat protection and a tip-over switch. These mechanisms ensure that the heater automatically shuts off if it overheats or gets accidentally knocked over. This not only prevents accidents but also provides peace of mind for users.
### Noise-Free Operation
Traditional space heaters can be noisy and disruptive, making it difficult to concentrate, relax, or sleep in a quiet environment. The Matrix Portable Heater is designed with silent operation in mind. It operates virtually noise-free, allowing you to enjoy the warmth it provides without any distractions. Whether you're working, reading, or watching TV, this heater won't disrupt your peace and quiet.
](URL",
"### Matrix Portable Heater Reviews\n\nThe Matrix Portable Heater is revolutionizing the way we think about personal heating solutions. In an age where portability and energy efficiency are paramount, this sleek and compact heater offers a reliable, energy-efficient, and convenient way to stay warm in any room of your home. In this article, we'll explore the many advantages of the Matrix Portable Heater and why it's becoming a popular choice for those seeking warmth and comfort without the need for a bulky, traditional space heater.",
"### Features & Details\n\n* Fast Heating- This Handy Heater comes with Ceramic heating element creates energy effective warmth snappily in 3 seconds. SaiEllin Heater room warmer is excellent for close range warming.\n\n* Overheat protection PTC ceramic element is tone- regulating With the design ofover-heat protection for thermal control. It's an air cracker heater which has an inbuilt addict to throw air.\n\n* Compact Design- It's compact enough to take anywhere, Great for the trailer or give it to the kiddies for their council dorm apartments. Mini Room Heater has malleable temperature and speed.\n\n* Megahit for hot room and cracker for downtime is a small room heater which comes with LED Screen and buttons to help set the Temperature both in Celsius and Fahrenheit and help acclimate addict speed.",
"### Benefits Of Matrix Portable Heater\n\nmalleable Temperature These heaters frequently come with temperature controls, allowing you to set the asked position of warmth.\n\nSafety Features numerous movable heaters have safety features like overheat protection and tip- over switches to help accidents.\n\nEnergy Efficiency Some models are designed to be energy-effective, which can help reduce heating costs.\n\nPortability These heaters are easy to move around, thanks to their compact size and occasionally erected- in handles.\n\nDifferent Heating styles movable heaters can use colorful heating styles, including ceramic, radiant, or convection heating.\n\n](URL",
"### Conclusion\n\nThe Matrix Portable Heater offers a modern and efficient solution for personal heating needs. Its compact design, energy efficiency, and safety features make it a top choice for anyone seeking a convenient and effective way to stay warm during the colder months. Whether you want to heat your bedroom, living room, or workspace, this portable heater offers a versatile and unobtrusive solution. Say goodbye to bulky space heaters and hello to the future of heating with the Matrix Portable Heater. Stay warm, stay comfortable, and stay energy-efficient."
] |
[
"TAGS\n#region-us \n",
"### Matrix Portable Heater: Your Compact Solution for Efficient Heating\n\nBrand: Matrix Industrial Products\n\nSpecial feature: Portable\n\nPower source: Battery Powered\n\nHeating method: Convection\n\nMounting type: Floor Mount\n\nBurner type: Radiant\n\nItem weight: 3.24 Pounds\n\nHeat output: 5200 British Thermal Units\n\n](URL",
"### Compact Design\n\nOne of the standout features of the Matrix Portable Heater is its compact design. Measuring just a few inches in height and width, this heater is easily transportable, making it ideal for use in various rooms within your home. Whether you're looking to heat your bedroom, living room, or even a home office, the Matrix Portable Heater can seamlessly fit into your space without being obtrusive. Its sleek design allows it to blend into any decor, making it an unobtrusive addition to your home.",
"### Efficient Heating\n\nMatrix Portable Heater doesn't compromise on heating efficiency. Despite its small size, it packs a punch when it comes to producing warmth. With multiple heating settings and an adjustable thermostat, you can customize the temperature to your liking. This heater employs innovative heating technology to distribute warmth evenly, ensuring that every corner of your room is heated effectively. You'll no longer have to huddle near a traditional, inefficient space heater to stay warm.",
"### Energy Efficiency\n\nEnergy efficiency is a significant concern for today's environmentally-conscious consumers. The Matrix Portable Heater is designed with this in mind. It utilizes the latest energy-saving technology, ensuring that you can enjoy a warm and comfortable space without the guilt of high energy bills. By efficiently heating the area you need, it eliminates the waste associated with heating an entire room. The heater also features a built-in timer, which allows you to schedule heating periods, further reducing energy consumption.",
"### Safety Features\n\nSafety is a top priority when it comes to any heating device, and the Matrix Portable Heater doesn't disappoint. It comes equipped with various safety features, including overheat protection and a tip-over switch. These mechanisms ensure that the heater automatically shuts off if it overheats or gets accidentally knocked over. This not only prevents accidents but also provides peace of mind for users.",
"### Noise-Free Operation\n\nTraditional space heaters can be noisy and disruptive, making it difficult to concentrate, relax, or sleep in a quiet environment. The Matrix Portable Heater is designed with silent operation in mind. It operates virtually noise-free, allowing you to enjoy the warmth it provides without any distractions. Whether you're working, reading, or watching TV, this heater won't disrupt your peace and quiet.\n\n](URL### Matrix Portable Heater Reviews\n\nThe Matrix Portable Heater is revolutionizing the way we think about personal heating solutions. In an age where portability and energy efficiency are paramount, this sleek and compact heater offers a reliable, energy-efficient, and convenient way to stay warm in any room of your home. In this article, we'll explore the many advantages of the Matrix Portable Heater and why it's becoming a popular choice for those seeking warmth and comfort without the need for a bulky, traditional space heater.### Features & Details\n\n* Fast Heating- This Handy Heater comes with Ceramic heating element creates energy effective warmth snappily in 3 seconds. SaiEllin Heater room warmer is excellent for close range warming.\n\n* Overheat protection PTC ceramic element is tone- regulating With the design ofover-heat protection for thermal control. It's an air cracker heater which has an inbuilt addict to throw air.\n\n* Compact Design- It's compact enough to take anywhere, Great for the trailer or give it to the kiddies for their council dorm apartments. Mini Room Heater has malleable temperature and speed.\n\n* Megahit for hot room and cracker for downtime is a small room heater which comes with LED Screen and buttons to help set the Temperature both in Celsius and Fahrenheit and help acclimate addict speed.",
"passage: ### Benefits Of Matrix Portable Heater\n\nmalleable Temperature These heaters frequently come with temperature controls, allowing you to set the asked position of warmth.\n\nSafety Features numerous movable heaters have safety features like overheat protection and tip- over switches to help accidents.\n\nEnergy Efficiency Some models are designed to be energy-effective, which can help reduce heating costs.\n\nPortability These heaters are easy to move around, thanks to their compact size and occasionally erected- in handles.\n\nDifferent Heating styles movable heaters can use colorful heating styles, including ceramic, radiant, or convection heating.\n\n](URL"
] |
# Dataset Card for Evaluation run of meta-llama/Llama-2-70b-chat-hf
### Dataset Summary
Dataset automatically created during the evaluation run of model [meta-llama/Llama-2-70b-chat-hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_meta-llama__Llama-2-70b-chat-hf",
"harness_winogrande_5",
split="train")
```
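The timestamped split names follow a simple convention derived from the run timestamp: the hyphens and colons of the ISO timestamp become underscores (e.g. run `2023-10-17T05:07:42.486452` is exposed as split `2023_10_17T05_07_42.486452`), while the results filenames keep the date's hyphens and swap colons for hyphens. A minimal sketch of that mapping (the helper names are illustrative, not part of the `datasets` API):

```python
def run_timestamp_to_split(ts: str) -> str:
    # "2023-10-17T05:07:42.486452" -> "2023_10_17T05_07_42.486452"
    return ts.replace("-", "_").replace(":", "_")


def run_timestamp_to_results_file(ts: str) -> str:
    # "2023-10-17T05:07:42.486452" -> "results_2023-10-17T05-07-42.486452.json"
    return f"results_{ts.replace(':', '-')}.json"


print(run_timestamp_to_split("2023-10-17T05:07:42.486452"))
print(run_timestamp_to_results_file("2023-10-17T05:07:42.486452"))
```

This is handy when you want to address one specific run instead of the moving `"latest"` split.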
## Latest results
These are the [latest results from run 2023-10-17T05:07:42.486452](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Llama-2-70b-chat-hf/blob/main/results_2023-10-17T05-07-42.486452.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.040373322147651006,
"em_stderr": 0.0020157564185176837,
"f1": 0.1050272651006715,
"f1_stderr": 0.0023756238577676155,
"acc": 0.5359600711595986,
"acc_stderr": 0.011658939983913113
},
"harness|drop|3": {
"em": 0.040373322147651006,
"em_stderr": 0.0020157564185176837,
"f1": 0.1050272651006715,
"f1_stderr": 0.0023756238577676155
},
"harness|gsm8k|5": {
"acc": 0.266868840030326,
"acc_stderr": 0.012183780551887957
},
"harness|winogrande|5": {
"acc": 0.8050513022888713,
"acc_stderr": 0.011134099415938268
}
}
```
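As a quick consistency check, the top-level `"acc"` in this blob works out to the unweighted mean of the per-task accuracies (GSM8K and Winogrande); a small sketch over the numbers above:

```python
# Per-task accuracies copied from the results blob above.
results = {
    "harness|gsm8k|5": {"acc": 0.266868840030326},
    "harness|winogrande|5": {"acc": 0.8050513022888713},
}

accs = [task["acc"] for task in results.values()]
mean_acc = sum(accs) / len(accs)
print(mean_acc)  # ≈ 0.5359600711595986, matching the "all" section
```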
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_meta-llama__Llama-2-70b-chat-hf
|
[
"region:us"
] |
2023-10-17T04:07:46+00:00
|
{"pretty_name": "Evaluation run of meta-llama/Llama-2-70b-chat-hf", "dataset_summary": "Dataset automatically created during the evaluation run of model [meta-llama/Llama-2-70b-chat-hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_meta-llama__Llama-2-70b-chat-hf\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-17T05:07:42.486452](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Llama-2-70b-chat-hf/blob/main/results_2023-10-17T05-07-42.486452.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.040373322147651006,\n \"em_stderr\": 0.0020157564185176837,\n \"f1\": 0.1050272651006715,\n \"f1_stderr\": 0.0023756238577676155,\n \"acc\": 0.5359600711595986,\n \"acc_stderr\": 0.011658939983913113\n },\n \"harness|drop|3\": {\n \"em\": 0.040373322147651006,\n \"em_stderr\": 0.0020157564185176837,\n \"f1\": 0.1050272651006715,\n \"f1_stderr\": 0.0023756238577676155\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.266868840030326,\n \"acc_stderr\": 0.012183780551887957\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8050513022888713,\n \"acc_stderr\": 0.011134099415938268\n }\n}\n```", "repo_url": "https://huggingface.co/meta-llama/Llama-2-70b-chat-hf", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_17T05_07_42.486452", "path": ["**/details_harness|drop|3_2023-10-17T05-07-42.486452.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-17T05-07-42.486452.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_17T05_07_42.486452", "path": ["**/details_harness|gsm8k|5_2023-10-17T05-07-42.486452.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-17T05-07-42.486452.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_17T05_07_42.486452", "path": ["**/details_harness|winogrande|5_2023-10-17T05-07-42.486452.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-17T05-07-42.486452.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_17T05_07_42.486452", "path": ["results_2023-10-17T05-07-42.486452.parquet"]}, {"split": "latest", "path": ["results_2023-10-17T05-07-42.486452.parquet"]}]}]}
|
2023-10-17T04:07:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of meta-llama/Llama-2-70b-chat-hf
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model meta-llama/Llama-2-70b-chat-hf on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-17T05:07:42.486452 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of meta-llama/Llama-2-70b-chat-hf",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model meta-llama/Llama-2-70b-chat-hf on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T05:07:42.486452(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of meta-llama/Llama-2-70b-chat-hf",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model meta-llama/Llama-2-70b-chat-hf on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T05:07:42.486452(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
24,
31,
172,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of meta-llama/Llama-2-70b-chat-hf## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model meta-llama/Llama-2-70b-chat-hf on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-17T05:07:42.486452(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
9e8901f5495f8dfd122baae8f91e6e9018bd8d54
|
# Dataset Card for "eval_tag_nq_test_v12_middle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/eval_tag_nq_test_v12_middle
|
[
"region:us"
] |
2023-10-17T04:21:17+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "id", "dtype": "string"}, {"name": "titles", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5226, "num_examples": 10}, {"name": "validation", "num_bytes": 1980914, "num_examples": 3610}], "download_size": 1119335, "dataset_size": 1986140}}
|
2023-10-17T04:21:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "eval_tag_nq_test_v12_middle"
More Information needed
|
[
"# Dataset Card for \"eval_tag_nq_test_v12_middle\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"eval_tag_nq_test_v12_middle\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"eval_tag_nq_test_v12_middle\"\n\nMore Information needed"
] |
f3cd044347bdbf696a1670f91b02f8c5982339a5
|
# Dataset Card for "random-seals-Ant-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
HumanCompatibleAI/random-seals-Ant-v1
|
[
"region:us"
] |
2023-10-17T04:33:58+00:00
|
{"dataset_info": {"features": [{"name": "obs", "sequence": {"sequence": "float64"}}, {"name": "acts", "sequence": {"sequence": "float32"}}, {"name": "infos", "sequence": "string"}, {"name": "terminal", "dtype": "bool"}, {"name": "rews", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 167669182, "num_examples": 100}], "download_size": 73426727, "dataset_size": 167669182}}
|
2023-10-17T04:36:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "random-seals-Ant-v1"
More Information needed
|
[
"# Dataset Card for \"random-seals-Ant-v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"random-seals-Ant-v1\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"random-seals-Ant-v1\"\n\nMore Information needed"
] |
f3b6325518d395f43682059977c9c10843c76aa3
|
# Dataset Card for "random-seals-HalfCheetah-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
HumanCompatibleAI/random-seals-HalfCheetah-v1
|
[
"region:us"
] |
2023-10-17T04:37:48+00:00
|
{"dataset_info": {"features": [{"name": "obs", "sequence": {"sequence": "float64"}}, {"name": "acts", "sequence": {"sequence": "float32"}}, {"name": "infos", "sequence": "string"}, {"name": "terminal", "dtype": "bool"}, {"name": "rews", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 109003139, "num_examples": 100}], "download_size": 46825772, "dataset_size": 109003139}}
|
2023-10-17T04:38:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "random-seals-HalfCheetah-v1"
More Information needed
|
[
"# Dataset Card for \"random-seals-HalfCheetah-v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"random-seals-HalfCheetah-v1\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"random-seals-HalfCheetah-v1\"\n\nMore Information needed"
] |
8b5876d7c64ca5abcdcc7bd8454a24a2ec4251ed
|
# Dataset Card for "random-seals-Hopper-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
HumanCompatibleAI/random-seals-Hopper-v1
|
[
"region:us"
] |
2023-10-17T04:39:04+00:00
|
{"dataset_info": {"features": [{"name": "obs", "sequence": {"sequence": "float64"}}, {"name": "acts", "sequence": {"sequence": "float32"}}, {"name": "infos", "sequence": "string"}, {"name": "terminal", "dtype": "bool"}, {"name": "rews", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 68885506, "num_examples": 100}], "download_size": 31758126, "dataset_size": 68885506}}
|
2023-10-17T04:39:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "random-seals-Hopper-v1"
More Information needed
|
[
"# Dataset Card for \"random-seals-Hopper-v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"random-seals-Hopper-v1\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"random-seals-Hopper-v1\"\n\nMore Information needed"
] |
a642c0425db479497d8e1770a532b43bc5facf48
|
# Dataset Card for "random-seals-Swimmer-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
HumanCompatibleAI/random-seals-Swimmer-v1
|
[
"region:us"
] |
2023-10-17T04:40:37+00:00
|
{"dataset_info": {"features": [{"name": "obs", "sequence": {"sequence": "float64"}}, {"name": "acts", "sequence": {"sequence": "float32"}}, {"name": "infos", "sequence": "string"}, {"name": "terminal", "dtype": "bool"}, {"name": "rews", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 138046530, "num_examples": 100}], "download_size": 36347782, "dataset_size": 138046530}}
|
2023-10-17T04:41:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "random-seals-Swimmer-v1"
More Information needed
|
[
"# Dataset Card for \"random-seals-Swimmer-v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"random-seals-Swimmer-v1\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"random-seals-Swimmer-v1\"\n\nMore Information needed"
] |
e0fa534fdcef9a198579e40e68e8227422abf565
|
# Dataset Card for "random-seals-Walker2d-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
HumanCompatibleAI/random-seals-Walker2d-v1
|
[
"region:us"
] |
2023-10-17T04:41:57+00:00
|
{"dataset_info": {"features": [{"name": "obs", "sequence": {"sequence": "float64"}}, {"name": "acts", "sequence": {"sequence": "float32"}}, {"name": "infos", "sequence": "string"}, {"name": "terminal", "dtype": "bool"}, {"name": "rews", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 81303050, "num_examples": 100}], "download_size": 41495120, "dataset_size": 81303050}}
|
2023-10-17T04:42:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "random-seals-Walker2d-v1"
More Information needed
|
[
"# Dataset Card for \"random-seals-Walker2d-v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"random-seals-Walker2d-v1\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"random-seals-Walker2d-v1\"\n\nMore Information needed"
] |
fa20f84ebbd1d42816c8798ade83451206321545
|
# Dataset Card for "ksc500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dhcppc0/ksc500
|
[
"region:us"
] |
2023-10-17T04:52:17+00:00
|
{"dataset_info": {"features": [{"name": "array", "sequence": "float32"}, {"name": "sampling_rate", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 23715007, "num_examples": 50}, {"name": "train", "num_bytes": 248781258, "num_examples": 500}], "download_size": 273740677, "dataset_size": 272496265}}
|
2023-10-17T04:56:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ksc500"
More Information needed
|
[
"# Dataset Card for \"ksc500\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ksc500\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ksc500\"\n\nMore Information needed"
] |
d1a0148da1ef71c32dcb49cefb937ae6f0be1aa8
|
# Dataset Card for "squad_title_v4_train_30_eval_10_deduped"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/squad_title_v4_train_30_eval_10_deduped
|
[
"region:us"
] |
2023-10-17T05:06:25+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 300178.52173913043, "num_examples": 199}, {"name": "validation", "num_bytes": 50807, "num_examples": 50}], "download_size": 98978, "dataset_size": 350985.52173913043}}
|
2023-10-17T05:06:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "squad_title_v4_train_30_eval_10_deduped"
More Information needed
|
[
"# Dataset Card for \"squad_title_v4_train_30_eval_10_deduped\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_title_v4_train_30_eval_10_deduped\"\n\nMore Information needed"
] |
[
6,
31
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_title_v4_train_30_eval_10_deduped\"\n\nMore Information needed"
] |
53a525857dc462fe3c6d33bc2b90412dc699b8a3
|
MRebel Dataset adapted to sharegpt
|
artivus/rebel-sharegpt
|
[
"license:apache-2.0",
"region:us"
] |
2023-10-17T05:31:46+00:00
|
{"license": "apache-2.0"}
|
2023-10-28T09:36:30+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
MRebel Dataset adapted to sharegpt
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
677d3b26891ed2d70508f1d3ba4c36fc09ebc860
|
# Dataset Card for Evaluation run of bofenghuang/vigogne-33b-instruct
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/bofenghuang/vigogne-33b-instruct
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [bofenghuang/vigogne-33b-instruct](https://huggingface.co/bofenghuang/vigogne-33b-instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_bofenghuang__vigogne-33b-instruct",
"harness_winogrande_5",
split="train")
```
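The config names passed to `load_dataset` mirror the pipe-delimited task ids that appear in the results blobs (e.g. `harness|winogrande|5` corresponds to the `harness_winogrande_5` config). A sketch of that mapping (the helper name is illustrative):

```python
def task_id_to_config(task_id: str) -> str:
    # "harness|winogrande|5" -> "harness_winogrande_5"
    return task_id.replace("|", "_")


print(task_id_to_config("harness|winogrande|5"))  # harness_winogrande_5
```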
## Latest results
These are the [latest results from run 2023-10-17T06:48:17.282592](https://huggingface.co/datasets/open-llm-leaderboard/details_bofenghuang__vigogne-33b-instruct/blob/main/results_2023-10-17T06-48-17.282592.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.4092911073825503,
"em_stderr": 0.005035499534676373,
"f1": 0.47988779362416334,
"f1_stderr": 0.004806379711128169,
"acc": 0.4499623916853611,
"acc_stderr": 0.010072884519008809
},
"harness|drop|3": {
"em": 0.4092911073825503,
"em_stderr": 0.005035499534676373,
"f1": 0.47988779362416334,
"f1_stderr": 0.004806379711128169
},
"harness|gsm8k|5": {
"acc": 0.11144806671721001,
"acc_stderr": 0.008668021353794433
},
"harness|winogrande|5": {
"acc": 0.7884767166535123,
"acc_stderr": 0.011477747684223187
}
}
```
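When comparing several of these evaluation runs, it can help to flatten the nested blob above into single-level `task/metric` keys; a minimal sketch (the helper name is illustrative):

```python
def flatten_results(results: dict) -> dict:
    # {"harness|gsm8k|5": {"acc": ...}} -> {"harness|gsm8k|5/acc": ...}
    return {
        f"{task}/{metric}": value
        for task, metrics in results.items()
        for metric, value in metrics.items()
    }


sample = {
    "harness|gsm8k|5": {"acc": 0.11144806671721001},
    "harness|winogrande|5": {"acc": 0.7884767166535123},
}
print(flatten_results(sample))
```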
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_bofenghuang__vigogne-33b-instruct
|
[
"region:us"
] |
2023-10-17T05:48:21+00:00
|
{"pretty_name": "Evaluation run of bofenghuang/vigogne-33b-instruct", "dataset_summary": "Dataset automatically created during the evaluation run of model [bofenghuang/vigogne-33b-instruct](https://huggingface.co/bofenghuang/vigogne-33b-instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bofenghuang__vigogne-33b-instruct\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-17T06:48:17.282592](https://huggingface.co/datasets/open-llm-leaderboard/details_bofenghuang__vigogne-33b-instruct/blob/main/results_2023-10-17T06-48-17.282592.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.4092911073825503,\n \"em_stderr\": 0.005035499534676373,\n \"f1\": 0.47988779362416334,\n \"f1_stderr\": 0.004806379711128169,\n \"acc\": 0.4499623916853611,\n \"acc_stderr\": 0.010072884519008809\n },\n \"harness|drop|3\": {\n \"em\": 0.4092911073825503,\n \"em_stderr\": 0.005035499534676373,\n \"f1\": 0.47988779362416334,\n \"f1_stderr\": 0.004806379711128169\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.11144806671721001,\n \"acc_stderr\": 0.008668021353794433\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7884767166535123,\n \"acc_stderr\": 0.011477747684223187\n }\n}\n```", "repo_url": "https://huggingface.co/bofenghuang/vigogne-33b-instruct", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_17T06_48_17.282592", "path": ["**/details_harness|drop|3_2023-10-17T06-48-17.282592.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-17T06-48-17.282592.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_17T06_48_17.282592", "path": ["**/details_harness|gsm8k|5_2023-10-17T06-48-17.282592.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-17T06-48-17.282592.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_17T06_48_17.282592", "path": ["**/details_harness|winogrande|5_2023-10-17T06-48-17.282592.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-17T06-48-17.282592.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_17T06_48_17.282592", "path": ["results_2023-10-17T06-48-17.282592.parquet"]}, {"split": "latest", "path": ["results_2023-10-17T06-48-17.282592.parquet"]}]}]}
|
2023-10-17T05:48:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of bofenghuang/vigogne-33b-instruct
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model bofenghuang/vigogne-33b-instruct on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-17T06:48:17.282592 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of bofenghuang/vigogne-33b-instruct",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model bofenghuang/vigogne-33b-instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T06:48:17.282592(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of bofenghuang/vigogne-33b-instruct",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model bofenghuang/vigogne-33b-instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T06:48:17.282592(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of bofenghuang/vigogne-33b-instruct## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model bofenghuang/vigogne-33b-instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-17T06:48:17.282592(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
87cd60ac1ad8483f788bd50c4f59848b1861e966
|
# Dataset Card for "CSAW_dense_30_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Phaedrus/CSAW_dense_30_train
|
[
"region:us"
] |
2023-10-17T05:49:50+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 308923702.0, "num_examples": 264}], "download_size": 45246242, "dataset_size": 308923702.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-17T05:50:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "CSAW_dense_30_train"
More Information needed
|
[
"# Dataset Card for \"CSAW_dense_30_train\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"CSAW_dense_30_train\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"CSAW_dense_30_train\"\n\nMore Information needed"
] |
fcf0a065868e0bb8d2ae2aa9897d0abf85481d4b
|
# Dataset Card for "squad_title_v4_train_30_eval_10_permute3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/squad_title_v4_train_30_eval_10_permute3
|
[
"region:us"
] |
2023-10-17T06:28:30+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 493463.5595794392, "num_examples": 319}, {"name": "validation", "num_bytes": 50807, "num_examples": 50}], "download_size": 100594, "dataset_size": 544270.5595794392}}
|
2023-10-17T08:06:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "squad_title_v4_train_30_eval_10_permute3"
More Information needed
|
[
"# Dataset Card for \"squad_title_v4_train_30_eval_10_permute3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_title_v4_train_30_eval_10_permute3\"\n\nMore Information needed"
] |
[
6,
32
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_title_v4_train_30_eval_10_permute3\"\n\nMore Information needed"
] |
c0171f30f3ef0ef4b3dca78adc37f850e666f6ae
|
# Dataset Card for "nils-GPT_dataset_20231017_075324"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tr416/nils-GPT_dataset_20231017_075324
|
[
"region:us"
] |
2023-10-17T06:53:24+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 75203880.0, "num_examples": 29285}, {"name": "test", "num_bytes": 760128.0, "num_examples": 296}], "download_size": 12803835, "dataset_size": 75964008.0}}
|
2023-10-17T06:53:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "nils-GPT_dataset_20231017_075324"
More Information needed
|
[
"# Dataset Card for \"nils-GPT_dataset_20231017_075324\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"nils-GPT_dataset_20231017_075324\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"nils-GPT_dataset_20231017_075324\"\n\nMore Information needed"
] |
4906f2aa534049b2635b41f0d81e1b718af466f2
|
# Dataset Card for "nils_dataset_20231017_075623"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tr416/nils_dataset_20231017_075623
|
[
"region:us"
] |
2023-10-17T06:56:23+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 75203880.0, "num_examples": 29285}, {"name": "test", "num_bytes": 760128.0, "num_examples": 296}], "download_size": 12781175, "dataset_size": 75964008.0}}
|
2023-10-17T06:56:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "nils_dataset_20231017_075623"
More Information needed
|
[
"# Dataset Card for \"nils_dataset_20231017_075623\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"nils_dataset_20231017_075623\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"nils_dataset_20231017_075623\"\n\nMore Information needed"
] |
0daab0d5490ee84d9bdadecefd48aaaf3ebb3919
|
# Dataset Card for "rbrt_test_val_lrg2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/rbrt_test_val_lrg2
|
[
"region:us"
] |
2023-10-17T07:02:15+00:00
|
{"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 148079605, "num_examples": 104550}], "download_size": 32715970, "dataset_size": 148079605}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-17T07:05:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "rbrt_test_val_lrg2"
More Information needed
|
[
"# Dataset Card for \"rbrt_test_val_lrg2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"rbrt_test_val_lrg2\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"rbrt_test_val_lrg2\"\n\nMore Information needed"
] |
c2653ecaec5a9f8b484ed512ef582613bac87a8c
|
# Dataset Card for Evaluation run of Aspik101/30B-Lazarus-instruct-PL-lora_unload
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Aspik101/30B-Lazarus-instruct-PL-lora_unload
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Aspik101/30B-Lazarus-instruct-PL-lora_unload](https://huggingface.co/Aspik101/30B-Lazarus-instruct-PL-lora_unload) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Aspik101__30B-Lazarus-instruct-PL-lora_unload",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-17T08:13:24.195120](https://huggingface.co/datasets/open-llm-leaderboard/details_Aspik101__30B-Lazarus-instruct-PL-lora_unload/blob/main/results_2023-10-17T08-13-24.195120.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.01164010067114094,
"em_stderr": 0.0010984380734032925,
"f1": 0.07800545302013438,
"f1_stderr": 0.0017935902090569574,
"acc": 0.4522835158298991,
"acc_stderr": 0.010087630088457804
},
"harness|drop|3": {
"em": 0.01164010067114094,
"em_stderr": 0.0010984380734032925,
"f1": 0.07800545302013438,
"f1_stderr": 0.0017935902090569574
},
"harness|gsm8k|5": {
"acc": 0.11372251705837756,
"acc_stderr": 0.008744810131034036
},
"harness|winogrande|5": {
"acc": 0.7908445146014207,
"acc_stderr": 0.01143045004588157
}
}
```
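Each `*_stderr` field above can be read as the standard error of its metric, which makes it easy to attach an approximate confidence interval. A small sketch using the winogrande numbers from this run (the 1.96 z-value for a 95% normal-approximation interval is an assumption about the intended reading of `acc_stderr`):

```python
def ci95(value: float, stderr: float) -> tuple[float, float]:
    # Approximate 95% confidence interval assuming a normal sampling distribution
    half_width = 1.96 * stderr
    return (value - half_width, value + half_width)

# Winogrande accuracy and stderr from the results JSON above
low, high = ci95(0.7908445146014207, 0.01143045004588157)
print(f"winogrande acc 95% CI: [{low:.4f}, {high:.4f}]")  # → [0.7684, 0.8132]
```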
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Aspik101__30B-Lazarus-instruct-PL-lora_unload
|
[
"region:us"
] |
2023-10-17T07:13:28+00:00
|
{"pretty_name": "Evaluation run of Aspik101/30B-Lazarus-instruct-PL-lora_unload", "dataset_summary": "Dataset automatically created during the evaluation run of model [Aspik101/30B-Lazarus-instruct-PL-lora_unload](https://huggingface.co/Aspik101/30B-Lazarus-instruct-PL-lora_unload) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Aspik101__30B-Lazarus-instruct-PL-lora_unload\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-17T08:13:24.195120](https://huggingface.co/datasets/open-llm-leaderboard/details_Aspik101__30B-Lazarus-instruct-PL-lora_unload/blob/main/results_2023-10-17T08-13-24.195120.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.01164010067114094,\n \"em_stderr\": 0.0010984380734032925,\n \"f1\": 0.07800545302013438,\n \"f1_stderr\": 0.0017935902090569574,\n \"acc\": 0.4522835158298991,\n \"acc_stderr\": 0.010087630088457804\n },\n \"harness|drop|3\": {\n \"em\": 0.01164010067114094,\n \"em_stderr\": 0.0010984380734032925,\n \"f1\": 0.07800545302013438,\n \"f1_stderr\": 0.0017935902090569574\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.11372251705837756,\n \"acc_stderr\": 0.008744810131034036\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7908445146014207,\n \"acc_stderr\": 0.01143045004588157\n }\n}\n```", "repo_url": "https://huggingface.co/Aspik101/30B-Lazarus-instruct-PL-lora_unload", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_17T08_13_24.195120", "path": ["**/details_harness|drop|3_2023-10-17T08-13-24.195120.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-17T08-13-24.195120.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_17T08_13_24.195120", "path": ["**/details_harness|gsm8k|5_2023-10-17T08-13-24.195120.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-17T08-13-24.195120.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_17T08_13_24.195120", "path": ["**/details_harness|winogrande|5_2023-10-17T08-13-24.195120.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-17T08-13-24.195120.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_17T08_13_24.195120", "path": ["results_2023-10-17T08-13-24.195120.parquet"]}, {"split": "latest", "path": ["results_2023-10-17T08-13-24.195120.parquet"]}]}]}
|
2023-10-17T07:13:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Aspik101/30B-Lazarus-instruct-PL-lora_unload
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Aspik101/30B-Lazarus-instruct-PL-lora_unload on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
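The rendered card dropped the code snippet; a minimal sketch of the loading call follows, mirroring the repository id given in the full card above (`open-llm-leaderboard/details_Aspik101__30B-Lazarus-instruct-PL-lora_unload`). The helper encodes the leaderboard's `details_<org>__<model>` naming convention, and the actual download call is left commented out:

```python
# from datasets import load_dataset  # uncomment to perform the actual download

def details_repo(org_model: str) -> str:
    # "org/model" -> "open-llm-leaderboard/details_org__model"
    org, model = org_model.split("/")
    return f"open-llm-leaderboard/details_{org}__{model}"

repo = details_repo("Aspik101/30B-Lazarus-instruct-PL-lora_unload")
print(repo)
# data = load_dataset(repo, "harness_winogrande_5", split="train")
```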
## Latest results
These are the latest results from run 2023-10-17T08:13:24.195120 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Aspik101/30B-Lazarus-instruct-PL-lora_unload",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Aspik101/30B-Lazarus-instruct-PL-lora_unload on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T08:13:24.195120(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Aspik101/30B-Lazarus-instruct-PL-lora_unload",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Aspik101/30B-Lazarus-instruct-PL-lora_unload on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T08:13:24.195120(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
29,
31,
177,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Aspik101/30B-Lazarus-instruct-PL-lora_unload## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Aspik101/30B-Lazarus-instruct-PL-lora_unload on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-17T08:13:24.195120(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
060a0b1f93991e65ed35ee9de253e9d7c1b62191
|
# Dataset Card for Evaluation run of teknium/OpenHermes-2-Mistral-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/teknium/OpenHermes-2-Mistral-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [teknium/OpenHermes-2-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2-Mistral-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_teknium__OpenHermes-2-Mistral-7B_public",
"harness_truthfulqa_mc_0",
split="train")
```
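The config names in these repos encode both the task and the few-shot count, e.g. `harness_truthfulqa_mc_0` is truthfulqa_mc evaluated at 0 shots and `harness_winogrande_5` is winogrande at 5 shots. A small sketch of splitting one apart; the `harness_<task>_<shots>` pattern is inferred from the configs listed in this card, not a documented API:

```python
def parse_config(name: str) -> tuple[str, int]:
    # "harness_truthfulqa_mc_0" -> ("truthfulqa_mc", 0)
    parts = name.split("_")
    assert parts[0] == "harness", "expected a harness_* config name"
    shots = int(parts[-1])           # trailing segment is the few-shot count
    task = "_".join(parts[1:-1])     # everything in between is the task name
    return task, shots

print(parse_config("harness_truthfulqa_mc_0"))  # → ('truthfulqa_mc', 0)
print(parse_config("harness_winogrande_5"))     # → ('winogrande', 5)
```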
## Latest results
These are the [latest results from run 2023-10-17T08:19:50.329623](https://huggingface.co/datasets/open-llm-leaderboard/details_teknium__OpenHermes-2-Mistral-7B_public/blob/main/results_2023-10-17T08-19-50.329623.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6340923864588642,
"acc_stderr": 0.03292343427112481,
"acc_norm": 0.6379910781883433,
"acc_norm_stderr": 0.03290093486621529,
"mc1": 0.3329253365973072,
"mc1_stderr": 0.016497402382012052,
"mc2": 0.5024236235238323,
"mc2_stderr": 0.015034918880371569
},
"harness|arc:challenge|25": {
"acc": 0.6006825938566553,
"acc_stderr": 0.014312094557946716,
"acc_norm": 0.6305460750853242,
"acc_norm_stderr": 0.014104578366491887
},
"harness|hellaswag|10": {
"acc": 0.6379207329217288,
"acc_stderr": 0.004796193584930074,
"acc_norm": 0.8380800637323242,
"acc_norm_stderr": 0.0036762448867232607
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.04218506215368879,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.04218506215368879
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7171052631578947,
"acc_stderr": 0.03665349695640767,
"acc_norm": 0.7171052631578947,
"acc_norm_stderr": 0.03665349695640767
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6754716981132075,
"acc_stderr": 0.028815615713432115,
"acc_norm": 0.6754716981132075,
"acc_norm_stderr": 0.028815615713432115
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7569444444444444,
"acc_stderr": 0.0358687928008034,
"acc_norm": 0.7569444444444444,
"acc_norm_stderr": 0.0358687928008034
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6011560693641619,
"acc_stderr": 0.037336266553835096,
"acc_norm": 0.6011560693641619,
"acc_norm_stderr": 0.037336266553835096
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5617021276595745,
"acc_stderr": 0.03243618636108101,
"acc_norm": 0.5617021276595745,
"acc_norm_stderr": 0.03243618636108101
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.04697085136647863,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.04697085136647863
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5310344827586206,
"acc_stderr": 0.04158632762097828,
"acc_norm": 0.5310344827586206,
"acc_norm_stderr": 0.04158632762097828
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.025355741263055266,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.025355741263055266
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.0442626668137991,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.0442626668137991
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7516129032258064,
"acc_stderr": 0.02458002892148101,
"acc_norm": 0.7516129032258064,
"acc_norm_stderr": 0.02458002892148101
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252607,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252607
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267042,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267042
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8756476683937824,
"acc_stderr": 0.02381447708659355,
"acc_norm": 0.8756476683937824,
"acc_norm_stderr": 0.02381447708659355
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5974358974358974,
"acc_stderr": 0.02486499515976775,
"acc_norm": 0.5974358974358974,
"acc_norm_stderr": 0.02486499515976775
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3148148148148148,
"acc_stderr": 0.02831753349606648,
"acc_norm": 0.3148148148148148,
"acc_norm_stderr": 0.02831753349606648
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6218487394957983,
"acc_stderr": 0.03149930577784906,
"acc_norm": 0.6218487394957983,
"acc_norm_stderr": 0.03149930577784906
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.038020397601079024,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.038020397601079024
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8348623853211009,
"acc_stderr": 0.01591955782997604,
"acc_norm": 0.8348623853211009,
"acc_norm_stderr": 0.01591955782997604
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4722222222222222,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.4722222222222222,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.026756401538078962,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.026756401538078962
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.02553010046023349,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.02553010046023349
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6995515695067265,
"acc_stderr": 0.030769352008229146,
"acc_norm": 0.6995515695067265,
"acc_norm_stderr": 0.030769352008229146
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7633587786259542,
"acc_stderr": 0.03727673575596914,
"acc_norm": 0.7633587786259542,
"acc_norm_stderr": 0.03727673575596914
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7914110429447853,
"acc_stderr": 0.031921934489347235,
"acc_norm": 0.7914110429447853,
"acc_norm_stderr": 0.031921934489347235
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5178571428571429,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.5178571428571429,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.020930193185179333,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.020930193185179333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8326947637292464,
"acc_stderr": 0.013347327202920332,
"acc_norm": 0.8326947637292464,
"acc_norm_stderr": 0.013347327202920332
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7341040462427746,
"acc_stderr": 0.023786203255508297,
"acc_norm": 0.7341040462427746,
"acc_norm_stderr": 0.023786203255508297
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3642458100558659,
"acc_stderr": 0.016094338768474596,
"acc_norm": 0.3642458100558659,
"acc_norm_stderr": 0.016094338768474596
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7418300653594772,
"acc_stderr": 0.02505850331695814,
"acc_norm": 0.7418300653594772,
"acc_norm_stderr": 0.02505850331695814
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7009646302250804,
"acc_stderr": 0.02600330111788514,
"acc_norm": 0.7009646302250804,
"acc_norm_stderr": 0.02600330111788514
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7191358024691358,
"acc_stderr": 0.025006469755799208,
"acc_norm": 0.7191358024691358,
"acc_norm_stderr": 0.025006469755799208
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5,
"acc_stderr": 0.029827499313594685,
"acc_norm": 0.5,
"acc_norm_stderr": 0.029827499313594685
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46870925684485004,
"acc_stderr": 0.01274520462608314,
"acc_norm": 0.46870925684485004,
"acc_norm_stderr": 0.01274520462608314
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6470588235294118,
"acc_stderr": 0.029029422815681393,
"acc_norm": 0.6470588235294118,
"acc_norm_stderr": 0.029029422815681393
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6617647058823529,
"acc_stderr": 0.01913994374848703,
"acc_norm": 0.6617647058823529,
"acc_norm_stderr": 0.01913994374848703
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.04607582090719976,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.04607582090719976
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.025538433368578337,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.025538433368578337
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.88,
"acc_stderr": 0.03265986323710906,
"acc_norm": 0.88,
"acc_norm_stderr": 0.03265986323710906
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5240963855421686,
"acc_stderr": 0.03887971849597264,
"acc_norm": 0.5240963855421686,
"acc_norm_stderr": 0.03887971849597264
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3329253365973072,
"mc1_stderr": 0.016497402382012052,
"mc2": 0.5024236235238323,
"mc2_stderr": 0.015034918880371569
}
}
```
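The per-task scores in the JSON above can be post-processed without re-running the harness. A minimal sketch, using a handful of MMLU subtask accuracies copied verbatim from the results above (the five tasks chosen here are illustrative, not the full 57-subtask set), computing an unweighted macro-average and the best/worst subtasks:

```python
# Per-task 5-shot accuracies copied from the results JSON above
# (illustrative subset, not the full MMLU task list).
mmlu_acc = {
    "astronomy": 0.7171052631578947,
    "business_ethics": 0.6,
    "clinical_knowledge": 0.6754716981132075,
    "college_biology": 0.7569444444444444,
    "college_chemistry": 0.45,
}

# Unweighted macro-average over the selected subtasks.
macro_avg = sum(mmlu_acc.values()) / len(mmlu_acc)

# Identify the strongest and weakest subtasks in this subset.
best = max(mmlu_acc, key=mmlu_acc.get)
worst = min(mmlu_acc, key=mmlu_acc.get)

print(f"macro-average acc over {len(mmlu_acc)} subtasks: {macro_avg:.4f}")
print(f"best: {best} ({mmlu_acc[best]:.3f}), worst: {worst} ({mmlu_acc[worst]:.3f})")
```

The same pattern extends to all subtasks once the full results file is loaded (e.g. via the `load_dataset` snippet shown in this card's summary).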
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_teknium__OpenHermes-2-Mistral-7B
|
[
"region:us"
] |
2023-10-17T07:22:35+00:00
|
{"pretty_name": "Evaluation run of teknium/OpenHermes-2-Mistral-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [teknium/OpenHermes-2-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2-Mistral-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_teknium__OpenHermes-2-Mistral-7B_public\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-17T08:19:50.329623](https://huggingface.co/datasets/open-llm-leaderboard/details_teknium__OpenHermes-2-Mistral-7B_public/blob/main/results_2023-10-17T08-19-50.329623.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6340923864588642,\n \"acc_stderr\": 0.03292343427112481,\n \"acc_norm\": 0.6379910781883433,\n \"acc_norm_stderr\": 0.03290093486621529,\n \"mc1\": 0.3329253365973072,\n \"mc1_stderr\": 0.016497402382012052,\n \"mc2\": 0.5024236235238323,\n \"mc2_stderr\": 0.015034918880371569\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6006825938566553,\n \"acc_stderr\": 0.014312094557946716,\n \"acc_norm\": 0.6305460750853242,\n \"acc_norm_stderr\": 0.014104578366491887\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6379207329217288,\n \"acc_stderr\": 0.004796193584930074,\n \"acc_norm\": 0.8380800637323242,\n \"acc_norm_stderr\": 0.0036762448867232607\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n \"acc_stderr\": 0.04218506215368879,\n \"acc_norm\": 0.6074074074074074,\n \"acc_norm_stderr\": 0.04218506215368879\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7171052631578947,\n \"acc_stderr\": 0.03665349695640767,\n \"acc_norm\": 0.7171052631578947,\n \"acc_norm_stderr\": 0.03665349695640767\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6754716981132075,\n \"acc_stderr\": 0.028815615713432115,\n \"acc_norm\": 0.6754716981132075,\n \"acc_norm_stderr\": 0.028815615713432115\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n \"acc_stderr\": 0.0358687928008034,\n \"acc_norm\": 0.7569444444444444,\n \"acc_norm_stderr\": 0.0358687928008034\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.45,\n 
\"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6011560693641619,\n \"acc_stderr\": 0.037336266553835096,\n \"acc_norm\": 0.6011560693641619,\n \"acc_norm_stderr\": 0.037336266553835096\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5617021276595745,\n \"acc_stderr\": 0.03243618636108101,\n \"acc_norm\": 0.5617021276595745,\n \"acc_norm_stderr\": 0.03243618636108101\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n \"acc_stderr\": 0.04697085136647863,\n \"acc_norm\": 0.47368421052631576,\n \"acc_norm_stderr\": 0.04697085136647863\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5310344827586206,\n \"acc_stderr\": 0.04158632762097828,\n \"acc_norm\": 0.5310344827586206,\n \"acc_norm_stderr\": 0.04158632762097828\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4126984126984127,\n \"acc_stderr\": 0.025355741263055266,\n \"acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.025355741263055266\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.0442626668137991,\n \"acc_norm\": 
0.42857142857142855,\n \"acc_norm_stderr\": 0.0442626668137991\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7516129032258064,\n \"acc_stderr\": 0.02458002892148101,\n \"acc_norm\": 0.7516129032258064,\n \"acc_norm_stderr\": 0.02458002892148101\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252607,\n \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252607\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267042,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267042\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8756476683937824,\n \"acc_stderr\": 0.02381447708659355,\n \"acc_norm\": 0.8756476683937824,\n \"acc_norm_stderr\": 0.02381447708659355\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5974358974358974,\n \"acc_stderr\": 0.02486499515976775,\n \"acc_norm\": 0.5974358974358974,\n \"acc_norm_stderr\": 0.02486499515976775\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3148148148148148,\n \"acc_stderr\": 0.02831753349606648,\n \"acc_norm\": 0.3148148148148148,\n \"acc_norm_stderr\": 0.02831753349606648\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 
0.6218487394957983,\n \"acc_stderr\": 0.03149930577784906,\n \"acc_norm\": 0.6218487394957983,\n \"acc_norm_stderr\": 0.03149930577784906\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.31788079470198677,\n \"acc_stderr\": 0.038020397601079024,\n \"acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.038020397601079024\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8348623853211009,\n \"acc_stderr\": 0.01591955782997604,\n \"acc_norm\": 0.8348623853211009,\n \"acc_norm_stderr\": 0.01591955782997604\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4722222222222222,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\": 0.4722222222222222,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8235294117647058,\n \"acc_stderr\": 0.026756401538078962,\n \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.026756401538078962\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.810126582278481,\n \"acc_stderr\": 0.02553010046023349,\n \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.02553010046023349\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n \"acc_stderr\": 0.030769352008229146,\n \"acc_norm\": 0.6995515695067265,\n \"acc_norm_stderr\": 0.030769352008229146\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596914,\n \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596914\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.0395783547198098\n 
},\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7914110429447853,\n \"acc_stderr\": 0.031921934489347235,\n \"acc_norm\": 0.7914110429447853,\n \"acc_norm_stderr\": 0.031921934489347235\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5178571428571429,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.5178571428571429,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n \"acc_stderr\": 0.020930193185179333,\n \"acc_norm\": 0.8846153846153846,\n \"acc_norm_stderr\": 0.020930193185179333\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8326947637292464,\n \"acc_stderr\": 0.013347327202920332,\n \"acc_norm\": 0.8326947637292464,\n \"acc_norm_stderr\": 0.013347327202920332\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7341040462427746,\n \"acc_stderr\": 0.023786203255508297,\n \"acc_norm\": 0.7341040462427746,\n \"acc_norm_stderr\": 0.023786203255508297\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3642458100558659,\n \"acc_stderr\": 0.016094338768474596,\n \"acc_norm\": 0.3642458100558659,\n \"acc_norm_stderr\": 0.016094338768474596\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7418300653594772,\n \"acc_stderr\": 0.02505850331695814,\n \"acc_norm\": 0.7418300653594772,\n \"acc_norm_stderr\": 0.02505850331695814\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n \"acc_stderr\": 0.02600330111788514,\n \"acc_norm\": 0.7009646302250804,\n \"acc_norm_stderr\": 0.02600330111788514\n },\n 
\"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7191358024691358,\n \"acc_stderr\": 0.025006469755799208,\n \"acc_norm\": 0.7191358024691358,\n \"acc_norm_stderr\": 0.025006469755799208\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.029827499313594685,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.029827499313594685\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46870925684485004,\n \"acc_stderr\": 0.01274520462608314,\n \"acc_norm\": 0.46870925684485004,\n \"acc_norm_stderr\": 0.01274520462608314\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6470588235294118,\n \"acc_stderr\": 0.029029422815681393,\n \"acc_norm\": 0.6470588235294118,\n \"acc_norm_stderr\": 0.029029422815681393\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6617647058823529,\n \"acc_stderr\": 0.01913994374848703,\n \"acc_norm\": 0.6617647058823529,\n \"acc_norm_stderr\": 0.01913994374848703\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6363636363636364,\n \"acc_stderr\": 0.04607582090719976,\n \"acc_norm\": 0.6363636363636364,\n \"acc_norm_stderr\": 0.04607582090719976\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n \"acc_stderr\": 0.025538433368578337,\n \"acc_norm\": 0.845771144278607,\n \"acc_norm_stderr\": 0.025538433368578337\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.88,\n \"acc_stderr\": 0.03265986323710906,\n \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.03265986323710906\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5240963855421686,\n \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n \"acc_norm_stderr\": 0.03887971849597264\n },\n 
\"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3329253365973072,\n \"mc1_stderr\": 0.016497402382012052,\n \"mc2\": 0.5024236235238323,\n \"mc2_stderr\": 0.015034918880371569\n }\n}\n```", "repo_url": "https://huggingface.co/teknium/OpenHermes-2-Mistral-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|arc:challenge|25_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hellaswag|10_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-17T08-19-50.329623.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-17T08-19-50.329623.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-17T08-19-50.329623.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-17T08-19-50.329623.parquet", 
"**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-17T08-19-50.329623.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-17T08-19-50.329623.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": 
["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": 
[{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": 
["**/details_harness|hendrycksTest-international_law|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_17T08_19_50.329623", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-17T08-19-50.329623.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_10_17T08_19_50.329623", "path": ["results_2023-10-17T08-19-50.329623.parquet"]}, {"split": "latest", "path": ["results_2023-10-17T08-19-50.329623.parquet"]}]}]}
|
2023-10-17T07:22:47+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of teknium/OpenHermes-2-Mistral-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model teknium/OpenHermes-2-Mistral-7B on the Open LLM Leaderboard.
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
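The original code block was stripped from this processed card; a minimal sketch of the usual loader, assuming the repository id follows the standard `open-llm-leaderboard/details_<org>__<model>` pattern (not stated verbatim above) and that the third-party `datasets` library is installed:

```python
def load_details(config: str = "harness_truthfulqa_mc_0", split: str = "latest"):
    """Load per-sample details for one of the 61 configurations listed below."""
    from datasets import load_dataset  # third-party dependency, imported lazily
    return load_dataset(
        "open-llm-leaderboard/details_teknium__OpenHermes-2-Mistral-7B",  # assumed id
        config,
        split=split,
    )
```

Calling `load_details()` downloads the parquet files referenced in this record's metadata; per that metadata, each configuration exposes a timestamped split plus a `"latest"` split mirroring the newest run.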
## Latest results
These are the latest results from run 2023-10-17T08:19:50.329623 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of teknium/OpenHermes-2-Mistral-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model teknium/OpenHermes-2-Mistral-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T08:19:50.329623(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of teknium/OpenHermes-2-Mistral-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model teknium/OpenHermes-2-Mistral-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T08:19:50.329623(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
21,
31,
170,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of teknium/OpenHermes-2-Mistral-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model teknium/OpenHermes-2-Mistral-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-17T08:19:50.329623(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
e424464ef5377d5effc4b05678f060f2e09c0f24
|
# Dataset Card for "guanaco-llama2-100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gamegyu/guanaco-llama2-100
|
[
"region:us"
] |
2023-10-17T07:24:55+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 142459, "num_examples": 100}], "download_size": 91410, "dataset_size": 142459}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-17T07:24:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "guanaco-llama2-100"
More Information needed
|
[
"# Dataset Card for \"guanaco-llama2-100\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"guanaco-llama2-100\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"guanaco-llama2-100\"\n\nMore Information needed"
] |
9c395939a51cdc185b8b5bc05522e65611e7a96f
|
# Dataset Card for Evaluation run of uukuguy/speechless-hermes-coig-lite-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/uukuguy/speechless-hermes-coig-lite-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [uukuguy/speechless-hermes-coig-lite-13b](https://huggingface.co/uukuguy/speechless-hermes-coig-lite-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_uukuguy__speechless-hermes-coig-lite-13b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-18T15:01:47.854586](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-hermes-coig-lite-13b/blob/main/results_2023-10-18T15-01-47.854586.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.3490981543624161,
"em_stderr": 0.004881701038810246,
"f1": 0.39497588087248336,
"f1_stderr": 0.004768097534076323,
"acc": 0.44193958375344744,
"acc_stderr": 0.009875116542645869
},
"harness|drop|3": {
"em": 0.3490981543624161,
"em_stderr": 0.004881701038810246,
"f1": 0.39497588087248336,
"f1_stderr": 0.004768097534076323
},
"harness|gsm8k|5": {
"acc": 0.09855951478392722,
"acc_stderr": 0.008210320350946338
},
"harness|winogrande|5": {
"acc": 0.7853196527229677,
"acc_stderr": 0.011539912734345398
}
}
```
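As a quick sanity check on the report above, the top-level "all" block appears to be a plain unweighted mean of the per-task scores. This is an assumption inferred from the numbers in this file, not documented leaderboard behaviour:

```python
# Sketch: check that the aggregated "acc" in the "all" block equals the
# unweighted mean of the per-task accuracies. The figures are copied from
# the results JSON above; the averaging rule itself is an assumption.
import math

gsm8k_acc = 0.09855951478392722
winogrande_acc = 0.7853196527229677
reported_all_acc = 0.44193958375344744

mean_acc = (gsm8k_acc + winogrande_acc) / 2
assert math.isclose(mean_acc, reported_all_acc)
```

The same relationship holds for `acc_stderr`, which suggests the "all" block is a simple per-task average rather than a sample-weighted one.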
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_uukuguy__speechless-hermes-coig-lite-13b
|
[
"region:us"
] |
2023-10-17T07:26:05+00:00
|
{"pretty_name": "Evaluation run of uukuguy/speechless-hermes-coig-lite-13b", "dataset_summary": "Dataset automatically created during the evaluation run of model [uukuguy/speechless-hermes-coig-lite-13b](https://huggingface.co/uukuguy/speechless-hermes-coig-lite-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uukuguy__speechless-hermes-coig-lite-13b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-18T15:01:47.854586](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-hermes-coig-lite-13b/blob/main/results_2023-10-18T15-01-47.854586.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.3490981543624161,\n \"em_stderr\": 0.004881701038810246,\n \"f1\": 0.39497588087248336,\n \"f1_stderr\": 0.004768097534076323,\n \"acc\": 0.44193958375344744,\n \"acc_stderr\": 0.009875116542645869\n },\n \"harness|drop|3\": {\n \"em\": 0.3490981543624161,\n \"em_stderr\": 0.004881701038810246,\n \"f1\": 0.39497588087248336,\n \"f1_stderr\": 0.004768097534076323\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09855951478392722,\n \"acc_stderr\": 0.008210320350946338\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7853196527229677,\n \"acc_stderr\": 0.011539912734345398\n }\n}\n```", "repo_url": "https://huggingface.co/uukuguy/speechless-hermes-coig-lite-13b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_17T08_26_01.591650", "path": ["**/details_harness|drop|3_2023-10-17T08-26-01.591650.parquet"]}, {"split": "2023_10_18T15_01_47.854586", "path": ["**/details_harness|drop|3_2023-10-18T15-01-47.854586.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-18T15-01-47.854586.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_17T08_26_01.591650", "path": ["**/details_harness|gsm8k|5_2023-10-17T08-26-01.591650.parquet"]}, {"split": "2023_10_18T15_01_47.854586", "path": ["**/details_harness|gsm8k|5_2023-10-18T15-01-47.854586.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-18T15-01-47.854586.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_17T08_26_01.591650", "path": ["**/details_harness|winogrande|5_2023-10-17T08-26-01.591650.parquet"]}, {"split": "2023_10_18T15_01_47.854586", "path": ["**/details_harness|winogrande|5_2023-10-18T15-01-47.854586.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2023-10-18T15-01-47.854586.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_17T08_26_01.591650", "path": ["results_2023-10-17T08-26-01.591650.parquet"]}, {"split": "2023_10_18T15_01_47.854586", "path": ["results_2023-10-18T15-01-47.854586.parquet"]}, {"split": "latest", "path": ["results_2023-10-18T15-01-47.854586.parquet"]}]}]}
|
2023-10-18T14:02:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of uukuguy/speechless-hermes-coig-lite-13b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model uukuguy/speechless-hermes-coig-lite-13b on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
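The timestamp-based naming can be made concrete. Judging from the file paths in this repo's config (e.g. split `2023_10_18T15_01_47.854586` backed by `results_2023-10-18T15-01-47.854586.parquet`), the split and file names are simple character substitutions on the run timestamp; this mapping is inferred from the config, not an official API:

```python
# Sketch: how a run timestamp maps to a split name and a results file name,
# inferred from the file paths in this repo's config (not an official API).
run_timestamp = "2023-10-18T15:01:47.854586"

# Split names replace both "-" and ":" with "_", keeping the fractional dot.
split_name = run_timestamp.replace("-", "_").replace(":", "_")

# Parquet file names keep the date dashes but turn ":" into "-".
results_file = f"results_{run_timestamp.replace(':', '-')}.parquet"

print(split_name)    # 2023_10_18T15_01_47.854586
print(results_file)  # results_2023-10-18T15-01-47.854586.parquet
```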
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-18T15:01:47.854586 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of uukuguy/speechless-hermes-coig-lite-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-hermes-coig-lite-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-18T15:01:47.854586(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of uukuguy/speechless-hermes-coig-lite-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-hermes-coig-lite-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-18T15:01:47.854586(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
27,
31,
175,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of uukuguy/speechless-hermes-coig-lite-13b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-hermes-coig-lite-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-18T15:01:47.854586(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
07e5e86a4d5aeb9e577a57695af3c6ecf18fc46d
|
# Dataset Card for "csProjectStyle1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yoonlee/csProjectStyle1
|
[
"region:us"
] |
2023-10-17T07:27:16+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1431662.0, "num_examples": 5}], "download_size": 0, "dataset_size": 1431662.0}}
|
2023-10-17T08:53:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "csProjectStyle1"
More Information needed
|
[
"# Dataset Card for \"csProjectStyle1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"csProjectStyle1\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"csProjectStyle1\"\n\nMore Information needed"
] |
b947f87ff525fb8e3d47467f7e97b582795730d2
|
# Glaive's Function Calling V2 for Zephyr-7B-alpha
[Glaive's Function Calling V2 dataset](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), formatted according to the chat schema zephyr uses, with all the data that I wasn't able to automatically convert removed.
Adds three new roles: `definition`, `function` and `call`. Here's an example prompt:
```
<|definition|>
{
"name": "generate_password",
"description": "Generate a random password with specified criteria",
"parameters": {
"type": "object",
"properties": {
"length": {
"type": "integer",
"description": "The length of the password"
},
"include_numbers": {
"type": "boolean",
"description": "Include numbers in the password"
},
"include_special_characters": {
"type": "boolean",
"description": "Include special characters in the password"
}
},
"required": [
"length"
]
}
}</s>
<|user|>
I need a new password. Can you generate one for me?</s>
<|assistant|>
Of course! How long would you like your password to be? And do you want it to include numbers and special characters?</s>
<|user|>
I want it to be 12 characters long and yes, it should include both numbers and special characters.</s>
<|function|>
{
"length": 12,
"include_numbers": true,
"include_special_characters": true
}</s>
<|function|>
{"password": "4#7gB6&9L1!0"}</s>
<|assistant|>
Here is your new password: 4#7gB6&9L1!0. Please make sure to save it in a secure place.</s>
```
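To make the schema concrete, here is a minimal, hypothetical formatter that renders a conversation into this layout. The `<|role|>` markers and the `</s>` terminator are taken from the example prompt above; the function itself is illustrative and is not code shipped with the dataset:

```python
# Minimal sketch of the zephyr-style chat layout shown above.
# The <|role|> markers and </s> terminator come from the example prompt;
# the formatter itself is illustrative, not part of the dataset.
import json

def format_zephyr(messages):
    parts = []
    for role, content in messages:
        # Structured payloads (function definitions, calls, results)
        # are serialized as pretty-printed JSON, as in the example.
        if not isinstance(content, str):
            content = json.dumps(content, indent=4)
        parts.append(f"<|{role}|>\n{content}</s>")
    return "\n".join(parts)

prompt = format_zephyr([
    ("definition", {"name": "generate_password",
                    "description": "Generate a random password with specified criteria"}),
    ("user", "I need a new password. Can you generate one for me?"),
    ("assistant", "Of course! How long would you like your password to be?"),
])
print(prompt)
```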
|
rizerphe/glaive-function-calling-v2-zephyr
|
[
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] |
2023-10-17T07:28:47+00:00
|
{"language": ["en"], "license": "cc-by-sa-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "conversational"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 225637684, "num_examples": 101469}], "download_size": 94820543, "dataset_size": 225637684}}
|
2023-10-17T15:36:29+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #task_categories-conversational #size_categories-100K<n<1M #language-English #license-cc-by-sa-4.0 #region-us
|
# Glaive's Function Calling V2 for Zephyr-7B-alpha
Glaive's Function Calling V2 dataset, formatted according to the chat schema zephyr uses, with all the data that I wasn't able to automatically convert removed.
Adds three new roles: 'definition', 'function' and 'call'. Here's an example prompt:
|
[
"# Glaive's Function Calling V2 for Zephyr-7B-alpha\n\nGlaive's Function Calling V2 dataset, formatted according to the chat schema zephyr uses, with all the data that I wasn't able to automatically convert removed.\n\nAdds three new roles: 'definition', 'function' and 'call'. Here's an example prompt:"
] |
[
"TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-100K<n<1M #language-English #license-cc-by-sa-4.0 #region-us \n",
"# Glaive's Function Calling V2 for Zephyr-7B-alpha\n\nGlaive's Function Calling V2 dataset, formatted according to the chat schema zephyr uses, with all the data that I wasn't able to automatically convert removed.\n\nAdds three new roles: 'definition', 'function' and 'call'. Here's an example prompt:"
] |
[
54,
90
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-100K<n<1M #language-English #license-cc-by-sa-4.0 #region-us \n# Glaive's Function Calling V2 for Zephyr-7B-alpha\n\nGlaive's Function Calling V2 dataset, formatted according to the chat schema zephyr uses, with all the data that I wasn't able to automatically convert removed.\n\nAdds three new roles: 'definition', 'function' and 'call'. Here's an example prompt:"
] |
318b161df7ffefaee8be254cd1bff472cc8bea95
|
# sharegpt-hyperfiltered-3k-zephyr
[sharegpt-hyperfiltered-3k](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k), formatted to the prompting schema zephyr-7b-alpha uses.
|
rizerphe/sharegpt-hyperfiltered-3k-zephyr
|
[
"task_categories:text-generation",
"task_categories:conversational",
"license:apache-2.0",
"region:us"
] |
2023-10-17T07:37:26+00:00
|
{"license": "apache-2.0", "task_categories": ["text-generation", "conversational"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5601530, "num_examples": 3227}], "download_size": 2773555, "dataset_size": 5601530}}
|
2023-10-17T07:41:56+00:00
|
[] |
[] |
TAGS
#task_categories-text-generation #task_categories-conversational #license-apache-2.0 #region-us
|
# sharegpt-hyperfiltered-3k-zephyr
sharegpt-hyperfiltered-3k, formatted to the prompting schema zephyr-7b-alpha uses.
|
[
"# sharegpt-hyperfiltered-3k-zephyr\n\nsharegpt-hyperfiltered-3k, formatted to the prompting schema zephyr-7b-alpha uses."
] |
[
"TAGS\n#task_categories-text-generation #task_categories-conversational #license-apache-2.0 #region-us \n",
"# sharegpt-hyperfiltered-3k-zephyr\n\nsharegpt-hyperfiltered-3k, formatted to the prompting schema zephyr-7b-alpha uses."
] |
[
35,
43
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-conversational #license-apache-2.0 #region-us \n# sharegpt-hyperfiltered-3k-zephyr\n\nsharegpt-hyperfiltered-3k, formatted to the prompting schema zephyr-7b-alpha uses."
] |
8b941ec020029fefdaf2c3b1c7471b020fd13652
|
# Dataset Card for "rbrt_test_val_lrg3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/rbrt_test_val_lrg3
|
[
"region:us"
] |
2023-10-17T07:52:08+00:00
|
{"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 148079605, "num_examples": 104550}], "download_size": 32715970, "dataset_size": 148079605}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-17T07:52:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "rbrt_test_val_lrg3"
More Information needed
|
[
"# Dataset Card for \"rbrt_test_val_lrg3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"rbrt_test_val_lrg3\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"rbrt_test_val_lrg3\"\n\nMore Information needed"
] |
9892ef6b6734fb5922fdd8f588b22c230914f36e
|
# Dataset Card for "privacyqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
paoloitaliani/privacyqa
|
[
"region:us"
] |
2023-10-17T07:55:48+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}, {"split": "full", "path": "data/full-*"}]}], "dataset_info": {"features": [{"name": "document", "dtype": "string"}, {"name": "qa_pair", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 335038.44083526684, "num_examples": 344}, {"name": "validation", "num_bytes": 42853.75406032483, "num_examples": 44}, {"name": "test", "num_bytes": 41879.805104408355, "num_examples": 43}, {"name": "full", "num_bytes": 419772, "num_examples": 431}], "download_size": 233651, "dataset_size": 839544.0}}
|
2023-12-05T09:25:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "privacyqa"
More Information needed
|
[
"# Dataset Card for \"privacyqa\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"privacyqa\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"privacyqa\"\n\nMore Information needed"
] |
7a51283f3d12bea6d96e19d0b8b81cd49eb49878
|
# Dataset Card for Evaluation run of quantumaikr/llama-2-70b-fb16-korean
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/quantumaikr/llama-2-70b-fb16-korean
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [quantumaikr/llama-2-70b-fb16-korean](https://huggingface.co/quantumaikr/llama-2-70b-fb16-korean) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_quantumaikr__llama-2-70b-fb16-korean",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-17T08:56:24.573395](https://huggingface.co/datasets/open-llm-leaderboard/details_quantumaikr__llama-2-70b-fb16-korean/blob/main/results_2023-10-17T08-56-24.573395.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0041946308724832215,
"em_stderr": 0.0006618716168266237,
"f1": 0.07418729026845645,
"f1_stderr": 0.0015820737575191846,
"acc": 0.5583664886878857,
"acc_stderr": 0.011574854481074981
},
"harness|drop|3": {
"em": 0.0041946308724832215,
"em_stderr": 0.0006618716168266237,
"f1": 0.07418729026845645,
"f1_stderr": 0.0015820737575191846
},
"harness|gsm8k|5": {
"acc": 0.29037149355572406,
"acc_stderr": 0.012503592481818962
},
"harness|winogrande|5": {
"acc": 0.8263614838200474,
"acc_stderr": 0.010646116480331
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_quantumaikr__llama-2-70b-fb16-korean
|
[
"region:us"
] |
2023-10-17T07:56:28+00:00
|
{"pretty_name": "Evaluation run of quantumaikr/llama-2-70b-fb16-korean", "dataset_summary": "Dataset automatically created during the evaluation run of model [quantumaikr/llama-2-70b-fb16-korean](https://huggingface.co/quantumaikr/llama-2-70b-fb16-korean) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_quantumaikr__llama-2-70b-fb16-korean\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-17T08:56:24.573395](https://huggingface.co/datasets/open-llm-leaderboard/details_quantumaikr__llama-2-70b-fb16-korean/blob/main/results_2023-10-17T08-56-24.573395.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0041946308724832215,\n \"em_stderr\": 0.0006618716168266237,\n \"f1\": 0.07418729026845645,\n \"f1_stderr\": 0.0015820737575191846,\n \"acc\": 0.5583664886878857,\n \"acc_stderr\": 0.011574854481074981\n },\n \"harness|drop|3\": {\n \"em\": 0.0041946308724832215,\n \"em_stderr\": 0.0006618716168266237,\n \"f1\": 0.07418729026845645,\n \"f1_stderr\": 0.0015820737575191846\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.29037149355572406,\n \"acc_stderr\": 0.012503592481818962\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8263614838200474,\n \"acc_stderr\": 0.010646116480331\n }\n}\n```", "repo_url": "https://huggingface.co/quantumaikr/llama-2-70b-fb16-korean", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_17T08_56_24.573395", "path": ["**/details_harness|drop|3_2023-10-17T08-56-24.573395.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-17T08-56-24.573395.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_17T08_56_24.573395", "path": ["**/details_harness|gsm8k|5_2023-10-17T08-56-24.573395.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-17T08-56-24.573395.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_17T08_56_24.573395", "path": ["**/details_harness|winogrande|5_2023-10-17T08-56-24.573395.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-17T08-56-24.573395.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_17T08_56_24.573395", "path": ["results_2023-10-17T08-56-24.573395.parquet"]}, {"split": "latest", "path": ["results_2023-10-17T08-56-24.573395.parquet"]}]}]}
|
2023-10-17T07:56:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of quantumaikr/llama-2-70b-fb16-korean
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model quantumaikr/llama-2-70b-fb16-korean on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-17T08:56:24.573395 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of quantumaikr/llama-2-70b-fb16-korean",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model quantumaikr/llama-2-70b-fb16-korean on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T08:56:24.573395(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of quantumaikr/llama-2-70b-fb16-korean",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model quantumaikr/llama-2-70b-fb16-korean on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T08:56:24.573395(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
25,
31,
173,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of quantumaikr/llama-2-70b-fb16-korean## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model quantumaikr/llama-2-70b-fb16-korean on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-17T08:56:24.573395(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
1b4f6b56e527ed3fe5c94973226839b79772cda3
|
# Dataset Card for "CSAW_combined_264"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Phaedrus/CSAW_combined_264
|
[
"region:us"
] |
2023-10-17T08:28:44+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label1", "dtype": "image"}, {"name": "label2", "dtype": "image"}, {"name": "label3", "dtype": "image"}, {"name": "label4", "dtype": "image"}, {"name": "label5", "dtype": "image"}, {"name": "label6", "dtype": "image"}, {"name": "label7", "dtype": "image"}, {"name": "label8", "dtype": "image"}, {"name": "label9", "dtype": "image"}, {"name": "label10", "dtype": "image"}, {"name": "label11", "dtype": "image"}, {"name": "label12", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3354389158.0, "num_examples": 264}], "download_size": 154024684, "dataset_size": 3354389158.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-17T08:30:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "CSAW_combined_264"
More Information needed
|
[
"# Dataset Card for \"CSAW_combined_264\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"CSAW_combined_264\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"CSAW_combined_264\"\n\nMore Information needed"
] |
192528c8f1bebeb60e850328b276213dd888fbd3
|
# Dataset Card for Evaluation run of chargoddard/platypus-2-22b-relora
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/chargoddard/platypus-2-22b-relora
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [chargoddard/platypus-2-22b-relora](https://huggingface.co/chargoddard/platypus-2-22b-relora) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_chargoddard__platypus-2-22b-relora",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-17T09:48:53.081759](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__platypus-2-22b-relora/blob/main/results_2023-10-17T09-48-53.081759.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.38443791946308725,
"em_stderr": 0.004981827548218364,
"f1": 0.42459836409396046,
"f1_stderr": 0.004867799120548586,
"acc": 0.4197198614386422,
"acc_stderr": 0.009300550123366277
},
"harness|drop|3": {
"em": 0.38443791946308725,
"em_stderr": 0.004981827548218364,
"f1": 0.42459836409396046,
"f1_stderr": 0.004867799120548586
},
"harness|gsm8k|5": {
"acc": 0.06595905989385899,
"acc_stderr": 0.006836951192034225
},
"harness|winogrande|5": {
"acc": 0.7734806629834254,
"acc_stderr": 0.011764149054698329
}
}
```
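As a quick sanity check (a sketch; the variable names are illustrative), the aggregate `acc` reported above works out to the unweighted mean of the two per-task accuracies for this run:

```python
# Per-task accuracies copied from the results block above.
gsm8k_acc = 0.06595905989385899
winogrande_acc = 0.7734806629834254

# For this run, the aggregate "acc" equals the unweighted mean
# of the per-task accuracies.
aggregate_acc = (gsm8k_acc + winogrande_acc) / 2
assert abs(aggregate_acc - 0.4197198614386422) < 1e-12
```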
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_chargoddard__platypus-2-22b-relora
|
[
"region:us"
] |
2023-10-17T08:48:57+00:00
|
{"pretty_name": "Evaluation run of chargoddard/platypus-2-22b-relora", "dataset_summary": "Dataset automatically created during the evaluation run of model [chargoddard/platypus-2-22b-relora](https://huggingface.co/chargoddard/platypus-2-22b-relora) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_chargoddard__platypus-2-22b-relora\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-17T09:48:53.081759](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__platypus-2-22b-relora/blob/main/results_2023-10-17T09-48-53.081759.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.38443791946308725,\n \"em_stderr\": 0.004981827548218364,\n \"f1\": 0.42459836409396046,\n \"f1_stderr\": 0.004867799120548586,\n \"acc\": 0.4197198614386422,\n \"acc_stderr\": 0.009300550123366277\n },\n \"harness|drop|3\": {\n \"em\": 0.38443791946308725,\n \"em_stderr\": 0.004981827548218364,\n \"f1\": 0.42459836409396046,\n \"f1_stderr\": 0.004867799120548586\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.06595905989385899,\n \"acc_stderr\": 0.006836951192034225\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7734806629834254,\n \"acc_stderr\": 0.011764149054698329\n }\n}\n```", "repo_url": "https://huggingface.co/chargoddard/platypus-2-22b-relora", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_17T09_48_53.081759", "path": ["**/details_harness|drop|3_2023-10-17T09-48-53.081759.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-17T09-48-53.081759.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_17T09_48_53.081759", "path": ["**/details_harness|gsm8k|5_2023-10-17T09-48-53.081759.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-17T09-48-53.081759.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_17T09_48_53.081759", "path": ["**/details_harness|winogrande|5_2023-10-17T09-48-53.081759.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-17T09-48-53.081759.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_17T09_48_53.081759", "path": ["results_2023-10-17T09-48-53.081759.parquet"]}, {"split": "latest", "path": ["results_2023-10-17T09-48-53.081759.parquet"]}]}]}
|
2023-10-17T08:49:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of chargoddard/platypus-2-22b-relora
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model chargoddard/platypus-2-22b-relora on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-17T09:48:53.081759 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of chargoddard/platypus-2-22b-relora",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/platypus-2-22b-relora on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T09:48:53.081759(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of chargoddard/platypus-2-22b-relora",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/platypus-2-22b-relora on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T09:48:53.081759(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of chargoddard/platypus-2-22b-relora## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/platypus-2-22b-relora on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-17T09:48:53.081759(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
865f5dfbe4959f392051f3ceb93c204b82c5bd17
|
# Dataset Card for "pubchem_bioassay_standardized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
phanvancongthanh/pubchem_bioassay_standardized
|
[
"region:us"
] |
2023-10-17T08:58:45+00:00
|
{"dataset_info": {"features": [{"name": "standardized_smiles", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10187907266, "num_examples": 210186056}], "download_size": 4860575313, "dataset_size": 10187907266}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-18T17:31:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pubchem_bioassay_standardized"
More Information needed
|
[
"# Dataset Card for \"pubchem_bioassay_standardized\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pubchem_bioassay_standardized\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pubchem_bioassay_standardized\"\n\nMore Information needed"
] |
3bb78d34df4b2f29a63278741c46dbe4fac2edf0
|
The presented dataset was used to fine-tune the text classification model `ArGTClass`, available [here](https://huggingface.co/dru-ac/ArGTClass).
The dataset was compiled using samples from the following sources:
- `SANAD` newspapers dataset, available [here](https://huggingface.co/datasets/arbml/SANAD)
- `ARTopicDS-Books`, available [here](example.com)
|
dru-ac/ArBNTopic
|
[
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ar",
"region:us"
] |
2023-10-17T09:03:02+00:00
|
{"language": ["ar"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "zero-shot-classification", "text-generation"]}
|
2023-10-17T09:31:02+00:00
|
[] |
[
"ar"
] |
TAGS
#task_categories-text-classification #task_categories-zero-shot-classification #task_categories-text-generation #size_categories-10K<n<100K #language-Arabic #region-us
|
The presented dataset was used to finetune the text classification model 'ArGTClass', available URL
The dataset was compiled using samples from the following sources:
- 'SANAD' newspapers dataset, available URL
- 'ARTopicDS-Books', available URL
|
[] |
[
"TAGS\n#task_categories-text-classification #task_categories-zero-shot-classification #task_categories-text-generation #size_categories-10K<n<100K #language-Arabic #region-us \n"
] |
[
58
] |
[
"passage: TAGS\n#task_categories-text-classification #task_categories-zero-shot-classification #task_categories-text-generation #size_categories-10K<n<100K #language-Arabic #region-us \n"
] |
194127cfaee10cb89fa869a19591e5c0f6592097
|
# arxiv-abstracts-instructorxl-embeddings
This dataset contains 768-dimensional embeddings generated from the [arxiv](https://arxiv.org/)
paper abstracts using the [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) model. Each
vector is accompanied by the abstract used to create it, along with the DOI (Digital Object Identifier). The
dataset was created using precomputed embeddings exposed by the [Alexandria Index](https://alex.macrocosm.so/download).
## Generation process
The embeddings have been generated using the following instruction:
```text
Represent the Research Paper abstract for retrieval; Input:
```
The following code snippet shows how to generate embeddings using the InstructorXL model:
```python
from InstructorEmbedding import INSTRUCTOR
model = INSTRUCTOR('hkunlp/instructor-xl')
sentence = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train."
instruction = "Represent the Research Paper abstract for retrieval; Input:"
embeddings = model.encode([[instruction, sentence]])
```
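Downstream retrieval over these vectors typically reduces to cosine similarity between a query embedding and the stored document embeddings. A minimal stdlib-only sketch (the short vectors below are dummy stand-ins for real 768-dimensional embeddings, and the function name is illustrative):

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Dummy vectors standing in for 768-dimensional InstructorXL embeddings.
print(cosine_similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # 0.0 (orthogonal)
```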
|
Qdrant/arxiv-abstracts-instructorxl-embeddings
|
[
"task_categories:sentence-similarity",
"task_categories:feature-extraction",
"size_categories:1M<n<10M",
"language:en",
"region:us"
] |
2023-10-17T09:21:18+00:00
|
{"language": ["en"], "size_categories": ["1M<n<10M"], "task_categories": ["sentence-similarity", "feature-extraction"], "pretty_name": "InstructorXL embeddings of the Arxiv.org abstracts"}
|
2023-11-03T17:25:26+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-sentence-similarity #task_categories-feature-extraction #size_categories-1M<n<10M #language-English #region-us
|
# arxiv-abstracts-instructorxl-embeddings
This dataset contains 768-dimensional embeddings generated from the arxiv
paper abstracts using the InstructorXL model. Each
vector is accompanied by the abstract used to create it, along with the DOI (Digital Object Identifier). The
dataset was created using precomputed embeddings exposed by the Alexandria Index.
## Generation process
The embeddings have been generated using the following instruction:
The following code snippet shows how to generate embeddings using the InstructorXL model:
|
[
"# arxiv-abstracts-instructorxl-embeddings\n\nThis dataset contains 768-dimensional embeddings generated from the arxiv \npaper abstracts using InstructorXL model. Each \nvector has an abstract used to create it, along with the DOI (Digital Object Identifier). The \ndataset was created using precomputed embeddings exposed by the Alexandria Index.",
"## Generation process\n\nThe embeddings have been generated using the following instruction:\n\n\n\nThe following code snippet shows how to generate embeddings using the InstructorXL model:"
] |
[
"TAGS\n#task_categories-sentence-similarity #task_categories-feature-extraction #size_categories-1M<n<10M #language-English #region-us \n",
"# arxiv-abstracts-instructorxl-embeddings\n\nThis dataset contains 768-dimensional embeddings generated from the arxiv \npaper abstracts using InstructorXL model. Each \nvector has an abstract used to create it, along with the DOI (Digital Object Identifier). The \ndataset was created using precomputed embeddings exposed by the Alexandria Index.",
"## Generation process\n\nThe embeddings have been generated using the following instruction:\n\n\n\nThe following code snippet shows how to generate embeddings using the InstructorXL model:"
] |
[
47,
86,
39
] |
[
"passage: TAGS\n#task_categories-sentence-similarity #task_categories-feature-extraction #size_categories-1M<n<10M #language-English #region-us \n# arxiv-abstracts-instructorxl-embeddings\n\nThis dataset contains 768-dimensional embeddings generated from the arxiv \npaper abstracts using InstructorXL model. Each \nvector has an abstract used to create it, along with the DOI (Digital Object Identifier). The \ndataset was created using precomputed embeddings exposed by the Alexandria Index.## Generation process\n\nThe embeddings have been generated using the following instruction:\n\n\n\nThe following code snippet shows how to generate embeddings using the InstructorXL model:"
] |
a18396ee3d74ac9250eb288f4b829136e7988146
|
# Dataset Card for Evaluation run of chargoddard/Chronorctypus-Limarobormes-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/chargoddard/Chronorctypus-Limarobormes-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [chargoddard/Chronorctypus-Limarobormes-13b](https://huggingface.co/chargoddard/Chronorctypus-Limarobormes-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_chargoddard__Chronorctypus-Limarobormes-13b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-17T10:27:33.460587](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__Chronorctypus-Limarobormes-13b/blob/main/results_2023-10-17T10-27-33.460587.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.05169882550335571,
"em_stderr": 0.0022675304823078276,
"f1": 0.17888317953020105,
"f1_stderr": 0.0028882183973903902,
"acc": 0.39147173871286817,
"acc_stderr": 0.008785918503769254
},
"harness|drop|3": {
"em": 0.05169882550335571,
"em_stderr": 0.0022675304823078276,
"f1": 0.17888317953020105,
"f1_stderr": 0.0028882183973903902
},
"harness|gsm8k|5": {
"acc": 0.03866565579984837,
"acc_stderr": 0.005310583162098035
},
"harness|winogrande|5": {
"acc": 0.744277821625888,
"acc_stderr": 0.012261253845440473
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_chargoddard__Chronorctypus-Limarobormes-13b
|
[
"region:us"
] |
2023-10-17T09:27:37+00:00
|
{"pretty_name": "Evaluation run of chargoddard/Chronorctypus-Limarobormes-13b", "dataset_summary": "Dataset automatically created during the evaluation run of model [chargoddard/Chronorctypus-Limarobormes-13b](https://huggingface.co/chargoddard/Chronorctypus-Limarobormes-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_chargoddard__Chronorctypus-Limarobormes-13b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-17T10:27:33.460587](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__Chronorctypus-Limarobormes-13b/blob/main/results_2023-10-17T10-27-33.460587.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.05169882550335571,\n \"em_stderr\": 0.0022675304823078276,\n \"f1\": 0.17888317953020105,\n \"f1_stderr\": 0.0028882183973903902,\n \"acc\": 0.39147173871286817,\n \"acc_stderr\": 0.008785918503769254\n },\n \"harness|drop|3\": {\n \"em\": 0.05169882550335571,\n \"em_stderr\": 0.0022675304823078276,\n \"f1\": 0.17888317953020105,\n \"f1_stderr\": 0.0028882183973903902\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.03866565579984837,\n \"acc_stderr\": 0.005310583162098035\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.744277821625888,\n \"acc_stderr\": 0.012261253845440473\n }\n}\n```", "repo_url": "https://huggingface.co/chargoddard/Chronorctypus-Limarobormes-13b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_17T10_27_33.460587", "path": ["**/details_harness|drop|3_2023-10-17T10-27-33.460587.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-17T10-27-33.460587.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_17T10_27_33.460587", "path": ["**/details_harness|gsm8k|5_2023-10-17T10-27-33.460587.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-17T10-27-33.460587.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_17T10_27_33.460587", "path": ["**/details_harness|winogrande|5_2023-10-17T10-27-33.460587.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-17T10-27-33.460587.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_17T10_27_33.460587", "path": ["results_2023-10-17T10-27-33.460587.parquet"]}, {"split": "latest", "path": ["results_2023-10-17T10-27-33.460587.parquet"]}]}]}
|
2023-10-17T09:27:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of chargoddard/Chronorctypus-Limarobormes-13b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model chargoddard/Chronorctypus-Limarobormes-13b on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
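The original card embeds a Python snippet at this point (it survives in the metadata record above). The sketch below wraps that same `load_dataset` call in a small helper so the download only happens when the function is invoked; the helper name `load_details` is our own, and the `datasets` package is assumed to be installed.

```python
# Repository id and config name come from this card's own metadata.
REPO = "open-llm-leaderboard/details_chargoddard__Chronorctypus-Limarobormes-13b"
CONFIG = "harness_winogrande_5"  # one of the three task configurations


def load_details(split: str = "train"):
    """Download the per-example details for one task config from the Hub."""
    from datasets import load_dataset  # deferred import; needs the `datasets` package

    return load_dataset(REPO, CONFIG, split=split)
```

Calling `load_details()` fetches the `harness_winogrande_5` details from the Hub, so it requires network access.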
## Latest results
These are the latest results from run 2023-10-17T10:27:33.460587 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of chargoddard/Chronorctypus-Limarobormes-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/Chronorctypus-Limarobormes-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T10:27:33.460587(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of chargoddard/Chronorctypus-Limarobormes-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/Chronorctypus-Limarobormes-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T10:27:33.460587(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
27,
31,
175,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of chargoddard/Chronorctypus-Limarobormes-13b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/Chronorctypus-Limarobormes-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-17T10:27:33.460587(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
3bf60d86cd94456c8fadf7da0fb10c8fb21857dd
|
# Dataset Card for "pretrain-chinese-zhtw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
erhwenkuo/pretrain-chinese-zhtw
|
[
"region:us"
] |
2023-10-17T09:49:45+00:00
|
{"dataset_info": {"features": [{"name": "dataType", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "uniqueKey", "dtype": "string"}, {"name": "titleUkey", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 815848804, "num_examples": 416105}], "download_size": 419861369, "dataset_size": 815848804}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-18T06:23:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pretrain-chinese-zhtw"
More Information needed
|
[
"# Dataset Card for \"pretrain-chinese-zhtw\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pretrain-chinese-zhtw\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pretrain-chinese-zhtw\"\n\nMore Information needed"
] |
953ceaf1a4a12a1c9748cb20b9cc54d36ab048c4
|
A 50K sample from the Russian-Tyvan parallel corpus collected at https://tyvan.ru.
|
Agisight/tyvan-russian-parallel-50k
|
[
"task_categories:translation",
"size_categories:10K<n<100K",
"language:ru",
"language:tyv",
"license:cc-by-sa-4.0",
"region:us"
] |
2023-10-17T09:56:36+00:00
|
{"language": ["ru", "tyv"], "license": "cc-by-sa-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["translation"]}
|
2023-10-17T12:40:34+00:00
|
[] |
[
"ru",
"tyv"
] |
TAGS
#task_categories-translation #size_categories-10K<n<100K #language-Russian #language-Tuvinian #license-cc-by-sa-4.0 #region-us
|
A 50K sample from the Russian-Tyvan parallel corpus collected at URL.
|
[] |
[
"TAGS\n#task_categories-translation #size_categories-10K<n<100K #language-Russian #language-Tuvinian #license-cc-by-sa-4.0 #region-us \n"
] |
[
49
] |
[
"passage: TAGS\n#task_categories-translation #size_categories-10K<n<100K #language-Russian #language-Tuvinian #license-cc-by-sa-4.0 #region-us \n"
] |
5d1e3cf009704d16e948a7eb9e669a78b309d13a
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: HamdanXI/t5_small_gloss_to_text_merged_dataset
* Dataset: aslg_pc12
* Config: default
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@HamdanXI](https://huggingface.co/HamdanXI) for evaluating this model.
|
autoevaluate/autoeval-eval-aslg_pc12-default-864bef-95687146442
|
[
"autotrain",
"evaluation",
"region:us"
] |
2023-10-17T10:21:05+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["aslg_pc12"], "eval_info": {"task": "translation", "model": "HamdanXI/t5_small_gloss_to_text_merged_dataset", "metrics": ["bertscore"], "dataset_name": "aslg_pc12", "dataset_config": "default", "dataset_split": "train", "col_mapping": {"source": "gloss", "target": "text"}}}
|
2023-10-17T10:25:17+00:00
|
[] |
[] |
TAGS
#autotrain #evaluation #region-us
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Translation
* Model: HamdanXI/t5_small_gloss_to_text_merged_dataset
* Dataset: aslg_pc12
* Config: default
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @HamdanXI for evaluating this model.
|
[
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: HamdanXI/t5_small_gloss_to_text_merged_dataset\n* Dataset: aslg_pc12\n* Config: default\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @HamdanXI for evaluating this model."
] |
[
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: HamdanXI/t5_small_gloss_to_text_merged_dataset\n* Dataset: aslg_pc12\n* Config: default\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @HamdanXI for evaluating this model."
] |
[
13,
99,
16
] |
[
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: HamdanXI/t5_small_gloss_to_text_merged_dataset\n* Dataset: aslg_pc12\n* Config: default\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @HamdanXI for evaluating this model."
] |
fe5306e8fe2e1932ad2739ed4d49a42d7ff3a705
|
# Dataset Card for MergedDataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/ahmadkhan10/mergeddataset
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@ahmadkhan10](https://kaggle.com/ahmadkhan10)
### Licensing Information
The license for this dataset is pddl
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed]
|
ahmadkhan1022/kaggle
|
[
"license:pddl",
"region:us"
] |
2023-10-17T10:33:48+00:00
|
{"license": ["pddl"], "converted_from": "kaggle", "kaggle_id": "ahmadkhan10/mergeddataset"}
|
2023-10-17T10:39:48+00:00
|
[] |
[] |
TAGS
#license-pddl #region-us
|
# Dataset Card for MergedDataset
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
This dataset was shared by @ahmadkhan10
### Licensing Information
The license for this dataset is pddl
### Contributions
|
[
"# Dataset Card for MergedDataset",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nThis dataset was shared by @ahmadkhan10",
"### Licensing Information\n\nThe license for this dataset is pddl",
"### Contributions"
] |
[
"TAGS\n#license-pddl #region-us \n",
"# Dataset Card for MergedDataset",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nThis dataset was shared by @ahmadkhan10",
"### Licensing Information\n\nThe license for this dataset is pddl",
"### Contributions"
] |
[
13,
9,
125,
25,
6,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
17,
16,
5
] |
[
"passage: TAGS\n#license-pddl #region-us \n# Dataset Card for MergedDataset## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators\n\nThis dataset was shared by @ahmadkhan10### Licensing Information\n\nThe license for this dataset is pddl### Contributions"
] |
a40396b12b0b6c7610e7012d69317b02533ea0b0
|
**This is a fork of the following repository.**
https://huggingface.co/datasets/kunishou/oasst1-89k-ja
The data has been consolidated into instruction, input, and output fields, and perplexity scores computed with kenllm have been added.
The tokenizer used to compute the perplexity is available here:
https://huggingface.co/if001/sentencepiece_ja
- instruction_ppl: perplexity of the instruction only
- output_ppl: perplexity of the output only
- full_ppl: perplexity of the instruction and output combined into an instruction-style passage
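As an illustrative sketch of using these perplexity columns (the repository id and the `output_ppl` field come from this card; the cutoff value and helper name are arbitrary assumptions, and the `datasets` package is assumed to be installed):

```python
DATASET = "if001/oasst1_ja_ppl"  # repository id from this card
PPL_CUTOFF = 500  # illustrative threshold, not from the card


def load_low_ppl(cutoff: int = PPL_CUTOFF):
    """Load the train split and keep rows whose output perplexity is below `cutoff`."""
    from datasets import load_dataset  # deferred import; needs network access

    ds = load_dataset(DATASET, split="train")
    return ds.filter(lambda ex: ex["output_ppl"] < cutoff)
```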
|
if001/oasst1_ja_ppl
|
[
"language:ja",
"license:apache-2.0",
"region:us"
] |
2023-10-17T11:01:55+00:00
|
{"language": ["ja"], "license": "apache-2.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input_ppl", "dtype": "int64"}, {"name": "instruction_ppl", "dtype": "int64"}, {"name": "output_ppl", "dtype": "int64"}, {"name": "full_ppl", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 60856874, "num_examples": 55359}], "download_size": 27216157, "dataset_size": 60856874}}
|
2023-10-23T10:22:05+00:00
|
[] |
[
"ja"
] |
TAGS
#language-Japanese #license-apache-2.0 #region-us
|
This is a fork of the following repository.
URL
The data has been consolidated into instruction, input, and output fields, and perplexity scores computed with kenllm have been added.
The tokenizer used to compute the perplexity is available here:
URL
- instruction_ppl: perplexity of the instruction only
- output_ppl: perplexity of the output only
- full_ppl: perplexity of the instruction and output combined into an instruction-style passage
|
[] |
[
"TAGS\n#language-Japanese #license-apache-2.0 #region-us \n"
] |
[
20
] |
[
"passage: TAGS\n#language-Japanese #license-apache-2.0 #region-us \n"
] |
6e09f77dfd86362442518164cd6f8692f7e6926e
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: HamdanXI/t5_small_aslg_pc12
* Dataset: aslg_pc12
* Config: default
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@HamdanXI](https://huggingface.co/HamdanXI) for evaluating this model.
|
autoevaluate/autoeval-eval-aslg_pc12-default-6f4366-95699146446
|
[
"autotrain",
"evaluation",
"region:us"
] |
2023-10-17T11:04:32+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["aslg_pc12"], "eval_info": {"task": "translation", "model": "HamdanXI/t5_small_aslg_pc12", "metrics": ["bertscore", "comet"], "dataset_name": "aslg_pc12", "dataset_config": "default", "dataset_split": "train", "col_mapping": {"source": "gloss", "target": "text"}}}
|
2023-10-17T11:08:46+00:00
|
[] |
[] |
TAGS
#autotrain #evaluation #region-us
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Translation
* Model: HamdanXI/t5_small_aslg_pc12
* Dataset: aslg_pc12
* Config: default
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @HamdanXI for evaluating this model.
|
[
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: HamdanXI/t5_small_aslg_pc12\n* Dataset: aslg_pc12\n* Config: default\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @HamdanXI for evaluating this model."
] |
[
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: HamdanXI/t5_small_aslg_pc12\n* Dataset: aslg_pc12\n* Config: default\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @HamdanXI for evaluating this model."
] |
[
13,
91,
16
] |
[
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: HamdanXI/t5_small_aslg_pc12\n* Dataset: aslg_pc12\n* Config: default\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @HamdanXI for evaluating this model."
] |