sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
c64d3560155d81ae42eae6e5e96aadcacc98e20a
|
# zh-tw-pythia-ta8000-v1-e1-tr_wiki_sg-001
This dataset is a part of the `zh-tw-llm` project.
* Tokenizer: `zh-tw-pythia-tokenizer-a8000-v1`
* Built with: `translations`, `sharegpt`
* Rows: `train` `206319`, `test` `300`
* Max length: `2048`
* Full config:
```json
{"build_with": ["translations", "sharegpt"], "preview_length": 128, "translations_settings": {"source_dataset": "zetavg/coct-en-zh-tw-translations-twp-300k", "lang_1_key": "en", "lang_2_key": "ch", "templates": ["English: {lang_1}\nChinese: {lang_2}", "Chinese: {lang_2}\nEnglish: {lang_1}"], "rows_limit": 100000, "test_size": 100, "test_split_seed": 42}, "sharegpt_settings": {"source_dataset": "zetavg/ShareGPT-Processed", "train_on_inputs": false, "languages": [{"en": 0.4}, "zh_Hant"], "rows_limit": 8000, "test_size": 0.02, "test_split_seed": 42, "test_rows_limit": 100}}
```
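As a minimal sketch of how the `templates` entries in the config above could render one translation pair into training text (only the template strings and the `lang_1`/`lang_2`/`en`/`ch` keys come from the config; the `render` helper is hypothetical):

```python
# The two templates from translations_settings in the config above.
templates = [
    "English: {lang_1}\nChinese: {lang_2}",
    "Chinese: {lang_2}\nEnglish: {lang_1}",
]

def render(pair: dict) -> list[str]:
    """Fill every template with one en/ch translation pair,
    yielding one training text per template (illustrative only)."""
    return [t.format(lang_1=pair["en"], lang_2=pair["ch"]) for t in templates]

rows = render({"en": "Hello", "ch": "你好"})
```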
|
zh-tw-llm-dv/zh-tw-pythia-ta8000-v1-e1-tr_sg-001
|
[
"region:us"
] |
2023-05-17T23:11:48+00:00
|
{"dataset_info": {"dataset_size": 355884981.0, "download_size": 134906094, "features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"dtype": "string", "name": "preview"}], "splits": [{"name": "train", "num_bytes": 354002017.0, "num_examples": 206319}, {"name": "test", "num_bytes": 1882964.0, "num_examples": 300}]}}
|
2023-05-17T23:20:30+00:00
|
934295eaa0aeaf5fa4cb1cc39b96b87600dfc82b
|
yangwang825/klue-ynat
|
[
"task_categories:text-classification",
"language:ko",
"region:us"
] |
2023-05-17T23:29:06+00:00
|
{"language": ["ko"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "IT\uacfc\ud559", "1": "\uacbd\uc81c", "2": "\uc0ac\ud68c", "3": "\uc0dd\ud65c\ubb38\ud654", "4": "\uc138\uacc4", "5": "\uc2a4\ud3ec\uce20", "6": "\uc815\uce58"}}}}]}}
|
2023-05-19T01:07:06+00:00
|
|
f689580c14b8dc183bc042cf9da3a577214597ec
|
# Dataset Card for "us-liver"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Yuchong/us-liver
|
[
"region:us"
] |
2023-05-17T23:40:29+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2359828.0, "num_examples": 8}], "download_size": 366395, "dataset_size": 2359828.0}}
|
2023-05-17T23:40:35+00:00
|
79e948be1877f76228cea88581b4102a2c53da20
|
# Dataset Card for "us-vessel"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Yuchong/us-vessel
|
[
"region:us"
] |
2023-05-17T23:41:50+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1182963.0, "num_examples": 4}], "download_size": 185605, "dataset_size": 1182963.0}}
|
2023-05-17T23:41:58+00:00
|
281b4674cee02c11980e215682a57b096549788a
|
# Dataset Card for "clevr-full-v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
erkam/clevr-full-v3
|
[
"region:us"
] |
2023-05-17T23:49:01+00:00
|
{"dataset_info": {"features": [{"name": "target_img", "dtype": "image"}, {"name": "source_img", "dtype": "image"}, {"name": "target_layout", "dtype": "image"}, {"name": "target_obj", "sequence": "int64"}, {"name": "source_obj", "sequence": "int64"}, {"name": "target_box", "sequence": {"sequence": "float32"}}, {"name": "source_box", "sequence": {"sequence": "float32"}}, {"name": "target_tri", "sequence": {"sequence": "int64"}}, {"name": "source_tri", "sequence": {"sequence": "int64"}}], "splits": [{"name": "test", "num_bytes": 15623406.0, "num_examples": 119}, {"name": "train", "num_bytes": 126305058.0, "num_examples": 960}, {"name": "val", "num_bytes": 15746167.0, "num_examples": 119}], "download_size": 156084556, "dataset_size": 157674631.0}}
|
2023-05-23T22:25:50+00:00
|
d7c1317b702ac89f8e3aed5f5823727d8c04a906
|
# Dataset Card for "endo_instrument"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Yuchong/endo_instrument
|
[
"region:us"
] |
2023-05-17T23:54:21+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 69362308.0, "num_examples": 194}], "download_size": 12168988, "dataset_size": 69362308.0}}
|
2023-05-17T23:54:33+00:00
|
e89f5acdc30d842ef2801759f2b093b5c3d3f953
|
# Dataset Card for "endo_ployp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Yuchong/endo_ployp
|
[
"region:us"
] |
2023-05-18T00:07:16+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 248141216.0, "num_examples": 725}], "download_size": 67190427, "dataset_size": 248141216.0}}
|
2023-05-18T00:07:34+00:00
|
45e275b34b26eba370e2a356022aab2b52863993
|
# typescript-chunks
A dataset of TypeScript snippets, processed from the typescript subset of [the-stack-smol](https://huggingface.co/datasets/bigcode/the-stack-smol).
# Processing
- Each source file is parsed with the TypeScript AST and queried for 'semantic chunks' of the following types.
```
FunctionDeclaration ---- 8205
ArrowFunction --------- 33890
ClassDeclaration ------- 5325
InterfaceDeclaration -- 12884
EnumDeclaration --------- 518
TypeAliasDeclaration --- 3580
MethodDeclaration ----- 24713
```
- Leading comments are added to the front of `content`
- Removed all chunks over max sequence length (2048)
- Deduplicated / cleaned up
- Generated instructions / summaries with `gpt-3.5-turbo` (in progress)
# Dataset Structure
```python
from datasets import load_dataset
load_dataset("bleugreen/typescript-chunks")
DatasetDict({
train: Dataset({
features: ['type', 'content', 'repo', 'path', 'language'],
num_rows: 89115
})
})
```
|
bleugreen/typescript-chunks
|
[
"task_categories:text-classification",
"task_categories:text2text-generation",
"task_categories:summarization",
"language:en",
"region:us"
] |
2023-05-18T01:13:26+00:00
|
{"language": ["en"], "task_categories": ["text-classification", "text2text-generation", "summarization"]}
|
2023-05-18T03:27:24+00:00
|
bd5448a190003ce07b71f4875529b882d6fc2f6f
|
xiemoxiaoshaso/xiemoloras
|
[
"license:openrail",
"region:us"
] |
2023-05-18T01:28:51+00:00
|
{"license": "openrail"}
|
2023-05-18T07:20:55+00:00
|
|
01be236e6bf60cde69f19fa6bcff631fe058542b
|
RengJEY/Fast_Food_classification
|
[
"license:openrail",
"region:us"
] |
2023-05-18T02:19:02+00:00
|
{"license": "openrail"}
|
2023-05-18T02:19:02+00:00
|
|
2ca0c1e3c3301a10550eaaa87f659f7e7e7daa7b
|
# Dataset Card for "e07e0041"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/e07e0041
|
[
"region:us"
] |
2023-05-18T02:28:30+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 188, "num_examples": 10}], "download_size": 1334, "dataset_size": 188}}
|
2023-05-18T02:28:32+00:00
|
454fa38fb546fc667a3d565929dcb2d8cb0d7402
|
sled-umich/MindCraft
|
[
"license:gpl-2.0",
"region:us"
] |
2023-05-18T02:39:18+00:00
|
{"license": "gpl-2.0"}
|
2023-05-18T06:31:55+00:00
|
|
e7244b1f2a3eaf3c7b1983b476b5fac8479175b4
|
# Dataset Card for "dbd31000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/dbd31000
|
[
"region:us"
] |
2023-05-18T02:41:53+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1336, "dataset_size": 186}}
|
2023-05-18T02:41:57+00:00
|
0a1a2c082c5868f6071f4ac956a2ce8e74bba9be
|
# Dataset Card for "fa1be0f1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/fa1be0f1
|
[
"region:us"
] |
2023-05-18T02:41:58+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1336, "dataset_size": 186}}
|
2023-05-18T02:42:01+00:00
|
b0da14bb1324c04f3f07a461244d189ce4ec1cf0
|
xiemoxiaoshaso/ceshi
|
[
"license:openrail",
"region:us"
] |
2023-05-18T02:42:16+00:00
|
{"license": "openrail"}
|
2023-06-06T05:00:14+00:00
|
|
2b65c6d7ce6d71079453e5eebf14d0d3aeffbc14
|
# Dataset Card for "hieunguyen_news_vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vietgpt/hieunguyen_news_vi
|
[
"region:us"
] |
2023-05-18T03:02:51+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "extra_metadata", "struct": [{"name": "date", "dtype": "string"}, {"name": "id", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 38122567923, "num_examples": 7694518}], "download_size": 19876311847, "dataset_size": 38122567923}}
|
2023-05-18T05:38:08+00:00
|
543fab65e9f238d414855a8c6675fe54af738c42
|
# Dataset Card for "genshin_ch_10npc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
xmj2002/genshin_ch_10npc
|
[
"task_categories:text-to-speech",
"language:zh",
"license:apache-2.0",
"region:us"
] |
2023-05-18T03:03:50+00:00
|
{"language": ["zh"], "license": "apache-2.0", "task_categories": ["text-to-speech"], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "language", "dtype": "string"}, {"name": "npcName", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2459515323.168046, "num_examples": 17293}, {"name": "test", "num_bytes": 273358494.8319542, "num_examples": 1922}], "download_size": 2154942775, "dataset_size": 2732873818}}
|
2023-06-02T06:30:27+00:00
|
165c94525b3e033d8ce767aa63199aea6b7867b5
|
# Dataset Card for "e4f0cd73"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/e4f0cd73
|
[
"region:us"
] |
2023-05-18T04:39:18+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 184, "num_examples": 10}], "download_size": 1341, "dataset_size": 184}}
|
2023-05-18T04:39:21+00:00
|
4b538177d7d73ec437315e8bf00183831206e667
|
# Dataset Card for "SMM2-levels-simple"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
valashir/SMM2-levels-simple
|
[
"region:us"
] |
2023-05-18T05:22:57+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "level", "sequence": {"sequence": {"sequence": "uint8"}}}, {"name": "text", "dtype": "string"}, {"name": "text-baseline", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 11203477436.0, "num_examples": 202096}, {"name": "val", "num_bytes": 112695376.0, "num_examples": 2048}], "download_size": 1273700121, "dataset_size": 11316172812.0}}
|
2023-05-18T05:28:59+00:00
|
3f1850ffb048178260dcc2cd779871b54a957e61
|
Haziqsayyed/gpt-expressions
|
[
"task_categories:summarization",
"size_categories:n<1K",
"language:en",
"license:afl-3.0",
"code",
"region:us"
] |
2023-05-18T05:48:25+00:00
|
{"language": ["en"], "license": "afl-3.0", "size_categories": ["n<1K"], "task_categories": ["summarization"], "pretty_name": "Rule and Expressions", "tags": ["code"]}
|
2023-06-27T05:33:41+00:00
|
|
8d42e530ee68901bfcb5bb0b7a7149f72a8460c5
|
dtype: audio
splits:
- name: train
num_bytes: 76893503.0
num_examples: 1
download_size: 32336566
dataset_size: 76893503.0
---
# Dataset Card for "ipl"
|
zmeanszachary/ipl
|
[
"region:us"
] |
2023-05-18T06:05:32+00:00
|
{}
|
2023-05-18T07:01:38+00:00
|
6ed8ca07a596a5f7ee9f3e21e36cb262e22c2afa
|
This dataset is the `qed_train.tsv` file from the FLAN project, converted into Alpaca formatting: the context is placed in the input, and the outputs are ordered as explanation followed by answer.
https://github.com/google-research/FLAN/blob/main/flan/v2/cot_data/qed_train.tsv
```
@inproceedings{weifinetuned,
  title={Finetuned Language Models are Zero-Shot Learners},
  author={Wei, Jason and Bosma, Maarten and Zhao, Vincent and Guu, Kelvin and Yu, Adams Wei and Lester, Brian and Du, Nan and Dai, Andrew M and Le, Quoc V},
  booktitle={International Conference on Learning Representations}
}
```
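A hedged sketch of the conversion described above, turning one `qed_train.tsv` row into an Alpaca-style record; the column names (`question`, `context`, `explanation`, `answer`) are assumptions, not taken from the file:

```python
def to_alpaca(question: str, context: str, explanation: str, answer: str) -> dict:
    """Illustrative only: context goes in "input", and the output is
    ordered explanation-then-answer, as the card describes."""
    return {
        "instruction": question,
        "input": context,                      # context placed in the input
        "output": f"{explanation}\n{answer}",  # explanation first, then answer
    }

record = to_alpaca("Who wrote it?", "Some passage.", "The passage says X.", "X")
```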
|
PocketDoc/QED-Alpaca
|
[
"language:en",
"region:us"
] |
2023-05-18T06:12:38+00:00
|
{"language": ["en"]}
|
2023-05-23T13:59:06+00:00
|
e6eaffc1e1b37df94808dc5473bb18873bf80eaa
|
# Dataset Card for "japanese_alpaca_data"
- This dataset is based on `masa3141`'s great work on `japanese-alpaca-lora` [[github]](https://github.com/masa3141/japanese-alpaca-lora). Please also refer to this repo.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fujiki/japanese_alpaca_data
|
[
"language:ja",
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2023-05-18T06:13:15+00:00
|
{"language": ["ja"], "license": "cc-by-nc-sa-4.0", "pretty_name": "japanese_alpaca", "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24733874, "num_examples": 52002}], "download_size": 13849623, "dataset_size": 24733874}}
|
2023-05-19T11:54:13+00:00
|
b51f76d33665ce9ad34d970ccb5b8c0ea6055ccc
|
# Dataset Card for "train_librispeech_self"
num_examples: 28539
|
averageandyyy/train_librispeech_self
|
[
"region:us"
] |
2023-05-18T06:37:34+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcript", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6992473149.621, "num_examples": 28539}], "download_size": 6432578037, "dataset_size": 6992473149.621}}
|
2023-05-25T11:10:21+00:00
|
c85e1e5d6ca63b9158d50e3fa81ed25f61f7e571
|
yangdechuan/demo
|
[
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:openrail",
"region:us"
] |
2023-05-18T06:40:37+00:00
|
{"language": ["en"], "license": "openrail", "size_categories": ["n<1K"], "task_categories": ["text-generation"], "pretty_name": "tiny_demo"}
|
2023-05-18T06:48:30+00:00
|
|
6c5d8824d7d2868688be9ea7cac4687e50680bac
|
# Dataset Card for "dataset_pokemon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joeySadventure/dataset_pokemon
|
[
"region:us"
] |
2023-05-18T06:52:58+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 119417305.0, "num_examples": 833}], "download_size": 99672356, "dataset_size": 119417305.0}}
|
2023-05-18T07:12:12+00:00
|
e8caba2d4e8aa6d42a019d0fe56e07e51f188b38
|
# Dataset Card for "SOGC-archive-trademarks-1883-2001"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Travad98/SOGC-archive-trademarks-1883-2001
|
[
"region:us"
] |
2023-05-18T06:53:54+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "labels", "sequence": "int64"}, {"name": "target_sequence", "dtype": "string"}, {"name": "period", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 33325831057.787483, "num_examples": 2257}, {"name": "test", "num_bytes": 4267241991.89215, "num_examples": 289}, {"name": "validation", "num_bytes": 4163883189.320368, "num_examples": 282}], "download_size": 2904399312, "dataset_size": 41756956239.0}}
|
2023-05-18T12:32:44+00:00
|
d491f1e451c2a41f34e1f5ac76889bdb830912c0
|
dtype: audio
splits:
- name: train
num_bytes: 76893503.0
num_examples: 1
download_size: 32336566
dataset_size: 76893503.0
---
# Dataset Card for "adad"
|
zmeanszachary/adad
|
[
"region:us"
] |
2023-05-18T07:02:47+00:00
|
{}
|
2023-05-18T07:07:28+00:00
|
fc6a7454276ed93ffecda32349b79c4382f04865
|
# Dataset Card for "combined_librispeech_self"
num_examples: 2620 (test)
num_examples: 28539 (train)
|
averageandyyy/combined_librispeech_self
|
[
"region:us"
] |
2023-05-18T07:13:13+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcript", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6992473149.621, "num_examples": 28539}, {"name": "test", "num_bytes": 342449773.04, "num_examples": 2620}], "download_size": 6782935012, "dataset_size": 7334922922.661}}
|
2023-05-25T11:12:02+00:00
|
4096ad5fbbdb3185b43799f26d732eecaf629fbc
|
thewall/tg2
|
[
"license:openrail",
"region:us"
] |
2023-05-18T07:20:35+00:00
|
{"license": "openrail"}
|
2023-07-05T08:05:49+00:00
|
|
622b9c6b6d960bf75725246986852718cde070b9
|
# Dataset Card for "goodreads"
Must-read books summary.
Features:
* Book - Name of the book. Sometimes this includes the details of the series it belongs to in parentheses; that information can be extracted to analyse series separately.
* Author - Name of the book's author
* Description - The book's description as given on Goodreads
* Genres - Multiple genres as classified on Goodreads; useful for multi-label classification, content-based recommendation, or clustering
* Average Rating - The average rating (out of 5) on Goodreads
* Number of Ratings - The number of users that have rated the book (not to be confused with the number of reviews)
* URL - The Goodreads URL of the book's details page
License: MIT
|
Eitanli/goodreads
|
[
"region:us"
] |
2023-05-18T07:49:52+00:00
|
{}
|
2023-05-18T08:08:02+00:00
|
ee2cbd1a7627407df0245ea788c4ba08576fa202
|
mitermix/chess-selfplay
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-18T07:56:25+00:00
|
{"license": "apache-2.0"}
|
2023-05-22T05:58:34+00:00
|
|
b599af16aeef5fed3fd7ba6d05edd6069d011cd7
|
# Dataset Card for "flores200_devtest_mt5-1b-flores200-scaffold"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hlillemark/flores200_devtest_mt5-1b-flores200-scaffold
|
[
"region:us"
] |
2023-05-18T08:18:20+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "source_lang", "dtype": "string"}, {"name": "target_lang", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "chrf_unreduced", "dtype": "string"}], "splits": [{"name": "devtest", "num_bytes": 378647046, "num_examples": 500000}], "download_size": 261416114, "dataset_size": 378647046}}
|
2023-05-19T17:51:48+00:00
|
b8dd4a2c61ff1c1019db553d016ba99f2ac649c9
|
# Dataset Card for "lotr-book"
The Lord of the Rings books extracted into one dataset.
[Source](https://github.com/jeremyarancio/llm-rpg/blob/main/llm/prepare_dataset.py)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# Notes
* Book [link](https://gosafir.com/mag/wp-content/uploads/2019/12/Tolkien-J.-The-lord-of-the-rings-HarperCollins-ebooks-2010.pdf)
* Footers and header were removed.
* Starts at page 45 and ends at page 1055
|
JeremyArancio/lotr-book
|
[
"region:us"
] |
2023-05-18T08:53:28+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2432593, "num_examples": 1}], "download_size": 0, "dataset_size": 2432593}}
|
2023-06-02T11:30:41+00:00
|
5620da0aba716e8c88a7c8ea0658f354e7aa8b98
|
# Dataset Card for "stack-smol-xxl"
This is a subset of the [deduplicated Stack dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup)
It was generated like so:
```python
from datasets import load_dataset, Dataset
languages = ["css", "prolog", "c", "fortran", "solidity", "kotlin", "literate-agda", "julia", "java-server-pages",
"isabelle", "idris", "lean", "powershell", "go", "erlang", "f-sharp", "ada", "pascal", "perl", "r", "protocol-buffer",
"cmake", "sas", "ruby", "rust", "rmarkdown", "c-sharp", "smalltalk", "haskell", "maple", "mathematica", "ocaml",
"makefile", "lua", "literate-coffeescript", "literate-haskell", "restructuredtext", "racket", "standard-ml",
"systemverilog", "tex", "awk", "assembly", "alloy", "agda", "emacs-lisp", "dart", "cuda", "bluespec", "augeas", "batchfile",
"tcsh", "stan", "scala", "tcl", "stata", "applescript", "shell", "clojure", "scheme", "antlr", "sparql", "sql",
"glsl", "elm", "dockerfile", "cpp", "coffeescript", "common-lisp", "elixir", "groovy", "html", "java", "javascript",
"markdown", "php", "python", "typescript", "verilog", "visual-basic", "vhdl", "thrift", "matlab", "yacc", "zig", "xslt", "json", "yaml"]
def dset_gen():
for language in languages:
dset = load_dataset("bigcode/the-stack-dedup", data_dir=f"data/{language}", streaming=True, split="train")
sample = dset.take(250_000)
for row in sample:
yield row
dset = Dataset.from_generator(dset_gen)
```
## Dataset Structure
```
num_examples: 11658586
download_size: 28807934580
dataset_size: 78577965159
```
### Data Instances
Each data instance corresponds to one file. The content of the file is in the `content` feature, and other features (`repository_name`, `licenses`, etc.) provide some metadata. Note that a given file can appear in several different repositories that satisfy our safe-license criterion. If that is the case, only the first of these repositories, in alphabetical order, is shown for simplicity.
### Data Fields
- `content` (string): the content of the file.
- `size` (integer): size of the uncompressed file.
- `lang` (string): the programming language.
- `ext` (string): file extension
- `avg_line_length` (float): the average line-length of the file.
- `max_line_length` (integer): the maximum line-length of the file.
- `alphanum_fraction` (float): the fraction of characters in the file that are alphabetical or numerical characters.
- `hexsha` (string): unique git hash of file
- `max_{stars|forks|issues}_repo_path` (string): path to file in repo containing this file with maximum number of `{stars|forks|issues}`
- `max_{stars|forks|issues}_repo_name` (string): name of repo containing this file with maximum number of `{stars|forks|issues}`
- `max_{stars|forks|issues}_repo_head_hexsha` (string): hexsha of repository head
- `max_{stars|forks|issues}_repo_licenses` (string): licenses in repository
- `max_{stars|forks|issues}_count` (integer): number of `{stars|forks|issues}` in repository
- `max_{stars|forks|issues}_repo_{stars|forks|issues}_min_datetime` (string): first timestamp of a `{stars|forks|issues}` event
- `max_{stars|forks|issues}_repo_{stars|forks|issues}_max_datetime` (string): last timestamp of a `{stars|forks|issues}` event
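A hedged sketch of how the per-file statistics above (`size`, `avg_line_length`, `max_line_length`, `alphanum_fraction`) can be recomputed from a file's `content`; this is illustrative, not the exact code used to build The Stack:

```python
def file_stats(content: str) -> dict:
    """Recompute the simple per-file fields from the raw file content."""
    lines = content.splitlines() or [""]
    return {
        "size": len(content.encode("utf-8")),  # uncompressed size in bytes
        "avg_line_length": sum(len(l) for l in lines) / len(lines),
        "max_line_length": max(len(l) for l in lines),
        # fraction of characters that are alphabetical or numerical
        "alphanum_fraction": sum(c.isalnum() for c in content) / max(len(content), 1),
    }

stats = file_stats("ab\ncd")
```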
|
cakiki/stack-smol-xxl
|
[
"language:code",
"license:other",
"region:us"
] |
2023-05-18T08:57:15+00:00
|
{"language": ["code"], "license": "other", "dataset_info": {"features": [{"name": "hexsha", "dtype": "string"}, {"name": "size", "dtype": "int64"}, {"name": "ext", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "max_stars_repo_path", "dtype": "string"}, {"name": "max_stars_repo_name", "dtype": "string"}, {"name": "max_stars_repo_head_hexsha", "dtype": "string"}, {"name": "max_stars_repo_licenses", "sequence": "string"}, {"name": "max_stars_count", "dtype": "int64"}, {"name": "max_stars_repo_stars_event_min_datetime", "dtype": "string"}, {"name": "max_stars_repo_stars_event_max_datetime", "dtype": "string"}, {"name": "max_issues_repo_path", "dtype": "string"}, {"name": "max_issues_repo_name", "dtype": "string"}, {"name": "max_issues_repo_head_hexsha", "dtype": "string"}, {"name": "max_issues_repo_licenses", "sequence": "string"}, {"name": "max_issues_count", "dtype": "int64"}, {"name": "max_issues_repo_issues_event_min_datetime", "dtype": "string"}, {"name": "max_issues_repo_issues_event_max_datetime", "dtype": "string"}, {"name": "max_forks_repo_path", "dtype": "string"}, {"name": "max_forks_repo_name", "dtype": "string"}, {"name": "max_forks_repo_head_hexsha", "dtype": "string"}, {"name": "max_forks_repo_licenses", "sequence": "string"}, {"name": "max_forks_count", "dtype": "int64"}, {"name": "max_forks_repo_forks_event_min_datetime", "dtype": "string"}, {"name": "max_forks_repo_forks_event_max_datetime", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "avg_line_length", "dtype": "float64"}, {"name": "max_line_length", "dtype": "int64"}, {"name": "alphanum_fraction", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 78577965159, "num_examples": 11658586}], "download_size": 28807934580, "dataset_size": 78577965159}}
|
2023-06-06T10:37:36+00:00
|
1775c5b036fba27b919b1ec4c9556fe3d2463f16
|
# Dataset Card for "9c4666e9"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/9c4666e9
|
[
"region:us"
] |
2023-05-18T09:01:38+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 188, "num_examples": 10}], "download_size": 1337, "dataset_size": 188}}
|
2023-05-18T09:01:39+00:00
|
851a4485ca3cecd949741661b444bd9f599a0b95
|
Quinm101/ATpubmed
|
[
"license:openrail",
"region:us"
] |
2023-05-18T09:03:25+00:00
|
{"license": "openrail"}
|
2023-05-18T18:35:12+00:00
|
|
8751d66d86bd6cf702f6529e6c1162ee88d6c651
|
# Dataset Card for "840ef337"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/840ef337
|
[
"region:us"
] |
2023-05-18T09:17:49+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 182, "num_examples": 10}], "download_size": 1340, "dataset_size": 182}}
|
2023-05-18T09:17:52+00:00
|
92a2e1174b7b8e8a576a40d47809653e371c3ab8
|
All the images were taken from [Unsplash](https://unsplash.com/s/photos/starbucks-logo).
|
diffusers/starbucks-example
|
[
"region:us"
] |
2023-05-18T09:44:07+00:00
|
{}
|
2023-05-18T09:45:03+00:00
|
fef68e841549238f01402fed895b9f34def245fa
|
Fredithefish/GPTeacher-for-RedPajama-Chat
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-18T10:28:04+00:00
|
{"license": "apache-2.0"}
|
2023-05-18T10:29:04+00:00
|
|
680678213402bf6abea39e470d891682541c9ea8
|
pxovela/Vodka_v2_training_data
|
[
"license:openrail",
"region:us"
] |
2023-05-18T10:31:10+00:00
|
{"license": "openrail"}
|
2023-05-18T11:18:12+00:00
|
|
154c3f31fc8360c2b66c9d2eb7c116b86d6c47c7
|
Cyb3rWard0g/ATTCKGroups
|
[
"license:mit",
"region:us"
] |
2023-05-18T10:38:43+00:00
|
{"license": "mit"}
|
2023-05-18T10:41:20+00:00
|
|
3474500cfd12d7a8c6e88083144aaf1d255661e2
|
# Dataset Card for "train_valid_digit_mask_augmented_raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mazkobot/train_valid_digit_mask_augmented_raw
|
[
"region:us"
] |
2023-05-18T10:39:58+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "test", "num_bytes": 195044952.756, "num_examples": 6086}, {"name": "validation", "num_bytes": 169086019.324, "num_examples": 5276}], "download_size": 341299033, "dataset_size": 364130972.08000004}}
|
2023-05-18T10:40:10+00:00
|
2d8f9c38ff3d36add81a42d3d53d6879fe6683b4
|
Thouph/incomplete_224
|
[
"license:mit",
"region:us"
] |
2023-05-18T10:43:10+00:00
|
{"license": "mit", "viewer": false}
|
2023-05-19T12:49:00+00:00
|
|
0f9e81bfd01847e83b2274cff64c366c37a14d10
|
hoangho/dataset
|
[
"license:mit",
"region:us"
] |
2023-05-18T10:54:12+00:00
|
{"license": "mit"}
|
2023-05-18T10:55:56+00:00
|
|
113754bc5449fb70a282f7ab3c667d51510a5b5e
|
**Unified Prompt Selection** provides a set of tools for utilizing and evaluating a probability-based prompt selection method introduced in the following paper.
Sohee Yang, Jonghyeon Kim, Joel Jang, Seonghyeon Ye, Hyunji Lee, Minjoon Seo. [https://arxiv.org/abs/2305.14877](https://arxiv.org/abs/2305.14877).
This repository is prepared to reproduce the evaluation results of the paper.
|
gimmaru/ups_reproduction
|
[
"arxiv:2305.14877",
"region:us"
] |
2023-05-18T10:56:42+00:00
|
{}
|
2023-12-24T03:42:08+00:00
|
a3d955d3f9759d8a143df99a1abaf4ced196c090
|
vargha/liquidmarket_chatbot
|
[
"region:us"
] |
2023-05-18T11:04:09+00:00
|
{}
|
2023-05-18T11:05:21+00:00
|
|
531e7e81169a31ead175395f0b5cfb28448854c0
|
# Dataset Card for "prepared-yagpt"
## Short Description
This dataset is intended for training Russian-language chatbots.
It consists of many dialogues that allow you to train your model to answer user prompts.
## Notes
1. Special tokens
- history, speaker1, speaker2 (history can optionally be removed, i.e. substituted with an empty string)
2. Dataset is based on
- [Matreshka](https://huggingface.co/datasets/zjkarina/matreshka)
- [Yandex-Q](https://huggingface.co/datasets/its5Q/yandex-q)
- [Diasum](https://huggingface.co/datasets/bragovo/diasum)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
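As a minimal sketch of how the special tokens above might be assembled into a training string (the angle-bracket token syntax and the `build_example` helper are assumptions, not taken from the dataset):

```python
def build_example(user: str, reply: str, history: str = "") -> str:
    """Illustrative only: join the history, speaker1, and speaker2 segments
    into one training string; history defaults to the empty string, per the
    note that it can optionally be removed."""
    return f"<history>{history}<speaker1>{user}<speaker2>{reply}"

example = build_example("привет", "здравствуйте")
```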
|
under-tree/prepared-yagpt
|
[
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ru",
"region:us"
] |
2023-05-18T11:17:21+00:00
|
{"language": ["ru"], "size_categories": ["10K<n<100K"], "task_categories": ["conversational", "text-generation"], "pretty_name": "Dialogue Dataset for YAGPT ChatBot", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 42680359.78397168, "num_examples": 53550}, {"name": "test", "num_bytes": 7532625.216028317, "num_examples": 9451}], "download_size": 25066987, "dataset_size": 50212985}}
|
2023-05-18T11:26:50+00:00
|
29cb829c1dfa43306f313c814de96c0ed75ca7e1
|
This is the official code for **HistRED: A Historical Document-Level Relation Extraction Dataset** (ACL 2023).
All materials related to this paper can be found here.
- [ACL Anthology](https://aclanthology.org/2023.acl-long.180/): Official proceeding publication
- [Virtual-ACL 2023](https://virtual2023.aclweb.org/paper_P536.html#slides): You can view papers, posters, and presentation slides.
- [arXiv](https://arxiv.org/abs/2307.04285): The camera-ready version of this paper.
Note that this dataset is open under [CC BY-NC-ND 4.0](https://creativecommons.org/licenses/by-nc-nd/4.0/) license.
The same code (except the dataset) can be seen in [Github](https://github.com/dudrrm/HistRED/tree/main)
```python
from datasets import load_dataset
dataset = load_dataset("Soyoung/HistRED")
```
# Dataset Example
Due to the complexity of the dataset, we replace the dataset preview with an example figure.
The text in the figure is translated into English (*) for comprehension; the dataset itself does not include English translations and contains only Korean and Hanja.
Also, only one relation is shown for readability.
Relation information includes
1. subject and object entities for Korean and Hanja *(sbj_kor, sbj_han, obj_kor, obj_han)*,
2. a relation type *(label)*,
3. and evidence sentence index(es) for each language *(evidence_kor, evidence_han)*.
Metadata contains additional information, such as which book the text is extracted from.

# Corpus of HistRED: \<\< Yeonhaengnok \>\>
In this dataset, we choose *Yeonhaengnok*, a collection of records originally written in Hanja, classical Chinese writing, which was later translated into Korean.
[Joseon](https://en.wikipedia.org/wiki/Joseon), the last dynastic kingdom of Korea, lasted just over five centuries, from 1392 to 1897, and many aspects of Korean traditions and customs trace their roots back to this era.
Numerous historical documents exist from the Joseon dynasty, including *Annals of Joseon Dynasty* ([AJD](https://en.wikipedia.org/wiki/Veritable_Records_of_the_Joseon_Dynasty)) and *Diaries of the Royal Secretariats* ([DRS](https://en.wikipedia.org/wiki/Seungjeongwon_ilgi)).
Note that the majority of Joseon's records were written in Hanja, an archaic Chinese writing that differs from modern Chinese, because the Korean language had not been standardized until much later.
In short, Yeonhaengnok is a travel diary from the Joseon period. In the past, traveling to other places, particularly to foreign countries, was rare.
Therefore, intellectuals who traveled to Chung (also referred to as the [Qing dynasty](https://en.wikipedia.org/wiki/Qing_dynasty)) meticulously documented their journeys, and Yeonhaengnok is a compilation of these accounts.
Diverse individuals from different generations recorded their business trips following similar routes from Joseon to Chung, focusing on people, products, and events they encountered.
The Institute for the Translation of Korean Classics (ITKC) has open-sourced the original and their translated texts for many historical documents, promoting active historical research.
The entire documents were collected from an open-source database at https://db.itkc.or.kr/.
# Properties
- Our dataset contains (i) named entities, (ii) relations between the entities, and (iii) parallel relationships between Korean and Hanja texts.
- <code style="color : red"> dataset.py </code> returns a processed dataset that can be easily applied to general NLP models.
- For the monolingual setting: *KoreanDataset*, *HanjaDataset*
- For the bilingual setting: *JointDataset*
- <code style="color : red"> ner_map.json </code> and <code style="color : red"> label_map.json </code> are the mapping dictionaries from label classes to indexes.
- Sequence level (SL) is a unit of sequence length for extracting self-contained sub-texts without losing context information for each relation in the text. Each folder SL-k indicates that SL is k.
# Dataset usages
- Testbed for evaluating the model performance when varying the sequence length.
- Relation extraction task especially on Non-English or historical corpus.
# Citation
```
@inproceedings{yang-etal-2023-histred,
title = "{H}ist{RED}: A Historical Document-Level Relation Extraction Dataset",
author = "Yang, Soyoung and
Choi, Minseok and
Cho, Youngwoo and
Choo, Jaegul",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.180",
pages = "3207--3224",
}
```
|
Soyoung/HistRED
|
[
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:ko",
"license:cc-by-nc-nd-4.0",
"art",
"arxiv:2307.04285",
"region:us"
] |
2023-05-18T12:00:36+00:00
|
{"language": ["ko"], "license": "cc-by-nc-nd-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"], "tags": ["art"]}
|
2023-08-01T14:05:24+00:00
|
823f3456b2a15f5446decdc5abebb214cb94241d
|
Persian dataset with the 16 Myers-Briggs (MBTI) types, crawled from Persian Twitter users.
|
mjavadmt/mbti-persian-twitter
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:fa",
"region:us"
] |
2023-05-18T12:01:33+00:00
|
{"language": ["fa"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "MBTI-persian-dataset"}
|
2023-05-18T15:55:59+00:00
|
01e54c314c46a68188ea04c6559e13b52e72f6cd
|
# Dataset Card for "gaps_jpn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
bjoernp/gaps_jpn
|
[
"region:us"
] |
2023-05-18T12:04:36+00:00
|
{"dataset_info": {"features": [{"name": "sentences", "dtype": "string"}, {"name": "sentences_jp", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 63008094210, "num_examples": 231591358}], "download_size": 33283332401, "dataset_size": 63008094210}}
|
2023-05-18T12:22:28+00:00
|
1c626d43a932fb636800fc4e8fd729c38774bcb1
|
# Dataset Card for "c4_t5_packed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hlillemark/c4_t5_packed
|
[
"region:us"
] |
2023-05-18T12:36:21+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 738234634800, "num_examples": 180057228}, {"name": "validation", "num_bytes": 41000000, "num_examples": 10000}], "download_size": 362188841348, "dataset_size": 738275634800}}
|
2023-05-18T23:33:06+00:00
|
4d11f0fd88e285ac948fca3d9f0cc869d4bdd7e6
|
WizardLM's instructions with Claude's outputs. Includes an unfiltered version as well.
|
Norquinal/WizardLM_alpaca_claude_evol_instruct_70k
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-18T12:56:26+00:00
|
{"license": "apache-2.0"}
|
2023-05-18T22:09:15+00:00
|
5f29cee5aad9550d8c96f961eb5fe5e90654ce85
|
# zh-tw-pythia-ta8000-v1-e1-tr_sg-001-c512
This dataset is a part of the `zh-tw-llm` project.
* Tokenizer: `zh-tw-pythia-tokenizer-a8000-v1`
* Built with: `translations`, `sharegpt`
* Rows: `train` `206319`, `test` `300`
* Max length: `512`
* Full config:
```json
{"build_with": ["translations", "sharegpt"], "preview_length": 128, "translations_settings": {"source_dataset": "zetavg/coct-en-zh-tw-translations-twp-300k", "lang_1_key": "en", "lang_2_key": "ch", "templates": ["English: {lang_1}\nChinese: {lang_2}", "Chinese: {lang_2}\nEnglish: {lang_1}"], "rows_limit": 100000, "test_size": 100, "test_split_seed": 42}, "sharegpt_settings": {"source_dataset": "zetavg/ShareGPT-Processed", "train_on_inputs": false, "languages": [{"en": 0.4}, "zh_Hant"], "rows_limit": 8000, "test_size": 0.02, "test_split_seed": 42, "test_rows_limit": 100}}
```
|
zh-tw-llm-dv/zh-tw-pythia-ta8000-v1-e1-tr_sg-001-c512
|
[
"region:us"
] |
2023-05-18T13:02:17+00:00
|
{"dataset_info": {"dataset_size": 283945970.0, "download_size": 115782741, "features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"dtype": "string", "name": "preview"}], "splits": [{"name": "train", "num_bytes": 283466501.0, "num_examples": 206319}, {"name": "test", "num_bytes": 479469.0, "num_examples": 300}]}}
|
2023-05-18T13:03:29+00:00
|
7de5674bd620c69643d48472e2c334bb3f96937a
|
EdisonBlack/checkpoint
|
[
"region:us"
] |
2023-05-18T13:11:08+00:00
|
{}
|
2023-05-19T03:48:21+00:00
|
|
2bbb40ad48e037ad9cde565ff2068246c54c6db3
|
# Dataset Card for "ikitracs_training_dataset_FINAL"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mtyrrell/ikitracs_training_dataset_FINAL
|
[
"region:us"
] |
2023-05-18T13:22:06+00:00
|
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "country_code", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "type_of_document", "dtype": "string"}, {"name": "parameter", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "target_type", "dtype": "string"}, {"name": "target_year", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "document_path", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "matched_paragraph", "dtype": "string"}, {"name": "matched_paragraph_FINAL", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11686221, "num_examples": 4816}], "download_size": 2318561, "dataset_size": 11686221}}
|
2023-05-18T13:24:44+00:00
|
fef7e44a493eb9c59e94edfdd0fa193629616f89
|
# Dataset Card for "clavin"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ShuaKang/calvin_d
|
[
"region:us"
] |
2023-05-18T13:23:46+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 208782476.5, "num_examples": 5124}], "download_size": 208631711, "dataset_size": 208782476.5}}
|
2023-05-18T13:23:54+00:00
|
486c4c85f4e05f2575acd6bf9e563eb72229d415
|
# Dataset Card for "synthetic_competing_risk"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Gabriel/synthetic_competing_risk
|
[
"region:us"
] |
2023-05-18T13:39:02+00:00
|
{"dataset_info": {"features": [{"name": "time", "dtype": "int64"}, {"name": "label", "dtype": "int64"}, {"name": "true_time", "dtype": "int64"}, {"name": "true_label", "dtype": "int64"}, {"name": "feature1", "dtype": "float64"}, {"name": "feature2", "dtype": "float64"}, {"name": "feature3", "dtype": "float64"}, {"name": "feature4", "dtype": "float64"}, {"name": "feature5", "dtype": "float64"}, {"name": "feature6", "dtype": "float64"}, {"name": "feature7", "dtype": "float64"}, {"name": "feature8", "dtype": "float64"}, {"name": "feature9", "dtype": "float64"}, {"name": "feature10", "dtype": "float64"}, {"name": "feature11", "dtype": "float64"}, {"name": "feature12", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2611200, "num_examples": 19200}, {"name": "test", "num_bytes": 816000, "num_examples": 6000}, {"name": "validation", "num_bytes": 652800, "num_examples": 4800}], "download_size": 3733894, "dataset_size": 4080000}}
|
2023-05-18T13:39:13+00:00
|
06382cecbb95b1239ddddbaf879f328392ca8b1c
|
# Dataset Card for "aims_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aleh/aims_2
|
[
"region:us"
] |
2023-05-18T14:01:23+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1596319469.0, "num_examples": 25}], "download_size": 434309241, "dataset_size": 1596319469.0}}
|
2023-05-18T14:04:40+00:00
|
06e4c1a96c2cc2767646009b0c4cc3b0ee9499b1
|
Dataset for anime head detection (include the entire head, not only the face parts).
| Dataset | Train | Test | Validate | Description |
|------------------------|-------|------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| ani_face_detection.v1i | 25698 | 113 | 253 | A high-quality third-party dataset (it seems to no longer be publicly available; please contact me for removal if it infringes your rights) that can be used for training directly. Although its name includes `face`, what it actually annotates is the `head`. |
We provide an [online demo](https://huggingface.co/spaces/deepghs/anime_object_detection) here.
|
deepghs/anime_head_detection
|
[
"task_categories:object-detection",
"size_categories:10K<n<100K",
"license:mit",
"art",
"region:us"
] |
2023-05-18T14:05:37+00:00
|
{"license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["object-detection"], "tags": ["art"]}
|
2023-05-18T15:33:08+00:00
|
01c486103ec0bfdb1521c9cc7bd07a11e1bf335f
|
sled-umich/MindCraft2
|
[
"license:gpl-2.0",
"region:us"
] |
2023-05-18T14:19:53+00:00
|
{"license": "gpl-2.0"}
|
2023-05-18T15:14:09+00:00
|
|
b896798a9a4ca64edf4f2632223c1da5fcef0f7e
|
# Dataset Card for "BilbaoCaptions2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
landersanmi/BilbaoCaptions2
|
[
"region:us"
] |
2023-05-18T14:25:17+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3185702553.866935, "num_examples": 3781}, {"name": "test", "num_bytes": 797057555.1330653, "num_examples": 946}], "download_size": 3952516923, "dataset_size": 3982760109.0}}
|
2023-05-18T14:46:18+00:00
|
0d678c171adde36b0a3588d423917c4f20beaa11
|
Dampish/DDDC
|
[
"license:cc-by-nc-4.0",
"region:us"
] |
2023-05-18T14:33:47+00:00
|
{"license": "cc-by-nc-4.0"}
|
2023-05-18T14:38:29+00:00
|
|
08b5a321cdf5765300e4965d1903f61665f33f2e
|
# Dataset Card for "eli5_6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
9wimu9/sinhala_eli5
|
[
"region:us"
] |
2023-05-18T14:34:31+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "text", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 487152126, "num_examples": 109215}], "download_size": 200153932, "dataset_size": 487152126}}
|
2023-05-23T18:51:59+00:00
|
b1aaadc8024296d747b8e42ffed1a398c7894312
|
Dataset for anime face detection (face only, not the entire head).
| Dataset | Train | Test | Validate | Description |
|:-----------------------:|:-----:|:----:|:--------:|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| v1.4 | 12798 | 622 | 1217 | Additional images from different categories have been annotated based on the `v1` dataset. Furthermore, all automatically annotated data samples from the `v1` dataset have been manually corrected. |
| v1.4-raw | 4266 | 622 | 1217 | Same as `v1.4`, without any preprocess and data augmentation. Suitable for directly upload to Roboflow platform. |
| v1                      | 5943  | 293  | 566      | Primarily consists of illustrations, auto-annotated with [hysts/anime-face-detector](https://github.com/hysts/anime-face-detector); necessary manual corrections were performed. |
| raw | 1981 | 293 | 566 | Same as `v1`, without any preprocess and data augmentation. Suitable for directly upload to Roboflow platform. |
| Anime Face CreateML.v1i | 4263 | 609 | 1210 | Third-party dataset, source: https://universe.roboflow.com/my-workspace-mph8o/anime-face-createml/dataset/1 |
The best practice is to combine the `Anime Face CreateML.v1i` dataset with the `v1.4` dataset for training. We provide an [online demo](https://huggingface.co/spaces/deepghs/anime_object_detection).
|
deepghs/anime_face_detection
|
[
"task_categories:object-detection",
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] |
2023-05-18T14:49:35+00:00
|
{"license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["object-detection"], "tags": ["art"]}
|
2023-06-08T01:08:54+00:00
|
9ae704019b9d742404c050e1b523deb40257258d
|
# zh-tw-pythia-ta8000-v1-it1-sg-001
This dataset is a part of the `zh-tw-llm` project.
* Tokenizer: `zh-tw-pythia-tokenizer-a8000-v1`
* Built with: `sharegpt`
* Rows: `train` `8150`, `test` `84`
* Max length: `2048`
* Full config:
```json
{"build_with": ["sharegpt"], "preview_length": 512, "sharegpt_settings": {"source_dataset": "zetavg/ShareGPT-Processed", "train_on_inputs": false, "languages": [{"en": 0.3}, {"zh": 0.2}, "zh_Hant"], "rows_limit": 10000, "test_size": 0.01, "test_split_seed": 42, "test_rows_limit": 100}}
```
|
zh-tw-llm-dv/zh-tw-pythia-ta8000-v1-it1-sg-001
|
[
"region:us"
] |
2023-05-18T14:50:43+00:00
|
{"dataset_info": {"dataset_size": 125318970.0, "download_size": 36093839, "features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"dtype": "string", "name": "preview"}], "splits": [{"name": "train", "num_bytes": 123951700.0, "num_examples": 8150}, {"name": "test", "num_bytes": 1367270.0, "num_examples": 84}]}}
|
2023-05-18T15:54:20+00:00
|
dd4fd548bce9046cb25a3a8ad7dce7878fe379ca
|
# Dataset Card for "chunk_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_2
|
[
"region:us"
] |
2023-05-18T15:20:03+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 907969796, "num_examples": 178313}], "download_size": 921876189, "dataset_size": 907969796}}
|
2023-05-18T15:20:31+00:00
|
fb4e7f1de67e892cc46df85e94c18bb146ac3ae1
|
# Dataset Card for "chunk_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_0
|
[
"region:us"
] |
2023-05-18T15:24:39+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1165136164, "num_examples": 228817}], "download_size": 1184952464, "dataset_size": 1165136164}}
|
2023-05-18T15:25:20+00:00
|
1154cd0ff018a6972fe7289c50c2e1e9033cf37b
|
# Dataset Card for "chunk_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_1
|
[
"region:us"
] |
2023-05-18T15:27:19+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 936367880, "num_examples": 183890}], "download_size": 949352727, "dataset_size": 936367880}}
|
2023-05-18T15:27:49+00:00
|
0f26c34eaf84768146e8a874f45686f7a6d30359
|
# Dataset Card for "chunk_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_4
|
[
"region:us"
] |
2023-05-18T15:32:56+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 917277972, "num_examples": 180141}], "download_size": 933036301, "dataset_size": 917277972}}
|
2023-05-18T15:33:46+00:00
|
80486c8fc7df202e2b6a92877e29d1e06847c8d1
|
This dataset contains 153,735 training images from [HaGRID](https://github.com/hukenovs/hagrid) (HAnd Gesture Recognition Image Dataset) modified for image classification instead of object detection. The original dataset is 716GB. I created this sample for a tutorial so readers can use the dataset in the free tiers of Google Colab and Kaggle Notebooks.
### Original Authors:
* [Alexander Kapitanov](https://www.linkedin.com/in/hukenovs)
* [Andrey Makhlyarchuk](https://www.linkedin.com/in/makhliarchuk)
* [Karina Kvanchiani](https://www.linkedin.com/in/kvanchiani)
### Original Dataset Links
* [GitHub](https://github.com/hukenovs/hagrid)
* [Kaggle Datasets Page](https://www.kaggle.com/datasets/kapitanov/hagrid)
|
cj-mills/hagrid-classification-512p-no-gesture-150k-zip
|
[
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] |
2023-05-18T15:34:52+00:00
|
{"language": ["en"], "license": "cc-by-sa-4.0", "size_categories": ["100K<n<1M"]}
|
2023-05-22T22:00:45+00:00
|
406a732fb47bf9feb2607d02765b9fc30ee17628
|
# Dataset Card for "chunk_9"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_9
|
[
"region:us"
] |
2023-05-18T15:38:27+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1078490692, "num_examples": 211801}], "download_size": 1096986740, "dataset_size": 1078490692}}
|
2023-05-18T15:39:02+00:00
|
f294e2e664d88268ab313a771c13eb1a28002e20
|
bart2020/HR_Attrition
|
[
"region:us"
] |
2023-05-18T15:40:18+00:00
|
{}
|
2023-05-18T15:47:19+00:00
|
|
0b93ef30cf3c79f57bd15629a00b733ae7804ee1
|
# Dataset Card for "chunk_6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_6
|
[
"region:us"
] |
2023-05-18T15:43:06+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1134848948, "num_examples": 222869}], "download_size": 1151416503, "dataset_size": 1134848948}}
|
2023-05-18T15:43:46+00:00
|
be5920b496437e445cc33d9b7a15767e82dd72ef
|
# Dataset Card for "chunk_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_5
|
[
"region:us"
] |
2023-05-18T15:52:14+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 977154800, "num_examples": 191900}], "download_size": 992947607, "dataset_size": 977154800}}
|
2023-05-18T15:53:59+00:00
|
e791405ae588f990dccf6b468279e3b30c8f541d
|
# Dataset Card for "Fashion_controlnet_dataset_V3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Abrumu/Fashion_controlnet_dataset_V3
|
[
"region:us"
] |
2023-05-18T16:04:45+00:00
|
{"dataset_info": {"features": [{"name": "target", "dtype": "image"}, {"name": "mask", "dtype": "image"}, {"name": "cloth", "dtype": "image"}, {"name": "control", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "CLIP_captions", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7964862365.0, "num_examples": 11647}], "download_size": 7944023014, "dataset_size": 7964862365.0}}
|
2023-05-19T08:44:48+00:00
|
b594340ce320c850d80fe13095b291f410aad4ea
|
# Dataset Card for "aims_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aleh/aims_4
|
[
"region:us"
] |
2023-05-18T16:11:40+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 630264241.0, "num_examples": 25}], "download_size": 141922879, "dataset_size": 630264241.0}}
|
2023-05-18T16:28:43+00:00
|
c8bf1649be5378ce812612550bfc2dc04baaa94c
|
# Dataset Card for "chunk_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_3
|
[
"region:us"
] |
2023-05-18T16:13:45+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 993082576, "num_examples": 195028}], "download_size": 1005792153, "dataset_size": 993082576}}
|
2023-05-18T16:15:31+00:00
|
135d7229618dfbe3a3725db46066683528e48291
|
# Dataset Card for T1w MRI Brain Slices
### Dataset Summary
This dataset contains images of brain MRI scans of 301 healthy adults (181 younger, 120 older adults) alongside information about their sex, age and education.
It contains 30 images from the coronal axis of each person's scan.
It was created using the [Neurocognitive aging data release with behavioral, structural, and multi-echo functional MRI measures](https://openneuro.org/datasets/ds003592/versions/1.0.13) dataset, created by [Spreng et al.](https://doi.org/10.18112/openneuro.ds003592.v1.0.13).

### Example of Usage
An example of how this dataset can be used to predict gender with a simple CNN can be found here: [MRI Gender Classification](https://github.com/g4m3r0/MRI_Gender_Classification).
|
g4m3r/T1w_MRI_Brain_Slices
|
[
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"brain",
"mri",
"biology",
"gender",
"sex",
"age",
"Image Classification",
"region:us"
] |
2023-05-18T16:21:24+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "pretty_name": "T1w MRI Brain Slices", "tags": ["brain", "mri", "biology", "gender", "sex", "age", "Image Classification"]}
|
2023-05-20T17:46:50+00:00
|
2095017f977313436d67fa8440bfb4ee861b78bd
|
# Dataset Card for "chunk_7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_7
|
[
"region:us"
] |
2023-05-18T16:23:29+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1139024388, "num_examples": 223689}], "download_size": 1157863393, "dataset_size": 1139024388}}
|
2023-05-18T16:25:37+00:00
|
366e1538d52b930162cf51bfeb5b8868d01e99de
|
# Dataset Card for "chunk_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_8
|
[
"region:us"
] |
2023-05-18T16:29:37+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1118931356, "num_examples": 219743}], "download_size": 1138834704, "dataset_size": 1118931356}}
|
2023-05-18T16:31:39+00:00
|
5b4d144114220495cbdfcfad08eb0a6db909fb9c
|
SimonasK/AudioCaps-Spectrograms
|
[
"region:us"
] |
2023-05-18T16:36:01+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 541606529.265, "num_examples": 1319}], "download_size": 542591614, "dataset_size": 541606529.265}}
|
2023-05-18T16:46:19+00:00
|
|
e7d41b9e33b2eafb340ebbab584ac8925dbf92af
|
ysn-rfd/Khatnegardatasets
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-18T16:42:27+00:00
|
{"license": "apache-2.0"}
|
2023-05-18T16:42:27+00:00
|
|
95f01e2a564ae1c02e21a4a3d87585ea5ad770c1
|
# Dataset Card for "asks_validation_embedded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dhmeltzer/asks_validation_embedded
|
[
"region:us"
] |
2023-05-18T16:52:47+00:00
|
{"dataset_info": {"features": [{"name": "q_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "selftext", "dtype": "string"}, {"name": "document", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "a_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "score", "dtype": "int32"}]}, {"name": "title_urls", "sequence": [{"name": "url", "dtype": "string"}]}, {"name": "selftext_urls", "sequence": [{"name": "url", "dtype": "string"}]}, {"name": "answers_urls", "sequence": [{"name": "url", "dtype": "string"}]}, {"name": "title_body", "dtype": "string"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "validation_asks", "num_bytes": 17840672, "num_examples": 2281}], "download_size": 15368159, "dataset_size": 17840672}}
|
2023-05-18T16:52:50+00:00
|
ae93b55ac5a58cd089e66f80bcf289ce4cb978b6
|
# Dataset Card for "FypDatasetWithSplitsRgb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
IdoAi/FypDatasetWithSplitsRgb
|
[
"region:us"
] |
2023-05-18T18:02:19+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1997747885.8, "num_examples": 10700}, {"name": "validation", "num_bytes": 204033799.13, "num_examples": 1094}, {"name": "test", "num_bytes": 68700437.0, "num_examples": 365}], "download_size": 2263896820, "dataset_size": 2270482121.93}}
|
2023-05-18T18:11:21+00:00
|
25ecb9cfe60ddebe6ee722f1b646036cfb5a93c8
|
# dolly_hhrlhf-text2text
This is `mosaicml/dolly_hhrlhf` with the following changes:
- clean up/adapt `prompt` column for the `text2text-generation` task (no need for a special template)
- split the original `train` set into a 95% train and an explicit validation set (5%)
- fixed extra spaces before punctuation (as this is not a French dataset)
details on extra spaces:
```
Original sentence 1: How can I be healthy ?
Fixed sentence 1: How can I be healthy?
```
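A minimal sketch of such a fix — the actual preprocessing used for this dataset is not published, so the regex below is an assumption that merely reproduces the before/after example shown above:

```python
import re

def fix_punctuation_spaces(text: str) -> str:
    # Remove whitespace immediately before ? ! : ; . , ("French-style" spacing).
    return re.sub(r"\s+([?!:;.,])", r"\1", text)

print(fix_punctuation_spaces("How can I be healthy ?"))
# -> How can I be healthy?
```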
|
pszemraj/dolly_hhrlhf-text2text
|
[
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"source_datasets:mosaicml/dolly_hhrlhf",
"language:en",
"license:cc-by-sa-3.0",
"instruct",
"region:us"
] |
2023-05-18T18:44:11+00:00
|
{"language": ["en"], "license": "cc-by-sa-3.0", "size_categories": ["10K<n<100K"], "source_datasets": "mosaicml/dolly_hhrlhf", "task_categories": ["text2text-generation"], "tags": ["instruct"]}
|
2023-05-18T19:07:42+00:00
|
70172169daf938fcff071b0bccc1f25cee056cff
|
Alpaca Cleaned dataset, machine-translated with facebook/nllb-200-3.3B.
Languages: Turkish
|
cgulse/alpaca-cleaned-tr
|
[
"size_categories:10K<n<100K",
"language:tr",
"license:cc-by-4.0",
"alpaca",
"instruction-finetuning",
"region:us"
] |
2023-05-18T18:50:32+00:00
|
{"language": ["tr"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "pretty_name": "Turkish Alpaca-cleaned", "tags": ["alpaca", "instruction-finetuning"]}
|
2023-05-18T18:59:11+00:00
|
6f7b56c8ec3f3fa06020075e386fe7b429a46f03
|
# Dataset Card for "chunk_12"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_12
|
[
"region:us"
] |
2023-05-18T19:07:56+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 996086856, "num_examples": 195618}], "download_size": 1015430710, "dataset_size": 996086856}}
|
2023-05-18T19:08:35+00:00
|
8506fd505c60184f1b883252477be3265441c43d
|
# Dataset Card for "chunk_14"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_14
|
[
"region:us"
] |
2023-05-18T19:08:58+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1068245588, "num_examples": 209789}], "download_size": 1085801371, "dataset_size": 1068245588}}
|
2023-05-18T19:09:40+00:00
|
9ba4208e7c7004115245c02cc0a522238bb461f1
|
# Dataset Card for "chunk_11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_11
|
[
"region:us"
] |
2023-05-18T19:22:16+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1003694304, "num_examples": 197112}], "download_size": 1020329415, "dataset_size": 1003694304}}
|
2023-05-18T19:24:07+00:00
|
a778ce8654667396ec6e67d239d4d9160e7966e9
|
# Dataset Card for "chunk_13"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_13
|
[
"region:us"
] |
2023-05-18T19:22:50+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1187067408, "num_examples": 233124}], "download_size": 1201620165, "dataset_size": 1187067408}}
|
2023-05-18T19:23:43+00:00
|
4a970a281f0fe9f7d6c15919875d872abfef80dd
|
FidelOdok/DOA_dataset
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-05-18T19:26:16+00:00
|
{"license": "creativeml-openrail-m", "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "10", "3": "100", "4": "101", "5": "102", "6": "103", "7": "104", "8": "105", "9": "106", "10": "107", "11": "108", "12": "109", "13": "11", "14": "110", "15": "111", "16": "112", "17": "113", "18": "114", "19": "115", "20": "116", "21": "117", "22": "118", "23": "119", "24": "12", "25": "120", "26": "121", "27": "122", "28": "123", "29": "124", "30": "125", "31": "126", "32": "127", "33": "128", "34": "129", "35": "13", "36": "130", "37": "131", "38": "132", "39": "133", "40": "134", "41": "135", "42": "136", "43": "137", "44": "138", "45": "139", "46": "14", "47": "140", "48": "141", "49": "142", "50": "143", "51": "144", "52": "145", "53": "146", "54": "147", "55": "148", "56": "149", "57": "15", "58": "150", "59": "151", "60": "152", "61": "153", "62": "154", "63": "155", "64": "156", "65": "157", "66": "158", "67": "159", "68": "16", "69": "160", "70": "161", "71": "162", "72": "163", "73": "164", "74": "165", "75": "166", "76": "167", "77": "168", "78": "169", "79": "17", "80": "170", "81": "171", "82": "172", "83": "173", "84": "174", "85": "175", "86": "176", "87": "177", "88": "178", "89": "179", "90": "18", "91": "180", "92": "181", "93": "182", "94": "183", "95": "184", "96": "185", "97": "186", "98": "187", "99": "188", "100": "189", "101": "19", "102": "190", "103": "191", "104": "192", "105": "193", "106": "194", "107": "195", "108": "197", "109": "198", "110": "199", "111": "2", "112": "20", "113": "200", "114": "201", "115": "202", "116": "203", "117": "204", "118": "205", "119": "206", "120": "207", "121": "208", "122": "209", "123": "21", "124": "210", "125": "211", "126": "212", "127": "213", "128": "214", "129": "215", "130": "216", "131": "217", "132": "218", "133": "219", "134": "22", "135": "220", "136": "221", "137": "222", "138": "223", "139": "224", "140": 
"225", "141": "226", "142": "227", "143": "228", "144": "229", "145": "23", "146": "230", "147": "231", "148": "232", "149": "233", "150": "234", "151": "235", "152": "236", "153": "237", "154": "238", "155": "239", "156": "24", "157": "240", "158": "241", "159": "242", "160": "243", "161": "244", "162": "245", "163": "246", "164": "247", "165": "248", "166": "249", "167": "25", "168": "250", "169": "251", "170": "252", "171": "253", "172": "254", "173": "255", "174": "256", "175": "257", "176": "258", "177": "259", "178": "26", "179": "260", "180": "261", "181": "262", "182": "263", "183": "264", "184": "265", "185": "266", "186": "267", "187": "268", "188": "269", "189": "27", "190": "270", "191": "271", "192": "272", "193": "273", "194": "274", "195": "275", "196": "276", "197": "277", "198": "278", "199": "279", "200": "28", "201": "280", "202": "281", "203": "282", "204": "283", "205": "284", "206": "285", "207": "286", "208": "287", "209": "288", "210": "289", "211": "29", "212": "290", "213": "291", "214": "292", "215": "293", "216": "294", "217": "295", "218": "296", "219": "297", "220": "298", "221": "299", "222": "3", "223": "30", "224": "300", "225": "301", "226": "302", "227": "303", "228": "304", "229": "305", "230": "306", "231": "307", "232": "308", "233": "309", "234": "31", "235": "310", "236": "311", "237": "312", "238": "313", "239": "314", "240": "315", "241": "316", "242": "317", "243": "318", "244": "319", "245": "32", "246": "320", "247": "321", "248": "322", "249": "323", "250": "324", "251": "325", "252": "326", "253": "327", "254": "328", "255": "329", "256": "33", "257": "330", "258": "331", "259": "332", "260": "333", "261": "334", "262": "335", "263": "336", "264": "337", "265": "338", "266": "339", "267": "34", "268": "340", "269": "341", "270": "342", "271": "343", "272": "344", "273": "345", "274": "346", "275": "347", "276": "348", "277": "349", "278": "35", "279": "350", "280": "351", "281": "352", "282": "353", "283": "354", 
"284": "355", "285": "356", "286": "357", "287": "358", "288": "359", "289": "36", "290": "360", "291": "361", "292": "362", "293": "363", "294": "364", "295": "365", "296": "366", "297": "367", "298": "368", "299": "369", "300": "37", "301": "370", "302": "371", "303": "372", "304": "373", "305": "374", "306": "375", "307": "376", "308": "377", "309": "378", "310": "379", "311": "38", "312": "380", "313": "381", "314": "382", "315": "383", "316": "384", "317": "385", "318": "386", "319": "387", "320": "388", "321": "389", "322": "39", "323": "390", "324": "391", "325": "392", "326": "393", "327": "394", "328": "395", "329": "396", "330": "397", "331": "398", "332": "399", "333": "4", "334": "40", "335": "400", "336": "401", "337": "402", "338": "403", "339": "404", "340": "405", "341": "406", "342": "407", "343": "408", "344": "409", "345": "41", "346": "410", "347": "411", "348": "412", "349": "413", "350": "414", "351": "415", "352": "416", "353": "417", "354": "418", "355": "419", "356": "42", "357": "420", "358": "421", "359": "422", "360": "423", "361": "424", "362": "425", "363": "426", "364": "427", "365": "428", "366": "43", "367": "44", "368": "45", "369": "46", "370": "47", "371": "48", "372": "49", "373": "5", "374": "50", "375": "51", "376": "52", "377": "53", "378": "54", "379": "55", "380": "56", "381": "57", "382": "58", "383": "59", "384": "6", "385": "60", "386": "61", "387": "62", "388": "63", "389": "64", "390": "65", "391": "66", "392": "67", "393": "68", "394": "69", "395": "7", "396": "70", "397": "71", "398": "72", "399": "73", "400": "74", "401": "75", "402": "76", "403": "77", "404": "78", "405": "79", "406": "8", "407": "80", "408": "81", "409": "82", "410": "83", "411": "84", "412": "85", "413": "86", "414": "87", "415": "88", "416": "89", "417": "9", "418": "90", "419": "91", "420": "92", "421": "93", "422": "94", "423": "95", "424": "96", "425": "97", "426": "98", "427": "99"}}}}], "splits": [{"name": "train", "num_bytes": 
1938452772.0610814, "num_examples": 5030}], "download_size": 1936833914, "dataset_size": 1938452772.0610814}}
|
2023-05-20T19:44:42+00:00
|
|
1ca6d4b00ef2da94916a216bae364edfdab22c18
|
# Dataset Card for "chunk_15"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_15
|
[
"region:us"
] |
2023-05-18T19:27:03+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1016429396, "num_examples": 199613}], "download_size": 1035201391, "dataset_size": 1016429396}}
|
2023-05-18T19:28:54+00:00
|
f9f89f5c1689b0d9559cd79bf104245b06dc76ad
|
plgfro/Kaggles-Galaxy-Zoo-Dataset
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-18T19:37:33+00:00
|
{"license": "apache-2.0"}
|
2023-05-18T19:37:33+00:00
|
|
3da7171bd3349b4d5663e2ea9386181e3b0bb927
|
# Dataset Card for "chunk_19"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_19
|
[
"region:us"
] |
2023-05-18T19:39:46+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1433815544, "num_examples": 281582}], "download_size": 1460335757, "dataset_size": 1433815544}}
|
2023-05-18T19:40:33+00:00
|
44ab4543ff4f1dab9e3bbaa2250b17a007cfe131
|
# IAM Sentences
This dataset contains all sentences from the IAM Handwriting database as combined images instead of separate lines.
|
alpayariyak/IAM_Sentences
|
[
"region:us"
] |
2023-05-18T19:42:06+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1053121464.077, "num_examples": 5663}], "download_size": 1128818107, "dataset_size": 1053121464.077}}
|
2023-05-19T01:03:14+00:00
|
8d1fc39ec91b80cef0c73216b92a354ffbf17992
|
# Dataset Card for "chunk_16"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_16
|
[
"region:us"
] |
2023-05-18T19:45:55+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1234601228, "num_examples": 242459}], "download_size": 1255696002, "dataset_size": 1234601228}}
|
2023-05-18T19:48:09+00:00
|
5d4e98ba9564432df1d6e1620dfd0d236ce642bf
| ERROR: type should be string, got "\nhttps://huggingface.co/datasets/RUCAIBox/Story-Generation\n\nRUC AI Box HC Story Generation, augmented and converted to Alpaca format.\nNo filtering has been done."
PocketDoc/RUCAIBox-Story-Generation-Alpaca
|
[
"task_categories:text-generation",
"language:en",
"region:us"
] |
2023-05-18T19:46:19+00:00
|
{"language": ["en"], "task_categories": ["text-generation"]}
|
2023-05-18T20:58:55+00:00
|
27a00b2274d7c76f9915ce1263c9640fdf1dc58c
|
# Dataset Card for "food27"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVdatasets/food27
|
[
"region:us"
] |
2023-05-18T19:52:49+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "apple_pie", "1": "beef_tartare", "2": "beignets", "3": "carrot_cake", "4": "cheesecake", "5": "cheese_plate", "6": "chicken_wings", "7": "chocolate_cake", "8": "chocolate_mousse", "9": "dumplings", "10": "edamame", "11": "filet_mignon", "12": "french_fries", "13": "fried_calamari", "14": "guacamole", "15": "ice_cream", "16": "macarons", "17": "miso_soup", "18": "nachos", "19": "onion_rings", "20": "pizza", "21": "poutine", "22": "red_velvet_cake", "23": "steak", "24": "strawberry_shortcake", "25": "tiramisu", "26": "waffles"}}}}], "splits": [{"name": "train", "num_bytes": 1010337492.0, "num_examples": 20250}, {"name": "validation", "num_bytes": 334516930.25, "num_examples": 6750}], "download_size": 1327834336, "dataset_size": 1344854422.25}}
|
2023-05-18T19:53:43+00:00
|
96b1f270a3d66ffe2828197fda56ac6b3a12f6b4
|
# Dataset Card for "chunk_17"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_17
|
[
"region:us"
] |
2023-05-18T19:57:14+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1409338300, "num_examples": 276775}], "download_size": 1436131593, "dataset_size": 1409338300}}
|
2023-05-18T19:59:48+00:00
|