sha (string, 40 chars) | text (string, 0-13.4M chars) | id (string, 2-117 chars) | tags (list) | created_at (string, 25 chars) | metadata (string, 2-31.7M chars) | last_modified (string, 25 chars) |
---|---|---|---|---|---|---|
b5fdfb2d398ebf80491832760a4c1b88ed551b42
|
# Dataset Card for "Genomic_Benchmarks_drosophila_enhancers_stark"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
katarinagresova/Genomic_Benchmarks_drosophila_enhancers_stark
|
[
"region:us"
] |
2023-03-13T19:34:02+00:00
|
{"dataset_info": {"features": [{"name": "seq", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 11032914, "num_examples": 5184}, {"name": "test", "num_bytes": 3694762, "num_examples": 1730}], "download_size": 1743725, "dataset_size": 14727676}}
|
2023-03-13T19:34:15+00:00
|
49ef8d0d80087ece7441096afcef0123b0781ce0
|
# Dataset Card for "Genomic_Benchmarks_human_enhancers_cohn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
katarinagresova/Genomic_Benchmarks_human_enhancers_cohn
|
[
"region:us"
] |
2023-03-13T19:34:24+00:00
|
{"dataset_info": {"features": [{"name": "seq", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 10671616, "num_examples": 20843}, {"name": "test", "num_bytes": 3557376, "num_examples": 6948}], "download_size": 1662449, "dataset_size": 14228992}}
|
2023-03-13T19:34:39+00:00
|
a4f7f76b18a38f43ff3a7d5e8299408c3c121df2
|
# Dataset Card for "Genomic_Benchmarks_demo_human_or_worm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
katarinagresova/Genomic_Benchmarks_demo_human_or_worm
|
[
"region:us"
] |
2023-03-13T19:34:40+00:00
|
{"dataset_info": {"features": [{"name": "seq", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 15900000, "num_examples": 75000}, {"name": "test", "num_bytes": 5300000, "num_examples": 25000}], "download_size": 2380379, "dataset_size": 21200000}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
|
2023-10-04T12:09:13+00:00
|
3c07e4be6425be11073baa2daacf6c143c64e4ff
|
# Dataset Card for "Genomic_Benchmarks_human_ocr_ensembl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
katarinagresova/Genomic_Benchmarks_human_ocr_ensembl
|
[
"region:us"
] |
2023-03-13T19:35:11+00:00
|
{"dataset_info": {"features": [{"name": "seq", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 47282994, "num_examples": 139804}, {"name": "test", "num_bytes": 11844868, "num_examples": 34952}], "download_size": 5583796, "dataset_size": 59127862}}
|
2023-03-13T19:35:27+00:00
|
8ee2c40da8906b6988b959072e3d701f681f0ebd
|
# Dataset Card for "Genomic_Benchmarks_human_ensembl_regulatory"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
katarinagresova/Genomic_Benchmarks_human_ensembl_regulatory
|
[
"region:us"
] |
2023-03-13T19:35:29+00:00
|
{"dataset_info": {"features": [{"name": "seq", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 102226826, "num_examples": 231348}, {"name": "test", "num_bytes": 25514299, "num_examples": 57713}], "download_size": 12019655, "dataset_size": 127741125}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
|
2023-10-04T12:12:39+00:00
|
1e4a3993d89263e6496b64824cad9716c5f6643b
|
# Dataset Card for "Genomic_Benchmarks_human_enhancers_ensembl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
katarinagresova/Genomic_Benchmarks_human_enhancers_ensembl
|
[
"region:us"
] |
2023-03-13T19:35:47+00:00
|
{"dataset_info": {"features": [{"name": "seq", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 34821392, "num_examples": 123872}, {"name": "test", "num_bytes": 8668172, "num_examples": 30970}], "download_size": 4077057, "dataset_size": 43489564}}
|
2023-03-13T19:36:04+00:00
|
ca7954eb9540d23babbcec93c94986e6a3b3305a
|
Check out https://github.com/tatsu-lab/stanford_alpaca
I hate the look of multi-line strings in the middle of functions, and I just wanted to skip that step when using the data for training.
|
crumb/stanford_alpaca_full_prompts
|
[
"region:us"
] |
2023-03-13T19:47:03+00:00
|
{}
|
2023-03-13T20:22:55+00:00
|
ab8e0997dca87b3a8b6df9ccb83bc4ecaa8e3269
|
freddyaboulton/gradio-theme-subdomains
|
[
"license:mit",
"region:us"
] |
2023-03-13T19:56:49+00:00
|
{"license": "mit"}
|
2023-09-26T06:54:17+00:00
|
|
ace58c1c544ce87ea7a03e7b696667c1cc00ac84
|
# Hugging Face Ethics & Society Papers
This is an incomplete list of ethics-related papers published by researchers at Hugging Face.
- Gradio: https://arxiv.org/abs/1906.02569
- DistilBERT: https://arxiv.org/abs/1910.01108
- RAFT: https://arxiv.org/abs/2109.14076
- Interactive Model Cards: https://arxiv.org/abs/2205.02894
- Data Governance in the Age of Large-Scale Data-Driven Language Technology: https://arxiv.org/abs/2206.03216
- Quality at a Glance: https://arxiv.org/abs/2103.12028
- A Framework for Deprecating Datasets: https://arxiv.org/abs/2111.04424
- Bugs in the Data: https://arxiv.org/abs/2208.11695
- Measuring Data: https://arxiv.org/abs/2212.05129
- Perturbation Augmentation for Fairer NLP: https://arxiv.org/abs/2205.12586
- SEAL: https://arxiv.org/abs/2210.05839
- Multitask Prompted Training Enables Zero-Shot Task Generalization: https://arxiv.org/abs/2110.08207
- BLOOM: https://arxiv.org/abs/2211.05100
- ROOTS: https://arxiv.org/abs/2303.03915
- Evaluate & Evaluation on the Hub: https://arxiv.org/abs/2210.01970
- Spacerini: https://arxiv.org/abs/2302.14534
- ROOTS Search Tool: https://arxiv.org/abs/2302.14035
- Fair Diffusion: https://arxiv.org/abs/2302.10893
- Counting Carbon: https://arxiv.org/abs/2302.08476
- The Gradient of Generative AI Release: https://arxiv.org/abs/2302.04844
- BigScience: A Case Study in the Social Construction of a Multilingual Large Language Model: https://arxiv.org/abs/2212.04960
- Towards Openness Beyond Open Access: User Journeys through 3 Open AI Collaboratives: https://arxiv.org/abs/2301.08488
- Stable Bias: Analyzing Societal Representations in Diffusion Models: https://arxiv.org/abs/2303.11408
- Stronger Together: on the Articulation of Ethical Charters, Legal Tools, and Technical Documentation in ML: https://arxiv.org/abs/2305.18615
|
society-ethics/papers
|
[
"ethics",
"arxiv:1906.02569",
"arxiv:1910.01108",
"arxiv:2109.14076",
"arxiv:2205.02894",
"arxiv:2206.03216",
"arxiv:2103.12028",
"arxiv:2111.04424",
"arxiv:2208.11695",
"arxiv:2212.05129",
"arxiv:2205.12586",
"arxiv:2210.05839",
"arxiv:2110.08207",
"arxiv:2211.05100",
"arxiv:2303.03915",
"arxiv:2210.01970",
"arxiv:2302.14534",
"arxiv:2302.14035",
"arxiv:2302.10893",
"arxiv:2302.08476",
"arxiv:2302.04844",
"arxiv:2212.04960",
"arxiv:2301.08488",
"arxiv:2303.11408",
"arxiv:2305.18615",
"region:us"
] |
2023-03-13T20:07:35+00:00
|
{"tags": ["ethics"]}
|
2023-05-31T12:53:19+00:00
|
5389570c2e19bedf31e76c0af9534007b57c0ed0
|
# Dataset Card for "tv_dialogue"
This dataset contains transcripts for famous movies and TV shows from multiple sources.
An example dialogue would be:
```
[PERSON 1] Hello
[PERSON 2] Hello Person 2!
How's it going?
(they are both talking)
[PERSON 1] I like being an example
on Huggingface!
They are examples on Huggingface.
CUT OUT TO ANOTHER SCENE
We are somewhere else
[PERSON 1 (v.o)] I wonder where we are?
```
All dialogues were processed to follow this format. Each row is a single episode / movie (**2781** rows total)
following the [OpenAssistant](https://open-assistant.io/) format. The METADATA column contains additional information as a JSON string.
## Dialogue only, with some information on the scene
| Show | Number of scripts | Via | Source |
|----|----|---|---|
| Friends | 236 episodes | https://github.com/emorynlp/character-mining | friends/emorynlp |
| The Office | 186 episodes | https://www.kaggle.com/datasets/nasirkhalid24/the-office-us-complete-dialoguetranscript | office/nasirkhalid24 |
| Marvel Cinematic Universe | 18 movies | https://www.kaggle.com/datasets/pdunton/marvel-cinematic-universe-dialogue | marvel/pdunton |
| Doctor Who | 306 episodes | https://www.kaggle.com/datasets/jeanmidev/doctor-who | drwho/jeanmidev |
| Star Trek | 708 episodes | http://www.chakoteya.net/StarTrek/index.html based on https://github.com/GJBroughton/Star_Trek_Scripts/ | statrek/chakoteya |
## Actual transcripts with detailed information on the scenes
| Show | Number of scripts | Via | Source |
|----|----|---|---|
| Top Movies | 919 movies | https://imsdb.com/ | imsdb |
| Top Movies | 171 movies | https://www.dailyscript.com/ | dailyscript |
| Stargate SG-1 | 18 episodes | https://imsdb.com/ | imsdb |
| South Park | 129 episodes | https://imsdb.com/ | imsdb |
| Knight Rider | 80 episodes | http://www.knightriderarchives.com/ | knightriderarchives |
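Below is a minimal loading sketch (not part of the original card). It assumes the hub id `sedthh/tv_dialogue` from this row, the `TEXT` / `METADATA` / `SOURCE` columns declared in the dataset metadata, and that `METADATA` decodes from a JSON string.
```python
# A sketch under assumptions, not the card's own code.
import json

from datasets import load_dataset

dialogue = load_dataset("sedthh/tv_dialogue", split="train")

row = dialogue[0]
meta = json.loads(row["METADATA"])   # per-episode/movie extra info, stored as a JSON string
print(row["SOURCE"])                 # source identifier, e.g. "friends/emorynlp" in the table above
print(row["TEXT"][:300])             # transcript in the [PERSON 1] ... format shown above
if isinstance(meta, dict):           # the JSON payload's shape depends on the source
    print(sorted(meta))
```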
|
sedthh/tv_dialogue
|
[
"task_categories:conversational",
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"OpenAssistant",
"transcripts",
"subtitles",
"television",
"region:us"
] |
2023-03-13T20:33:06+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["conversational", "text2text-generation", "text-generation"], "pretty_name": "TV and Movie dialogue and transcript corpus", "dataset_info": {"features": [{"name": "TEXT", "dtype": "string"}, {"name": "METADATA", "dtype": "string"}, {"name": "SOURCE", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 211728118, "num_examples": 2781}], "download_size": 125187885, "dataset_size": 211728118}, "tags": ["OpenAssistant", "transcripts", "subtitles", "television"]}
|
2023-03-16T13:44:59+00:00
|
1fb4cfe2bc57fd31469891ca2ae1d91e2428463d
|
mrshalsam/tg
|
[
"license:openrail",
"region:us"
] |
2023-03-13T20:41:25+00:00
|
{"license": "openrail"}
|
2023-03-13T20:41:25+00:00
|
|
075161a0c990597da2f884ed96361a2321d67335
|
# Dataset Card for "reddit-v1-all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Thewillonline/reddit-v1-all
|
[
"region:us"
] |
2023-03-13T20:49:35+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 17957706291, "num_examples": 356079320}], "download_size": 10534815421, "dataset_size": 17957706291}}
|
2023-03-13T20:59:20+00:00
|
f935bc73d6a32227d64c49646f20150f28a382e2
|
# Dataset Card for "ocr_bert-training-2err"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lowem1/ocr_bert-training-2err
|
[
"region:us"
] |
2023-03-13T21:10:39+00:00
|
{"dataset_info": {"features": [{"name": "truth", "dtype": "string"}, {"name": "aug", "dtype": "string"}, {"name": "aug_type", "dtype": "string"}, {"name": "doc_tag", "dtype": "string"}, {"name": "distance", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 212731, "num_examples": 1795}], "download_size": 31395, "dataset_size": 212731}}
|
2023-03-13T21:10:41+00:00
|
333059654d8be680840dc6fb38c3bdff810285de
|
# Dataset Card for "news-summary-new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
argilla/news-summary-new
|
[
"language:en",
"region:us"
] |
2023-03-13T22:51:21+00:00
|
{"language": "en", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 252347, "num_examples": 114}], "download_size": 87832, "dataset_size": 252347}}
|
2023-07-13T10:15:37+00:00
|
6fc01ac6d428611c2684752e342235b8c96310fa
|
=
|
plaba/stack-overflow-q-and-a
|
[
"task_categories:text-generation",
"task_categories:question-answering",
"license:other",
"region:us"
] |
2023-03-13T23:24:52+00:00
|
{"license": "other", "task_categories": ["text-generation", "question-answering"]}
|
2023-03-14T00:31:18+00:00
|
3174749adc8cfa601920fba986ef2bca5f59e512
|
# Dataset Card for "miniwob_plusplus_T5_unbounded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LucasThil/miniwob_plusplus_T5_unbounded
|
[
"region:us"
] |
2023-03-13T23:25:59+00:00
|
{"dataset_info": {"features": [{"name": "history_episodes", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "html_snippets", "dtype": "string"}, {"name": "actions", "dtype": "string"}, {"name": "refs", "dtype": "int64"}, {"name": "keydown_texts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 360530668, "num_examples": 75039}], "download_size": 39135939, "dataset_size": 360530668}}
|
2023-03-15T10:08:56+00:00
|
9d14df1c5e5545146349a655383447dc00f5933f
|
hagairaja/testing
|
[
"license:mit",
"region:us"
] |
2023-03-14T00:15:37+00:00
|
{"license": "mit"}
|
2023-03-14T00:15:37+00:00
|
|
55d8e4b5cfaad4f5c00193b09b6aba54fd63b696
|
# Dataset Card for "predicted-squad2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pacovaldez/predicted-squad2
|
[
"region:us"
] |
2023-03-14T01:28:47+00:00
|
{"dataset_info": {"features": [{"name": "question_id", "dtype": "int64"}, {"name": "question_title", "dtype": "string"}, {"name": "question_body", "dtype": "string"}, {"name": "accepted_answer_id", "dtype": "int64"}, {"name": "question_creation_date", "dtype": "timestamp[us]"}, {"name": "question_answer_count", "dtype": "int64"}, {"name": "question_favorite_count", "dtype": "float64"}, {"name": "question_score", "dtype": "int64"}, {"name": "question_view_count", "dtype": "int64"}, {"name": "tags", "dtype": "string"}, {"name": "answer_body", "dtype": "string"}, {"name": "answer_creation_date", "dtype": "timestamp[us]"}, {"name": "answer_score", "dtype": "int64"}, {"name": "link", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answer_start", "dtype": "int64"}, {"name": "answer_end", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "predicted_answer", "dtype": "string"}, {"name": "parsed_answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4103753, "num_examples": 100}], "download_size": 1950624, "dataset_size": 4103753}}
|
2023-03-14T01:28:49+00:00
|
3e8328e932f300e9c4b0e5bd566db631d36087de
|
# Lora - 븝미
## Dataset Description
- **Original**: [19) 븜미 LoRA](https://arca.live/b/aiart/71610355)
Use at a strength of 0.6 to 0.7.
Prompt: bmpmi, red hair, red eyes, long hair (add Twintails if needed)
[Download](https://huggingface.co/datasets/AIARTCHAN/lora-bmpmi/resolve/main/Bmpmi.safetensors)
|
AIARTCHAN/lora-bmpmi
|
[
"license:creativeml-openrail-m",
"lora",
"aiartchan",
"stable-diffusion",
"region:us"
] |
2023-03-14T01:30:15+00:00
|
{"license": "creativeml-openrail-m", "tags": ["lora", "aiartchan", "stable-diffusion"]}
|
2023-03-14T02:02:48+00:00
|
3665f5201db1dfeb23eec56f5b0a30d4c2d7973c
|
# Dataset Card for "annotated_github_dataset_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
michaelnath/annotated_github_dataset_2
|
[
"region:us"
] |
2023-03-14T01:37:00+00:00
|
{"dataset_info": {"features": [{"name": "function", "dtype": "string"}, {"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "features", "sequence": "float32"}, {"name": "purpose", "dtype": "string"}, {"name": "detailed_description", "dtype": "string"}, {"name": "code_trans", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8222665, "num_examples": 10003}], "download_size": 2821232, "dataset_size": 8222665}}
|
2023-03-14T01:37:04+00:00
|
eb577389a506c41ae17e442c0841b536852a9a02
|
# Dataset Card for "predicted-stackoverflow"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pacovaldez/predicted-stackoverflow
|
[
"region:us"
] |
2023-03-14T02:07:57+00:00
|
{"dataset_info": {"features": [{"name": "question_id", "dtype": "int64"}, {"name": "question_title", "dtype": "string"}, {"name": "question_body", "dtype": "string"}, {"name": "accepted_answer_id", "dtype": "int64"}, {"name": "question_creation_date", "dtype": "timestamp[us]"}, {"name": "question_answer_count", "dtype": "int64"}, {"name": "question_favorite_count", "dtype": "float64"}, {"name": "question_score", "dtype": "int64"}, {"name": "question_view_count", "dtype": "int64"}, {"name": "tags", "dtype": "string"}, {"name": "answer_body", "dtype": "string"}, {"name": "answer_creation_date", "dtype": "timestamp[us]"}, {"name": "answer_score", "dtype": "int64"}, {"name": "link", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answer_start", "dtype": "int64"}, {"name": "answer_end", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "predicted_answer", "dtype": "string"}, {"name": "parsed_answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4777686, "num_examples": 100}], "download_size": 2244820, "dataset_size": 4777686}}
|
2023-03-14T02:07:59+00:00
|
3d024d29b7a0a09ee4c86ae8f6c11a53c0198c3a
|
# Dataset Card for "avril15s02-datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ethers/avril15s02-waifu-datasets
|
[
"license:openrail",
"region:us"
] |
2023-03-14T03:56:10+00:00
|
{"license": "openrail", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 121792143, "num_examples": 323}], "download_size": 121749499, "dataset_size": 121792143}}
|
2023-03-14T03:58:56+00:00
|
da0b959a51d42ebc906d1935b2885362bdc63ba4
|
# Dataset Card for "functions_annotated_with_intents"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
michaelnath/functions_annotated_with_intents
|
[
"region:us"
] |
2023-03-14T06:06:35+00:00
|
{"dataset_info": {"features": [{"name": "function", "dtype": "string"}, {"name": "intent_category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1123421, "num_examples": 2768}], "download_size": 419825, "dataset_size": 1123421}}
|
2023-03-14T06:44:50+00:00
|
e2f6df5444ada6735acdcb91135be3657b9fda43
|
tastoHO/rim
|
[
"license:openrail",
"region:us"
] |
2023-03-14T06:56:48+00:00
|
{"license": "openrail"}
|
2023-03-14T06:56:48+00:00
|
|
8a5b4f2c5a7f0a8a127480c23fb05e02507bfdcf
|
tastoHO/teayeon
|
[
"license:openrail",
"region:us"
] |
2023-03-14T06:57:19+00:00
|
{"license": "openrail"}
|
2023-03-14T06:57:19+00:00
|
|
5e9d8890b974d9fa34cf94ebd4232ff026add1cc
|
livinNector/naamapadam
|
[
"license:cc0-1.0",
"region:us"
] |
2023-03-14T07:30:43+00:00
|
{"license": "cc0-1.0"}
|
2023-03-14T07:30:43+00:00
|
|
89173bc6ee039ac347ced51fb19ea1260d4c4b38
|
shivangibithel/Flickr8k
|
[
"task_categories:image-to-text",
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-03-14T07:48:39+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["image-to-text", "text-to-image"]}
|
2023-03-14T08:03:17+00:00
|
|
7496b3f7d5570c5b9b29038ecce019c061be06a4
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
WilliamWen/train_set_001
|
[
"task_categories:token-classification",
"license:apache-2.0",
"region:us"
] |
2023-03-14T08:21:49+00:00
|
{"license": "apache-2.0", "task_categories": ["token-classification"]}
|
2023-03-14T10:00:08+00:00
|
9e528293a1bf5d20e4e46d43ecb499b208f358bf
|
# Dataset Card for naamapadam
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/AI4Bharat/indicner
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** Anoop Kunchukuttan
### Dataset Summary
Naamapadam is the largest publicly available Named Entity Annotated dataset for 11 Indic languages. This corpus was created by projecting named entities from the English side to the Indic-language side of an English-Indic parallel corpus. The dataset additionally contains a manually labelled test set for 8 Indic languages containing 500-1000 sentences.
### Supported Tasks and Leaderboards
**Tasks:** NER on Indian languages.
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
```
{'words': ['उन्हेनें', 'शिकांगों', 'में', 'बोरोडिन', 'की', 'पत्नी', 'को', 'तथा', 'वाशिंगटन', 'में', 'रूसी', 'व्यापार', 'संघ', 'को', 'पैसे', 'भेजे', '।'],
 'ner': [0, 3, 0, 1, 0, 0, 0, 0, 3, 0, 5, 6, 6, 0, 0, 0, 0]}
```
### Data Fields
- `words`: Raw tokens in the dataset.
- `ner`: the NER tags for this dataset.
### Data Splits
(to be updated, see paper for correct numbers)
| Language | Train | Validation | Test |
|---:|---:|---:|---:|
| as | 10266 | 52 | 51 |
| bn | 961679 | 4859 | 607 |
| gu | 472845 | 2389 | 50 |
| hi | 985787 | 13460 | 437 |
| kn | 471763 | 2381 | 1019 |
| ml | 716652 | 3618 | 974 |
| mr | 455248 | 2300 | 1080 |
| or | 196793 | 993 | 994 |
| pa | 463534 | 2340 | 2342 |
| ta | 497882 | 2795 | 49 |
| te | 507741 | 2700 | 53 |
## Usage
You should have the `datasets` package installed to be able to use the :rocket: HuggingFace datasets repository. Please use the following command to install it via pip:
```code
pip install datasets
```
To use the dataset, please use:<br/>
```python
from datasets import load_dataset
hiner = load_dataset('ai4bharat/naamapadam')
```
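The `ner` column shown in the Data Instances example is a list of integer class ids. Below is a minimal sketch (not from the original card) of mapping those ids back to tag names by reading them from the dataset schema; it assumes the `ner` feature is stored as a `Sequence` of `ClassLabel`s and that per-language configs such as `'hi'` exist.
```python
from datasets import load_dataset

# Assumption: the dataset exposes per-language configs; 'hi' is used here as an example.
naamapadam_hi = load_dataset("ai4bharat/naamapadam", "hi")

# Assumption: `ner` is Sequence(ClassLabel(...)), so the tag names live in the schema.
ner_labels = naamapadam_hi["train"].features["ner"].feature

example = naamapadam_hi["train"][0]
tags = [ner_labels.int2str(i) for i in example["ner"]]
print(list(zip(example["words"], tags)))
```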
## Dataset Creation
We use the parallel corpus from the Samanantar Dataset between English and the 11 major Indian languages to create the NER dataset. We annotate the English portion of the parallel corpus with existing state-of-the-art NER model. We use word-level alignments learned from the parallel corpus to project the entity labels from English to the Indian language.
### Curation Rationale
Naamapadam was built from the [Samanantar dataset](https://indicnlp.ai4bharat.org/samanantar/). This dataset was built for the task of Named Entity Recognition in Indic languages, and it introduces new resources for Indic languages, which are under-served in Natural Language Processing.
### Source Data
[Samanantar dataset](https://indicnlp.ai4bharat.org/samanantar/)
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
NER annotations were done following the CoNLL-2003 guidelines.
#### Who are the annotators?
The annotations for the testset have been done by volunteers who are proficient in the respective languages. We would like to thank all the volunteers:
- Anil Mhaske
- Anoop Kunchukuttan
- Archana Mhaske
- Arnav Mhaske
- Gowtham Ramesh
- Harshit Kedia
- Nitin Kedia
- Rudramurthy V
- Sangeeta Rajagopal
- Sumanth Doddapaneni
- Vindhya DS
- Yash Madhani
- Kabir Ahuja
- Shallu Rani
- Armin Virk
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to provide a large-scale Named Entity Recognition dataset for Indic languages. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data.
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
<!-- <a rel="license" float="left" href="http://creativecommons.org/publicdomain/zero/1.0/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100" />
<img src="https://mirrors.creativecommons.org/presskit/buttons/88x31/png/by.png" style="border-style: none;" alt="CC-BY" width="100" href="http://creativecommons.org/publicdomain/zero/1.0/"/>
</a>
<br/> -->
**CC0 License Statement**
<a rel="license" float="left" href="https://creativecommons.org/about/cclicenses/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100"/>
</a>
<br>
<br>
- We do not own any of the text from which this data has been extracted.
- We license the actual packaging of the mined data under the [Creative Commons CC0 license (“no rights reserved”)](http://creativecommons.org/publicdomain/zero/1.0).
- To the extent possible under law, <a rel="dct:publisher" href="https://ai4bharat.iitm.ac.in/"> <span property="dct:title">AI4Bharat</span></a> has waived all copyright and related or neighboring rights to <span property="dct:title">Naamapadam</span> manually collected data and existing sources.
- This work is published from: India.
### Citation Information
If you are using the Naampadam corpus, please cite the following article:
```
@misc{mhaske2022naamapadam,
doi = {10.48550/ARXIV.2212.10168},
url = {https://arxiv.org/abs/2212.10168},
author = {Mhaske, Arnav and Kedia, Harshit and Doddapaneni, Sumanth and Khapra, Mitesh M. and Kumar, Pratyush and Murthy, Rudra and Kunchukuttan, Anoop},
title = {Naamapadam: A Large-Scale Named Entity Annotated Data for Indic Languages},
publisher = {arXiv},
year = {2022},
}
```
<!-- Contributors -->
### Contributors
- Arnav Mhaske <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub>
- Harshit Kedia <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub>
- Sumanth Doddapaneni <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub>
- Mitesh M. Khapra <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub>
- Pratyush Kumar <sub> ([AI4Bharat](https://ai4bharat.org), [Microsoft](https://www.microsoft.com/en-in/), [IITM](https://www.iitm.ac.in)) </sub>
- Rudra Murthy <sub> ([AI4Bharat](https://ai4bharat.org), [IBM](https://www.ibm.com))</sub>
- Anoop Kunchukuttan <sub> ([AI4Bharat](https://ai4bharat.org), [Microsoft](https://www.microsoft.com/en-in/), [IITM](https://www.iitm.ac.in)) </sub>
This work is the outcome of a volunteer effort as part of the [AI4Bharat initiative](https://ai4bharat.iitm.ac.in).
<!-- Contact -->
### Contact
- Anoop Kunchukuttan ([[email protected]](mailto:[email protected]))
- Rudra Murthy V ([[email protected]](mailto:[email protected]))
|
AnanthZeke/naamapadam
|
[
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc0-1.0",
"arxiv:2212.10168",
"region:us"
] |
2023-03-14T08:26:19+00:00
|
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "naamapadam"}
|
2023-03-16T05:18:15+00:00
|
1f2e54a42e7b9d7bdf5518aa3b2edc2657837302
|
just a test to see how this works
|
bandoos/test_fr
|
[
"task_categories:token-classification",
"language:fr",
"region:us"
] |
2023-03-14T09:31:52+00:00
|
{"language": ["fr"], "task_categories": ["token-classification"]}
|
2023-03-14T09:34:26+00:00
|
16632591c976286e58f6288001294355994d93d0
|
# Sentiment fairness dataset
================================
This dataset measures gender fairness in the downstream task of sentiment analysis. It is a subset of the SST data, filtered to keep only the sentences that contain gender information. The Python code used to create this dataset can be found in the prepare_sst.ipyth file.
The filtered dataset was then labeled by 4 human annotators, who are the authors of this dataset. The annotation instructions are given below.
---
# Annotation Instructions
==============================
Each sentence has two existing labels:
* 'label' gives the sentiment score
* 'gender' gives the guessed gender of the target of the sentiment
The 'gender' label has two tags:
* 'masc' for masculine-gendered words, like 'he' or 'father'
* 'femm' for feminine-gendered words, like 'she' or 'mother'
For each sentence, you are to annotate if the sentence's **sentiment is directed toward a gendered person** i.e. the gender label is correct.
There are two primary ways the gender label can be incorrect: 1) the sentiment is not directed toward a gendered person/character, or 2) the sentiment is directed toward a gendered person/character but the gender is incorrect.
Please annotate **1** if the sentence is **correctly labeled** and **0** if not.
(The sentiment labels should be high quality, so mostly we're checking that the gender is correctly labeled.)
Some clarifying notes:
* If the sentiment is directed towards multiple people with different genders, mark as 0; in this case, the subject of the sentiment is not towards a single gender.
* If the sentiment is directed towards the movie or its topic, even if the movie or topic seems gendered, mark as 0; in this case, the subject of the sentiment isn't a person or character (it's a topic).
* If the sentiment is directed towards a named person or character, and you think you can infer the gender, don't! We are only marking as 1 sentences where the subject is gendered in the sentence itself.
## Positive examples (you'd annotate 1)
* sentence: She gave an excellent performance.
* label: .8
* gender: femm
Sentiment is directed at the 'she'.
---
* sentence: The director gets excellent performances out of his cast.
* label: .7
* gender: masc
Sentiment is directed at the male-gendered director.
---
* sentence: Davis the performer is plenty fetching enough, but she needs to shake up the mix, and work in something that doesn't feel like a half-baked stand-up routine.
* label: .4
* gender: femm
Sentiment is directed at Davis, who is gendered with the pronoun 'she'.
## Negative examples (you'd annotate 0)
* sentence: A near miss for this new director.
* label: .3
* gender: femm
This sentence was labeled 'femm' because it had the word 'miss' in it, but the sentiment is not actually directed towards a feminine person (we don't know the gender of the director).
---
* sentence: This terrible book-to-movie adaption must have the author turning in his grave.
* label: .2
* gender: masc
The sentiment is directed towards the movie, or maybe the director, but not the male-gendered author.
---
* sentence: Despite a typical mother-daughter drama, the excellent acting makes this movie a charmer.
* label: .8
* gender: femm
Sentiment is directed at the acting, not a person or character.
---
* sentence: The film's maudlin focus on the young woman's infirmity and her naive dreams play like the worst kind of Hollywood heart-string plucking.
* label: .8
* gender: femm
Similar to above, the sentiment is directed towards the movie's focus---though the focus may be gendered, we are only keeping sentences where the sentiment is directed towards a gendered person or character.
---
* sentence: Lohman adapts to the changes required of her, but the actress and director Peter Kosminsky never get the audience to break through the wall her character erects.
* label: .4
* gender: femm
The sentiment is directed towards both the actress and the director, who may have different genders.
---
# The final dataset
=====================
The final dataset contains the following columns:
* Sentnces: the sentence that contains a sentiment.
* label: the sentiment label, indicating whether the sentence is positive or negative.
* gender: the gender of the target of the sentiment in the sentence.
* A1: the annotation of the first annotator ("1" means that the gender in the "gender" column is correctly the target of the sentence, "0" means otherwise).
* A2: the annotation of the second annotator (same convention as A1).
* A3: the annotation of the third annotator (same convention as A1).
* Keep: a boolean indicating whether to keep this sentence or not. "Keep" means that the gender of this sentence was labelled as correct by more than one annotator.
* agreement: the number of annotators who agreed on the label.
* correct: the number of annotators who gave the majority label.
* incorrect: the number of annotators who gave the minority label.
**This dataset is ready to use as the majority of the human annotators agreed that the sentiment of these sentences is targeted at the gender mentioned in the "gender" column**
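Below is a minimal sketch (not part of the dataset card) of how the Keep and agreement columns relate to the three annotator columns; the CSV file name is hypothetical, and A1, A2 and A3 are assumed to be stored as 0/1 integers.
```python
import pandas as pd

# Hypothetical file name; the column names follow the description above.
df = pd.read_csv("sst_sentiment_fairness.csv")

votes = df[["A1", "A2", "A3"]].astype(int).sum(axis=1)    # annotators saying the gender label is correct
df["majority_size"] = votes.where(votes >= 2, 3 - votes)  # should correspond to the agreement column
df["keep_check"] = votes >= 2                             # should mirror the Keep column

usable = df[df["keep_check"]]
print(f"{len(usable)} of {len(df)} sentences have a trusted gender label")
```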
---
# Citation
==============
```
@misc{sst-sentiment-fainress-dataset,
  title={A dataset to measure fairness in the sentiment analysis task},
  author={Gero, Katy and Butters, Nathan and Bethke, Anna and Elsafoury, Fatma},
  howpublished={https://github.com/efatmae/SST_sentiment_fairness_data},
  year={2023}
}
```
|
fatmaElsafoury2022/SST_sentiment_fairness_data
|
[
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:afl-3.0",
"Fairness dataset",
"sentiment analysis",
"Gender",
"region:us"
] |
2023-03-14T09:42:18+00:00
|
{"language": ["en"], "license": "afl-3.0", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "tags": ["Fairness dataset", "sentiment analysis", "Gender"]}
|
2023-05-16T08:52:58+00:00
|
f10694b195dfbd4d9b69dc5c9179e3c1bf106730
|
# Dataset Card for "reper3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
veroniccccccha/reper3
|
[
"region:us"
] |
2023-03-14T10:34:03+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 250354042.0, "num_examples": 5}], "download_size": 18883304, "dataset_size": 250354042.0}}
|
2023-03-14T10:34:15+00:00
|
7b033bc55106c1febd1e73843400f17c102cf22d
|
hayesyang/practise
|
[
"region:us"
] |
2023-03-14T10:53:31+00:00
|
{}
|
2023-03-15T02:48:04+00:00
|
|
5d72552dab2a77f0207aac2dea00d3fb37b1e3d0
|
# Dataset Card for "targets_ghg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jose-datamaran/targets_ghg
|
[
"region:us"
] |
2023-03-14T11:02:42+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not_target", "1": "target"}}}}], "splits": [{"name": "train", "num_bytes": 139050.63525091799, "num_examples": 653}, {"name": "test", "num_bytes": 34922.36474908201, "num_examples": 164}], "download_size": 97996, "dataset_size": 173973.0}}
|
2023-03-14T14:13:17+00:00
|
54f2877b1910096ecae6e1b38bd377d0d9fd3225
|
# Dataset Card for "Malevich-captions-BLIP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Outrun32/Malevich-captions-BLIP
|
[
"region:us"
] |
2023-03-14T11:19:11+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3612984.0, "num_examples": 50}], "download_size": 3614461, "dataset_size": 3612984.0}}
|
2023-03-14T11:19:21+00:00
|
cdee7efdbbfd0e3b9e7814ee347cd8e1f0020504
|
EarthnDusk/FloraFauna_Dataset
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-03-14T11:27:04+00:00
|
{"license": "creativeml-openrail-m"}
|
2023-03-14T11:30:15+00:00
|
|
24c30e8bace265b9d3a7f2fed7c24aefb8623f41
|
# Dataset Card for "ecthr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
huynguyendayrui/ecthr
|
[
"region:us"
] |
2023-03-14T11:37:14+00:00
|
{"dataset_info": {"features": [{"name": "text", "sequence": "string"}, {"name": "labels_task_a", "sequence": {"class_label": {"names": {"0": "2", "1": "3", "2": "5", "3": "6", "4": "8", "5": "9", "6": "10", "7": "11", "8": "14", "9": "P1-1"}}}}, {"name": "law", "sequence": "string"}, {"name": "labels_task_b", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 267570945, "num_examples": 9000}, {"name": "test", "num_bytes": 35381069, "num_examples": 1000}, {"name": "validation", "num_bytes": 33956620, "num_examples": 1000}], "download_size": 157641185, "dataset_size": 336908634}}
|
2023-03-14T11:38:42+00:00
|
64d1f670afd84cb16fbc2de443faa95f08f11d21
|
# Dataset Card for "summarization"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
c00k1ez/summarization
|
[
"region:us"
] |
2023-03-14T11:54:47+00:00
|
{"dataset_info": {"features": [{"name": "chapter_id", "dtype": "int64"}, {"name": "book_id", "dtype": "int64"}, {"name": "chapter_title", "dtype": "string"}, {"name": "chapter_summary", "dtype": "string"}, {"name": "source", "dtype": "int64"}, {"name": "chapters_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 25303291, "num_examples": 816}, {"name": "validation", "num_bytes": 3636465, "num_examples": 139}], "download_size": 14675842, "dataset_size": 28939756}}
|
2023-03-14T11:56:28+00:00
|
57ebaeaf4d223abd36190e408bc649581964f72d
|
# Dataset Card for "reklambox3-balanced"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/reklambox3-balanced
|
[
"region:us"
] |
2023-03-14T11:58:28+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 562383, "num_examples": 1124}, {"name": "test", "num_bytes": 142052, "num_examples": 282}], "download_size": 428550, "dataset_size": 704435}}
|
2023-03-14T12:12:25+00:00
|
de3dfe5b460cc516208c19184941b2578ad92104
|
Dataset generated from HKR train set using Stackmix
=========================================
Number of images: 2476836
Sources:
* [HKR dataset](https://github.com/abdoelsayed2016/HKR_Dataset)
* [Stackmix code](https://github.com/ai-forever/StackMix-OCR)
|
nastyboget/stackmix_hkr_large
|
[
"task_categories:image-to-text",
"size_categories:1M<n<10M",
"language:ru",
"license:mit",
"region:us"
] |
2023-03-14T12:15:53+00:00
|
{"language": ["ru"], "license": "mit", "size_categories": ["1M<n<10M"], "task_categories": ["image-to-text"]}
|
2023-03-20T10:15:05+00:00
|
8775ba2685650455ba05fc1623d083ea9ccb1dd4
|
Persing/mtg_card_data
|
[
"license:mit",
"region:us"
] |
2023-03-14T12:16:03+00:00
|
{"license": "mit"}
|
2023-03-14T12:20:18+00:00
|
|
26d77d043b53d241bac0b8abc96b104a59727804
|
# Dataset Card for "reklambox-balanced"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/reklambox-balanced
|
[
"region:us"
] |
2023-03-14T12:19:14+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_name", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 561839, "num_examples": 1102}, {"name": "test", "num_bytes": 140041, "num_examples": 276}], "download_size": 0, "dataset_size": 701880}}
|
2023-03-16T16:16:40+00:00
|
e6d8f0f91cc9b2a60c4a1c6b550900d142065df8
|
# Dataset Card for "super-mario-bros-levels-discrete"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
valashir/super-mario-bros-levels-discrete
|
[
"region:us"
] |
2023-03-14T12:35:22+00:00
|
{"dataset_info": {"features": [{"name": "file_name", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "level", "sequence": {"sequence": {"sequence": "uint8"}}}], "splits": [{"name": "train", "num_bytes": 25036582, "num_examples": 2098}], "download_size": 490101, "dataset_size": 25036582}}
|
2023-03-14T12:36:16+00:00
|
5b8259b8c93cb6c30b9bc0e43c8533e85ad1614a
|
# Dataset Card for "NewArabicDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
MohammedNasri/NewArabicDataset
|
[
"region:us"
] |
2023-03-14T12:40:08+00:00
|
{"dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 75789246624, "num_examples": 78899}, {"name": "test", "num_bytes": 10027780960, "num_examples": 10440}], "download_size": 13566982393, "dataset_size": 85817027584}}
|
2023-03-14T12:58:14+00:00
|
5ec7f912cb9eabdc7203748220fabf5ba37bdbb1
|
Dzeniks/fever_2way
|
[
"license:mit",
"region:us"
] |
2023-03-14T12:47:45+00:00
|
{"license": "mit"}
|
2023-03-14T13:07:23+00:00
|
|
7f6833bd09999eec6acd014aa9c175c49423e0cd
|
# Dataset Card for "reklamation24_medizin-gesundheit-pflege"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/reklamation24_medizin-gesundheit-pflege
|
[
"region:us"
] |
2023-03-14T14:04:45+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_name", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 218144, "num_examples": 466}, {"name": "test", "num_bytes": 51557, "num_examples": 117}], "download_size": 0, "dataset_size": 269701}}
|
2023-04-19T07:32:48+00:00
|
45f03ac914f6043969b3753a42d85c79db06b07f
|
# Dataset Card for "reklamation24_oeffentlichkeit-soziales"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/reklamation24_oeffentlichkeit-soziales
|
[
"region:us"
] |
2023-03-14T14:06:41+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_name", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 85044, "num_examples": 152}, {"name": "test", "num_bytes": 21399, "num_examples": 39}], "download_size": 0, "dataset_size": 106443}}
|
2023-04-19T07:35:03+00:00
|
f3837c6be4f5d4291075a7837034b3a2e201124c
|
# Dataset Card for "reklamation24_oeffentlicher-verkehr-vermietung"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/reklamation24_oeffentlicher-verkehr-vermietung
|
[
"region:us"
] |
2023-03-14T14:08:01+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_name", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 254245, "num_examples": 488}, {"name": "test", "num_bytes": 68728, "num_examples": 122}], "download_size": 0, "dataset_size": 322973}}
|
2023-04-19T07:36:24+00:00
|
310666a5e8d2473c4d9af143c66e9a44c3f73dd5
|
# Dataset Card for "hugo_suits_ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yotam56/hugo_suits_ds
|
[
"region:us"
] |
2023-03-14T14:10:36+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Subfolder_1", "1": "Subfolder_10", "2": "Subfolder_11", "3": "Subfolder_12", "4": "Subfolder_13", "5": "Subfolder_14", "6": "Subfolder_15", "7": "Subfolder_16", "8": "Subfolder_17", "9": "Subfolder_18", "10": "Subfolder_2", "11": "Subfolder_3", "12": "Subfolder_4", "13": "Subfolder_5", "14": "Subfolder_6", "15": "Subfolder_7", "16": "Subfolder_8", "17": "Subfolder_9"}}}}], "splits": [{"name": "train", "num_bytes": 862857.0, "num_examples": 91}], "download_size": 859535, "dataset_size": 862857.0}}
|
2023-03-14T14:10:38+00:00
|
7f1ea0d2ccee932b584360e2c894a548cd0e8855
|
# Dataset Card for "torch-forum"
Dataset structure
```
{
  title: str,
  category: str,
  posts: List[{
    poster: str,
    contents: str,
    likes: int,
    isAccepted: bool
  }]
}
```
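A minimal usage sketch (not from the card), assuming the hub id `foldl/torch-forum` from this row and that `posts` is returned as a list of dicts matching the structure above:
```python
from datasets import load_dataset

forum = load_dataset("foldl/torch-forum", split="train")

thread = forum[0]
# Assumption: the `posts` list feature yields a list of dicts with the keys shown above.
accepted = [post for post in thread["posts"] if post["isAccepted"]]
print(thread["title"], "/", thread["category"])
print("posts:", len(thread["posts"]), "accepted answers:", len(accepted))
```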
|
foldl/torch-forum
|
[
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-sa-4.0",
"code",
"region:us"
] |
2023-03-14T14:24:22+00:00
|
{"language": ["en"], "license": "cc-by-sa-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering", "text-classification", "text-generation"], "pretty_name": "Pytorch Forums Parsed", "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "posts", "list": [{"name": "contents", "dtype": "string"}, {"name": "isAccepted", "dtype": "bool"}, {"name": "likes", "dtype": "int64"}, {"name": "poster", "dtype": "string"}]}, {"name": "answered", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 1540936, "num_examples": 706}], "download_size": 734399, "dataset_size": 1540936}, "tags": ["code"]}
|
2023-03-15T12:52:42+00:00
|
f0f9924441be381e5fc90ef084e94683e249d8c0
|
# Dataset information
Dataset concatenating all QA datasets with context available in French and open-source.
In addition, an augmented version of these datasets has been added (same context but different questions to create data in SQuADv2 format).
In total, there are 221,348 training examples, **910** validation examples and 6,376 test examples (3,188 rows in SQuAD v1 format and 3,188 rows in SQuAD v2 format).
In practice, due to the restrictive license for the FQUAD 1.0 dataset, we can only share **179,886** of the 221,348 training rows, and we cannot share the test dataset.
Our methodology is described in a blog post available in [English](https://blog.vaniila.ai/en/QA_en/) or [French](https://blog.vaniila.ai/QA/).
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/frenchQA", sep=";")
```
```
dataset
DatasetDict({
train: Dataset({
features: ['context', 'question', 'answer', 'answer_start', 'dataset'],
num_rows: 179886
})
validation: Dataset({
features: ['context', 'question', 'answer', 'answer_start', 'dataset'],
num_rows: 910
})
})
```
# Dataset
## Dataset details
| Dataset | Format | Train split | Dev split | Test split | Available in frenchQA |
| ----------- | ----------- | ----------- | ----------- | ----------- | ------------------------ |
| [piaf](https://www.data.gouv.fr/en/datasets/piaf-le-dataset-francophone-de-questions-reponses/)| SQuAD 1.0 | 9 224 Q & A | X | X | Yes |
| piaf_v2| SQuAD 2.0 | 9 224 Q & A | X | X | Yes |
| [fquad](https://fquad.illuin.tech/)| SQuAD 1.0 | 20 731 Q & A | 3 188 Q & A (is not used for training, but as a test dataset) | 2 189 Q & A (not freely available)| No due to the license |
| fquad_v2 | SQuAD 2.0 | 20 731 Q & A | 3 188 Q & A (is not used for training, but as a test dataset) | X | No due to the license |
| [lincoln/newsquadfr](https://huggingface.co/datasets/lincoln/newsquadfr) | SQuAD 1.0 | 1 650 Q & A | 455 Q & A | X | Yes |
| lincoln/newsquadfr_v2 | SQuAD 2.0 | 1 650 Q & A | 455 Q & A | X | Yes |
| [pragnakalp/squad_v2_french_translated](https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated)| SQuAD 2.0 | 79 069 Q & A | X | X | Yes |
| pragnakalp/squad_v2_french_translated_v2| SQuAD 2.0 | 79 069 Q & A | X | X | Yes |
## Columns
```
dataset_train = dataset['train'].to_pandas()
dataset_train.head()
context question answer answer_start dataset
0 Beyoncé Giselle Knowles-Carter (/ biːˈjɒnseɪ /... Quand Beyonce a-t-elle commencé à devenir popu... à la fin des années 1990 269 pragnakalp/squad_v2_french_translated
1 Beyoncé Giselle Knowles-Carter (/ biːˈjɒnseɪ /... Quand Beyonce a-t-elle quitté Destiny's Child ... 2003 549 pragnakalp/squad_v2_french_translated
2 Beyoncé Giselle Knowles-Carter (/ biːˈjɒnseɪ /... Qui a dirigé le groupe Destiny's Child ? Mathew Knowles 376 pragnakalp/squad_v2_french_translated
3 Beyoncé Giselle Knowles-Carter (/ biːˈjɒnseɪ /... Quand Beyoncé a-t-elle sorti Dangerously in Lo... 2003 549 pragnakalp/squad_v2_french_translated
4 Beyoncé Giselle Knowles-Carter (/ biːˈjɒnseɪ /... Combien de Grammy Awards Beyoncé a-t-elle gagn... cinq 629 pragnakalp/squad_v2_french_translated
```
- the `context` column contains the context
- the `question` column contains the question
- the `answer` column contains the answer (has been replaced by `no_answer` for rows in SQuAD v2 format)
- the `answer_start` column contains the start position of the answer in the context (has been replaced by `-1` for rows in SQuAD v2 format)
- the `dataset` column identifies the row's original dataset (if you wish to apply filters to it, rows in SQuAD v2 format are indicated with the suffix `_v2` in the dataset name)
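A minimal sketch (not from the card) of splitting the training set using the conventions listed above; it reuses the repository id from the Usage section and assumes the default configuration loads as shown there:
```python
from datasets import load_dataset

dataset = load_dataset("CATIE-AQ/frenchQA")
train = dataset["train"]

# Rows in SQuAD v2 format carry the sentinel values described above.
no_answer_rows = train.filter(lambda row: row["answer_start"] == -1)
answerable_rows = train.filter(lambda row: row["answer_start"] != -1)
v2_rows = train.filter(lambda row: row["dataset"].endswith("_v2"))  # the "_v2" suffix convention
print(len(answerable_rows), "answerable rows,", len(no_answer_rows), "no-answer rows")
```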
## Split
- `train` corresponds to the concatenation of the training datasets from `pragnakalp/squad_v2_french_translated` + `lincoln/newsquadfr` + `PIAFv1.2` + the augmented version of each dataset in SQuADv2 format (no shuffle has been performed)
- `validation` corresponds to the concatenation of the newsquadfr validation dataset + this same dataset expanded in SQuAD v2 format (= newsquadfr_v2) (no shuffle performed)
# Question type statistics
The question type distribution is as follows:
| Type of question | Frequency in percent |
| ----------- | ----------- |
|What (que) |55.02|
|Who (qui) |15.96|
|How much (combien)|7.92|
|When (quand) |6.90|
|Where (où) |3.15|
|How (comment) |3.76|
|What (quoi) |2.60|
|Why (pourquoi) |1.25|
|Other |3.44|
The number of questions containing a negation, e.g. "What was the name of Chopin's first music teacher who was not an amateur musician?", is estimated at 3.55% of the total questions.
For information, the distribution of the complete dataset (containing FQUAD 1.0 and FQUAD 1.0 data in SQUAD 2.0 format) is as follows:
| Type of question | Frequency in percent |
| ----------- | ----------- |
|What (que) |55.12|
|Who (qui) |16.24|
|How much (combien)|7.56|
|When (quand) |6.85|
|Where (où) |3.98|
|How (comment) |3.76|
|What (quoi) |2.94|
|Why (pourquoi) |1.41|
|Other |2.14|
The number of questions containing a negation, e.g. "What was the name of Chopin's first music teacher who was not an amateur musician?", is estimated at 3.07% of the total questions.
# Citation
```
@misc {frenchQA2023,
author = { {ALBAR, Boris and BEDU, Pierre and BOURDOIS, Loïck} },
organization = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { frenchQA (Revision 6249cd5) },
year = 2023,
url = { https://huggingface.co/CATIE-AQ/frenchQA },
doi = { 10.57967/hf/0862 },
publisher = { Hugging Face }
}
```
# License
[cc-by-4.0](https://creativecommons.org/licenses/by/4.0/deed.en)
|
CATIE-AQ/frenchQA
|
[
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:fr",
"license:cc-by-4.0",
"doi:10.57967/hf/0862",
"region:us"
] |
2023-03-14T14:32:36+00:00
|
{"language": ["fr"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["question-answering"]}
|
2024-02-07T08:41:05+00:00
|
5fd0403e242ce75f1b1f4c2310a5dfc7050cac3b
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
iasobolev/guilget
|
[
"region:us"
] |
2023-03-14T14:42:01+00:00
|
{}
|
2023-09-04T04:54:28+00:00
|
59663c7f72051d4f2caa379383c78cedc2d6ce97
|
# Dataset Card for "pythia-memorized-evals"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
EleutherAI/pythia-memorized-evals
|
[
"region:us"
] |
2023-03-14T15:11:02+00:00
|
{"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "tokens", "sequence": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "duped.1.4b", "num_bytes": 730820104, "num_examples": 1373722}, {"name": "deduped.1.4b", "num_bytes": 557587604, "num_examples": 1048097}, {"name": "duped.160m", "num_bytes": 366906036, "num_examples": 689673}, {"name": "deduped.160m", "num_bytes": 309195740, "num_examples": 581195}, {"name": "duped.12b", "num_bytes": 1267397432, "num_examples": 2382326}, {"name": "deduped.12b", "num_bytes": 995486380, "num_examples": 1871215}, {"name": "duped.70m", "num_bytes": 246822996, "num_examples": 463953}, {"name": "deduped.70m", "num_bytes": 218890336, "num_examples": 411448}, {"name": "duped.2.8b", "num_bytes": 891140964, "num_examples": 1675077}, {"name": "deduped.2.8b", "num_bytes": 720972252, "num_examples": 1355211}, {"name": "duped.410m", "num_bytes": 516221412, "num_examples": 970341}, {"name": "deduped.410m", "num_bytes": 431472748, "num_examples": 811039}, {"name": "duped.6.9b", "num_bytes": 1128355508, "num_examples": 2120969}, {"name": "deduped.6.9b", "num_bytes": 893916408, "num_examples": 1680294}, {"name": "duped.1b", "num_bytes": 668267012, "num_examples": 1256141}, {"name": "deduped.1b", "num_bytes": 549484180, "num_examples": 1032865}, {"name": "duped.12b.23000", "num_bytes": 105429100, "num_examples": 198175}, {"name": "duped.12b.43000", "num_bytes": 235278596, "num_examples": 442253}, {"name": "duped.12b.63000", "num_bytes": 385528696, "num_examples": 724678}, {"name": "duped.12b.83000", "num_bytes": 568442532, "num_examples": 1068501}, {"name": "duped.12b.103000", "num_bytes": 803564188, "num_examples": 1510459}, {"name": "duped.12b.123000", "num_bytes": 1061877852, "num_examples": 1996011}, {"name": "deduped.12b.23000", "num_bytes": 86938376, "num_examples": 163418}, {"name": "deduped.12b.43000", "num_bytes": 190915116, "num_examples": 358863}, {"name": "deduped.12b.63000", "num_bytes": 311255644, "num_examples": 585067}, {"name": "deduped.12b.83000", "num_bytes": 453300176, "num_examples": 852068}, {"name": "deduped.12b.103000", "num_bytes": 636047496, "num_examples": 1195578}, {"name": "deduped.12b.123000", "num_bytes": 832077260, "num_examples": 1564055}, {"name": "deduped.1b.new", "num_bytes": 549484180, "num_examples": 1032865}], "download_size": 4735823411, "dataset_size": 16713076324}, "configs": [{"config_name": "default", "data_files": [{"split": "duped.12b.23000", "path": "data/duped.12b.23000-*"}, {"split": "duped.12b.43000", "path": "data/duped.12b.43000-*"}, {"split": "duped.12b.63000", "path": "data/duped.12b.63000-*"}, {"split": "duped.12b.83000", "path": "data/duped.12b.83000-*"}, {"split": "duped.12b.103000", "path": "data/duped.12b.103000-*"}, {"split": "duped.12b.123000", "path": "data/duped.12b.123000-*"}, {"split": "deduped.12b.23000", "path": "data/deduped.12b.23000-*"}, {"split": "deduped.12b.43000", "path": "data/deduped.12b.43000-*"}, {"split": "deduped.12b.63000", "path": "data/deduped.12b.63000-*"}, {"split": "deduped.12b.83000", "path": "data/deduped.12b.83000-*"}, {"split": "deduped.12b.103000", "path": "data/deduped.12b.103000-*"}, {"split": "deduped.12b.123000", "path": "data/deduped.12b.123000-*"}, {"split": "duped.70m", "path": "data/duped.70m-*"}, {"split": "duped.160m", "path": "data/duped.160m-*"}, {"split": "duped.410m", "path": "data/duped.410m-*"}, {"split": "duped.1b", "path": "data/duped.1b-*"}, {"split": "duped.1.4b", "path": "data/duped.1.4b-*"}, {"split": 
"duped.2.8b", "path": "data/duped.2.8b-*"}, {"split": "duped.6.9b", "path": "data/duped.6.9b-*"}, {"split": "duped.12b", "path": "data/duped.12b-*"}, {"split": "deduped.70m", "path": "data/deduped.70m-*"}, {"split": "deduped.160m", "path": "data/deduped.160m-*"}, {"split": "deduped.410m", "path": "data/deduped.410m-*"}, {"split": "deduped.1b", "path": "data/deduped.1b-*"}, {"split": "deduped.1.4b", "path": "data/deduped.1.4b-*"}, {"split": "deduped.2.8b", "path": "data/deduped.2.8b-*"}, {"split": "deduped.6.9b", "path": "data/deduped.6.9b-*"}, {"split": "deduped.12b", "path": "data/deduped.12b-*"}, {"split": "deduped.1b.new", "path": "data/deduped.1b.new-*"}]}]}
|
2024-01-02T16:56:50+00:00
|
6e270e9d16962637c34ce875bdda52be4d89e2b5
|
# Dataset Card for "COCO_captions_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/COCO_captions_train
|
[
"region:us"
] |
2023-03-14T16:05:25+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "filepath", "dtype": "string"}, {"name": "sentids", "list": "int32"}, {"name": "filename", "dtype": "string"}, {"name": "imgid", "dtype": "int32"}, {"name": "split", "dtype": "string"}, {"name": "sentences_tokens", "list": {"list": "string"}}, {"name": "sentences_raw", "list": "string"}, {"name": "sentences_sentid", "list": "int32"}, {"name": "cocoid", "dtype": "int32"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 18595506212.0, "num_examples": 113287}], "download_size": 18500220513, "dataset_size": 18595506212.0}}
|
2023-03-17T21:59:22+00:00
|
55eb356b9f7785a32352350eefa001ed714d8551
|
# Dataset Card for "classes_white_background_ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yotam56/classes_white_background_ds
|
[
"region:us"
] |
2023-03-14T16:21:03+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "dresses", "1": "pants", "2": "shorts", "3": "t-shirt"}}}}], "splits": [{"name": "train", "num_bytes": 50672.0, "num_examples": 12}], "download_size": 57065, "dataset_size": 50672.0}}
|
2023-03-14T16:21:05+00:00
|
ee0e85bb9715d570e34529007ab92451979e3a25
|
# Dataset Card for "minimath"
The objective of `minimath` is to evaluate the mathematical capability of language models in a quick yet diverse setting.
The dataset is composed of samples drawn from the datasets below:
https://huggingface.co/datasets/math_dataset
https://huggingface.co/datasets/math_qa
https://huggingface.co/datasets/competition_math
https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_math_jsonl
https://huggingface.co/datasets/qwedsacf/grade-school-math-instructions
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
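A minimal loading sketch (assuming the Hugging Face `datasets` library; per the repository metadata, each row carries a `question`, an `answer` and a `source` column identifying which of the datasets above it was sampled from):
```python
from collections import Counter

from datasets import load_dataset

# Load the single train split of minimath.
ds = load_dataset("kenhktsui/minimath", split="train")

# Inspect how the samples are distributed across source datasets.
print(Counter(ds["source"]))

# Look at one question/answer pair.
example = ds[0]
print(example["question"])
print(example["answer"])
```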
|
kenhktsui/minimath
|
[
"region:us"
] |
2023-03-14T16:39:10+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "Rationale", "dtype": "string"}, {"name": "annotated_formula", "dtype": "string"}, {"name": "linear_formula", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1114848, "num_examples": 2880}], "download_size": 543796, "dataset_size": 1114848}}
|
2023-03-14T17:31:06+00:00
|
683cc1b7b263128839d57b0de4695fdd7aeaeae0
|
kabachuha/wesnoth-ethea-canon-campaigns
|
[
"task_categories:text-generation",
"language:en",
"license:gpl-2.0",
"art",
"code",
"gamedev",
"scenarios",
"writing",
"literature",
"wesnoth",
"region:us"
] |
2023-03-14T16:51:23+00:00
|
{"language": ["en"], "license": "gpl-2.0", "task_categories": ["text-generation"], "tags": ["art", "code", "gamedev", "scenarios", "writing", "literature", "wesnoth"]}
|
2023-03-14T17:21:48+00:00
|
|
7de519bfbe870ed1c5220744a528d90a05a916f5
|
# Dataset Card for "ignatius"
This dataset was created to participate in the keras dreambooth sprint. It is based on the Spanish comedian [Ignatius Farray](https://es.wikipedia.org/wiki/Ignatius_Farray).
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
matallanas/ignatius
|
[
"task_categories:text-to-image",
"license:openrail",
"region:us"
] |
2023-03-14T17:37:28+00:00
|
{"license": "openrail", "task_categories": ["text-to-image"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 5743920, "num_examples": 28}], "download_size": 5743956, "dataset_size": 5743920}}
|
2023-03-16T18:10:50+00:00
|
b8c34afa3cfd1bd10e32c5ce86ffc384f0aca20d
|
trondizzy/Tatoeba_v2022_03_03
|
[
"task_categories:translation",
"size_categories:100K<n<1M",
"language:uk",
"language:en",
"license:cc",
"region:us"
] |
2023-03-14T17:39:18+00:00
|
{"language": ["uk", "en"], "license": "cc", "size_categories": ["100K<n<1M"], "task_categories": ["translation"]}
|
2023-03-14T18:01:06+00:00
|
|
7c08a95052d21ce1d95972906251f91b32e89f3b
|
trondizzy/uk_en_combined_OPUS_sets
|
[
"task_categories:translation",
"size_categories:1M<n<10M",
"language:uk",
"language:en",
"license:cc",
"region:us"
] |
2023-03-14T17:49:58+00:00
|
{"language": ["uk", "en"], "license": "cc", "size_categories": ["1M<n<10M"], "task_categories": ["translation"]}
|
2023-03-14T18:00:21+00:00
|
|
e66ef331a7d6d106be262fd9750248c1e9d90fbe
|
# **Dataset Card for Hate-Offensive Speech**
This is the original dataset created by the user [badmatr11x](https://www.huggingface.co/badmatr11x/). The dataset contains annotated tweets classified into three categories: **hate-speech**, **offensive-speech** and **neither**.
# **Dataset Structure**
The dataset structure is as follows:
```
{
"label": {
0: "hate-speech",
1: "offensive-speech",
2: "neither"
},
"tweet": <string>
}
```
### **Dataset Instances**
Examples from the dataset are as follows:
Label-0 (Hate Speech)
```
{
"label": 0,
"tweet": "@user @user @user we were? maybe you are-but don't you dare demonize innocent infants born with white skin, "
}
```
Label-1 (Offensive Speech)
```
{
"label": 1,
"tweet": "...and I'm goin back to school.. only for the hoes and a class or two"
}
```
Label-2 (Neither)
```
{
"label": 2,
"tweet": "@user @user are you guys going to take forever to bring the new gmc?"
}
```
# **Data Fields**
- `label`: an int64 value
- `tweet`: a string
# **Data Splits**
- The dataset is split into three parts: train, validation and test.
- The training split contains 90% of the tweets, the validation split contains 5%, and the remaining 5% is assigned to the test split.
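A minimal usage sketch (assuming the Hugging Face `datasets` library; the label-name mapping simply mirrors the table above):
```python
from datasets import load_dataset

# The dataset ships with train / validation / test splits (roughly 90% / 5% / 5%).
dataset = load_dataset("badmatr11x/hate-offensive-speech")

label_names = {0: "hate-speech", 1: "offensive-speech", 2: "neither"}

# Report split sizes and print one labelled example.
for split_name, split in dataset.items():
    print(split_name, len(split))

example = dataset["train"][0]
print(label_names[example["label"]], "->", example["tweet"])
```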
|
badmatr11x/hate-offensive-speech
|
[
"task_categories:text-classification",
"task_ids:multi-label-classification",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] |
2023-03-14T18:01:04+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "task_ids": ["multi-label-classification"], "source_dataset": ["original"], "dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "tweet", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5045816.7990131285, "num_examples": 51070}, {"name": "test", "num_bytes": 280301.1995065645, "num_examples": 2837}, {"name": "validation", "num_bytes": 280400.0014803066, "num_examples": 2838}], "download_size": 3879287, "dataset_size": 5606517.999999999}}
|
2023-03-15T20:17:11+00:00
|
283e1526f15570c1fba16f82739327a5ab00684b
|
# Dataset Card for "MedQuAD_Context_Question_Answer_Triples_TWO"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AnonymousSub/MedQuAD_Context_Question_Answer_Triples_TWO
|
[
"region:us"
] |
2023-03-14T18:17:35+00:00
|
{"dataset_info": {"features": [{"name": "Contexts", "dtype": "string"}, {"name": "Questions", "dtype": "string"}, {"name": "Answers", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 190839732, "num_examples": 47441}], "download_size": 21760499, "dataset_size": 190839732}}
|
2023-03-14T18:17:38+00:00
|
45355d708117787e3027207251aec7f7f2128e73
|
Back-up of a group of LoRAs deleted by the author/trainer.
Models:
Blue Archive Midori
Trigger words:
midori, midori's style
Blue Archive Momoi (Final version, there were 2 older versions afaik)
Trigger words:
momoi (blue archive),halo,cat tail, momoi's style
Blue Archive Mari
Trigger words (Has 2 skins):
mari,halo,custom skin, nun
mari,halo,outside-wear, gym unifrom/sportswear
Blue Archive Miyu
Trigger words:
Miyu,long hair,Miyu's Style,white pantyhose,skirt,blue shirt, blue skirt,school uniform,halo, long sleeves,pleated skirt,green neckchief
That's it ig lol
|
Lancer1408/NotKerarekke
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-03-14T18:29:00+00:00
|
{"license": "creativeml-openrail-m", "pretty_name": "NotKRRKE"}
|
2023-03-14T18:49:24+00:00
|
b8bdcedf2378356e43c2b24223f1e0947dc5a589
|
# Dataset Card for "mix_ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yotam56/mix_ds
|
[
"region:us"
] |
2023-03-14T18:39:01+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "american_shirt", "1": "black", "2": "blue", "3": "buttoned_shirt", "4": "checked_shirt", "5": "coat", "6": "dark_tshirts", "7": "hoodie", "8": "long_sleeves", "9": "other_tshirts", "10": "polo", "11": "red", "12": "striped_sweater", "13": "striped_tshirts", "14": "white_with_logo", "15": "yellow"}}}}], "splits": [{"name": "train", "num_bytes": 3587713.0, "num_examples": 84}], "download_size": 3527249, "dataset_size": 3587713.0}}
|
2023-03-14T18:39:04+00:00
|
3fb2261eef4366c2a4e244727cc8af8c72729db1
|
## Dataset is imported from CodeXGLUE and pre-processed using their script.
# Where to find in Semeru:
The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/text-to-code/concode in Semeru
# CodeXGLUE -- Text2Code Generation
Here are the dataset and pipeline for text-to-code generation task.
## Task Definition
Generate the source code of class member functions in Java, given a natural language description and the class environment. The class environment is the programmatic context provided by the rest of the class, including other member variables and member functions in the class. Models are evaluated by exact match and BLEU.
It is a challenging task because the desired code can vary greatly depending on the functionality the class provides. Models must (a) have a deep understanding of the NL description and map the NL to environment variables, library API calls and user-defined methods in the class, and (b) decide on the structure of the resulting code.
## Dataset
### Concode dataset
We use the concode dataset, a widely used code generation dataset from Iyer et al.'s EMNLP 2018 paper [Mapping Language to Code in Programmatic Context](https://www.aclweb.org/anthology/D18-1192.pdf).
We downloaded the published dataset and followed the authors' preprocessing script. You can find the preprocessed data in the `dataset/concode` directory.
Data statistics of the concode dataset are shown in the table below:
| | #Examples |
| ------- | :---------: |
| Train | 100,000 |
| Dev | 2,000 |
| Test | 2,000 |
### Data Format
The code corpus is saved in JSON Lines files; each line is a JSON object:
```
{
"nl": "Increment this vector in this place. con_elem_sep double[] vecElement con_elem_sep double[] weights con_func_sep void add(double)",
"code": "public void inc ( ) { this . add ( 1 ) ; }"
}
```
`nl` combines the natural language description and the class environment. Elements in the class environment are separated by special tokens such as `con_elem_sep` and `con_func_sep`.
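As a rough illustration (not part of the official preprocessing script), the sketch below reads such a JSON Lines file and splits the `nl` field on the first separator token; the file path and the exact token handling are assumptions:
```python
import json

CON_ELEM_SEP = "con_elem_sep"

def read_concode(path):
    """Yield (description, class_environment, code) triples from a JSON Lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            obj = json.loads(line)
            nl, code = obj["nl"], obj["code"]
            # The natural-language description precedes the first separator token;
            # the remainder describes member variables and member functions.
            description, _, environment = nl.partition(CON_ELEM_SEP)
            yield description.strip(), environment.strip(), code

# Example usage (the path below is hypothetical):
# for desc, env, code in read_concode("dataset/concode/train.json"):
#     print(desc, "=>", code)
#     break
```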
## Reference
<pre><code>@article{iyer2018mapping,
title={Mapping language to code in programmatic context},
author={Iyer, Srinivasan and Konstas, Ioannis and Cheung, Alvin and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:1808.09588},
year={2018}
}</code></pre>
|
semeru/Text-Code-concode-Java
|
[
"license:mit",
"region:us"
] |
2023-03-14T18:46:04+00:00
|
{"license": "mit", "Programminglanguage": "Java", "version": "N/A", "Date": "2018 paper https://aclanthology.org/D18-1192.pdf", "Contaminated": "Very Likely", "Size": "Standard Tokenizer"}
|
2023-03-27T17:35:28+00:00
|
d339b47df058ac3511c55696839c0a2a0346810e
|
# Dataset Card for "functions_annotated_with_intents"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joetey/functions_annotated_with_intents
|
[
"region:us"
] |
2023-03-14T19:23:09+00:00
|
{"dataset_info": {"features": [{"name": "function", "dtype": "string"}, {"name": "intent_category", "dtype": "string"}, {"name": "purpose", "dtype": "string"}, {"name": "code_trans", "dtype": "string"}, {"name": "detailed_description", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1833549, "num_examples": 2768}], "download_size": 699265, "dataset_size": 1833549}}
|
2023-03-14T19:23:12+00:00
|
bce9a6f856591ca6ac88e48a7285b38ab00484d3
|
# Dataset Card for "hugo_tsne_ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yotam56/hugo_tsne_ds
|
[
"region:us"
] |
2023-03-14T19:29:18+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "dresses", "1": "jackets", "2": "man_hoodie", "3": "red_tshirts", "4": "suits", "5": "white_tshirts", "6": "women_pants", "7": "women_shorts", "8": "women_skirts"}}}}], "splits": [{"name": "train", "num_bytes": 358596.0, "num_examples": 45}], "download_size": 367026, "dataset_size": 358596.0}}
|
2023-03-14T19:29:21+00:00
|
578c12844aadca9702522657789c519ca87b46c8
|
# MBXP
## Table of Contents
- [MBXP](#MBXP)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#related-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Executional Correctness](#execution)
- [Execution Example](#execution-example)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
# MBXP
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/amazon-science/mbxp-exec-eval)
- **Paper:** [Multi-lingual Evaluation of Code Generation Models](https://openreview.net/forum?id=Bo7eeXm6An8)
### Dataset Summary
This repository contains code to perform execution-based multi-lingual evaluation of code generation capabilities and the corresponding data,
namely a multi-lingual benchmark MBXP, multi-lingual MathQA and multi-lingual HumanEval.
<br>Results and findings can be found in the paper ["Multi-lingual Evaluation of Code Generation Models"](https://arxiv.org/abs/2210.14868).
### Related Tasks and Leaderboards
* [Multi-HumanEval](https://huggingface.co/datasets/mxeval/multi-humaneval)
* [MBXP](https://huggingface.co/datasets/mxeval/mbxp)
* [MathQA-X](https://huggingface.co/datasets/mxeval/mathqa-x)
### Languages
The programming problems are written in multiple programming languages and contain English natural text in comments and docstrings.
## Dataset Structure
To lookup currently supported datasets
```python
from datasets import get_dataset_config_names
get_dataset_config_names("mxeval/mbxp")
['python', 'csharp', 'go', 'java', 'javascript', 'kotlin', 'perl', 'php', 'ruby', 'scala', 'swift', 'typescript']
```
To load a specific dataset and language
```python
from datasets import load_dataset
load_dataset("mxeval/mbxp", "python")
DatasetDict({
test: Dataset({
features: ['task_id', 'language', 'prompt', 'test', 'entry_point', 'canonical_solution', 'description'],
num_rows: 974
})
})
```
### Data Instances
An example of a dataset instance:
```python
{
"task_id": "MBPP/1",
"language": "python",
"prompt": "\n\ndef min_cost(cost, m, n):\n\t\"\"\"\n\tWrite a function to find the minimum cost path to reach (m, n) from (0, 0) for the given cost matrix cost[][] and a position (m, n) in cost[][].\n\t>>> min_cost([[1, 2, 3], [4, 8, 2], [1, 5, 3]], 2, 2)\n\t8\n\t>>> min_cost([[2, 3, 4], [5, 9, 3], [2, 6, 4]], 2, 2)\n\t12\n\t>>> min_cost([[3, 4, 5], [6, 10, 4], [3, 7, 5]], 2, 2)\n\t16\n\t\"\"\"\n",
"test": "\n\nMETADATA = {}\n\n\ndef check(candidate):\n assert candidate([[1, 2, 3], [4, 8, 2], [1, 5, 3]], 2, 2) == 8\n assert candidate([[2, 3, 4], [5, 9, 3], [2, 6, 4]], 2, 2) == 12\n assert candidate([[3, 4, 5], [6, 10, 4], [3, 7, 5]], 2, 2) == 16\n\n",
"entry_point": "min_cost",
"canonical_solution": "\tR = 3\n\tC = 3\n\t \n\ttc = [[0 for x in range(C)] for x in range(R)] \n\ttc[0][0] = cost[0][0] \n\tfor i in range(1, m+1): \n\t\ttc[i][0] = tc[i-1][0] + cost[i][0] \n\tfor j in range(1, n+1): \n\t\ttc[0][j] = tc[0][j-1] + cost[0][j] \n\tfor i in range(1, m+1): \n\t\tfor j in range(1, n+1): \n\t\t\ttc[i][j] = min(tc[i-1][j-1], tc[i-1][j], tc[i][j-1]) + cost[i][j] \n\treturn tc[m][n]",
"description": "Write a function to find the minimum cost path to reach (m, n) from (0, 0) for the given cost matrix cost[][] and a position (m, n) in cost[][]."
}
```
### Data Fields
- `task_id`: identifier for the data sample
- `prompt`: input for the model containing function header and docstrings
- `canonical_solution`: solution for the problem in the `prompt`
- `description`: task description
- `test`: contains function to test generated code for correctness
- `entry_point`: entry point for test
- `language`: programming language identifier used to select the appropriate subprocess call for program execution
### Data Splits
- MBXP
- Python
- Java
- Javascript
- Typescript
- Kotlin
- Ruby
- Php
- Cpp
- Csharp
- Go
- Perl
- Scala
- Swift
## Dataset Creation
### Curation Rationale
Since code generation models are often trained on dumps of GitHub, a dataset not included in the dump was necessary to properly evaluate the model. However, since this dataset was published on GitHub, it is likely to be included in future dumps.
### Personal and Sensitive Information
None.
### Social Impact of Dataset
With this dataset, code-generating models can be better evaluated, which leads to fewer issues being introduced when using such models.
### Dataset Curators
AWS AI Labs
## Execution
### Execution Example
Install the repo [mbxp-exec-eval](https://github.com/amazon-science/mbxp-exec-eval) to execute generations or canonical solutions for the prompts from this dataset.
```python
>>> from datasets import load_dataset
>>> from mxeval.execution import check_correctness
>>> mbxp_python = load_dataset("mxeval/mbxp", "python", split="test")
>>> example_problem = mbxp_python[0]
>>> check_correctness(example_problem, example_problem["canonical_solution"], timeout=20.0)
{'task_id': 'MBPP/1', 'passed': True, 'result': 'passed', 'completion_id': None, 'time_elapsed': 10.314226150512695}
```
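Results on MBXP are typically reported as pass@k. The snippet below is a generic sketch of the standard unbiased pass@k estimator; it is not part of the `mxeval` package API:
```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for one problem: n samples drawn, c of them passed."""
    if n - c < k:
        return 1.0
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# e.g. 10 generations for a problem, 3 of which passed check_correctness:
print(pass_at_k(n=10, c=3, k=1))
```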
### Considerations for Using the Data
Make sure to sandbox the execution environment.
### Licensing Information
[LICENSE](https://huggingface.co/datasets/mxeval/mbxp/blob/main/mbxp-LICENSE) <br>
[THIRD PARTY LICENSES](https://huggingface.co/datasets/mxeval/mbxp/blob/main/THIRD_PARTY_LICENSES)
### Citation Information
```
@article{mbxp_athiwaratkun2022,
title = {Multi-lingual Evaluation of Code Generation Models},
author = {Athiwaratkun, Ben and
Gouda, Sanjay Krishna and
Wang, Zijian and
Li, Xiaopeng and
Tian, Yuchen and
Tan, Ming
and Ahmad, Wasi Uddin and
Wang, Shiqi and
Sun, Qing and
Shang, Mingyue and
Gonugondla, Sujan Kumar and
Ding, Hantian and
Kumar, Varun and
Fulton, Nathan and
Farahani, Arash and
Jain, Siddhartha and
Giaquinto, Robert and
Qian, Haifeng and
Ramanathan, Murali Krishna and
Nallapati, Ramesh and
Ray, Baishakhi and
Bhatia, Parminder and
Sengupta, Sudipta and
Roth, Dan and
Xiang, Bing},
doi = {10.48550/ARXIV.2210.14868},
url = {https://arxiv.org/abs/2210.14868},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
### Contributions
[skgouda@](https://github.com/sk-g) [benathi@](https://github.com/benathi)
|
mxeval/mbxp
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"mxeval",
"mbxp",
"mbpp",
"code-generation",
"arxiv:2210.14868",
"region:us"
] |
2023-03-14T21:32:18+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "mbxp", "tags": ["mxeval", "mbxp", "mbpp", "code-generation", "mxeval"]}
|
2023-07-03T17:10:10+00:00
|
63e25957849ef5676e5b686ed0a60106697a73c0
|
# Multi-HumanEval
## Table of Contents
- [multi-humaneval](#multi-humaneval)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#related-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Executional Correctness](#execution)
- [Execution Example](#execution-example)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
# multi-humaneval
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/amazon-science/mbxp-exec-eval)
- **Paper:** [Multi-lingual Evaluation of Code Generation Models](https://openreview.net/forum?id=Bo7eeXm6An8)
### Dataset Summary
This repository contains code to perform execution-based multi-lingual evaluation of code generation capabilities and the corresponding data,
namely a multi-lingual benchmark MBXP, multi-lingual MathQA and multi-lingual HumanEval.
<br>Results and findings can be found in the paper ["Multi-lingual Evaluation of Code Generation Models"](https://arxiv.org/abs/2210.14868).
### Related Tasks and Leaderboards
* [Multi-HumanEval](https://huggingface.co/datasets/mxeval/multi-humaneval)
* [MBXP](https://huggingface.co/datasets/mxeval/mbxp)
* [MathQA-X](https://huggingface.co/datasets/mxeval/mathqa-x)
### Languages
The programming problems are written in multiple programming languages and contain English natural text in comments and docstrings.
## Dataset Structure
To lookup currently supported datasets
```python
from datasets import get_dataset_config_names
get_dataset_config_names("mxeval/multi-humaneval")
['python', 'csharp', 'go', 'java', 'javascript', 'kotlin', 'perl', 'php', 'ruby', 'scala', 'swift', 'typescript']
```
To load a specific dataset and language
```python
from datasets import load_dataset
load_dataset("mxeval/multi-humaneval", "python")
DatasetDict({
test: Dataset({
features: ['task_id', 'language', 'prompt', 'test', 'entry_point', 'canonical_solution', 'description'],
num_rows: 164
})
})
```
### Data Instances
An example of a dataset instance:
```python
{
"task_id": "HumanEval/0",
"language": "python",
"prompt": "from typing import List\n\n\ndef has_close_elements(numbers: List[float], threshold: float) -> bool:\n \"\"\" Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> has_close_elements([1.0, 2.0, 3.0], 0.5)\n False\n >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n True\n \"\"\"\n",
"test": "\n\nMETADATA = {\n \"author\": \"jt\",\n \"dataset\": \"test\"\n}\n\n\ndef check(candidate):\n assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True\n assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) == False\n assert candidate([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) == True\n assert candidate([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) == False\n assert candidate([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) == True\n assert candidate([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) == True\n assert candidate([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) == False\n\n",
"entry_point": "has_close_elements",
"canonical_solution": " for idx, elem in enumerate(numbers):\n for idx2, elem2 in enumerate(numbers):\n if idx != idx2:\n distance = abs(elem - elem2)\n if distance < threshold:\n return True\n\n return False\n",
"description": "Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> has_close_elements([1.0, 2.0, 3.0], 0.5)\n False\n >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n True"
}
```
### Data Fields
- `task_id`: identifier for the data sample
- `prompt`: input for the model containing function header and docstrings
- `canonical_solution`: solution for the problem in the `prompt`
- `description`: task description
- `test`: contains function to test generated code for correctness
- `entry_point`: entry point for test
- `language`: programming language identifier used to select the appropriate subprocess call for program execution
### Data Splits
- HumanXEval
- Python
- Csharp
- Go
- Java
- Javascript
- Kotlin
- Perl
- Php
- Ruby
- Scala
- Swift
- Typescript
## Dataset Creation
### Curation Rationale
Since code generation models are often trained on dumps of GitHub, a dataset not included in the dump was necessary to properly evaluate the model. However, since this dataset was published on GitHub, it is likely to be included in future dumps.
### Personal and Sensitive Information
None.
### Social Impact of Dataset
With this dataset, code-generating models can be better evaluated, which leads to fewer issues being introduced when using such models.
## Execution
### Execution Example
Install the repo [mbxp-exec-eval](https://github.com/amazon-science/mbxp-exec-eval) to execute generations or canonical solutions for the prompts from this dataset.
```python
>>> from datasets import load_dataset
>>> from mxeval.execution import check_correctness
>>> humaneval_python = load_dataset("mxeval/multi-humaneval", "python", split="test")
>>> example_problem = humaneval_python[0]
>>> check_correctness(example_problem, example_problem["canonical_solution"], timeout=20.0)
{'task_id': 'HumanEval/0', 'passed': True, 'result': 'passed', 'completion_id': None, 'time_elapsed': 9.636878967285156}
```
### Considerations for Using the Data
Make sure to sandbox the execution environment.
### Dataset Curators
AWS AI Labs
### Licensing Information
[LICENSE](https://huggingface.co/datasets/mxeval/multi-humaneval/blob/main/multi-humaneval-LICENSE) <br>
[THIRD PARTY LICENSES](https://huggingface.co/datasets/mxeval/multi-humaneval/blob/main/THIRD_PARTY_LICENSES)
### Citation Information
```
@article{mbxp_athiwaratkun2022,
title = {Multi-lingual Evaluation of Code Generation Models},
author = {Athiwaratkun, Ben and
Gouda, Sanjay Krishna and
Wang, Zijian and
Li, Xiaopeng and
Tian, Yuchen and
Tan, Ming
and Ahmad, Wasi Uddin and
Wang, Shiqi and
Sun, Qing and
Shang, Mingyue and
Gonugondla, Sujan Kumar and
Ding, Hantian and
Kumar, Varun and
Fulton, Nathan and
Farahani, Arash and
Jain, Siddhartha and
Giaquinto, Robert and
Qian, Haifeng and
Ramanathan, Murali Krishna and
Nallapati, Ramesh and
Ray, Baishakhi and
Bhatia, Parminder and
Sengupta, Sudipta and
Roth, Dan and
Xiang, Bing},
doi = {10.48550/ARXIV.2210.14868},
url = {https://arxiv.org/abs/2210.14868},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
### Contributions
[skgouda@](https://github.com/sk-g) [benathi@](https://github.com/benathi)
|
mxeval/multi-humaneval
|
[
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"mxeval",
"code-generation",
"multi-humaneval",
"humaneval",
"arxiv:2210.14868",
"region:us"
] |
2023-03-14T21:37:18+00:00
|
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-generation"], "pretty_name": "multi-humaneval", "dataset_info": {"features": [{"name": "task_id", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "test", "dtype": "string"}, {"name": "entry_point", "dtype": "string"}], "splits": [{"name": "multi-humaneval_python", "num_bytes": 165716, "num_examples": 164}], "download_size": 67983, "dataset_size": 165716}, "tags": ["mxeval", "code-generation", "multi-humaneval", "humaneval"]}
|
2023-03-20T19:20:48+00:00
|
e61d63a2ed66985389a754782e55703561cc359c
|
# MathQA-X
## Table of Contents
- [MathQA-X](#MathQA-X)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#related-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Executional Correctness](#execution)
- [Execution Example](#execution-example)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
# MathQA-X
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/amazon-science/mbxp-exec-eval)
- **Paper:** [Multi-lingual Evaluation of Code Generation Models](https://openreview.net/forum?id=Bo7eeXm6An8)
### Dataset Summary
This repository contains code to perform execution-based multi-lingual evaluation of code generation capabilities and the corresponding data,
namely a multi-lingual benchmark MBXP, multi-lingual MathQA and multi-lingual HumanEval.
<br>Results and findings can be found in the paper ["Multi-lingual Evaluation of Code Generation Models"](https://arxiv.org/abs/2210.14868).
### Related Tasks and Leaderboards
* [Multi-HumanEval](https://huggingface.co/datasets/mxeval/multi-humaneval)
* [MBXP](https://huggingface.co/datasets/mxeval/mbxp)
* [MathQA-X](https://huggingface.co/datasets/mxeval/mathqa-x)
### Languages
The programming problems are written in multiple programming languages and contain English natural text in comments and docstrings.
## Dataset Structure
To lookup currently supported datasets
```python
from datasets import get_dataset_config_names
get_dataset_config_names("mxeval/mathqa-x")
['python', 'java', 'javascript']
```
To load a specific dataset and language
```python
from datasets import load_dataset
load_dataset("mxeval/mathqa-x", "python")
DatasetDict({
test: Dataset({
features: ['task_id', 'language', 'prompt', 'test', 'entry_point', 'canonical_solution'],
num_rows: 1883
})
})
```
### Data Instances
An example of a dataset instance:
```python
{
"task_id": "MathQA/0",
"language": "python",
"prompt": "def problem():\n \"\"\"\n a shopkeeper sold an article offering a discount of 5 % and earned a profit of 31.1 % . what would have been the percentage of profit earned if no discount had been offered ? n0 = 5.0 n1 = 31.1\n \"\"\"\n",
"test": "import math\ndef compare(x, y):\n return math.fabs(x-y)<1e-8\ncandidate = problem\nassert compare(candidate(), 38.0)\ndef check(x): pass\n",
"entry_point": "problem",
"canonical_solution": " n0 = 5.0\n n1 = 31.1\n t0 = n1 + 100.0\n t1 = 100.0 - n0\n t2 = t0 * 100.0\n t3 = t2 / t1\n answer = t3 - 100.0\n return answer\n"
}
```
### Data Fields
- `task_id`: identifier for the data sample
- `prompt`: input for the model containing function header and docstrings
- `canonical_solution`: solution for the problem in the `prompt`
- `description`: task description
- `test`: contains function to test generated code for correctness
- `entry_point`: entry point for test
- `language`: programming language identifier used to select the appropriate subprocess call for program execution
### Data Splits
- MathQA-X
- Python
- Java
- Javascript
## Dataset Creation
### Curation Rationale
Since code generation models are often trained on dumps of GitHub, a dataset not included in the dump was necessary to properly evaluate the model. However, since this dataset was published on GitHub, it is likely to be included in future dumps.
### Personal and Sensitive Information
None.
### Social Impact of Dataset
With this dataset, code-generating models can be better evaluated, which leads to fewer issues being introduced when using such models.
## Execution
### Execution Example
Install the repo [mbxp-exec-eval](https://github.com/amazon-science/mbxp-exec-eval) to execute generations or canonical solutions for the prompts from this dataset.
```python
>>> from datasets import load_dataset
>>> from mxeval.execution import check_correctness
>>> mathqa_python = load_dataset("mxeval/mathqa-x", "python", split="test")
>>> example_problem = mathqa_python[0]
>>> check_correctness(example_problem, example_problem["canonical_solution"], timeout=20.0)
{'task_id': 'MathQA/0', 'passed': True, 'result': 'passed', 'completion_id': None, 'time_elapsed': 9.673357009887695}
```
### Considerations for Using the Data
Make sure to sandbox the execution environment.
### Dataset Curators
AWS AI Labs
### Licensing Information
[LICENSE](https://huggingface.co/datasets/mxeval/mathqa-x/blob/main/mathqa-x-LICENSE) <br>
[THIRD PARTY LICENSES](https://huggingface.co/datasets/mxeval/mathqa-x/blob/main/THIRD_PARTY_LICENSES)
### Citation Information
```
@inproceedings{
athiwaratkun2023multilingual,
title={Multi-lingual Evaluation of Code Generation Models},
author={Ben Athiwaratkun and Sanjay Krishna Gouda and Zijian Wang and Xiaopeng Li and Yuchen Tian and Ming Tan and Wasi Uddin Ahmad and Shiqi Wang and Qing Sun and Mingyue Shang and Sujan Kumar Gonugondla and Hantian Ding and Varun Kumar and Nathan Fulton and Arash Farahani and Siddhartha Jain and Robert Giaquinto and Haifeng Qian and Murali Krishna Ramanathan and Ramesh Nallapati and Baishakhi Ray and Parminder Bhatia and Sudipta Sengupta and Dan Roth and Bing Xiang},
booktitle={The Eleventh International Conference on Learning Representations },
year={2023},
url={https://openreview.net/forum?id=Bo7eeXm6An8}
}
```
### Contributions
[skgouda@](https://github.com/sk-g) [benathi@](https://github.com/benathi)
|
mxeval/mathqa-x
|
[
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"mathqa-x",
"mathqa",
"mxeval",
"arxiv:2210.14868",
"region:us"
] |
2023-03-14T21:41:40+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "pretty_name": "mbxp", "tags": ["mathqa-x", "mathqa", "mxeval"]}
|
2023-03-20T19:21:12+00:00
|
ac57b80a28ac0259875110894f8a238cf84dcc14
|
# MxEval
**M**ultilingual E**x**ecution **Eval**uation
## Table of Contents
- [MxEval](#MxEval)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Executional Correctness](#execution)
- [Execution Example](#execution-example)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/amazon-science/mbxp-exec-eval)
- **Paper:** [Multi-lingual Evaluation of Code Generation Models](https://openreview.net/forum?id=Bo7eeXm6An8)
### Dataset Summary
This repository contains code to perform execution-based multi-lingual evaluation of code generation capabilities and the corresponding data,
namely a multi-lingual benchmark MBXP, multi-lingual MathQA and multi-lingual HumanEval.
<br>Results and findings can be found in the paper ["Multi-lingual Evaluation of Code Generation Models"](https://arxiv.org/abs/2210.14868).
### Supported Tasks and Leaderboards
* [MBXP](https://huggingface.co/datasets/mxeval/mbxp)
* [Multi-HumanEval](https://huggingface.co/datasets/mxeval/multi-humaneval)
* [MathQA-X](https://huggingface.co/datasets/mxeval/mathqa-x)
### Languages
The programming problems are written in multiple programming languages and contain English natural text in comments and docstrings.
## Dataset Structure
To lookup currently supported datasets
```python
from datasets import get_dataset_config_names
get_dataset_config_names("mxeval/mxeval")
['mathqa-x', 'mbxp', 'multi-humaneval']
```
To load a specific dataset and language
```python
from datasets import load_dataset
load_dataset("mxeval/mxeval", "mbxp", split="python")
Dataset({
features: ['task_id', 'language', 'prompt', 'test', 'entry_point', 'description', 'canonical_solution'],
num_rows: 974
})
```
### Data Instances
An example of a dataset instance:
```python
{
"task_id": "MBSCP/6",
"language": "scala",
"prompt": "object Main extends App {\n /**\n * You are an expert Scala programmer, and here is your task.\n * * Write a Scala function to check whether the two numbers differ at one bit position only or not.\n *\n * >>> differAtOneBitPos(13, 9)\n * true\n * >>> differAtOneBitPos(15, 8)\n * false\n * >>> differAtOneBitPos(2, 4)\n * false\n */\n def differAtOneBitPos(a : Int, b : Int) : Boolean = {\n",
"test": "\n\n var arg00 : Int = 13\n var arg01 : Int = 9\n var x0 : Boolean = differAtOneBitPos(arg00, arg01)\n var v0 : Boolean = true\n assert(x0 == v0, \"Exception -- test case 0 did not pass. x0 = \" + x0)\n\n var arg10 : Int = 15\n var arg11 : Int = 8\n var x1 : Boolean = differAtOneBitPos(arg10, arg11)\n var v1 : Boolean = false\n assert(x1 == v1, \"Exception -- test case 1 did not pass. x1 = \" + x1)\n\n var arg20 : Int = 2\n var arg21 : Int = 4\n var x2 : Boolean = differAtOneBitPos(arg20, arg21)\n var v2 : Boolean = false\n assert(x2 == v2, \"Exception -- test case 2 did not pass. x2 = \" + x2)\n\n\n}\n",
"entry_point": "differAtOneBitPos",
"description": "Write a Scala function to check whether the two numbers differ at one bit position only or not."
}
```
### Data Fields
- `task_id`: identifier for the data sample
- `prompt`: input for the model containing function header and docstrings
- `canonical_solution`: solution for the problem in the `prompt`
- `description`: task description
- `test`: contains function to test generated code for correctness
- `entry_point`: entry point for test
- `language`: programming language identifier used to select the appropriate subprocess call for program execution
### Data Splits
- HumanXEval
- Python
- Java
- JavaScript
- Csharp
- CPP
- Go
- Kotlin
- PHP
- Perl
- Ruby
- Swift
- Scala
- MBXP
- Python
- Java
- JavaScript
- TypeScript
- Csharp
- CPP
- Go
- Kotlin
- PHP
- Perl
- Ruby
- Swift
- Scala
- MathQA
- Python
- Java
- JavaScript
## Dataset Creation
### Curation Rationale
Since code generation models are often trained on dumps of GitHub, a dataset not included in the dump was necessary to properly evaluate the model. However, since this dataset was published on GitHub, it is likely to be included in future dumps.
### Personal and Sensitive Information
None.
### Social Impact of Dataset
With this dataset, code-generating models can be better evaluated, which leads to fewer issues being introduced when using such models.
### Dataset Curators
AWS AI Labs
## Execution
### Execution Example
Install the repo [mbxp-exec-eval](https://github.com/amazon-science/mbxp-exec-eval) to execute generations or canonical solutions for the prompts from this dataset.
```python
>>> from datasets import load_dataset
>>> from mxeval.execution import check_correctness
>>> mbxp_python = load_dataset("mxeval/mxeval", "mbxp", split="python")
>>> example_problem = mbxp_python[0]
>>> check_correctness(example_problem, example_problem["canonical_solution"], timeout=20.0)
{'task_id': 'MBPP/1', 'passed': True, 'result': 'passed', 'completion_id': None, 'time_elapsed': 10.582208633422852}
```
### Considerations for Using the Data
Make sure to sandbox the execution environment since generated code samples can be harmful.
### Licensing Information
[LICENSE](https://huggingface.co/datasets/mxeval/mxeval/blob/main/LICENSE) <br>
[THIRD PARTY LICENSES](https://huggingface.co/datasets/mxeval/mxeval/blob/main/THIRD_PARTY_LICENSES)
# Citation Information
```
@article{mbxp_athiwaratkun2022,
title = {Multi-lingual Evaluation of Code Generation Models},
author = {Athiwaratkun, Ben and
Gouda, Sanjay Krishna and
Wang, Zijian and
Li, Xiaopeng and
Tian, Yuchen and
Tan, Ming
and Ahmad, Wasi Uddin and
Wang, Shiqi and
Sun, Qing and
Shang, Mingyue and
Gonugondla, Sujan Kumar and
Ding, Hantian and
Kumar, Varun and
Fulton, Nathan and
Farahani, Arash and
Jain, Siddhartha and
Giaquinto, Robert and
Qian, Haifeng and
Ramanathan, Murali Krishna and
Nallapati, Ramesh and
Ray, Baishakhi and
Bhatia, Parminder and
Sengupta, Sudipta and
Roth, Dan and
Xiang, Bing},
doi = {10.48550/ARXIV.2210.14868},
url = {https://arxiv.org/abs/2210.14868},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
# Contributions
[skgouda@](https://github.com/sk-g) [benathi@](https://github.com/benathi)
|
mxeval/mxeval
|
[
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"mxeval",
"code-generation",
"mbxp",
"multi-humaneval",
"mathqax",
"arxiv:2210.14868",
"region:us"
] |
2023-03-14T22:25:01+00:00
|
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-generation"], "pretty_name": "mxeval", "dataset_info": {"features": [{"name": "task_id", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "test", "dtype": "string"}, {"name": "entry_point", "dtype": "string"}], "splits": [{"name": "multilingual-humaneval_python", "num_bytes": 165716, "num_examples": 164}], "download_size": 67983, "dataset_size": 165716}, "tags": ["mxeval", "code-generation", "mbxp", "multi-humaneval", "mathqax"]}
|
2023-03-27T17:42:12+00:00
|
7eab8aca840c2f33bff342a7e0755c1922ab6a93
|
lauralex/dbdicons
|
[
"license:mit",
"region:us"
] |
2023-03-14T23:06:25+00:00
|
{"license": "mit"}
|
2023-03-14T23:06:25+00:00
|
|
88619754b4b9e215fbae77aa50d710e7967a1317
|
# Dataset Card for "super_glue_text_to_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hanmaegeo/super_glue_text_to_text
|
[
"region:us"
] |
2023-03-14T23:16:20+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 36955412, "num_examples": 29694}], "download_size": 11766196, "dataset_size": 36955412}}
|
2023-03-14T23:16:22+00:00
|
f7501645e7d46c6e76f40ea6a932923917504dcd
|
# Dataset Card for "mscoco_20k_unique_imgs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
JotDe/mscoco_20k_unique_imgs
|
[
"region:us"
] |
2023-03-14T23:22:16+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1690346959.4, "num_examples": 19998}], "download_size": 1686678854, "dataset_size": 1690346959.4}}
|
2023-03-15T11:57:21+00:00
|
25d7c709074834adafe2ab64fcbbb28c7ea0aff4
|
# Leyzer: A Dataset for Multilingual Virtual Assistants
Leyzer is a multilingual text corpus designed to study multilingual and cross-lingual natural language understanding (NLU) models and the strategies of localization of
virtual assistants. It consists of 20 domains across three languages: English, Spanish and Polish, with 186 intents and a wide range of sample sizes, ranging from 1 to 672 sentences per intent. For more statistics, please refer to the wiki.
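A minimal loading sketch (assuming the Hugging Face `datasets` library; the available configuration names are not listed here, so they are queried first rather than assumed):
```python
from datasets import get_dataset_config_names, load_dataset

# Discover the available configurations (e.g. per-language or per-locale subsets).
configs = get_dataset_config_names("cartesinus/leyzer-fedcsis")
print(configs)

# Load the first configuration and inspect its splits.
dataset = load_dataset("cartesinus/leyzer-fedcsis", configs[0])
print(dataset)
```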
## Citation
If you use this dataset, please cite the following:
```
@inproceedings{kubis2023caiccaic,
author={Marek Kubis and Paweł Skórzewski and Marcin Sowański and Tomasz Ziętkiewicz},
pages={1319–1324},
title={Center for Artificial Intelligence Challenge on Conversational AI Correctness},
booktitle={Proceedings of the 18th Conference on Computer Science and Intelligence Systems},
year={2023},
doi={10.15439/2023B6058},
url={http://dx.doi.org/10.15439/2023B6058},
volume={35},
series={Annals of Computer Science and Information Systems}
}
```
|
cartesinus/leyzer-fedcsis
|
[
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"language:pl",
"language:es",
"license:cc-by-4.0",
"natural-language-understanding",
"region:us"
] |
2023-03-15T00:12:27+00:00
|
{"language": ["en", "pl", "es"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "tags": ["natural-language-understanding"]}
|
2023-10-20T08:28:32+00:00
|
39512d914ce18ec9c1a11d14eae9f1af3896ad50
|
# Dataset Card for "6k-sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nbtpj/6k-sample
|
[
"region:us"
] |
2023-03-15T00:57:45+00:00
|
{"dataset_info": {"features": [{"name": "bucketized_user_age", "dtype": "float32"}, {"name": "movie_genres", "sequence": "int64"}, {"name": "movie_id", "dtype": "binary"}, {"name": "movie_title", "dtype": "binary"}, {"name": "timestamp", "dtype": "int64"}, {"name": "user_gender", "dtype": "bool"}, {"name": "user_id", "dtype": "binary"}, {"name": "user_occupation_label", "dtype": "int64"}, {"name": "user_occupation_text", "dtype": "binary"}, {"name": "user_rating", "dtype": "float32"}, {"name": "user_zip_code", "dtype": "binary"}, {"name": "bio prompt", "dtype": "string"}, {"name": "history prompt", "dtype": "string"}, {"name": "history overview prompt", "dtype": "string"}, {"name": "job-group prompt", "dtype": "string"}, {"name": "age-group prompt", "dtype": "string"}, {"name": "gender-group prompt", "dtype": "string"}, {"name": "region-group prompt", "dtype": "string"}, {"name": "cross-movie prompt", "dtype": "string"}, {"name": "cross-cate prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39211129, "num_examples": 6000}], "download_size": 5318113, "dataset_size": 39211129}}
|
2023-03-15T00:58:07+00:00
|
a5a7e1f3af61d3f3d27fe3ddc66b6e89c06561d8
|
# Dataset Card for "movielens-1m-ratings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nbtpj/movielens-1m-ratings
|
[
"region:us"
] |
2023-03-15T01:01:22+00:00
|
{"dataset_info": {"features": [{"name": "bucketized_user_age", "dtype": "float32"}, {"name": "movie_genres", "sequence": "int64"}, {"name": "movie_id", "dtype": "binary"}, {"name": "movie_title", "dtype": "binary"}, {"name": "timestamp", "dtype": "int64"}, {"name": "user_gender", "dtype": "bool"}, {"name": "user_id", "dtype": "binary"}, {"name": "user_occupation_label", "dtype": "int64"}, {"name": "user_occupation_text", "dtype": "binary"}, {"name": "user_rating", "dtype": "float32"}, {"name": "user_zip_code", "dtype": "binary"}], "splits": [{"name": "train", "num_bytes": 116192936, "num_examples": 1000209}], "download_size": 43879407, "dataset_size": 116192936}}
|
2023-03-15T01:02:27+00:00
|
0745c126378be671853e28e6400c3d8bdc1b7ee5
|
# Dataset Card for "movielens-1m-movies"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nbtpj/movielens-1m-movies
|
[
"region:us"
] |
2023-03-15T01:02:29+00:00
|
{"dataset_info": {"features": [{"name": "movie_genres", "sequence": "int64"}, {"name": "movie_id", "dtype": "binary"}, {"name": "movie_title", "dtype": "binary"}], "splits": [{"name": "train", "num_bytes": 206339, "num_examples": 3883}], "download_size": 111390, "dataset_size": 206339}}
|
2023-03-15T01:02:41+00:00
|
3961207dcf812971661b65508dc27a05a6fffbb9
|
# Dataset Card for "Movies_and_TV"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nbtpj/Movies_and_TV
|
[
"region:us"
] |
2023-03-15T01:02:42+00:00
|
{"dataset_info": {"features": [{"name": "overall", "dtype": "float64"}, {"name": "verified", "dtype": "bool"}, {"name": "reviewTime", "dtype": "string"}, {"name": "reviewerID", "dtype": "string"}, {"name": "asin", "dtype": "string"}, {"name": "style", "dtype": "string"}, {"name": "reviewerName", "dtype": "string"}, {"name": "reviewText", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "unixReviewTime", "dtype": "int64"}, {"name": "vote", "dtype": "string"}, {"name": "image", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 4058038162, "num_examples": 8765568}], "download_size": 2295911945, "dataset_size": 4058038162}}
|
2023-03-15T01:32:38+00:00
|
fe43ac768e0e0df525c5925edb9c27c186730b3b
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** [https://usefusion.app](https://usefusion.app)
- **Repository:** [https://github.com/NEUROFUSIONInc](https://github.com/NEUROFUSIONInc)
- **Point of Contact:** [email protected]
### Dataset Summary
This is a dataset of EEG data & derived metrics recorded on the [Fusion](https://usefusion.app) platform from a single participant over the course of a week.
Task: eyes closed for 10 minutes at least twice a day. The participant also gave a short summary at the start of every recording, stored in [events.csv](./events.csv).
Device: [Neurosity Crown](https://neurosity.co) - 8 channels [CP3, C3, F5, PO3, PO4, F6, C4, CP4]
## Dataset Structure
### Data Instances
All datasets are time series with a `unixTimestamp` column, generated using the [Neurosity brainwaves API](https://docs.neurosity.co/docs/api/brainwaves):
- rawBrainwaves: voltage readings across EEG channels
- signalQuality: standard deviation values & label (great, good, poor, noContact) per channel
- powerByBand: computed EEG power per band and channel (e.g. CP3_delta, CP3_theta, CP3_alpha, CP3_beta, CP3_gamma); see the sketch after this list
- focus: prediction of user [focus based on gamma waves](https://docs.neurosity.co/docs/api/focus).
- calm: prediction of user [calm based on alpha waves](https://docs.neurosity.co/docs/api/calm).
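As a hedged example, the sketch below loads one powerByBand recording with pandas and averages alpha power across channels; the file name and exact column names follow the pattern described above and are assumptions rather than guarantees of the export format:
```python
import pandas as pd

# Hypothetical file name following the "<dataset>_<unixTimestamp>.csv" pattern.
df = pd.read_csv("powerByBand_1678848000.csv")

channels = ["CP3", "C3", "F5", "PO3", "PO4", "F6", "C4", "CP4"]
alpha_cols = [f"{ch}_alpha" for ch in channels]

# Mean alpha power per channel over the whole recording.
print(df[alpha_cols].mean())

# Alpha time course averaged across channels, indexed by time.
alpha = df.set_index("unixTimestamp")[alpha_cols].mean(axis=1)
print(alpha.head())
```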
### Data Splits
Each dataset file is suffixed with `_unixTimestamp`, which represents the time of recording.
## Additional Information
### Dataset Curators
[NEUROFUSION Research Inc.](https://usefusion.app)
|
neurofusion/eeg-restingstate
|
[
"language:en",
"license:apache-2.0",
"neuro",
"eeg",
"powerspectra",
"focus",
"calm",
"longitudinal data",
"doi:10.57967/hf/0745",
"region:us"
] |
2023-03-15T01:27:19+00:00
|
{"language": ["en"], "license": "apache-2.0", "tags": ["neuro", "eeg", "powerspectra", "focus", "calm", "longitudinal data"]}
|
2023-05-15T16:14:37+00:00
|
4d0c4b37e0a7c8fbb900cfb8081a1924fdff6139
|
# Dataset Card for "Movies_and_TV_meta"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nbtpj/Movies_and_TV_meta
|
[
"region:us"
] |
2023-03-15T01:32:39+00:00
|
{"dataset_info": {"features": [{"name": "category", "dtype": "string"}, {"name": "tech1", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "fit", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "also_buy", "dtype": "string"}, {"name": "tech2", "dtype": "string"}, {"name": "brand", "dtype": "string"}, {"name": "feature", "dtype": "string"}, {"name": "rank", "dtype": "string"}, {"name": "also_view", "dtype": "string"}, {"name": "main_cat", "dtype": "string"}, {"name": "similar_item", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "price", "dtype": "string"}, {"name": "asin", "dtype": "string"}, {"name": "imageURL", "dtype": "string"}, {"name": "imageURLHighRes", "dtype": "string"}, {"name": "details", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 292562315, "num_examples": 203766}], "download_size": 152902943, "dataset_size": 292562315}}
|
2023-03-15T01:34:17+00:00
|
520b0e6f202c51d961d5cf6bdadf0476c846f0c4
|
# Dataset Card for "wikisource-green"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Zombely/wikisource-green
|
[
"region:us"
] |
2023-03-15T02:03:19+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train_1", "num_bytes": 15342818708.456, "num_examples": 9816}, {"name": "train_2", "num_bytes": 13234327199.457, "num_examples": 9997}, {"name": "train_3", "num_bytes": 8814747830.88, "num_examples": 9935}, {"name": "train_4", "num_bytes": 10839226390.145, "num_examples": 9995}, {"name": "train_5", "num_bytes": 12414635965.0, "num_examples": 10000}, {"name": "train_6", "num_bytes": 5911580759.0, "num_examples": 10000}, {"name": "train_7", "num_bytes": 11420080854.0, "num_examples": 10000}, {"name": "train_8", "num_bytes": 18080629271.0, "num_examples": 10000}, {"name": "train_9", "num_bytes": 11348011360.0, "num_examples": 10000}, {"name": "train_10", "num_bytes": 14141957301.0, "num_examples": 10000}, {"name": "train_11", "num_bytes": 9983910604.0, "num_examples": 10000}, {"name": "train_12", "num_bytes": 13105253749.0, "num_examples": 10000}, {"name": "train_13", "num_bytes": 15681320595.0, "num_examples": 10000}, {"name": "train_14", "num_bytes": 14896725472.0, "num_examples": 10000}, {"name": "train_15", "num_bytes": 11493364396.927, "num_examples": 9987}, {"name": "validation", "num_bytes": 4487934740.612, "num_examples": 4077}], "download_size": 5330245163, "dataset_size": 191196525196.477}}
|
2023-03-18T11:50:26+00:00
|
144d00f59b39b64429bc20269028b96897ad622c
|
debugzxcv/nana7mi
|
[
"license:unknown",
"region:us"
] |
2023-03-15T02:59:38+00:00
|
{"license": "unknown"}
|
2023-03-15T03:13:27+00:00
|
|
36f4c6dd2e2b7c4d2a7f3355738e453f766d1114
|
# Dataset Card for "oig"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
elricwan/oig
|
[
"region:us"
] |
2023-03-15T03:02:02+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 47893346745, "num_examples": 14430253}], "download_size": 25698713382, "dataset_size": 47893346745}}
|
2023-03-15T03:38:15+00:00
|
d6afc27243f60c9d0084f84997da5106edfbed73
|
# Dataset Card for "tatoeba-en-tgl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jmbt22/tatoeba-en-tgl
|
[
"region:us"
] |
2023-03-15T03:54:33+00:00
|
{"dataset_info": {"features": [{"name": "translation.en", "dtype": "string"}, {"name": "translation.tl", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 539187, "num_examples": 7305}], "download_size": 357015, "dataset_size": 539187}}
|
2023-03-15T03:54:47+00:00
|
66035498c77bd00abdf62d495763f687010f9db4
|
11
|
Lilithchouy/1111bb
|
[
"license:afl-3.0",
"region:us"
] |
2023-03-15T05:08:17+00:00
|
{"license": "afl-3.0"}
|
2023-03-15T06:34:43+00:00
|
f4acbed6a153027f190538d7acf9b383ce9508d6
|
# Dataset Card for "bad_code_to_good_code_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
michaelnath/bad_code_to_good_code_dataset
|
[
"region:us"
] |
2023-03-15T05:38:03+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2905897365, "num_examples": 2786238}], "download_size": 550189166, "dataset_size": 2905897365}}
|
2023-03-15T05:39:28+00:00
|
4087d83f6e645fa4a4915981e55d8c0150dceff4
|
RiniPL/Dementia_Dataset
|
[
"task_categories:image-classification",
"language:en",
"license:ecl-2.0",
"code",
"region:us"
] |
2023-03-15T05:57:38+00:00
|
{"language": ["en"], "license": "ecl-2.0", "task_categories": ["image-classification"], "pretty_name": "Dementia", "tags": ["code"]}
|
2023-03-15T07:48:14+00:00
|
|
2ea43df97b572e7ba5cb06408d0bfd5653cc0506
|
# Dataset Card for "bad_code_to_good_code_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joetey/bad_code_to_good_code_dataset
|
[
"region:us"
] |
2023-03-15T06:05:27+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 703072, "num_examples": 589}], "download_size": 17498, "dataset_size": 703072}}
|
2023-03-15T09:57:21+00:00
|
e2e72afc7ffe4a4e4fe4a04ebf0c66765b43914a
|
I found this dataset on my hard drive; if I remember correctly, I got it from the source mentioned in the paper:
"Claim extraction from text using transfer learning" by Acharya Ashish Prabhakar, Salar Mohtaj, and Sebastian Möller
https://aclanthology.org/2020.icon-main.39/
The GitHub repo with the data appears to be down.
It extends the FEVER dataset with non-claims for training claim detectors.
|
KnutJaegersberg/FEVER_claim_extraction
|
[
"license:mit",
"argument mining",
"region:us"
] |
2023-03-15T06:21:05+00:00
|
{"license": "mit", "tags": ["argument mining"]}
|
2023-03-15T06:25:27+00:00
|
8a7e4ef8c1b49d8176975287a3389b6cb00a8710
|
# Dataset Card for "my_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
claudehotline/my_dataset
|
[
"region:us"
] |
2023-03-15T07:32:07+00:00
|
{"dataset_info": {"features": [{"name": "data", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 80000, "num_examples": 10000}], "download_size": 96279, "dataset_size": 80000}}
|
2023-03-15T07:32:16+00:00
|
6f31ad6040d4eb7200ac9545a551d8f07692db41
|
# Dataset Card for "miniwob_plusplus_T5_randomized_ref2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LucasThil/miniwob_plusplus_T5_randomized_ref2
|
[
"region:us"
] |
2023-03-15T08:11:38+00:00
|
{"dataset_info": {"features": [{"name": "history_episodes", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "html_snippets", "dtype": "string"}, {"name": "actions", "dtype": "string"}, {"name": "refs", "dtype": "int64"}, {"name": "keydown_texts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 267456938, "num_examples": 60321}], "download_size": 0, "dataset_size": 267456938}}
|
2023-03-15T10:23:31+00:00
|
46d6ddb6fd87f90ae33e33abc30babd7d0f745c0
|
shivi/video-demo
|
[
"license:mit",
"region:us"
] |
2023-03-15T09:18:09+00:00
|
{"license": "mit"}
|
2023-04-21T13:52:42+00:00
|
|
17b3eaf95a749faad4f56d2ed74c9253a9aca2a7
|
This data is from the GitHub repo https://github.com/Jiaxin-Pei/Certainty-in-Science-Communication, which accompanies the paper "Measuring Sentence-Level and Aspect-Level (Un)certainty in Science Communications" by Jiaxin Pei and David Jurgens.
I put it here for later use, to train an ML model that estimates claim certainty.
|
KnutJaegersberg/science_finding_sentence_certainty
|
[
"license:mit",
"region:us"
] |
2023-03-15T09:35:22+00:00
|
{"license": "mit"}
|
2023-03-15T09:37:50+00:00
|
ac5701d3e6eb8dac3f592da26b85b6be0b5e0289
|
Dataset generated from the Cyrillic train set using StackMix
============================================================
Number of images: 3,700,269
Sources:
* [Cyrillic dataset](https://www.kaggle.com/datasets/constantinwerner/cyrillic-handwriting-dataset)
* [StackMix code](https://github.com/ai-forever/StackMix-OCR)
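A loading sketch with the Hugging Face `datasets` library follows; it assumes the repository loads directly through the standard API, and it prints the features rather than assuming any column names.
```python
# Minimal sketch: load the dataset and inspect its features before relying on them.
from datasets import load_dataset

ds = load_dataset("nastyboget/stackmix_cyrillic_large", split="train")
print(ds)  # shows the actual feature names and types

example = ds[0]
print({k: type(v) for k, v in example.items()})
```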
|
nastyboget/stackmix_cyrillic_large
|
[
"task_categories:image-to-text",
"size_categories:1M<n<10M",
"language:ru",
"license:mit",
"region:us"
] |
2023-03-15T09:48:56+00:00
|
{"language": ["ru"], "license": "mit", "size_categories": ["1M<n<10M"], "task_categories": ["image-to-text"]}
|
2023-03-20T10:19:24+00:00
|
b1ac0ab96b17868db1f992f0939748b23e230422
|
# Dataset Card for "DataAASR"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
MohammedNasri/DataAASR
|
[
"region:us"
] |
2023-03-15T10:01:07+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 126461196457.672, "num_examples": 388054}, {"name": "test", "num_bytes": 304561718.0, "num_examples": 10440}], "download_size": 124152883448, "dataset_size": 126765758175.672}}
|
2023-03-15T12:09:53+00:00
|
f4612993afb8e96b5258fa38a8f0dd2b6cb31267
|
# Dataset Card for "imdb_genre_prediction2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
james-burton/imdb_genre_prediction2
|
[
"region:us"
] |
2023-03-15T10:25:45+00:00
|
{"dataset_info": {"features": [{"name": "Rank", "dtype": "int64"}, {"name": "Title", "dtype": "string"}, {"name": "Description", "dtype": "string"}, {"name": "Director", "dtype": "string"}, {"name": "Actors", "dtype": "string"}, {"name": "Year", "dtype": "int64"}, {"name": "Runtime (Minutes)", "dtype": "int64"}, {"name": "Rating", "dtype": "float64"}, {"name": "Votes", "dtype": "int64"}, {"name": "Revenue (Millions)", "dtype": "float64"}, {"name": "Metascore", "dtype": "float64"}, {"name": "Genre_is_Drama", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 224399.15, "num_examples": 680}, {"name": "validation", "num_bytes": 39599.85, "num_examples": 120}, {"name": "test", "num_bytes": 65392, "num_examples": 200}], "download_size": 0, "dataset_size": 329391.0}}
|
2023-03-15T15:52:23+00:00
|
268ab17c0769d4df99fe71c6e2be3939b8bdeb8f
|
cg1177/TAD-features
|
[
"license:apache-2.0",
"region:us"
] |
2023-03-15T10:38:36+00:00
|
{"license": "apache-2.0"}
|
2023-03-15T10:38:36+00:00
|
|
acb1c4d1872712c9c8cc017192ca48993263f73c
|
# Dataset Card for "crc_image_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
BettercallSaulGM/crc_image_dataset
|
[
"region:us"
] |
2023-03-15T11:44:51+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 130747958.0, "num_examples": 1000}], "download_size": 0, "dataset_size": 130747958.0}}
|
2023-03-20T01:57:45+00:00
|
69b7b70f1f1d4bdbdeca38aa37d4191ceb4bdddd
|
# Dataset Card for "mscoco_200_unique_imgs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
JotDe/mscoco_200_unique_imgs
|
[
"region:us"
] |
2023-03-15T11:55:23+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4761831.8, "num_examples": 199}], "download_size": 4745758, "dataset_size": 4761831.8}}
|
2023-03-15T11:55:38+00:00
|