sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---
c3c9f056669607abe69df3a3dfa3952d5f95f76d
|
# Dataset Card for "UA_speech_noisereduced_3c3p"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AravindVadlapudi02/UA_speech_noisereduced_3c3p
|
[
"region:us"
] |
2023-03-18T11:11:56+00:00
|
{"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "healthy control", "1": "pathology"}}}}, {"name": "input_features", "sequence": {"sequence": "float32"}}], "splits": [{"name": "train", "num_bytes": 1152398400, "num_examples": 1200}, {"name": "test", "num_bytes": 4214897148, "num_examples": 4389}], "download_size": 621018100, "dataset_size": 5367295548}}
|
2023-03-18T11:13:14+00:00
|
a36959abd4494c05e36134fb77f8a089a1ea6b18
|
## Dataset description
This dataset was used to fine-tune this [model](https://huggingface.co/keras-dreambooth/dreambooth_diffusion_hokusai)
## Demo
You can try with this [demo](https://huggingface.co/spaces/keras-dreambooth/dreambooth_diffusion_hokusai)
## Intended uses & limitations
Images in the style of the artist Hokusai
|
keras-dreambooth/hokusai-style
|
[
"size_categories:n<1K",
"license:apache-2.0",
"keras-dreambooth",
"consentful",
"diffusers",
"text-to-image",
"region:us"
] |
2023-03-18T11:37:15+00:00
|
{"license": "apache-2.0", "size_categories": ["n<1K"], "tags": ["keras-dreambooth", "consentful", "diffusers", "text-to-image"]}
|
2023-03-18T13:48:37+00:00
|
fcc26d2223443bb6aaeff6de903e9a9745b73e11
|
# Dataset Card for "tib_002"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gigant/tib_002
|
[
"region:us"
] |
2023-03-18T11:55:33+00:00
|
{"dataset_info": {"features": [{"name": "doi", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "video_url", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "genre", "dtype": "string"}, {"name": "release_year", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "contributors", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "transcript", "dtype": "string"}, {"name": "transcript_segments", "sequence": [{"name": "id", "dtype": "int32"}, {"name": "seek", "dtype": "int32"}, {"name": "start", "dtype": "float32"}, {"name": "end", "dtype": "float32"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "int32"}, {"name": "temperature", "dtype": "float32"}, {"name": "avg_logprob", "dtype": "float32"}, {"name": "compression_ratio", "dtype": "float32"}, {"name": "no_speech_prob", "dtype": "float32"}]}, {"name": "keyframes", "sequence": [{"name": "slide", "dtype": "string"}, {"name": "frames", "sequence": "int32"}, {"name": "timestamp", "sequence": "float32"}]}, {"name": "language", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1011381643.8712058, "num_examples": 8831}], "download_size": 486130872, "dataset_size": 1011381643.8712058}}
|
2023-03-18T11:56:02+00:00
|
06fcdc6ad8136573cb088df30fe02d0fbd0a1470
|
nsanghi/sample-datasets
|
[
"license:apache-2.0",
"region:us"
] |
2023-03-18T12:37:29+00:00
|
{"license": "apache-2.0"}
|
2023-03-18T14:56:09+00:00
|
|
b21db77e2d2f7ad3c0f36dfc4a8b1dc1572b1de5
|
Akass2002/smartvoice
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-03-18T12:38:30+00:00
|
{"license": "creativeml-openrail-m"}
|
2023-03-18T14:10:02+00:00
|
|
185171ecfe5bf0fbcccb6050f75018757c5834f7
|
# Dataset Card for "eli5-modified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
0x70DA/eli5-modified
|
[
"region:us"
] |
2023-03-18T14:02:29+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 507700481, "num_examples": 272634}, {"name": "test", "num_bytes": 46615755, "num_examples": 24512}, {"name": "validation", "num_bytes": 18567879, "num_examples": 9812}], "download_size": 354225553, "dataset_size": 572884115}}
|
2023-03-18T14:02:56+00:00
|
afecf8fe77dfa0c3309a8eb55cdde5617257db24
|
Sadhguru Quotes collected from https://isha.sadhguru.org/au/en/wisdom/type/quotes
|
Crapp/sadQuotes
|
[
"license:cc",
"doi:10.57967/hf/0454",
"region:us"
] |
2023-03-18T14:02:46+00:00
|
{"license": "cc"}
|
2023-03-18T15:02:11+00:00
|
342d5c6de828f4227cf7d5afa5ffd4ce70a7bc65
|
# Dataset Card for "VQAv2_sample_validation_text_davinci_002_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_text_davinci_002_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_2
|
[
"region:us"
] |
2023-03-18T14:48:13+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0", "num_bytes": 2443, "num_examples": 2}], "download_size": 10579, "dataset_size": 2443}}
|
2023-03-18T15:02:01+00:00
|
f47432f994d515497d02011b1a46bad13f2314eb
|
# Dataset Card for "VQAv2_sample_validation_text_davinci_002_mode_T_A_C_Q_rices_ns_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_text_davinci_002_mode_T_A_C_Q_rices_ns_2
|
[
"region:us"
] |
2023-03-18T14:56:16+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0", "num_bytes": 910, "num_examples": 2}, {"name": "fewshot_1", "num_bytes": 1467, "num_examples": 2}], "download_size": 12244, "dataset_size": 2377}}
|
2023-03-18T14:56:54+00:00
|
1c687f3519ce282fa571b8cb13cf1281c1e1b7b7
|
# Dataset Card for "VQAv2_sample_validation_text_davinci_002_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_text_davinci_002_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_10
|
[
"region:us"
] |
2023-03-18T15:03:43+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0", "num_bytes": 12880, "num_examples": 10}], "download_size": 13332, "dataset_size": 12880}}
|
2023-03-18T15:08:57+00:00
|
fec06f38f5fdf5e3a63424ddb66e6b56dfe5016c
|
# Dataset Card for "VQAv2_sample_validation_text_davinci_003_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_text_davinci_003_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_10
|
[
"region:us"
] |
2023-03-18T15:11:55+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0", "num_bytes": 12831, "num_examples": 10}], "download_size": 13218, "dataset_size": 12831}}
|
2023-03-18T15:11:57+00:00
|
b7e466e5fbac2cc1184df86532dc553e1563ebd5
|
# Dataset Card for "VQAv2_sample_validation_text_davinci_003_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_text_davinci_003_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_100
|
[
"region:us"
] |
2023-03-18T15:13:59+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0", "num_bytes": 131343, "num_examples": 100}], "download_size": 66640, "dataset_size": 131343}}
|
2023-03-18T15:28:47+00:00
|
959b08ae5be0e4d2caac8b5126e492d639817df6
|
# Dataset Card for "VQAv2_sample_validation_google_flan_t5_xxl_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_google_flan_t5_xxl_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_100
|
[
"region:us"
] |
2023-03-18T15:17:41+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "sequence": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0", "num_bytes": 229695, "num_examples": 100}], "download_size": 64707, "dataset_size": 229695}}
|
2023-03-18T15:20:10+00:00
|
eea61651528e8d3ed76a26de35e78bdf8abfbaf5
|
# Dataset Card for "VQAv2_sample_validation_text_davinci_003_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_text_davinci_003_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_200
|
[
"region:us"
] |
2023-03-18T15:32:52+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0", "num_bytes": 255182, "num_examples": 200}], "download_size": 126747, "dataset_size": 255182}}
|
2023-03-18T15:32:54+00:00
|
e77c2a447f766b2df1d62f9110df76aca3287f9e
|
# Dataset Card for "VQAv2_sample_validation_google_flan_t5_xxl_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_google_flan_t5_xxl_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_200
|
[
"region:us"
] |
2023-03-18T15:35:41+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "sequence": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0", "num_bytes": 443915, "num_examples": 200}], "download_size": 103291, "dataset_size": 443915}}
|
2023-03-18T15:35:43+00:00
|
db0e00317e4f3c5758ffea62cffd7ebfc1a326cd
|
# Dataset Card for "philschmid-de-blog"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
philschmid/philschmid-de-blog
|
[
"region:us"
] |
2023-03-18T16:02:59+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "tags", "sequence": "string"}, {"name": "summary", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1151548, "num_examples": 87}], "download_size": 541717, "dataset_size": 1151548}}
|
2023-03-29T20:01:21+00:00
|
e349be46446dc3f806e6e82ccd7153851420abc6
|
Kitino/BundaMole
|
[
"license:cc",
"region:us"
] |
2023-03-18T16:06:04+00:00
|
{"license": "cc"}
|
2023-03-18T16:06:04+00:00
|
|
f6193f55ea3a76a9236b88a7c96a29ec7e984735
|
# Dataset Card for "wikisource-red"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Zombely/wikisource-red
|
[
"region:us"
] |
2023-03-18T16:29:40+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train_1", "num_bytes": 12996760047.0, "num_examples": 10000}, {"name": "train_2", "num_bytes": 10554030726.546, "num_examples": 9998}, {"name": "train_3", "num_bytes": 13696109295.506, "num_examples": 9999}, {"name": "train_4", "num_bytes": 15480963077.0, "num_examples": 10000}, {"name": "train_5", "num_bytes": 13559162557.0, "num_examples": 10000}, {"name": "validation", "num_bytes": 2388915116.642, "num_examples": 1542}], "download_size": 2424783685, "dataset_size": 68675940819.694}}
|
2023-03-19T17:57:33+00:00
|
562978f9768f3e4e9c2ad12025868a1a9f47dffb
|
# Dataset Card for "wikipedia_stage2_coverage_100000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
MartinKu/wikipedia_stage2_coverage_100000
|
[
"region:us"
] |
2023-03-18T16:31:57+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "S_V_position", "sequence": "int64"}, {"name": "O_C_position", "sequence": "int64"}, {"name": "start_point_list", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 1989835331, "num_examples": 10000}], "download_size": 700963436, "dataset_size": 1989835331}}
|
2023-03-21T20:22:08+00:00
|
e9d5fff84b3a028e0c541e02c3d18d4486c8becc
|
# Dataset Card for "ebmnlp_pico"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
reginaboateng/ebmnlp_pico
|
[
"region:us"
] |
2023-03-18T17:53:53+00:00
|
{"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "chunk_tags", "sequence": "string"}, {"name": "pos_tags", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "I-INT", "2": "I-OUT", "3": "I-PAR"}}}}], "splits": [{"name": "train", "num_bytes": 27639457, "num_examples": 23952}, {"name": "test", "num_bytes": 1482781, "num_examples": 2065}, {"name": "dev", "num_bytes": 7446993, "num_examples": 7049}], "download_size": 4095965, "dataset_size": 36569231}}
|
2023-03-18T17:54:00+00:00
|
d64773d764275c4b9d8c3d7bb4a3ab5878d7034c
|
## <h1>Spongebob Transcripts Dataset 🧽</h1>
The SpongeBob Transcripts Dataset is a collection of transcripts from the beloved animated television series SpongeBob SquarePants. For each line of dialogue, the dataset records the character's name, their replica (the spoken line), and the episode ID.
The number of characters in the dataset: **84**
Total size of the dataset: **~80,800 words** across **~4,000 rows**. **Updated to cover all of Season 1.**
## <h3>Dataset Overview 📊</h3>
|Column | Description |
|------------|-------------------------------------|
|**Speaker** | The character speaking the dialogue.|
|**Replica** | The line of dialogue spoken. |
|**EP_ID** | The episode ID of the transcript. |
## <h3>System Replicas🔍</h3>
The system replicas describe the actions and events that occur in each episode. These replicas are written in a specific format, using brackets to indicate actions and events.
**<h5>Replica Format</h5>**
`{system} : [The episode opens with a bubble transition, and we see a coral reef under the sea. The camera zooms to initiate parallax scrolling, which reveals the city of Bikini Bottom. It continues zooming to show a brown rock, a Moai head, and a pineapple, which each contain inhabitants.]`
## <h3>Sample Data 💬</h3>
|Speaker |Replica |EP_ID |
|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------|-------|
|**Spongebob** | I just met this girl. She wears a hat full of... air. |s1e3_22|
|**Patrick** | Do you mean she puts on "airs"? |s1e3_23|
|**Spongebob** | I guess so. |s1e3_24|
|**Patrick** | That's just fancy talk. If you wanna be fancy, hold your pinky up like this. The higher you hold it, the fancier you are. |s1e3_25|
## <h3>📊 Interactions with Dataset</h3>
**<h5>Using Pandas to filter rows</h5>**
1. To find all rows with a specific ep_id, you can use the following code:
```
import pandas as pd
#Read the CSV file into a Pandas DataFrame
df = pd.read_csv('dataset.csv')
#Define the ep_id you want to filter by
ep_id = 's1e2'
#Filter the DataFrame to get rows with an ep_id that starts with the defined ep_id
filtered_df = df[df['ep_id'].str.startswith(ep_id)]
#Print the filtered DataFrame
print(filtered_df)
```
2. To find rows where a specific character says a specific word or phrase, you can use the following code:
```
#Filter the DataFrame to get rows where a specific character says a specific word or phrase
speaker = 'SpongeBob'
word_or_phrase = 'jellyfish'
filtered_df = df[df['speaker'] == speaker]
filtered_df = filtered_df[filtered_df['replica'].str.contains(word_or_phrase)]
#Print the filtered DataFrame
print(filtered_df)
```
You can replace `SpongeBob` and `jellyfish` with any other speaker and word/phrase that you want to filter by.
## <h3>Data Sources 📝</h3>
The transcripts were sourced from *Encyclopedia SpongeBobia*.
## <h3>Potential Uses 🧐</h3>
This Dataset could be used for a variety of natural language processing (NLP) tasks, including dialogue generation. It could also be used for educational purposes, such as studying the language and communication styles of different characters.
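For instance, before training a dialogue model you might check how many lines each character has (a minimal sketch, assuming the same `dataset.csv` and lowercase column names used in the filtering examples above):
```
import pandas as pd

# Read the CSV file into a Pandas DataFrame
df = pd.read_csv('dataset.csv')

# Count the number of dialogue lines per speaker
line_counts = df['speaker'].value_counts()

# Show the ten most talkative characters
print(line_counts.head(10))
```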
|
MarkK/spongebob_transcripts
|
[
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"cartoons",
"region:us"
] |
2023-03-18T17:56:59+00:00
|
{"language": ["en"], "license": "cc-by-sa-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "conversational"], "tags": ["cartoons"]}
|
2023-03-23T09:39:43+00:00
|
b3cfb73209a8c51582fa1d9b7fe7e45fec5529b2
|
# Dataset Card for WikiBio GPT-3 Hallucination Dataset
- GitHub repository: https://github.com/potsawee/selfcheckgpt
- Paper: [SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models](https://arxiv.org/abs/2303.08896)
### Dataset Summary
- We generate Wikipedia-like passages with GPT-3 (text-davinci-003) using the prompt: ```This is a Wikipedia passage about {concept}```, where `concept` represents an individual from the WikiBio dataset.
- We split the generated passages into sentences and annotate each sentence with one of three labels: (1) accurate, (2) minor_inaccurate, (3) major_inaccurate.
- We report the data statistics, annotation process, and inter-annotator agreement in our paper.
## Update
- v3 (5 May 2023): 238 test IDs have been annotated in total.
- v2 (6 April 2023): 142 test IDs have been annotated, GPT-3 sampled passages are now included in this dataset.
- v1 (15 March 2023): 65 test IDs -- here is `wiki_bio_test_idx` of the documents in v1 [[Link]](https://drive.google.com/file/d/1N3_ZQmr9yBbsOP2JCpgiea9oiNIu78Xw/view?usp=sharing)
## Dataset Structure
Each instance consists of:
- `gpt3_text`: GPT-3 generated passage
- `wiki_bio_text`: Actual Wikipedia passage (first paragraph)
- `gpt3_sentences`: `gpt3_text` split into sentences using `spacy`
- `annotation`: human annotation at the sentence level
- `wiki_bio_test_idx`: ID of the concept/individual from the original wikibio dataset (testset)
- `gpt3_text_samples`: list of 20 sampled passages (do_sample = True & temperature = 1.0)
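For example, the dataset can be loaded and inspected with the `datasets` library (a minimal sketch; the single `evaluation` split is taken from the dataset configuration):
```
from datasets import load_dataset

# Load the annotated passages (single "evaluation" split)
dataset = load_dataset("potsawee/wiki_bio_gpt3_hallucination", split="evaluation")

# Inspect the first instance: one GPT-3 sentence and its human annotation
example = dataset[0]
print(example["wiki_bio_test_idx"])
print(example["gpt3_sentences"][0], "->", example["annotation"][0])
```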
### Citation Information
```
@misc{manakul2023selfcheckgpt,
title={SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models},
author={Potsawee Manakul and Adian Liusie and Mark J. F. Gales},
year={2023},
eprint={2303.08896},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
potsawee/wiki_bio_gpt3_hallucination
|
[
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:cc-by-sa-3.0",
"arxiv:2303.08896",
"region:us"
] |
2023-03-18T18:05:21+00:00
|
{"language": ["en"], "license": "cc-by-sa-3.0", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "gpt3_text", "dtype": "string"}, {"name": "wiki_bio_text", "dtype": "string"}, {"name": "gpt3_sentences", "sequence": "string"}, {"name": "annotation", "sequence": "string"}, {"name": "wiki_bio_test_idx", "dtype": "int64"}, {"name": "gpt3_text_samples", "sequence": "string"}], "splits": [{"name": "evaluation", "num_bytes": 5042581, "num_examples": 238}], "download_size": 2561507, "dataset_size": 5042581}}
|
2023-05-29T22:14:09+00:00
|
3a7358752a740dcfacd212670956f6f6fd0314b5
|
# kfj-pypi Dataset
This dataset contains a collection of PyPI packages scraped from PyPI. The dataset includes metadata about each package, including its name, version, description, author, license, and more. The dataset is intended to be used for research and development in various natural language processing (NLP) applications such as named entity recognition and text classification.
## Usage
To use this dataset, you can download it from [Hugging Face Datasets](https://huggingface.co/datasets/KingfernJohn/kfj-pypi-packages-metadata) using the `datasets` library in Python:
```python
from datasets import load_dataset
dataset = load_dataset("kfj-pypi-packages-metadata")
```
This will load the kfj-pypi dataset into a Python variable, which you can then use to access the metadata for each package.
## Info
The dataset contains metadata of 161,346 packages, with a total size of 743MB (.zip 304MB).
We skipped packages that returned no metadata to avoid empty files.
Please note that the dataset is currently being updated and more packages will be added soon.
version 0.1
## Structure
```json
{
"name": "",
"version": "",
"description": "",
"author": "",
"author_email": "",
"maintainer": "",
"maintainer_email": "",
"license": "",
"keywords": "",
"classifiers": "",
"download_url": "",
"platform": "",
"homepage": "",
"project_urls": "",
"requires_python": "",
"requires_dist": "",
"provides_dist": "",
"obsoletes_dist": "",
"summary": ""
}
```
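Once loaded, each record exposes these fields directly. For example (a minimal sketch, assuming the metadata is exposed under a default `train` split):
```python
from datasets import load_dataset

dataset = load_dataset("KingfernJohn/kfj-pypi-packages-metadata")

# Print name, version, and license for the first few packages
for record in dataset["train"].select(range(5)):
    print(record["name"], record["version"], record["license"])
```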
|
KingfernJohn/kfj-pypi-packages-metadata
|
[
"language:en",
"license:apache-2.0",
"PyPi",
"package",
"dataset",
"named entity recognition",
"text classification",
"NLP",
"Large",
"doi:10.57967/hf/0456",
"region:us"
] |
2023-03-18T18:32:51+00:00
|
{"language": ["en"], "license": "apache-2.0", "pretty_name": "kfj pypi", "tags": ["PyPi", "package", "dataset", "named entity recognition", "text classification", "NLP", "Large"]}
|
2023-03-19T13:57:39+00:00
|
408961cc65ed11ab75e5d9ba789b0488038432b6
|
# Dataset Card for "wavenet_flashback"
- Synthesis configuration: https://cloud.google.com/text-to-speech/docs/reference/rest/v1/text/synthesize#AudioConfig
- Voice: `sv-SE-Wavenet-{voice}`
- Text source: https://spraakbanken.gu.se/resurser/flashback-dator
|
jzju/wavenet_flashback
|
[
"task_categories:automatic-speech-recognition",
"language:sv",
"region:us"
] |
2023-03-18T18:56:09+00:00
|
{"language": ["sv"], "task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "pitch", "dtype": "float64"}, {"name": "rate", "dtype": "float64"}, {"name": "voice", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 36993063639.128, "num_examples": 96672}], "download_size": 34772655134, "dataset_size": 36993063639.128}}
|
2023-03-18T19:53:39+00:00
|
1c4becdc34ca7b83bdabf878780d7c8794207717
|
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
ioritree/so-vits-svc
|
[
"region:us"
] |
2023-03-18T20:38:08+00:00
|
{"title": "Sovits4.0 V2", "emoji": "\ud83d\udcda", "colorFrom": "blue", "colorTo": "purple", "sdk": "gradio", "sdk_version": "3.19.1", "app_file": "app.py", "pinned": false}
|
2023-03-18T20:50:15+00:00
|
ef737a9e36749c87affeb50a0c4e90a9ee25085c
|
liutong/liutong
|
[
"license:openrail",
"region:us"
] |
2023-03-18T21:22:52+00:00
|
{"license": "openrail"}
|
2023-03-18T21:42:11+00:00
|
|
2d8bcddece4c553549a7933379c86b71e714c0c3
|
# Dataset Card for "Alpaca_arabic_instruct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Yasbok/Alpaca_arabic_instruct
|
[
"language:ar",
"region:us"
] |
2023-03-18T21:27:13+00:00
|
{"language": "ar", "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28245695, "num_examples": 52002}], "download_size": 14716254, "dataset_size": 28245695}}
|
2023-07-21T12:25:40+00:00
|
331bb9355538f0bd962aeea912c801c196c2abca
|
a98zhang/ibm_argument_example
|
[
"region:us"
] |
2023-03-18T21:35:46+00:00
|
{"pretty_name": "example_ibm"}
|
2023-03-18T21:37:45+00:00
|
|
2dd7b28dcf845ad13fb8d679b3aa3930996c2e6d
|
# Dataset Card for "cup-it-ds-classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ummagumm-a/cup-it-ds-classification
|
[
"region:us"
] |
2023-03-18T22:07:23+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 90265068, "num_examples": 140970}, {"name": "validation", "num_bytes": 22511525, "num_examples": 35244}], "download_size": 71988448, "dataset_size": 112776593}}
|
2023-03-19T08:42:21+00:00
|
112130eb39e3fd6af7e234ef7e7c9b2f8bebd377
|
# Dataset Card for "Alpaca-in-french"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tbboukhari/Alpaca-in-french
|
[
"region:us"
] |
2023-03-18T22:23:10+00:00
|
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "instruction", "dtype": "string"}, {"name": " saisir", "dtype": "string"}, {"name": " sortir", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 23689208, "num_examples": 52002}], "download_size": 14446335, "dataset_size": 23689208}}
|
2023-03-18T22:25:29+00:00
|
5ae57aa756509c7320ffe4d4289c9b3be9d3e772
|
csaybar/supersat
|
[
"license:mit",
"region:us"
] |
2023-03-18T23:09:35+00:00
|
{"license": "mit"}
|
2023-03-21T02:09:02+00:00
|
|
11e8bb4392b6891f396a659173ad3e0dd6fff7c7
|
# Content coming...
|
polytechXhf/onepiece-x-jojo-dataset
|
[
"license:apache-2.0",
"region:us"
] |
2023-03-18T23:33:54+00:00
|
{"license": "apache-2.0"}
|
2023-03-18T23:50:55+00:00
|
f4688c7ef0d8fc76e46878baee916a4186a793d9
|
for test
|
sanshanya/eyesdiffusion
|
[
"biology",
"region:us"
] |
2023-03-19T01:02:37+00:00
|
{"tags": ["biology"]}
|
2023-03-19T03:49:50+00:00
|
a754b3eaad00502435532e9b54c792029382796c
|
# Dataset Card for Fandom23K
*The BigKnow2022 dataset and its subsets are not yet complete. Some information here may be inaccurate or inaccessible.*
## Dataset Description
- **Homepage:** (TODO) https://docs.ryokoai.com/docs/training/dataset#Fandom22K
- **Repository:** <https://github.com/RyokoAI/BigKnow2022>
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Ronsor/undeleted <[email protected]>
### Dataset Summary
Fandom23K is a dataset composed of 15,616,749 articles scraped from approximately 23,665 Fandom.com wikis between March 14 and March 18, 2023.
It is a subset of the upcoming BigKnow2022 dataset.
### Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.
* text-classification
### Languages
* English
* Potentially other languages in much smaller quantities.
## Dataset Structure
### Data Instances
```json
{
"tag": "fandom.wikia2011",
"text": "# Add Your Wiki's Highlights\n\nWrite the text of your article here!-_-\n\n",
"title": "Add Your Wiki's Highlights"
}
{
"tag": "fandom.wikia2011",
"text": "# Add Your Wiki's Highlights!\n\nWikia wants to hear from you! What significant milestones did your wiki experience in 2011? What cool things did the community try out?\nCreate a page for the wiki you're most active on! Be sure to add it to the Entertainment, Gaming, or Lifestyle categories so it shows up in the right place!\n\n",
"title": "Add Your Wiki's Highlights!"
}
{
"tag": "fandom.wikia2011",
"text": "# Assassins Creed Wiki 2011\n\nIn 2011, Assassin's Creed Wiki tested new Wikia features such as Message Wall, Chat, and New Layouts.\n\n",
"title": "Assassins Creed Wiki 2011"
}
```
### Data Fields
* **text**: the actual article text
* **title**: the article title
* **tag**: text source tag, in the following format: `fandom.<wiki name>`
### Data Splits
No splitting of the data was performed.
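Since no splits are provided, a typical way to read the data is to stream it with the `datasets` library (a minimal sketch, assuming the default configuration exposes everything as a single `train` split):
```python
from datasets import load_dataset

# Stream the articles to avoid downloading the whole dataset up front
dataset = load_dataset("RyokoAI/Fandom23K", split="train", streaming=True)

# Peek at one article
for article in dataset:
    print(article["tag"], "-", article["title"])
    break
```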
## Dataset Creation
### Curation Rationale
Fandom23K provides an up-to-date corpus containing pop culture and media information spanning a variety of interests and hobbies. Previous datasets containing such information are either part of a larger, harder-to-handle whole (such as Common Crawl), lack sufficient variety, or are simply outdated.
### Source Data
#### Initial Data Collection and Normalization
*More information about any referenced scripts, commands, or programs used may be found in the BigKnow2022 GitHub repository.*
First, a list of active Fandom wikis was gathered into a text file. Active is defined as "having at least 250 images on the wiki."
This list was gathered in early January 2023, although the wiki content itself is more recent.
Second, the `scrape_fandom.py` script was used to generate and download an up-to-date dump for each of the wikis.
Third, `wikiextractor` was used to process these dumps into single XML files containing each article stripped of all formatting
besides links.
Fourth, `dump2jsonl` was used to convert the XML files into JSONL files with an article per line. Light markdown formatting was
applied, converting the HTML links to markdown-formatted links, and automatically making the article's title a header.
Finally, the JSONL files were concatenated into the Fandom23K dataset. The version uploaded to this repository, however, is split
into multiple files, numbered 00 through 04 inclusive.
#### Who are the source language producers?
The contributors of each wiki.
### Annotations
#### Annotation process
Wiki names and article titles were collected alongside the article text. Other than that automated process, no annotation was performed.
#### Who are the annotators?
There were no human annotators.
### Personal and Sensitive Information
The dataset was collected from public wiki data. As a result, we do not believe
it should contain any PII and did not inspect it further.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content requiring
knowledge of popular culture or a particular niche.
### Discussion of Biases
This dataset contains text from random Internet users and generally should not be used as an authoritative source of information.
Additionally, this dataset was not filtered at all. We recommend using it for research purposes only.
### Other Known Limitations
This dataset is based on a list of active wikis from January 2023, even though the actual wiki content may be more recent. Additionally,
smaller yet still active wikis may have been excluded.
## Additional Information
### Dataset Curators
Ronsor Labs
### Licensing Information
CC-BY-SA 3.0, except for any portions which state otherwise.
### Citation Information
```
@misc{ryokoai2023-bigknow2022,
title = {BigKnow2022: Bringing Language Models Up to Speed},
author = {Ronsor},
year = {2023},
howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}},
}
```
### Contributions
Thanks to @ronsor for gathering this dataset.
|
RyokoAI/Fandom23K
|
[
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:10M<n<100M",
"language:en",
"license:cc-by-sa-3.0",
"wiki",
"training",
"region:us"
] |
2023-03-19T02:52:11+00:00
|
{"language": ["en"], "license": "cc-by-sa-3.0", "size_categories": ["10M<n<100M"], "task_categories": ["text-classification", "text-generation"], "pretty_name": "Fandom23K Wikis", "tags": ["wiki", "training"]}
|
2023-03-20T19:58:46+00:00
|
9256bde71dd62448448cf9a8f5eafd34cae671c6
|
# Dataset Card for "gpt2_dv_finetune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fifi777/gpt2_dv_finetune
|
[
"region:us"
] |
2023-03-19T02:53:25+00:00
|
{"dataset_info": {"features": [{"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "copies", "dtype": "string"}, {"name": "size", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "hash", "dtype": "int64"}, {"name": "line_mean", "dtype": "float64"}, {"name": "line_max", "dtype": "int64"}, {"name": "alpha_frac", "dtype": "float64"}, {"name": "autogenerated", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 9131521391.26331, "num_examples": 682408}, {"name": "valid", "num_bytes": 186361675.7366914, "num_examples": 13927}], "download_size": 422152149, "dataset_size": 9317883067.0}}
|
2023-03-19T04:09:25+00:00
|
922bc3842efc218932df0e0551a1942b0a068680
|
ywpl/Model_base_YWPL
|
[
"license:unknown",
"region:us"
] |
2023-03-19T03:12:29+00:00
|
{"license": "unknown"}
|
2023-04-03T09:10:53+00:00
|
|
31aeeb6cbd2801cee991397b7a39523ce015198a
|
samaikya/faces
|
[
"license:other",
"region:us"
] |
2023-03-19T03:59:48+00:00
|
{"license": "other"}
|
2023-03-19T03:59:48+00:00
|
|
361aa82822bf398ab53a1a1f427458249b4ed727
|
# Dataset Card for "big-animal-dataset-high-res-embedding-with-hidden-states"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Isamu136/big-animal-dataset-high-res-embedding-with-hidden-states
|
[
"region:us"
] |
2023-03-19T04:44:57+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}, {"name": "l14_embeddings", "sequence": "float32"}, {"name": "moco_vitb_imagenet_embeddings", "sequence": "float32"}, {"name": "ibot_b_16_embedding", "sequence": "float32"}, {"name": "ibot_b_16_last_self_attn", "sequence": "float32"}, {"name": "midas_dpt_swin2_large_384", "dtype": "image"}, {"name": "subject_noun", "dtype": "string"}, {"name": "moco_vitb_imagenet_embeddings_without_last_layer", "sequence": "float32"}, {"name": "moco_vitb_imagenet_hidden_state", "sequence": {"sequence": "float32"}}], "splits": [{"name": "train", "num_bytes": 19608883787.94, "num_examples": 26180}], "download_size": 17552223513, "dataset_size": 19608883787.94}}
|
2023-03-26T21:12:21+00:00
|
0a7f8a23a1377ac4bbd36ce11665524f18b73e7b
|
The data comes from comments on the Eastmoney Guba (东方财富股吧) stock forum and has been manually labeled.
|
Fearao/guba_eastmoney
|
[
"task_categories:text-classification",
"language:zh",
"region:us"
] |
2023-03-19T04:51:36+00:00
|
{"language": ["zh"], "task_categories": ["text-classification"]}
|
2023-03-19T04:53:07+00:00
|
e092d1953943f8a3de0471e2c50a57d2b016eccb
|
# Dataset Card for "OK-VQA_test_text_davinci_003_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/OK-VQA_test_text_davinci_003_mode_T_A_D_PNP_NO_FILTER_C_Q_rices_ns_100
|
[
"region:us"
] |
2023-03-19T05:06:11+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0", "num_bytes": 185064, "num_examples": 100}], "download_size": 102042, "dataset_size": 185064}}
|
2023-03-19T05:06:14+00:00
|
56dc4fe22bc19c856f600a5a24199af8089bb0db
|
Joe02/BLADE_refs
|
[
"license:other",
"region:us"
] |
2023-03-19T05:45:19+00:00
|
{"license": "other"}
|
2023-03-19T05:45:33+00:00
|
|
f06bf4c130eb0ce23e1860f2ed5e00b25ac0916c
|
# Dataset Card for "QM9"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lisn519010/QM9
|
[
"task_categories:graph-ml",
"chemistry",
"biology",
"region:us"
] |
2023-03-19T05:47:26+00:00
|
{"task_categories": ["graph-ml"], "dataset_info": {"features": [{"name": "x", "sequence": {"sequence": "float32"}}, {"name": "edge_index", "sequence": {"sequence": "int64"}}, {"name": "edge_attr", "sequence": {"sequence": "float32"}}, {"name": "y", "sequence": {"sequence": "float32"}}, {"name": "pos", "sequence": {"sequence": "float32"}}, {"name": "z", "sequence": "int64"}, {"name": "name", "dtype": "string"}, {"name": "idx", "sequence": "int64"}], "splits": [{"name": "full", "num_bytes": 363615510, "num_examples": 130831}], "download_size": 55326724, "dataset_size": 363615510}, "tags": ["chemistry", "biology"]}
|
2023-03-25T11:33:30+00:00
|
eac57703ea733bd4176aa616cfb31fa16849d879
|
# Dataset Card for "bookcorpus_maxlen_32_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rab0na/bookcorpus_maxlen_32_tokenized
|
[
"region:us"
] |
2023-03-19T05:48:57+00:00
|
{"dataset_info": {"features": [{"name": "bert_token", "sequence": "int64"}, {"name": "gpt2_token", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 1848440.250435421, "num_examples": 6960}, {"name": "train", "num_bytes": 18480581597.76182, "num_examples": 69585613}], "download_size": 3934201942, "dataset_size": 18482430038.012257}}
|
2023-03-19T08:22:58+00:00
|
7da1e49f0cb59f11ac4a1a4418e49dd843f260fc
|
# Dataset Card for "cv_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
linhqyy/cv_data
|
[
"region:us"
] |
2023-03-19T06:10:18+00:00
|
{"dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "input_length", "dtype": "float64"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 1188121256, "num_examples": 1237}, {"name": "train", "num_bytes": 2663428424, "num_examples": 2773}, {"name": "eval", "num_bytes": 1188121256, "num_examples": 1237}], "download_size": 814975678, "dataset_size": 5039670936}}
|
2023-03-19T06:15:01+00:00
|
d2d5173690d8547dc0d7b8bd8e8a2ede1c592cfe
|
An English dataset composed of some names, words, and many sentences, intended for training various models or for computing language statistics.
Some sentences are taken from Wikipedia, various comment sections, etc., and some are written by me.
A fairly large portion also comes from the Bee Movie script.
|
pfox/generalconcept_7
|
[
"license:cc-by-sa-3.0",
"region:us"
] |
2023-03-19T06:58:02+00:00
|
{"license": "cc-by-sa-3.0"}
|
2023-03-19T07:10:58+00:00
|
7243bd3b99cfbe5fad6ef6f376a771f39da9aa44
|
# Dataset Card for "cup-it-ds-classification-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/cup-it-ds-classification-small
|
[
"region:us"
] |
2023-03-19T07:03:47+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4545195, "num_examples": 7930}, {"name": "validation", "num_bytes": 1259443, "num_examples": 2203}], "download_size": 3520634, "dataset_size": 5804638}}
|
2023-03-19T07:07:10+00:00
|
089b168ea1f154ed255a633c4c516f642ded5ccf
|
A wordlist containing 71 thousand English words, albeit with some duplicates and similar artifacts, since it is uncleaned.
The wordlist also contains no vulgar words.
|
pfox/71k-English-uncleaned-wordlist
|
[
"license:cc-by-sa-3.0",
"region:us"
] |
2023-03-19T07:04:48+00:00
|
{"license": "cc-by-sa-3.0"}
|
2023-03-19T07:06:44+00:00
|
f70d2cd2037331c32909205ea94c79d19d299d14
|
# Dataset Card for "cup-it-ds-classification-small-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/cup-it-ds-classification-small-2
|
[
"region:us"
] |
2023-03-19T07:07:28+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4545195, "num_examples": 7930}, {"name": "validation", "num_bytes": 1259443, "num_examples": 2203}], "download_size": 3520634, "dataset_size": 5804638}}
|
2023-03-19T07:07:32+00:00
|
70c4ebcf12fbe11c4fdb26b196af65a04b045354
|
# Dataset Card for "cup-it-ds-classification-small-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/cup-it-ds-classification-small-3
|
[
"region:us"
] |
2023-03-19T07:07:57+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2236990, "num_examples": 3965}, {"name": "validation", "num_bytes": 236878, "num_examples": 441}], "download_size": 1491693, "dataset_size": 2473868}}
|
2023-03-19T07:08:01+00:00
|
abede15fd1850141744641fdbffad654c69cd448
|
desiai/archiveoldsamachaar
|
[
"license:odc-by",
"region:us"
] |
2023-03-19T07:55:06+00:00
|
{"license": "odc-by"}
|
2023-03-19T08:59:35+00:00
|
|
42ab35c272ec2a3248521e36ffffed0115dab581
|
# Dataset Card for Auditor Sentiment
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
## Dataset Description
Auditor review sentiment collected by News Department
- **Point of Contact:**
Talk to the COE for Auditing, currently [email protected]
### Dataset Summary
Auditor sentiment dataset of sentences from financial news. The dataset consists of several thousand sentences from English language financial news categorized by sentiment.
### Supported Tasks and Leaderboards
Sentiment Classification
### Languages
English
## Dataset Structure
### Data Instances
```
"sentence": "Pharmaceuticals group Orion Corp reported a fall in its third-quarter earnings that were hit by larger expenditures on R&D and marketing .",
"label": "negative"
```
### Data Fields
- sentence: a tokenized line from the dataset
- label: a label corresponding to the class as a string: 'positive' - (2), 'neutral' - (1), or 'negative' - (0)
### Data Splits
A train/test split was created randomly with a 75/25 ratio.
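For example, the split can be loaded and the integer labels mapped back to names (a minimal sketch, assuming the dataset id `Tianzhou/auditor_sentiment` and the label encoding listed above):
```
from datasets import load_dataset

# Class-id to name mapping from the Data Fields section
id2label = {0: "negative", 1: "neutral", 2: "positive"}

dataset = load_dataset("Tianzhou/auditor_sentiment", split="train")

example = dataset[0]
label = example["label"]
# Labels may come back as class ids or already as strings
print(example["sentence"], "->", id2label.get(label, label))
```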
## Dataset Creation
### Curation Rationale
To gather our auditor evaluations into one dataset. Previous attempts using off-the-shelf sentiment models reached only 70% F1; this dataset was an attempt to improve upon that performance.
### Source Data
#### Initial Data Collection and Normalization
The corpus is made up of English news reports.
#### Who are the source language producers?
The source data was written by various auditors.
### Annotations
#### Annotation process
This release of the auditor reviews covers a collection of 4,840 sentences. The selected collection of phrases was annotated by 16 people with adequate background knowledge of financial markets. The subset here is the one where inter-annotator agreement was greater than 75%.
#### Who are the annotators?
They were pulled from the SME list, names are held by [email protected]
### Personal and Sensitive Information
There is no personal or sensitive information in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
All annotators were from the same institution, so inter-annotator agreement should be understood with this taken into account.
### Licensing Information
License: Demo.Org Proprietary - DO NOT SHARE
This dataset is based on the [financial phrasebank](https://huggingface.co/datasets/financial_phrasebank) dataset.
|
Tianzhou/auditor_sentiment
|
[
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"region:us"
] |
2023-03-19T09:03:19+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "sentiment-classification"], "pretty_name": "Auditor_Sentiment"}
|
2022-07-21T18:03:51+00:00
|
00533cad7c3bf99bd989352d21c9a47ec31139c8
|
VKCYBER/Infinite
|
[
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:gpl-3.0",
"region:us"
] |
2023-03-19T10:38:49+00:00
|
{"language": ["en"], "license": "gpl-3.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "pretty_name": "Gaja"}
|
2023-03-19T16:40:19+00:00
|
|
7a1615c34f1d6c7c6b735fc29a84b9cfbcb96b4f
|
# Umi AI: A WebUI Wildcard Mod!
Umi AI is a wildcard mod that allows you to create randomized characters from random species with modular clothing types. It will grow over time and eventually become the ultimate character randomizer and creator.
Umi replaces the [Dynamic Wildcards extension](https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards). It has all the same functionality, plus more. Umi also conflicts with the [Unprompted extension](https://github.com/ThereforeGames/unprompted). Pick one or the other.
Note that if you are reading this message a substantial amount of time after the date of Nov 13th, Umi may already have the majority of capabilities that Unprompted does. Additionally, it will almost certainly be easier to use. For example, what if you want to create a random choice of multiple options?
**Unprompted formatting**:
Photo of a [choose]red|blue|yellow|green[/choose]-haired [choose]man|woman[/choose]
**Umi formatting**:
Photo of a {red|blue|yellow|green}-haired {man|woman}
These two functions do exactly the same thing, but Umi's is smaller and easier to read. The goal of Umi is to be as easy to use as possible while providing consistently high-quality outputs of random characters and settings.
# Installing Umi
Installing Umi is easier than ever. It has been simplified into a 3-step process.
**Step 1: Determine if your PC meets the requirements to run AI generation.**
Is your GPU sufficiently powerful enough to run AI generation? [Download and run Speccy](https://www.ccleaner.com/speccy) and [post your PC specs in the Umi AI Discord](https://discord.gg/9K7j7DTfG2).
Assuming your PC meets the requirements...
**Step 2: Run the WebUI auto-installer.**
Download the latest version of Automatic1111's WebUI Autoinstaller from here. It will be the .exe file.
https://github.com/EmpireMediaScience/A1111-Web-UI-Installer/releases

Follow the Autoinstaller's steps to get WebUI up and running. There will be some parts that need to download large multi-GB files, so be patient. If you run into issues, [join the Umi AI Discord to ask for help](https://discord.gg/9K7j7DTfG2).
**Step 3: Install the Umi AI Extension.**
Once WebUI is open and running, navigate to the Extensions tab at the top, and the Install from URL sub-tab.

Paste the Umi AI URL in, like shown above.
https://github.com/Klokinator/UnivAICharGen.git
Press Install, and you'll be ready to start randomly generating with Umi AI!
At this point, you can just [join the Umi AI Discord](https://discord.gg/9K7j7DTfG2) to learn all the nuances of how to use Umi AI properly as well as share your characters!
|
Kizi-Art/Asset
|
[
"region:us"
] |
2023-03-19T10:39:51+00:00
|
{}
|
2023-03-19T10:46:37+00:00
|
0f80ef53576b6d6703db6fa478169b30443abaea
|
# CNN News Articles 2011-2022 Dataset
## Introduction
This dataset contains CNN News Articles from 2011 to 2022 after basic cleaning. The dataset includes the following information:
- Category
- Full text

The data was downloaded from Kaggle at this URL: https://www.kaggle.com/datasets/hadasu92/cnn-articles-after-basic-cleaning. The dataset was split into two sets:
- Train set with 32,218 examples
- Test set with 5,686 examples
## Usage
This dataset can be used for different natural language processing tasks such as text classification, text summarization, named entity recognition, and more. The dataset is available in Hugging Face Datasets with the ID AyoubChLin/CNN_News_Articles_2011-2022.
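For example, the dataset can be loaded by its ID and the integer labels mapped to category names (a minimal sketch; the category names come from the dataset configuration):
```python
from datasets import load_dataset

dataset = load_dataset("AyoubChLin/CNN_News_Articles_2011-2022")

# Category names defined in the dataset configuration
labels = dataset["train"].features["label"].names

example = dataset["train"][0]
print(labels[example["label"]], "-", example["text"][:100])
```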
## Acknowledgements
The data was collected by the Kaggle user [hadasu92](https://github.com/hadasu). The splitting of the dataset into train and test sets was performed by [CHERGUELAINE Ayoub](https://www.linkedin.com/in/ayoub-cherguelaine/) & [BOUBEKRI Faycal](https://www.linkedin.com/in/faycal-boubekri-832848199/).
|
AyoubChLin/CNN_News_Articles_2011-2022
|
[
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-03-19T11:01:10+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "pretty_name": "CNN News Article from 20211 to 2022", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "business", "1": "entertainment", "2": "health", "3": "news", "4": "politics", "5": "sport"}}}}], "splits": [{"name": "train", "num_examples": 32218}, {"name": "test", "num_examples": 5686}]}, "train-eval-index": [{"config": "default", "task": "text-classification", "task_id": "multi_class_classification", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"text": "text", "label": "target"}}]}
|
2023-04-10T14:29:24+00:00
|
7b1c5c8f099e2ceb101cc82b6c965fae09b57fce
|
## Dataset Description
- **Homepage:** [Clinical Biases Dataset](https://huggingface.co/datasets/shainahub/clinical_bias)
### Who is the target audience for this dataset?
The target audience includes researchers and practitioners in the healthcare and natural language processing domains interested in studying biases in clinical texts and developing models to detect and mitigate such biases.
### What do I need to know to use this dataset?
Users should have a basic understanding of clinical texts, biases, and natural language processing.
## Data Fields
- `SUBJECT_ID`: A unique identifier for the subject.
- `TEXT`: The clinical text.
- `is_biased`: A boolean indicating whether the text is biased or not.
- `biased_words`: A list of biased words present in the text (if any).
## Data Splits
This dataset does not have predefined data splits (train, validation, test). Users can create their own splits according to their requirements.
## Dataset Creation
### Curation Rationale
The dataset was created to study biases in clinical texts and provide a resource for developing models to detect and mitigate such biases.
### Source Data
The dataset is derived from clinical texts collected from various sources.
### Licensing Information
The licensing information for this dataset is not specified.
### Previewing the Dataset
You can use the following code snippet to preview the dataset using Hugging Face Datasets library in Python:
```python
from datasets import load_dataset
dataset = load_dataset("shainahub/clinical_bias")
dataset_dict = dataset["train"][0]
print("SUBJECT_ID:", dataset_dict["SUBJECT_ID"])
print("TEXT:", dataset_dict["TEXT"])
print("is_biased:", dataset_dict["is_biased"])
print("biased_words:", dataset_dict["biased_words"])
```
```python
from datasets import load_dataset
dataset = load_dataset("shainahub/clinical_bias")
df = dataset['train'].to_pandas()
df.head()
```
This will load roughly 40k rows into the DataFrame.
The output of the preview snippet above should look like this:
```python
SUBJECT_ID: 2549
TEXT: CCU NSG TRANSFER SUMMARY UPDATE RESP FAILURE CLINICAL STATUS: Fever Oxygen saturations have been intermittently low on room air with improvement on oxygen High white blood cell count Multifocal pneumonia Gastrointestinal bleeding concerning for stress ulceration Hemodynamically stable on vasopressors, requiring increasing amounts to maintain mean arterial pressure. Heart rate increased to 100s with systolic blood pressure in the 90s. PLAN: 1. Continue current management 2. Initiate prophylaxis for stress ulceration 3. Initiate appropriate isolation for pneumonia
is_biased: False
biased_words: None
```
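To get a quick sense of the class balance, you could, for example, tally the `is_biased` flag with pandas (a minimal sketch using the same loading pattern as above):
```python
from datasets import load_dataset

dataset = load_dataset("shainahub/clinical_bias")
df = dataset["train"].to_pandas()

# How many notes are flagged as biased vs. not biased
print(df["is_biased"].value_counts())
```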
### Loading the Dataset
You can use the following code snippet to load the dataset using the Hugging Face Datasets library in Python:
```python
from datasets import load_dataset
dataset = load_dataset("shainahub/clinical_bias")
```
The dataset consists of four columns:
- `SUBJECT_ID`: a unique identifier for each clinical note.
- `TEXT`: the text of the clinical note.
- `is_biased`: a boolean value indicating whether the note contains biased language or not.
- `biased_words`: if the note contains biased language, the words or phrases that are biased.
|
shainahub/clinical_bias
|
[
"language:en",
"license:afl-3.0",
"doi:10.57967/hf/0459",
"region:us"
] |
2023-03-19T11:50:32+00:00
|
{"language": ["en"], "license": "afl-3.0", "dataset_info": {"features": [{"name": "SUBJECT_ID", "dtype": "int64"}, {"name": "TEXT", "dtype": "string"}, {"name": "is_biased", "dtype": "bool"}, {"name": "biased_words", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11586577, "num_examples": 40000}], "download_size": 6501927, "dataset_size": 11586577}}
|
2023-03-19T20:36:19+00:00
|
a0e003b2d46efaeffc97fd807a6f9c3201ce9322
|
nielsgl/images
|
[
"license:mit",
"region:us"
] |
2023-03-19T14:22:54+00:00
|
{"license": "mit"}
|
2023-03-19T14:22:54+00:00
|
|
efe635631acd2e3ffb7fe5d82b1bf380d1e25400
|
### Dataset contains:
```
[
'charger',
'lense',
'camera'
]
```
|
nekotov/camera-set
|
[
"language:en",
"license:mit",
"camera",
"canon",
"lense",
"charger",
"region:us"
] |
2023-03-19T14:23:20+00:00
|
{"language": ["en"], "license": "mit", "pretty_name": "Dataset of Canon: cameras,lenses, chargers.", "tags": ["camera", "canon", "lense", "charger"]}
|
2023-03-19T17:51:16+00:00
|
7bbe6852c4e15a947279c3ada7c3d5c0711b83f6
|
# Dataset Card for "Alpaca_french_instruct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tbboukhari/Alpaca_french_instruct
|
[
"language:fr",
"region:us"
] |
2023-03-19T15:06:24+00:00
|
{"language": "fr", "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": " saisir", "dtype": "string"}, {"name": " sortir", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 23260190, "num_examples": 52002}], "download_size": 14152821, "dataset_size": 23260190}}
|
2023-09-05T14:52:14+00:00
|
6bbdefd3c80f59c7eda9e462875152d7456af026
|
saitsharipov/CelebA-HQ
|
[
"license:unknown",
"region:us"
] |
2023-03-19T15:19:53+00:00
|
{"license": "unknown", "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1409379538.427, "num_examples": 202599}], "download_size": 1392722635, "dataset_size": 1409379538.427}}
|
2023-03-19T16:05:00+00:00
|
|
cfc9fa0cfd1ec493bed4113a6b75aa9bd748af8b
|
# Dataset Card for "pass_k_with_MultiPL-E"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nuprl/pass_k_with_MultiPL-E
|
[
"region:us"
] |
2023-03-19T16:32:07+00:00
|
{"dataset_info": {"features": [{"name": "Experiment", "dtype": "string"}, {"name": "K", "dtype": "int64"}, {"name": "PassRate", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 64770, "num_examples": 690}], "download_size": 8011, "dataset_size": 64770}}
|
2023-03-19T16:32:23+00:00
|
df326f258a67b6844cf4e8e7666c17f8c60a02a6
|
# Dataset Card for "apps_partial_sorted_300_end"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
minimario/apps_partial_sorted_300_end
|
[
"region:us"
] |
2023-03-19T16:47:07+00:00
|
{"dataset_info": {"features": [{"name": "problem", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "full_sample", "dtype": "string"}, {"name": "where_from", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1043051462, "num_examples": 780933}], "download_size": 34831859, "dataset_size": 1043051462}}
|
2023-03-19T17:01:06+00:00
|
bafc2b2803150333b34f5f271f8716d0e988534a
|
# Dataset Card for "apps_partial_sorted_0_200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
minimario/apps_partial_sorted_0_200
|
[
"region:us"
] |
2023-03-19T16:47:19+00:00
|
{"dataset_info": {"features": [{"name": "problem", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "full_sample", "dtype": "string"}, {"name": "where_from", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 100164623, "num_examples": 80215}], "download_size": 3452557, "dataset_size": 100164623}}
|
2023-03-19T17:01:08+00:00
|
7ba3fa3d223db2172f1da5d41352a04df81af455
|
trondizzy/test_test
|
[
"license:cc",
"region:us"
] |
2023-03-19T16:51:47+00:00
|
{"license": "cc"}
|
2023-03-19T16:52:57+00:00
|
|
432de6697923698029c84726c3d380c08ba41db0
|
# Dataset Card for "apps_partial_sorted_200_end"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
minimario/apps_partial_sorted_200_end
|
[
"region:us"
] |
2023-03-19T17:32:25+00:00
|
{"dataset_info": {"features": [{"name": "problem", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "full_sample", "dtype": "string"}, {"name": "where_from", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1102484581, "num_examples": 828007}], "download_size": 36934694, "dataset_size": 1102484581}}
|
2023-03-19T17:38:25+00:00
|
8de45af5903137eae773350e6da8ae877b7309d0
|
# Dataset Card for "NoLabel"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rossevine/NoLabel
|
[
"region:us"
] |
2023-03-19T17:47:35+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 1120532893.684, "num_examples": 1083}], "download_size": 1064043642, "dataset_size": 1120532893.684}}
|
2023-04-22T18:02:59+00:00
|
dc4e3ffe365a899642bbf3ce74b98b5ce53bdb65
|
Maryamd/idm-crackpatch
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-03-19T18:13:02+00:00
|
{"license": "creativeml-openrail-m"}
|
2023-03-19T18:13:02+00:00
|
|
57a4f29169678957f4b4596499d087bc0c99f98f
|
# Dataset Card for "robosmallreviewed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
homangab/robosmallreviewed
|
[
"region:us"
] |
2023-03-19T18:18:32+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 9692753.0, "num_examples": 23}], "download_size": 777864, "dataset_size": 9692753.0}}
|
2023-03-19T19:14:10+00:00
|
9217a80ee2e1f078bc3ba3f9272114e08a181fdd
|
# Dataset Card for "cup-it-ds-classification-pairwise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ummagumm-a/cup-it-ds-classification-pairwise
|
[
"region:us"
] |
2023-03-19T19:30:40+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "struct": [{"name": "score", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "rejected", "struct": [{"name": "score", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 341356800, "num_examples": 281940}], "download_size": 196778839, "dataset_size": 341356800}}
|
2023-03-19T19:31:10+00:00
|
6052349e239167a9ca3a10a18ec9a96b24fe9721
|
# Dataset Card for "up-it-ds-sft"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/up-it-ds-sft
|
[
"region:us"
] |
2023-03-19T19:35:36+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 196574167, "num_examples": 317184}, {"name": "validation", "num_bytes": 22058238, "num_examples": 35244}], "download_size": 135217201, "dataset_size": 218632405}}
|
2023-03-19T19:35:45+00:00
|
ad41705481d3109a8f012c767cf96e3e800f6e1f
|
# Dataset Card for "cup-it-ds-classification-pairwise-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ummagumm-a/cup-it-ds-classification-pairwise-test
|
[
"region:us"
] |
2023-03-19T20:12:08+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 65345832, "num_examples": 56016}], "download_size": 38356905, "dataset_size": 65345832}}
|
2023-03-19T22:04:04+00:00
|
5d3eda7aefa50cbf100379c80f3faa11f09bfa6c
|
# Dataset Card for "robotlarge"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
homangab/robotlarge
|
[
"region:us"
] |
2023-03-19T20:48:03+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 70799794.0, "num_examples": 168}], "download_size": 5701250, "dataset_size": 70799794.0}}
|
2023-03-19T20:50:02+00:00
|
53de24d594847f72d2e0a0c0a93582b1aadb0eb4
|
# Dataset Card for "cup-it-ds-classification-pairwise-train-val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ummagumm-a/cup-it-ds-classification-pairwise-train-val
|
[
"region:us"
] |
2023-03-19T21:00:07+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 386359153, "num_examples": 334804}, {"name": "validation", "num_bytes": 20767812, "num_examples": 17624}], "download_size": 233873327, "dataset_size": 407126965}}
|
2023-03-19T22:04:59+00:00
|
fb0889576937f2393606bf3b5b9b28cc95111b85
|
#
The [`tatsu-lab/alpaca` dataset](https://huggingface.co/datasets/tatsu-lab/alpaca) was split into train/test/val with the goal of training text-to-text generation models to generate instruction prompts corresponding to arbitrary text.
To do this, you would use
- `output` as the text2text model **input** column
- `instruction` as the text2text model **target/output** column (an example of this mapping is sketched below)
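The following is a minimal, illustrative sketch of that column mapping for a seq2seq model; the `t5-small` checkpoint and the max lengths are assumptions rather than part of this dataset card:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("pszemraj/fleece2instructions")
tokenizer = AutoTokenizer.from_pretrained("t5-small")  # assumed example checkpoint

def preprocess(batch):
    # `output` (the arbitrary text) is what the model reads ...
    model_inputs = tokenizer(batch["output"], truncation=True, max_length=512)
    # ... and `instruction` is what it learns to generate
    labels = tokenizer(text_target=batch["instruction"], truncation=True, max_length=96)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = ds.map(preprocess, batched=True, remove_columns=["instruction", "output"])
```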
## modifications & filtering
Rows that used the `input` column in the original dataset, as well as rows where the `output` column contained fewer than 8 words, were dropped.
Link to [function used to filter](https://gist.github.com/pszemraj/3633acb0cf3288d49b7bee550e756839) the original dataset after splitting
- The `filter_dataset` function reads datasets, counts tokens in specified columns, filters rows based on a minimum number of tokens, drops specified columns and/or rows with non-NaN values, and saves the modified datasets to a new directory. It returns summary statistics of the modified records.
## dataset info
Output of loading the dataset:
```python
DatasetDict({
    train: Dataset({
        features: ['instruction', 'output'],
        num_rows: 23167
    })
    test: Dataset({
        features: ['instruction', 'output'],
        num_rows: 2822
    })
    validation: Dataset({
        features: ['instruction', 'output'],
        num_rows: 2866
    })
})
```
## token counts in the `output` column
t5

bart-base

---
|
pszemraj/fleece2instructions
|
[
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"source_datasets:tatsu-lab/alpaca",
"language:en",
"license:cc-by-4.0",
"alpaca",
"instruction generation",
"region:us"
] |
2023-03-19T21:52:58+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "source_datasets": "tatsu-lab/alpaca", "task_categories": ["text-generation", "text2text-generation"], "tags": ["alpaca", "instruction generation"]}
|
2023-03-20T03:29:35+00:00
|
4589883f3d09d4ef6361784e03f0ead219836469
|
# Multi30k
This dataset contains the "multi30k" dataset, which is the "task 1" dataset from [here](https://www.statmt.org/wmt16/multimodal-task.html).
Each example consists of an "en" and a "de" feature. "en" is an English sentence, and "de" is the German translation of the English sentence.
### Data Splits
The Multi30k dataset has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 29,000 |
| Validation | 1,014 |
| Test | 1,000 |
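Assuming the dataset can be loaded directly from the Hub under this repo id, a minimal sketch for loading and inspecting it looks like this:
```python
from datasets import load_dataset

dataset = load_dataset("bentrevett/multi30k")

print(dataset)                 # train / validation / test splits
example = dataset["train"][0]
print(example["en"])           # English sentence
print(example["de"])           # German translation
```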
### Citation Information
```
@inproceedings{elliott-EtAl:2016:VL16,
    author    = {{Elliott}, D. and {Frank}, S. and {Sima'an}, K. and {Specia}, L.},
    title     = {Multi30K: Multilingual English-German Image Descriptions},
    booktitle = {Proceedings of the 5th Workshop on Vision and Language},
    year      = {2016},
    pages     = {70--74}
}
```
|
bentrevett/multi30k
|
[
"task_categories:translation",
"size_categories:10K<n<100K",
"language:en",
"language:de",
"region:us"
] |
2023-03-19T22:38:35+00:00
|
{"language": ["en", "de"], "size_categories": ["10K<n<100K"], "task_categories": ["translation"]}
|
2023-03-24T14:50:27+00:00
|
4c4ac13d8f84db8205810959d0e6a2d6b797b659
|
# Dataset Card for "OIG_small_chip2_portuguese_brasil"
This dataset was translated into Brazilian Portuguese from [here](https://huggingface.co/datasets/0-hero/OIG-small-chip2)
The data was translated with the *MarianMT* model and the weights [Helsinki-NLP/opus-mt-en-ROMANCE](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE)
The full details to replicate the translation are here: [translation_notebook](https://github.com/finardi/tutos/blob/master/translate_Laion_OIG.ipynb)
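A quick way to inspect the translated pairs (a sketch that assumes the `user`/`chip2` columns listed in the dataset info):
```python
from datasets import load_dataset

ds = load_dataset("paulofinardi/OIG_small_chip2_portuguese_brasil", split="train")

sample = ds[0]
print(sample["user"])   # user turn (Portuguese)
print(sample["chip2"])  # assistant response (Portuguese)
```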
License: apache-2.0
|
paulofinardi/OIG_small_chip2_portuguese_brasil
|
[
"task_categories:conversational",
"task_categories:text2text-generation",
"language:pt",
"region:us"
] |
2023-03-19T22:45:05+00:00
|
{"language": ["pt"], "task_categories": ["conversational", "text2text-generation"], "dataset_info": {"features": [{"name": "user", "dtype": "string"}, {"name": "chip2", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 210289}]}}
|
2023-03-19T23:16:11+00:00
|
40ac4bdbe9bb3ebe567dce8f33cb2d72ee1765b0
|
Rahmaa/eli5_final
|
[
"license:openrail",
"region:us"
] |
2023-03-19T23:06:03+00:00
|
{"license": "openrail"}
|
2023-03-19T23:54:51+00:00
|
|
fbf0e892cd68180121069c953b5046cc9a4d8f74
|
# Dataset Card for "face_synthetics_spiga_smol"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pcuenq/face_synthetics_spiga_smol
|
[
"region:us"
] |
2023-03-19T23:31:14+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "image_seg", "dtype": "image"}, {"name": "landmarks", "dtype": "string"}, {"name": "spiga", "sequence": {"sequence": "float64"}}, {"name": "spiga_seg", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 32033260.0, "num_examples": 100}], "download_size": 31962985, "dataset_size": 32033260.0}}
|
2023-03-19T23:31:22+00:00
|
fa12d17433908ffba1e9876af2ee8fc28381af94
|
# Dataset Card for "OK-VQA_train_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/OK-VQA_train_embeddings
|
[
"region:us"
] |
2023-03-20T00:02:09+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "id", "dtype": "int64"}, {"name": "vision_embeddings", "sequence": "float32"}], "splits": [{"name": "openai_clip_vit_large_patch14", "num_bytes": 1513678502.0, "num_examples": 9009}], "download_size": 1517323156, "dataset_size": 1513678502.0}}
|
2023-03-20T00:03:21+00:00
|
d9914af18cc0924ee4c5abcf7110a678f2d0a98b
|
# Dataset Card for "fill50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
doudou1206/fill50k
|
[
"region:us"
] |
2023-03-20T00:37:57+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "guide", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 454411979.0, "num_examples": 50000}], "download_size": 316021533, "dataset_size": 454411979.0}}
|
2023-03-20T00:42:05+00:00
|
ac25598fcd13581b15e1333d170fad94bef891d8
|
Shushant/BiomedicalQuestionAnsweringDataset
|
[
"license:bsl-1.0",
"region:us"
] |
2023-03-20T00:42:39+00:00
|
{"license": "bsl-1.0"}
|
2023-03-20T00:44:25+00:00
|
|
a0a36bab05c145ba4f9aa3262ce5bedac408a480
|
# Dataset Card for "OK-VQA_test_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/OK-VQA_test_embeddings
|
[
"region:us"
] |
2023-03-20T01:07:01+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "id", "dtype": "int64"}, {"name": "vision_embeddings", "sequence": "float32"}], "splits": [{"name": "openai_clip_vit_large_patch14", "num_bytes": 848197053.0, "num_examples": 5046}], "download_size": 849997989, "dataset_size": 848197053.0}}
|
2023-03-20T01:07:31+00:00
|
7bcf859b9d441608554523b183a9414acfbb1d4d
|
# Dataset Card for "CS4248-T15-LUN"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
myvision/CS4248-T15-LUN
|
[
"region:us"
] |
2023-03-20T02:10:29+00:00
|
{"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 159234160, "num_examples": 48854}, {"name": "test", "num_bytes": 9048910, "num_examples": 3000}], "download_size": 104858010, "dataset_size": 168283070}}
|
2023-03-20T05:36:56+00:00
|
80bc1896ed99add8b4e25fec9ff74b492b7d2954
|
# Dataset Card for "car_in_photozone"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
BettercallSaulGM/car_in_photozone
|
[
"region:us"
] |
2023-03-20T02:11:13+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 130749280.0, "num_examples": 1000}], "download_size": 130644873, "dataset_size": 130749280.0}}
|
2023-03-20T05:48:49+00:00
|
872849fb163122db777a11ed318873962be7dfb8
|
LangChainDatasets/llm-math
|
[
"license:mit",
"region:us"
] |
2023-03-20T03:46:44+00:00
|
{"license": "mit"}
|
2023-03-20T03:47:19+00:00
|
|
7a9650aaecea8200df7be9edf63dc105d0b907fe
|
# Dataset Card for "bengali-clay-cups-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
keras-dreambooth/bengali-clay-cups-dataset
|
[
"region:us"
] |
2023-03-20T04:18:54+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 100118.0, "num_examples": 5}], "download_size": 100028, "dataset_size": 100118.0}}
|
2023-03-20T04:19:04+00:00
|
2a58adb2b380b1faceafb0aec2a12a1b55e957c8
|
# Dataset Card for "face_synthetics_spiga"
This is a copy of [Microsoft FaceSynthetics dataset](https://github.com/microsoft/FaceSynthetics) with [SPIGA](https://github.com/andresprados/SPIGA) landmark annotations. For a copy of the original FaceSynthetics dataset with no extra annotations, please refer to [pcuenq/face_synthetics](https://huggingface.co/pcuenq/face_synthetics).
Please refer to the original [license](LICENSE.txt), which we replicate in this repo. The SPIGA annotations were created by Hugging Face Inc. and are distributed under the MIT license.
This dataset was prepared using the code below. It iterates through the dataset to perform landmark detection with SPIGA, and then creates visualizations of the detected features. Visualization is performed with Matplotlib, rendering to in-memory buffers.
```Python
import numpy as np
from datasets import load_dataset
from spiga.inference.config import ModelConfig
from spiga.inference.framework import SPIGAFramework
dataset_name = "pcuenq/face_synthetics"
faces = load_dataset(dataset_name)
faces = faces["train"]
# ## Obtain SPIGA features
processor = SPIGAFramework(ModelConfig("300wpublic"))
# We obtain the bbox from the existing landmarks in the dataset.
# We could use `dlib`, but this should be faster.
# Note that the `landmarks` are stored as strings.
def parse_landmarks(landmarks_str):
    landmarks = landmarks_str.strip().split('\n')
    landmarks = [k.split(' ') for k in landmarks]
    landmarks = [(float(x), float(y)) for x, y in landmarks]
    return landmarks
def bbox_from_landmarks(landmarks_str):
    landmarks = parse_landmarks(landmarks_str)
    landmarks_x, landmarks_y = zip(*landmarks)
    x_min, x_max = min(landmarks_x), max(landmarks_x)
    y_min, y_max = min(landmarks_y), max(landmarks_y)
    width = x_max - x_min
    height = y_max - y_min
    # Give it a little room; I think it works anyway
    x_min -= 5
    y_min -= 5
    width += 10
    height += 10
    bbox = (x_min, y_min, width, height)
    return bbox
def spiga_process(example):
    image = example["image"]
    image = np.array(image)
    # BGR
    image = image[:, :, ::-1]
    bbox = bbox_from_landmarks(example["landmarks"])
    features = processor.inference(image, [bbox])
    landmarks = features["landmarks"][0]
    example["spiga"] = landmarks
    return example
# For some reason this map doesn't work with num_proc > 1 :(
# TODO: run inference on GPU
faces = faces.map(spiga_process)
# ## "Segmentation"
# We use bezier paths to draw contours and areas.
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.path import Path
import PIL
def get_patch(landmarks, color='lime', closed=False):
    contour = landmarks
    ops = [Path.MOVETO] + [Path.LINETO]*(len(contour)-1)
    facecolor = (0, 0, 0, 0)  # Transparent fill color, if open
    if closed:
        contour.append(contour[0])
        ops.append(Path.CLOSEPOLY)
        facecolor = color
    path = Path(contour, ops)
    return patches.PathPatch(path, facecolor=facecolor, edgecolor=color, lw=4)
# Draw to a buffer.
def conditioning_from_landmarks(landmarks, size=512):
    # Precisely control output image size
    dpi = 72
    fig, ax = plt.subplots(1, figsize=[size/dpi, size/dpi], tight_layout={'pad': 0})
    fig.set_dpi(dpi)

    black = np.zeros((size, size, 3))
    ax.imshow(black)

    face_patch = get_patch(landmarks[0:17])
    l_eyebrow = get_patch(landmarks[17:22], color='yellow')
    r_eyebrow = get_patch(landmarks[22:27], color='yellow')
    nose_v = get_patch(landmarks[27:31], color='orange')
    nose_h = get_patch(landmarks[31:36], color='orange')
    l_eye = get_patch(landmarks[36:42], color='magenta', closed=True)
    r_eye = get_patch(landmarks[42:48], color='magenta', closed=True)
    outer_lips = get_patch(landmarks[48:60], color='cyan', closed=True)
    inner_lips = get_patch(landmarks[60:68], color='blue', closed=True)

    ax.add_patch(face_patch)
    ax.add_patch(l_eyebrow)
    ax.add_patch(r_eyebrow)
    ax.add_patch(nose_v)
    ax.add_patch(nose_h)
    ax.add_patch(l_eye)
    ax.add_patch(r_eye)
    ax.add_patch(outer_lips)
    ax.add_patch(inner_lips)

    plt.axis('off')

    fig.canvas.draw()
    buffer, (width, height) = fig.canvas.print_to_buffer()
    assert width == height
    assert width == size

    buffer = np.frombuffer(buffer, np.uint8).reshape((height, width, 4))
    buffer = buffer[:, :, 0:3]
    plt.close(fig)
    return PIL.Image.fromarray(buffer)
def spiga_segmentation(example):
    landmarks = example["spiga"]
    example['spiga_seg'] = conditioning_from_landmarks(landmarks)
    return example
faces = faces.map(spiga_segmentation, num_proc=16)
faces.push_to_hub(f"{dataset_name}_spiga")
```
|
pcuenq/face_synthetics_spiga
|
[
"region:us"
] |
2023-03-20T05:32:12+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "image_seg", "dtype": "image"}, {"name": "landmarks", "dtype": "string"}, {"name": "spiga", "sequence": {"sequence": "float64"}}, {"name": "spiga_seg", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 31081737215.0, "num_examples": 100000}], "download_size": 31009656222, "dataset_size": 31081737215.0}}
|
2023-03-20T08:53:26+00:00
|
dee822fcf96efd25e994ba9a59237e4615f47435
|
# Dataset Card for "ko_alpaca_data"
## Dataset Description
- **Repository:** [Beomi/KoAlpaca](https://github.com/Beomi/KoAlpaca)
- **Huggingface:** [beomi/KoAlpaca](https://huggingface.co/beomi/KoAlpaca)
- **Size of downloaded dataset files:** 8.10 MB
- **Size of the generated dataset:** 13.15 MB
### Dataset Summary
Korean translation of [alpaca data](https://huggingface.co/datasets/tatsu-lab/alpaca).
repository: [Beomi/KoAlpaca](https://github.com/Beomi/KoAlpaca)<br>
huggingface: [beomi/KoAlpaca](https://huggingface.co/beomi/KoAlpaca)
1. Translate dataset
Translated 'instruction' and 'input' in the dataset via the DeepL API, except for 'output', which we did not translate because it is the output of OpenAI's `text-davinci-003` model.
2. Generate output data
Then, using the instruction and input, generate output data via the OpenAI ChatGPT API (gpt-3.5-turbo).
Below is the prompt we used to generate the answer.
```python
PROMPT = """\
다양한 작업에 대한 답변을 생성해주세요. 이러한 작업 지침은 ChatGPT 모델에 주어지며, ChatGPT 모델이 지침을 완료하는지 평가합니다.
요구 사항은 다음과 같습니다:
1. 다양성을 극대화하기 위해 각 지시에 대해 동사를 반복하지 않도록 하세요.
2. 지시에 사용되는 언어도 다양해야 합니다. 예를 들어, 질문과 명령형 지시를 결합해야 합니다.
3. 지시 사항의 유형이 다양해야 합니다. 목록에는 개방형 생성, 분류, 편집 등과 같은 다양한 유형의 작업이 포함되어야 합니다.
2. GPT 언어 모델은 지시를 완료할 수 있어야 합니다. 예를 들어 어시스턴트에게 시각적 또는 오디오 출력을 생성하도록 요청하지 마세요. 또 다른 예로, 어시스턴트가 어떤 작업도 수행할 수 없으므로 오후 5시에 깨우거나 미리 알림을 설정하도록 요청하지 마세요.
3. 답변은 한국어로 작성해야 합니다.
4. 답변을 1~2문장으로 작성하세요. 명령문이나 질문도 허용됩니다.
5. 지시 사항에 대한 적절한 입력을 생성해야 합니다. 입력 필드에는 지시에 대한 구체적인 예가 포함되어야 합니다. 실제 데이터를 포함해야 하며 단순한 자리 표시자를 포함해서는 안 됩니다. 입력은 지시 사항을 어렵게 만들 수 있는 상당한 내용을 제공해야 하지만 100단어를 넘지 않는 것이 이상적입니다.
6. 일부 지시사항은 추가 입력이 있고, 일부 지시에는 입력 필드가 비어있습니다. 예를 들어 "세계에서 가장 높은 봉우리는 무엇인가?"라는 일반적인 정보를 묻는 지시의 경우 구체적인 맥락을 제공할 필요가 없어, 입력 필드가 비어있을 수 있습니다.
7. 출력은 명령어와 입력에 대한 적절한 응답이어야 합니다.
아래에 10개의 명령어와 입력(옵션)에 따라 적절한 응답을 생성하세요.
응답은 아래와 같은 형식으로 10가지를 0번 부터 9번 까지, 번호에 따라 해당 번호의 명령어와 입력에 알맞게 작성하세요.
각 응답 사이는 ### 으로 내용을 분리해주세요.
응답0: 첫 번째 응답내용###
응답1: 두 번째 응답내용###
...
응답9: 마지막 응답내용"""
```
### License
CC-BY-NC-4.0
### Data Splits
| | train |
| --------- | -------- |
| # of data | 49620 |
\# Note that the number is not the same as the original data (52,002)
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("Bingsu/ko_alpaca_data", split="train")
>>> ds
Dataset({
    features: ['instruction', 'input', 'output'],
    num_rows: 49620
})
```
```python
>>> ds[0]
{'instruction': '건강을 유지하기 위한 세 가지 팁을 알려주세요.',
'input': '',
'output': '세 가지 팁은 아침식사를 꼭 챙기며, 충분한 수면을 취하고, 적극적으로 운동을 하는 것입니다.'}
```
|
Bingsu/ko_alpaca_data
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ko",
"license:cc-by-nc-4.0",
"region:us"
] |
2023-03-20T05:36:21+00:00
|
{"language": ["ko"], "license": "cc-by-nc-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "ko-alpaca-data", "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13791136, "num_examples": 49620}], "download_size": 8491044, "dataset_size": 13791136}}
|
2023-03-30T22:21:40+00:00
|
bf0e3997f75fe181fdc74961b7b387919df50c7c
|
# Dataset Card for "my-image-captioning-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
SKyu/my-image-captioning-dataset
|
[
"size_categories:1K<n<10K",
"region:us"
] |
2023-03-20T05:45:04+00:00
|
{"size_categories": ["1K<n<10K"], "pretty_name": "jl_pics", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 417257082.9, "num_examples": 3100}], "download_size": 480865927, "dataset_size": 417257082.9}}
|
2023-03-20T06:24:06+00:00
|
aa39d5da2a4cdeb5386ae577dd8b0c00a205d491
|
panes/demo
|
[
"license:bsd",
"region:us"
] |
2023-03-20T06:58:03+00:00
|
{"license": "bsd"}
|
2023-03-20T06:59:25+00:00
|
|
1c241480631422667f23a22ac27745636e599f92
|
SaeedMLK/seq2seq_ccmatrix_ar_en
|
[
"task_categories:translation",
"language:ar",
"language:en",
"region:us"
] |
2023-03-20T07:13:37+00:00
|
{"language": ["ar", "en"], "task_categories": ["translation"]}
|
2023-03-20T07:22:43+00:00
|
|
266c46ff308a0bf86139e205bc343d7d635be161
|
# Dataset Card for "shrutilipi_mr-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
bhatvineet/shrutilipi_mr-small
|
[
"region:us"
] |
2023-03-20T08:00:17+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcriptions", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1894437650.2116015, "num_examples": 7296}, {"name": "test", "num_bytes": 574162857.8593984, "num_examples": 2433}], "download_size": 2301540908, "dataset_size": 2468600508.071}}
|
2023-03-20T08:03:52+00:00
|
928b277a3b0d39c8faa736aa4d33c0c6dd432996
|
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
TUKE-DeutscheTelekom/squad-sk
|
[
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"task_ids:document-retrieval",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:sk",
"license:cc-by-sa-4.0",
"license:cc-by-4.0",
"wikipedia",
"region:us"
] |
2023-03-20T08:32:48+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced", "found"], "language": ["sk"], "license": ["cc-by-sa-4.0", "cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering", "text-retrieval"], "task_ids": ["open-domain-qa", "extractive-qa", "document-retrieval"], "paperswithcode_id": "squad-sk", "pretty_name": "squad-sk", "tags": ["wikipedia"], "train-eval-index": [{"col_mapping": {"answers": {"answer_start": "answer_start", "text": "text"}, "context": "context", "question": "question"}, "config": "squad_v2", "metrics": [{"name": "SQuAD v2", "type": "squad_v2"}], "splits": {"eval_split": "validation", "train_split": "train"}, "task": "question-answering", "task_id": "extractive_question_answering"}]}
|
2023-10-18T11:43:46+00:00
|
9b60e02086240d6945fd0c51cff77a836c5d3c09
|
# Dataset Card for "codeparrot-train-v2-near-dedup-safe"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
iohadrubin/codeparrot-train-v2-near-dedup-safe
|
[
"region:us"
] |
2023-03-20T08:38:40+00:00
|
{"dataset_info": {"features": [{"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "copies", "dtype": "string"}, {"name": "size", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "hash", "dtype": "int64"}, {"name": "line_mean", "dtype": "float64"}, {"name": "line_max", "dtype": "int64"}, {"name": "alpha_frac", "dtype": "float64"}, {"name": "autogenerated", "dtype": "bool"}, {"name": "ratio", "dtype": "float64"}, {"name": "config_test", "dtype": "bool"}, {"name": "has_no_keywords", "dtype": "bool"}, {"name": "few_assignments", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 21008185899, "num_examples": 2660741}], "download_size": 7856191050, "dataset_size": 21008185899}}
|
2023-03-20T10:11:56+00:00
|
fff5fd1cb6a31bfbcd16873834d75f5ab4bc69ba
|
Fork of https://www.kaggle.com/datasets/motono0223/gustavosta-stable-diffusion-prompts-sd2-v2
|
affjljoo3581/gustavosta-stable-diffusion-prompts-sd2-v2
|
[
"region:us"
] |
2023-03-20T09:00:14+00:00
|
{}
|
2023-03-20T09:28:21+00:00
|
6470cbe8e8f2247be6ffa584b4fdc5dd2f8197e9
|
# Dataset Card for Fragment Of Bookcorpus
## Dataset Description
A smaller sample of the bookcorpus dataset, which includes around 100,000 lines of text (compared to the roughly 74.1 million lines of text in the original bookcorpus).
### Dataset Summary
Modified and uploaded to Hugging Face as part of a project, essentially aimed at open-ended conversation data.
This dataset is basically a fragment of the well-known bookcorpus dataset.
It aims to serve as a testing sample for those who may not want to download the entire bookcorpus dataset just to get a small sample of it.
### Languages
The text is written in the English language.
## Dataset Structure
A simple ".txt" file in which each sentence is placed on a new line, for a grand total of 100,000 lines.
### Data Fields
The data was originally modified for training on Masked Language Modeling with BERT.
However, it may be used for a variety of other tasks that require a similar dataset pattern.
### Data Splits
Currently, the dataset is one text file, which is a split of the bigger (original) bookcorpus dataset.
Hence, only the train split (the one text file) is available for download from this dataset.
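As a rough sketch of how the file could be consumed for masked language modeling (the file name below is a placeholder, not the actual name in this repo):
```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# "fragment.txt" is a placeholder -- point it at the actual .txt file from this repo
ds = load_dataset("text", data_files={"train": "fragment.txt"})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # One sentence per line, so each line becomes one training example
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = ds.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic token masking for BERT-style masked language modeling
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
```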
## Dataset Creation
The dataset was created from a part of the bookcorpus dataset and was slightly modified in the way the sentences are organized.
### Source Data
The data comes from the well-known BookCorpus dataset, available on Hugging Face at "https://huggingface.co/datasets/bookcorpus".
### Personal and Sensitive Information
The rights to the data are not owned by me directly; I have simply modified it according to my needs.
### Licensing Information
All rights to the data itself belong to the owners and those who contributed to the dataset on Hugging Face, over at "https://huggingface.co/datasets/bookcorpus".
|
Seraphiive/FragmentOfBOOKCORPUS
|
[
"task_categories:fill-mask",
"size_categories:1M<n<10M",
"language:en",
"region:us"
] |
2023-03-20T09:12:51+00:00
|
{"language": ["en"], "size_categories": ["1M<n<10M"], "task_categories": ["fill-mask"], "pretty_name": "FragmentOfBookCorpus"}
|
2023-03-20T10:09:01+00:00
|
3b55648e475b7a0bebe43fb725c1f96d02b5ff78
|
# Dataset Card for "reklambox-balanced-no-stopwords"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/reklambox-balanced-no-stopwords
|
[
"region:us"
] |
2023-03-20T10:38:40+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_name", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 401120, "num_examples": 1102}, {"name": "test", "num_bytes": 140041, "num_examples": 276}], "download_size": 335528, "dataset_size": 541161}}
|
2023-03-20T10:38:51+00:00
|
4706f0179085cb04b536d180f9ba1ccff6de0182
|
# Dataset Card for "celeba_with_captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
simpletransformers/celeba_with_captions
|
[
"region:us"
] |
2023-03-20T11:26:29+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19563162, "num_examples": 24000}], "download_size": 4847318, "dataset_size": 19563162}}
|
2023-03-20T12:34:33+00:00
|
19a260327b3d3632b69ad4c1f030e19a7ae65c57
|
# Dataset Card for "letter_recognition"
Images in this dataset were generated using the script defined below. The original dataset in CSV format, along with more information about it, is available at [A-Z Handwritten Alphabets in .csv format](https://www.kaggle.com/datasets/sachinpatel21/az-handwritten-alphabets-in-csv-format).
```python
import os
import pandas as pd
import matplotlib.pyplot as plt
CHARACTER_COUNT = 26
data = pd.read_csv('./A_Z Handwritten Data.csv')
mapping = {str(i): chr(i+65) for i in range(26)}
def generate_dataset(folder, end, start=0):
    if not os.path.exists(folder):
        os.makedirs(folder)
        print(f"The folder '{folder}' has been created successfully!")
    else:
        print(f"The folder '{folder}' already exists.")

    for i in range(CHARACTER_COUNT):
        dd = data[data['0'] == i]
        for j in range(start, end):
            ddd = dd.iloc[j]
            x = ddd[1:].values
            x = x.reshape((28, 28))
            plt.axis('off')
            plt.imsave(f'{folder}/{mapping[str(i)]}_{j}.jpg', x, cmap='binary')
generate_dataset('./train', 1000)
generate_dataset('./test', 1100, 1000)
```
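Once generated (or when pulling this dataset straight from the Hub), the images can be inspected as follows (a small sketch assuming the `image`/`label` features declared in the dataset info):
```python
from datasets import load_dataset

ds = load_dataset("pittawat/letter_recognition", split="train")

example = ds[0]
print(example["label"])       # class index: 0 = "A", ..., 25 = "Z"
print(example["image"].size)  # 28x28 letter image as a PIL image
```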
|
pittawat/letter_recognition
|
[
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] |
2023-03-20T11:44:24+00:00
|
{"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["image-classification"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D", "4": "E", "5": "F", "6": "G", "7": "H", "8": "I", "9": "J", "10": "K", "11": "L", "12": "M", "13": "N", "14": "O", "15": "P", "16": "Q", "17": "R", "18": "S", "19": "T", "20": "U", "21": "V", "22": "W", "23": "X", "24": "Y", "25": "Z"}}}}], "splits": [{"name": "train", "num_bytes": 22453522, "num_examples": 26000}, {"name": "test", "num_bytes": 2244964.8, "num_examples": 2600}], "download_size": 8149945, "dataset_size": 24698486.8}}
|
2023-03-21T06:15:35+00:00
|
47078cb55738e8041dae8d87dae6e11d97574468
|
# BERTIN Alpaca Spanish
This dataset is a translation into Spanish of [alpaca_data_cleaned.json](https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_cleaned.json), a cleaned version of the [Alpaca dataset made at Stanford](https://huggingface.co/datasets/tatsu-lab/alpaca).
An [earlier version](https://huggingface.co/datasets/bertin-project/alpaca-spanish/blob/main/nllb/spa_train.json.gz) used [Facebook's NLLB 1.3B model](https://huggingface.co/facebook/nllb-200-1.3B), but the current version uses OpenAI's `gpt-3.5-turbo`, hence this dataset cannot be used to create models that compete in any way against OpenAI.
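For instance, the dataset can be pulled with the `datasets` library and inspected as follows (a minimal sketch based on the columns listed in the dataset info):
```python
from datasets import load_dataset

ds = load_dataset("bertin-project/alpaca-spanish", split="train")

print(ds)                    # instruction / input / output columns
print(ds[0]["instruction"])  # translated instruction
print(ds[0]["output"])       # translated answer
```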
|
bertin-project/alpaca-spanish
|
[
"task_categories:text-generation",
"language:es",
"license:cc-by-4.0",
"instruction-finetuning",
"region:us"
] |
2023-03-20T11:51:06+00:00
|
{"language": ["es"], "license": "cc-by-4.0", "task_categories": ["text-generation"], "pretty_name": "BERTIN Alpaca Spanish", "tags": ["instruction-finetuning"], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21439975, "num_examples": 51942}], "download_size": 13178075, "dataset_size": 21439975}}
|
2023-03-24T11:38:19+00:00
|
aed2537b0fba9a4be9fb90dc9d62e6a0c1db6fee
|
guangyil/yelp_short
|
[
"license:artistic-2.0",
"region:us"
] |
2023-03-20T12:04:55+00:00
|
{"license": "artistic-2.0", "dataset_info": {"features": [{"name": "bert_token", "sequence": "int64"}, {"name": "gpt2_token", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 89488944.91780378, "num_examples": 446811}, {"name": "test", "num_bytes": 89727.08219622188, "num_examples": 448}], "download_size": 21436068, "dataset_size": 89578672.0}}
|
2023-03-20T12:27:43+00:00
|