sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
97df168a7996295938ba74889f947ea05cd7dad0
|
This is a checkpoint of the databricks-dolly-15k dataset.
|
umarzein/databricks-dolly-15k-en
|
[
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] |
2023-05-17T05:28:15+00:00
|
{"language": ["en"], "license": "cc-by-sa-3.0"}
|
2023-05-17T05:30:05+00:00
|
47bf9ab6284965c5ffe41f3d23716b2362074c8d
|
# Dataset Card for "simple_facenet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/simple_facenet
|
[
"region:us"
] |
2023-05-17T05:33:28+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 702629475.6255275, "num_examples": 17334}, {"name": "test", "num_bytes": 8106951.374472453, "num_examples": 200}], "download_size": 710565269, "dataset_size": 710736427.0}}
|
2023-05-18T03:16:30+00:00
|
ad6e08c4e5805257a4b46c8dde7c472fa0d888ec
|
MoCe/optimized-sd-config
|
[
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2023-05-17T05:36:57+00:00
|
{"license": "cc-by-nc-sa-4.0"}
|
2023-05-17T05:38:44+00:00
|
|
7542436257173e638085955b5605b910f71f29e7
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
shi3z/alpaca_cleaned_ja_json
|
[
"task_categories:text-generation",
"language:ja",
"license:cc-by-4.0",
"region:us"
] |
2023-05-17T05:37:34+00:00
|
{"language": ["ja"], "license": "cc-by-4.0", "task_categories": ["text-generation"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "alpaca_cleaned_ja.json"}, {"split": "test", "path": "alpaca_cleaned_ja.json"}]}]}
|
2023-08-25T22:18:42+00:00
|
54209fafb0425a1f342e36a96bbfc5ca46191250
|
# Dataset Card for "gaps_spa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
bjoernp/gaps_spa
|
[
"region:us"
] |
2023-05-17T05:56:05+00:00
|
{"dataset_info": {"features": [{"name": "sentences", "dtype": "string"}, {"name": "sentences_sp", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 59056357510, "num_examples": 231500660}], "download_size": 34172826813, "dataset_size": 59056357510}}
|
2023-05-17T06:19:24+00:00
|
f473d3c5753151dd5afc80dce336534ec8a8e541
|
JisuofthePark/UNEEK_ESL
|
[
"task_categories:feature-extraction",
"language:en",
"region:us"
] |
2023-05-17T06:35:59+00:00
|
{"language": ["en"], "task_categories": ["feature-extraction"]}
|
2023-05-23T16:46:52+00:00
|
|
acbae218c95c8abf21d12ed162daf1670f1d28ae
|
LudditeDrawslave/cookies
|
[
"license:unknown",
"region:us"
] |
2023-05-17T06:43:45+00:00
|
{"license": "unknown"}
|
2023-05-17T06:44:19+00:00
|
|
8c06b81fdf47ac0aea66106e38cb0b456e65458f
|
# Dataset Card for "deepfashion_with_captions_blowout_stacked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lirus18/deepfashion_with_captions_blowout_stacked
|
[
"region:us"
] |
2023-05-17T07:28:06+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "openpose", "dtype": "image"}, {"name": "cloth", "dtype": "image"}, {"name": "cloth_pose", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7752691006.179, "num_examples": 13679}], "download_size": 7538168576, "dataset_size": 7752691006.179}}
|
2023-05-17T07:31:15+00:00
|
59c22aeae7c6d75d540e3fddeb75441427e2c20c
|
archxin111/111
|
[
"license:openrail",
"region:us"
] |
2023-05-17T07:50:23+00:00
|
{"license": "openrail"}
|
2023-05-17T08:02:07+00:00
|
|
fd67b383eadb0441ab6c1d88f2a5197afff78dec
|
Alasty/test_image_cc_sbu_align
|
[
"license:wtfpl",
"region:us"
] |
2023-05-17T08:11:28+00:00
|
{"license": "wtfpl"}
|
2023-05-18T06:47:51+00:00
|
|
f794e565532813aad946a5f1ecdd8c166fcf2f60
|
# Dataset Card for "sidewalk-imagery16"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
G12345/sidewalk-imagery16
|
[
"region:us"
] |
2023-05-17T08:12:06+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3138225.0, "num_examples": 10}], "download_size": 3139736, "dataset_size": 3138225.0}}
|
2023-05-17T08:12:07+00:00
|
1fb6f089f79d4df5fb2b2f8721ff1d95a9c58ed7
|
# Dataset Card for "for_finetune_mpt7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Jumtra/for_finetune_mpt7b
|
[
"region:us"
] |
2023-05-17T08:17:51+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 93324064.17727587, "num_examples": 148853}, {"name": "test", "num_bytes": 4912188.822724139, "num_examples": 7835}], "download_size": 47575022, "dataset_size": 98236253.0}}
|
2023-05-17T08:17:58+00:00
|
0c98940ee8492f7519bddf0c41de5d5eba4856e2
|
phamson02/vietnamese-poetry-corpus
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:vi",
"license:cc-by-4.0",
"art",
"region:us"
] |
2023-05-17T08:39:50+00:00
|
{"language": ["vi"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "vietnamese-poetry-corpus", "tags": ["art"]}
|
2023-06-28T06:41:18+00:00
|
|
fd9b252391251cda1dbfccd0f1234a5c7c37ad0a
|
# Dataset Card for "sidewalk-imagery17"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
G12345/sidewalk-imagery17
|
[
"region:us"
] |
2023-05-17T08:43:50+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3138225.0, "num_examples": 10}], "download_size": 0, "dataset_size": 3138225.0}}
|
2023-05-17T08:43:56+00:00
|
efaac0deefb11af4bf3f7f245bea00a0dbb3b94b
|
This dataset is licensed under CC BY-SA 4.0.
Last update: 2023-05-17
This dataset was created by merging the following data:
databricks-dolly-15k-ja (CC BY 3.0)
https://github.com/kunishou/databricks-dolly-15k-ja
oasst1-ja-89k Repository (Apache 2.0)
https://github.com/kunishou/oasst1-89k-ja
JGLUE-JSQuAD (CC BY 4.0)
https://github.com/yahoojapan/JGLUE
|
Jumtra/dolly_oast_jglue_ja
|
[
"license:cc-by-sa-4.0",
"region:us"
] |
2023-05-17T08:51:27+00:00
|
{"license": "cc-by-sa-4.0"}
|
2023-05-19T02:45:15+00:00
|
8365b1ad05287c80fe81d7cd35f5dc3a099156a0
|
# Dataset Card for "achraf-ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AchrafLou/achraf-ds
|
[
"region:us"
] |
2023-05-17T08:55:32+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 14823600.03, "num_examples": 3289}], "download_size": 15234205, "dataset_size": 14823600.03}}
|
2023-05-17T08:55:41+00:00
|
93be38014cca5314163df91c4e990c95e4537831
|
# Dataset Card for "flores200_devtest_mt5-600m-flores200-baseline"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hlillemark/flores200_devtest_mt5-600m-flores200-baseline
|
[
"region:us"
] |
2023-05-17T08:58:13+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "source_lang", "dtype": "string"}, {"name": "target_lang", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "chrf_unreduced", "dtype": "string"}], "splits": [{"name": "devtest", "num_bytes": 734237740, "num_examples": 1000000}], "download_size": 514219403, "dataset_size": 734237740}}
|
2023-05-17T08:59:00+00:00
|
9ddaf0b22e34ac11b3a87ba909bf53ce4bfc1538
|
# Dataset Card for "image_captioned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AchrafLou/image_captioned
|
[
"region:us"
] |
2023-05-17T09:00:19+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 35535.0, "num_examples": 6}], "download_size": 32692, "dataset_size": 35535.0}}
|
2023-05-17T09:00:21+00:00
|
a451eadbb53f0ead419ba8dc00f05e5ec60b7273
|
# AutoTrain Dataset for project: pr_final_covid-19
## Dataset Description
This dataset has been automatically processed by AutoTrain for project pr_final_covid-19.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<299x299 L PIL image>",
"target": 0
},
{
"image": "<299x299 L PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Covid', 'Covid_test', 'Lung_Opacity', 'Lung_Opacity_test', 'Normal', 'Normal_test'], id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows (a loading sketch appears after the table):
| Split name | Num samples |
| ------------ | ------------------- |
| train | 399 |
| valid | 99 |
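As a hedged illustration, the processed dataset could be loaded and inspected with the `datasets` library roughly as follows (the repository id is taken from this card; the access pattern is generic usage, not AutoTrain-specific code):
```python
from datasets import load_dataset

# Load the AutoTrain-processed splits (train and valid).
dataset = load_dataset("Flooki10/autotrain-data-pr_final_covid-19")

sample = dataset["train"][0]
print(sample["image"].size)                # PIL image, e.g. (299, 299)
print(dataset["train"].features["target"]) # ClassLabel with the six names above
```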
|
Flooki10/autotrain-data-pr_final_covid-19
|
[
"task_categories:image-classification",
"region:us"
] |
2023-05-17T09:06:07+00:00
|
{"task_categories": ["image-classification"]}
|
2023-05-17T09:08:22+00:00
|
c523e9a0bdb7119da151cfa0ab67838deeff67b6
|
# Dataset Card for "hebrew-words-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sivan22/hebrew-words-dataset
|
[
"region:us"
] |
2023-05-17T09:34:59+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2853411.0, "num_examples": 312}], "download_size": 2862168, "dataset_size": 2853411.0}}
|
2023-05-21T02:55:44+00:00
|
7fe14741f812f7d16eff05e790693ef3c3bd3f52
|
# zh-tw-llm-dev-sample-ta8k-d40d11-only_embeddings-tr__alp-cd978d-c2048
This dataset is a part of the `zh-tw-llm-dev` project.
* Tokenizer: `zh-tw-llm-dev-tokenizer-a8k-d40d11`
* Built with: `translations`, `wikipedia`, `alpaca`
* Rows: `400`
* Max length: `2048`
* Full config:
```json
{"build_with": ["translations", "wikipedia", "alpaca"], "preview_length": 256, "translations_settings": {"source_dataset": "zetavg/coct-en-zh-tw-translations-twp-300k", "lang_1_key": "en", "lang_2_key": "ch", "templates": ["English: {lang_1}\nChinese: {lang_2}", "Chinese: {lang_2}\nEnglish: {lang_1}"], "rows_limit": 100}, "wikipedia_settings": {"source_dataset": "zetavg/zh-tw-wikipedia", "exclude": [{"match": "小行星", "in": "markdown", "in_range": [0, 40]}, {"match": "是中華人民共和國", "in": "markdown", "in_range": [0, 80]}], "rows_limit": 100}, "alpaca_settings": {"source_dataset": "zetavg/traditional-chinese-alpaca-en-align", "template": "short", "train_on_inputs": false, "rows_limit": 100}}
```
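As a sanity check, the stated row count and maximum length could be verified with a short script like the one below (generic `datasets` usage; the repository id comes from this card and the limits from the config above, so treat this as a sketch rather than project tooling):
```python
from datasets import load_dataset

ds = load_dataset(
    "zh-tw-llm-dv-dv/zh-tw-llm-dev-sample-ta8k-d40d11-only_embeddings-tr__alp-cd978d-c2048",
    split="train",
)
assert ds.num_rows == 400                                # "Rows: 400"
assert all(len(row["input_ids"]) <= 2048 for row in ds)  # "Max length: 2048"
print(ds[0]["preview"][:100])                            # human-readable preview column
```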
|
zh-tw-llm-dv-dv/zh-tw-llm-dev-sample-ta8k-d40d11-only_embeddings-tr__alp-cd978d-c2048
|
[
"region:us"
] |
2023-05-17T09:44:03+00:00
|
{"dataset_info": {"dataset_size": 1430500.0, "download_size": 483276, "features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"dtype": "string", "name": "preview"}], "splits": [{"name": "train", "num_bytes": 1430500.0, "num_examples": 400}]}}
|
2023-05-17T09:44:12+00:00
|
85d7a01d5fbded78bcaaee143cf7d8e9daf6c1cd
|
# zh-tw-llm-dev-sample-ta8k-d40d11-only_embeddings-tr__alp-a1a0fd-c2048
This dataset is a part of the `zh-tw-llm-dev` project.
* Tokenizer: `zh-tw-llm-dev-tokenizer-a8k-d40d11`
* Built with: `translations`, `wikipedia`, `alpaca`
* Rows: `400`
* Max length: `2048`
* Full config:
```json
{"build_with": ["translations", "wikipedia", "alpaca"], "preview_length": 256, "translations_settings": {"source_dataset": "zetavg/coct-en-zh-tw-translations-twp-300k", "lang_1_key": "en", "lang_2_key": "ch", "templates": ["English: {lang_1}\nChinese: {lang_2}", "Chinese: {lang_2}\nEnglish: {lang_1}"], "rows_limit": 100}, "wikipedia_settings": {"source_dataset": "zetavg/zh-tw-wikipedia", "exclude": [{"content_length_longer_than": 512}, {"match": "小行星", "in": "markdown", "in_range": [0, 40]}, {"match": "是中華人民共和國", "in": "markdown", "in_range": [0, 80]}], "rows_limit": 100}, "alpaca_settings": {"source_dataset": "zetavg/traditional-chinese-alpaca-en-align", "template": "short", "train_on_inputs": false, "rows_limit": 100}}
```
|
zh-tw-llm-dv-dv/zh-tw-llm-dev-sample-ta8k-d40d11-only_embeddings-tr__alp-a1a0fd-c2048
|
[
"region:us"
] |
2023-05-17T09:52:18+00:00
|
{"dataset_info": {"dataset_size": 1917406.0, "download_size": 623887, "features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"dtype": "string", "name": "preview"}], "splits": [{"name": "train", "num_bytes": 1917406.0, "num_examples": 400}]}}
|
2023-05-17T09:52:29+00:00
|
a2875ff4a98879fd7e932d6df582a368a2699178
|
# Dataset Card for "image_captionedd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AchrafLou/image_captionedd
|
[
"region:us"
] |
2023-05-17T10:00:43+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 127098.0, "num_examples": 21}], "download_size": 109225, "dataset_size": 127098.0}}
|
2023-05-17T10:00:47+00:00
|
01fdb5173c9452fed1bf09bf8b0cd418bd5e6e99
|
# Dataset Card for "image_captioneddd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AchrafLou/image_captioneddd
|
[
"region:us"
] |
2023-05-17T10:04:40+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 119741.0, "num_examples": 20}], "download_size": 110310, "dataset_size": 119741.0}}
|
2023-05-17T10:04:43+00:00
|
8124ccbea4e0084f8ffe25630d4ff026c0e8639f
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/tjunlp-lab/TGEA
- **Paper:** Ge, H., Zhao, X., Liu, C., Zeng, Y., Liu, Q., & Xiong, D. (2022). TGEA 2.0: A Large-Scale Diagnostically Annotated Dataset with Benchmark Tasks for Text Generation of Pretrained Language Models. Advances in Neural Information Processing Systems, 35, 31612-31626.
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
In order to diagnostically analyze and improve the capability of pretrained language models (PLMs) in text generation, we propose TGEA 2.0, to date the largest dataset built on machine-authored texts by PLMs with fine-grained semantic annotations on a wide variety of pathological generation errors. We collect 170K nominal, phrasal and sentential prompts from 6M natural sentences in 3 domains. These prompts are fed into 4 generative PLMs with their best decoding strategy to generate paragraphs. 195,629 sentences are extracted from these generated paragraphs for manual annotation, where 36K erroneous sentences are detected, 42K erroneous spans are located and categorized into an error type defined in a two-level error taxonomy. We define a Minimal Set of Error-related Words (MiSEW) for each erroneous span, which not only provides error-associated words but also rationalizes the reasoning behind the error. Quality control with a pre-annotation and feedback loop is performed before and during the entire annotation process. With the diagnostically annotated dataset, we propose 5 diagnosis benchmark tasks (i.e., erroneous text detection, MiSEW extraction, erroneous span location and correction together with error type classification) and 2 pathology mitigation benchmark tasks (pairwise comparison and word prediction). Experiment results on these benchmark tasks demonstrate that TGEA 2.0 is a challenging dataset that could facilitate further research on automatic diagnosis and pathology mitigation over machine texts.
### Languages
Chinese
### Cite
If you use the source code here in your work, please cite the corresponding paper. The BibTeX entry is listed below:
```
@inproceedings{DBLP:conf/nips/GeZLZ0X22,
author = {Huibin Ge and
Xiaohu Zhao and
Chuang Liu and
Yulong Zeng and
Qun Liu and
Deyi Xiong},
title = {{TGEA} 2.0: {A} Large-Scale Diagnostically Annotated Dataset with
Benchmark Tasks for Text Generation of Pretrained Language Models},
booktitle = {NeurIPS},
year = {2022},
url = {http://papers.nips.cc/paper\_files/paper/2022/hash/cd556f38dba3a6c367c42fa85fc0801c-Abstract-Datasets\_and\_Benchmarks.html},
timestamp = {Thu, 11 May 2023 17:08:22 +0200},
biburl = {https://dblp.org/rec/conf/nips/GeZLZ0X22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Data Splits
| Train | Dev | Test |
| ---- | ---- | ---- |
| 156,502 | 19,563 |19,564 |
|
jerma66/TGEA2.0
|
[
"language:sc",
"language:ch",
"language:zh",
"license:cc-by-4.0",
"region:us"
] |
2023-05-17T10:04:59+00:00
|
{"language": ["sc", "ch", "zh"], "license": "cc-by-4.0"}
|
2023-05-17T11:16:40+00:00
|
01067b176059af39ee388c0f6106e6fd7eaf1e19
|
# Dataset Card for "ml-arxiv-papers"
This is a dataset containing ML ArXiv papers. The dataset is a version of the original one from [CShorten](https://huggingface.co/datasets/CShorten/ML-ArXiv-Papers), which is a part of the ArXiv papers dataset from [Kaggle](https://www.kaggle.com/datasets/Cornell-University/arxiv).
Three steps are applied to process the source data (a sketch of these steps appears below):
1. removal of unneeded columns;
2. train-test split;
3. removal of '\n' characters and trimming of spaces on both sides of the text.
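A minimal sketch of these three steps, assuming the source dataset exposes `title` and `abstract` alongside columns to drop (the split ratio and seed are illustrative assumptions, not the exact script used):
```python
from datasets import load_dataset

src = load_dataset("CShorten/ML-ArXiv-Papers", split="train")

# 1. Remove every column except title and abstract.
src = src.remove_columns(
    [c for c in src.column_names if c not in ("title", "abstract")]
)

# 2. Train-test split (ratio assumed from the reported split sizes).
splits = src.train_test_split(test_size=0.1, seed=42)

# 3. Replace '\n' with spaces and trim whitespace on both sides.
def clean(row):
    return {k: " ".join(v.split()).strip() for k, v in row.items()}

splits = splits.map(clean)
```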
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aalksii/ml-arxiv-papers
|
[
"task_categories:summarization",
"task_categories:text2text-generation",
"language:en",
"arxiv",
"ML",
"region:us"
] |
2023-05-17T10:13:50+00:00
|
{"language": ["en"], "task_categories": ["summarization", "text2text-generation"], "pretty_name": "ML ArXiv Papers", "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 130808836.19633989, "num_examples": 105832}, {"name": "test", "num_bytes": 14535413.803660113, "num_examples": 11760}], "download_size": 81252051, "dataset_size": 145344250}, "tags": ["arxiv", "ML"]}
|
2023-05-19T10:47:18+00:00
|
c0c04cb9e597dfbf59ce7af14646f61480994f1a
|
aignosi/langchaing-docs-chatgpt-plugin
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-17T10:26:27+00:00
|
{"license": "apache-2.0"}
|
2023-05-18T17:34:15+00:00
|
|
b8f467556ecdf5ba21724a456b8e2255546a8155
|
# zh-tw-llm-dev-sample-ta8k-d40d11-only_embeddings-tr_wiki_sg_alp-f36645-c2048
This dataset is a part of the `zh-tw-llm-dev` project.
* Tokenizer: `zh-tw-llm-dev-tokenizer-a8k-d40d11`
* Built with: `translations`, `wikipedia`, `sharegpt`, `alpaca`
* Rows: `500`
* Max length: `2048`
* Full config:
```json
{"build_with": ["translations", "wikipedia", "sharegpt", "alpaca"], "preview_length": 256, "translations_settings": {"source_dataset": "zetavg/coct-en-zh-tw-translations-twp-300k", "lang_1_key": "en", "lang_2_key": "ch", "templates": ["English: {lang_1}\nChinese: {lang_2}", "Chinese: {lang_2}\nEnglish: {lang_1}"], "rows_limit": 100}, "wikipedia_settings": {"source_dataset": "zetavg/zh-tw-wikipedia", "exclude": [{"content_length_longer_than": 512}, {"match": "小行星", "in": "markdown", "in_range": [0, 40]}, {"match": "是中華人民共和國", "in": "markdown", "in_range": [0, 80]}], "rows_limit": 100}, "sharegpt_settings": {"source_dataset": "zetavg/ShareGPT-Processed", "train_on_inputs": false, "languages": [{"en": 100}, "zh_Hant"], "rows_limit": 100}, "alpaca_settings": {"source_dataset": "zetavg/traditional-chinese-alpaca-en-align", "template": "short", "train_on_inputs": false, "rows_limit": 100}}
```
|
zh-tw-llm-dv-dv/zh-tw-llm-dev-sample-ta8k-d40d11-only_embeddings-tr_wiki_sg_alp-f36645-c2048
|
[
"region:us"
] |
2023-05-17T11:44:52+00:00
|
{"dataset_info": {"dataset_size": 3426981.0, "download_size": 1117606, "features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"dtype": "string", "name": "preview"}], "splits": [{"name": "train", "num_bytes": 3426981.0, "num_examples": 500}]}}
|
2023-05-17T12:46:48+00:00
|
62839fb656d0e10e59d26a7c015056960ca5cfb1
|
Neekey/test_datase
|
[
"license:mit",
"region:us"
] |
2023-05-17T12:03:34+00:00
|
{"license": "mit"}
|
2023-05-17T12:03:34+00:00
|
|
73bd9cd121d2ee335551b379a8ead6f27bab10cb
|
# Dataset Card for "10a6ef7d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/10a6ef7d
|
[
"region:us"
] |
2023-05-17T12:10:57+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1339, "dataset_size": 186}}
|
2023-05-17T12:11:02+00:00
|
a853c599688b1e99356feb2ced084757ea1d77b7
|
- subset from https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K
- train: 21000
- val seen: 3000
- val unseen: 2100
- test: 6000
|
LinZhao/LLaVA-Instruct-21K-COCO-SubSet
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-05-17T12:13:18+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"]}
|
2023-05-17T12:46:53+00:00
|
2b68db2853b3eebfe8ece8b805a29f5c07ae53f4
|
# Dataset Card for "0f9134d7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/0f9134d7
|
[
"region:us"
] |
2023-05-17T12:41:54+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 184, "num_examples": 10}], "download_size": 1343, "dataset_size": 184}}
|
2023-05-17T12:41:56+00:00
|
af59ef0610c7c93cccf5a5c4bd53c329c4d8380d
|
## Privaseer Dataset Demo
Hugging Face version of the demo [Privaseer](https://privaseer.ist.psu.edu/) dataset.
<pre>
@inproceedings{srinath-etal-2021-privacy,
title = "Privacy at Scale: Introducing the {P}riva{S}eer Corpus of Web Privacy Policies",
author = "Srinath, Mukund and
Wilson, Shomir and
Giles, C Lee",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.532",
doi = "10.18653/v1/2021.acl-long.532",
pages = "6829--6839",
abstract = "Organisations disclose their privacy practices by posting privacy policies on their websites. Even though internet users often care about their digital privacy, they usually do not read privacy policies, since understanding them requires a significant investment of time and effort. Natural language processing has been used to create experimental tools to interpret privacy policies, but there has been a lack of large privacy policy corpora to facilitate the creation of large-scale semi-supervised and unsupervised models to interpret and simplify privacy policies. Thus, we present the PrivaSeer Corpus of 1,005,380 English language website privacy policies collected from the web. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies, and it surpasses the aggregate of unique websites represented in all other publicly available privacy policy corpora combined. We describe a corpus creation pipeline with stages that include a web crawler, language detection, document classification, duplicate and near-duplicate removal, and content extraction. We employ an unsupervised topic modelling approach to investigate the contents of policy documents in the corpus and discuss the distribution of topics in privacy policies at web scale. We further investigate the relationship between privacy policy domain PageRanks and text features of the privacy policies. Finally, we use the corpus to pretrain PrivBERT, a transformer-based privacy policy language model, and obtain state of the art results on the data practice classification and question answering tasks.",}
</pre>
|
alzoubi36/privaseer_demo
|
[
"language:en",
"license:gpl-3.0",
"region:us"
] |
2023-05-17T12:43:43+00:00
|
{"language": "en", "license": "gpl-3.0", "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "hash", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 38674924, "num_examples": 4000}], "download_size": 18262815, "dataset_size": 38674924}}
|
2024-02-10T07:38:21+00:00
|
88a3c224b12bb66e627d69a59eb5b7d896079f89
|
# Dataset Card for "safetensors-dependents"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
open-source-metrics/safetensors-dependents
|
[
"region:us"
] |
2023-05-17T12:46:09+00:00
|
{"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "stars", "dtype": "int64"}, {"name": "forks", "dtype": "int64"}], "splits": [{"name": "package", "num_bytes": 1483, "num_examples": 36}, {"name": "repository", "num_bytes": 46161, "num_examples": 1021}], "download_size": 29567, "dataset_size": 47644}}
|
2024-02-17T03:11:37+00:00
|
2e708ef67b4e497ebdb1392b06fd8c9de85497bd
|
|
sno0owing/kitti_seg
|
[
"region:us"
] |
2023-05-17T12:50:41+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 327499999.0, "num_examples": 1000}], "download_size": 327439822, "dataset_size": 327499999.0}}
|
2023-06-13T07:22:08+00:00
|
3e7f39ebf36efb8e69bf236623138cb7567acb60
|
# Dataset Card for "deberta-v3-base-injection-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
deepset/prompt-injections
|
[
"region:us"
] |
2023-05-17T12:55:19+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 71720, "num_examples": 546}, {"name": "test", "num_bytes": 15981, "num_examples": 116}], "download_size": 51215, "dataset_size": 87701, "license": "cc-by-4.0"}}
|
2023-07-31T14:04:06+00:00
|
f9b0faa27d52672d36a186ab543a449d79615a10
|
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tanupriyasingh1234/celeb-identities
|
[
"region:us"
] |
2023-05-17T13:03:33+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Aishwarya_Rai", "1": "Constance_Wu", "2": "Emily_Blunt", "3": "Lupita_Nyong", "4": "Tom_Cruise", "5": "Tom_Hiddleston"}}}}], "splits": [{"name": "train", "num_bytes": 2162392.0, "num_examples": 18}], "download_size": 2161165, "dataset_size": 2162392.0}}
|
2023-05-17T13:03:35+00:00
|
ee38f3a5c87409c6cc9709b8920664fd21f62a3f
|
`yangwang825/reuters-21578` is an 8-class subset of the Reuters 21578 news dataset.
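A minimal loading sketch (the repository id comes from this card and the class names from its metadata; the split name is an assumption):
```python
from datasets import load_dataset

ds = load_dataset("yangwang825/reuters-21578", split="train")  # split name assumed
print(ds.features["label"].names)
# ['acq', 'crude', 'earn', 'grain', 'interest', 'money-fx', 'ship', 'trade']
```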
|
yangwang825/reuters-21578
|
[
"task_categories:text-classification",
"language:en",
"region:us"
] |
2023-05-17T13:25:37+00:00
|
{"language": ["en"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "acq", "1": "crude", "2": "earn", "3": "grain", "4": "interest", "5": "money-fx", "6": "ship", "7": "trade"}}}}]}}
|
2023-05-19T01:04:58+00:00
|
1637d6b1497ad2ac87ff4fc1854f89a0d4327e79
|
TMZN/quantangshi
|
[
"license:gpl-2.0",
"region:us"
] |
2023-05-17T13:57:21+00:00
|
{"license": "gpl-2.0"}
|
2023-05-17T13:57:22+00:00
|
|
4ec77ed8a93fa8122a4d7476df8c3dde21a33061
|
# Dataset Card for Tapir-Cleaned
This is a revised version of the DAISLab dataset of IFTTT rules, which has been thoroughly cleaned, scored, and adjusted for the purpose of instruction-tuning.
## Tapir Dataset Summary
Tapir is a subset of the larger DAISLab dataset, which comprises 242,480 recipes extracted from the IFTTT platform.
After a thorough cleaning process that involved the removal of redundant and inconsistent recipes, the refined dataset was condensed to include 32,403 high-quality recipes.
This curated set of instruction data is particularly useful for conducting instruction-tuning exercises for language models,
allowing them to more accurately follow instructions and achieve superior performance.
The latest version of Tapir includes a correlation score that helps to identify the most appropriate description-rule pairs for instruction tuning.
Description-rule pairs with a score greater than 0.75 are deemed good enough and are prioritized for further analysis and tuning.
### Supported Tasks and Leaderboards
The Tapir dataset is designed for instruction-tuning pretrained language models.
### Languages
The data in Tapir are mainly in English (BCP-47 en).
## Dataset Structure
### Data Instances
```json
{
"instruction":"From the description of a rule: identify the 'trigger', identify the 'action', write a IF 'trigger' THEN 'action' rule.",
"input":"If it's raining outside, you'll want some nice warm colors inside!",
"output":"IF Weather Underground Current condition changes to THEN LIFX Change color of lights",
"text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nFrom the description of a rule: identify the 'trigger', identify the 'action', write a IF 'trigger' THEN 'action' rule.\n\n### Input:\nIf it's raining outside, you'll want some nice warm colors inside!\n\n### Response:\nIF Weather Underground Current condition changes to THEN LIFX Change color of lights",
}
```
### Data Fields
The data fields are as follows:
* `instruction`: describes the task the model should perform.
* `input`: context or input for the task. Each of the 32k inputs is unique.
* `output`: the answer taken from the original Tapir Dataset formatted as an IFTTT recipe.
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors of Alpaca for fine-tuning their models (a sketch of this assembly appears below).
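As an illustration of how the `text` field is assembled, a minimal sketch of the Alpaca-style template applied to the fields above (the template string is reconstructed from the data instance shown earlier, so treat it as an approximation):
```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def build_text(example: dict) -> dict:
    # Assemble the `text` field from instruction, input and output.
    return {"text": ALPACA_TEMPLATE.format(**example)}
```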
### Data Splits
| | train |
|---------------|------:|
| tapir | 32403 |
### Licensing Information
The dataset is available under the [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode) license.
### Citation Information
```
@misc{tapir,
author = {Mattia Limone and Gaetano Cimino and Annunziata Elefante},
title = {TAPIR: Trigger Action Platform for Information Retrieval},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/MattiaLimone/ifttt_recommendation_system}},
}
```
|
MattiaL/tapir-cleaned-top90
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"region:us"
] |
2023-05-17T13:59:53+00:00
|
{"language": ["en"], "license": "cc-by-nc-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "Tapir-Cleaned", "tags": ["instruction-finetuning"]}
|
2023-05-17T14:07:30+00:00
|
6404494342dd0c66a4fa0e6fe1e1f5d1cf1579b6
|
# Dataset Card for "a01758e5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/a01758e5
|
[
"region:us"
] |
2023-05-17T14:37:52+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 180, "num_examples": 10}], "download_size": 1331, "dataset_size": 180}}
|
2023-05-17T14:37:55+00:00
|
bc82319d39fae328b3fa65ebbdc008810f1bc416
|
## Cashew Disease Identification with Artificial Intelligence (CADI-AI) Dataset
This repository contains a comprehensive dataset of cashew images captured by drones, accompanied by meticulously annotated labels.
Each high-resolution image in the dataset has a resolution of 1600x1300 pixels, providing fine details for analysis and model training.
To facilitate efficient object detection, each image is paired with a corresponding text file in YOLO format.
The YOLO format file contains annotations, including class labels and bounding box coordinates (a parsing sketch appears after the folder structure below).
### Dataset Labels
```
['abiotic', 'insect', 'disease']
```
### Number of Images
```json
{'train': 3788, 'valid': 710, 'test': 238}
```
### Number of Instances Annotated
```json
{'insect':1618, 'abiotic':13960, 'disease':7032}
```
### Folder structure after unzipping respective folders
```markdown
Data/
└── train/
├── images
├── labels
└── val/
├── images
├── labels
└── test/
├── images
├── labels
```
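A minimal sketch of parsing one YOLO-format label file from this layout (the class-id order is assumed to match the label list above; paths are illustrative):
```python
from pathlib import Path

CLASSES = ["abiotic", "insect", "disease"]  # assumed to match class ids 0-2

def read_yolo_labels(label_path: Path):
    """Parse one YOLO label file: class id plus normalized box coordinates."""
    boxes = []
    for line in label_path.read_text().splitlines():
        cls, x_center, y_center, width, height = line.split()
        boxes.append({
            "label": CLASSES[int(cls)],
            "box": tuple(float(v) for v in (x_center, y_center, width, height)),
        })
    return boxes

for label_file in sorted(Path("Data/train/labels").glob("*.txt")):
    print(label_file.name, read_yolo_labels(label_file))
```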
### Dataset Information
The dataset was created by a team of data scientists from the KaraAgro AI Foundation,
with support from agricultural scientists and officers.
The creation of this dataset was made possible through funding from the
Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) through its projects
[Market-Oriented Value Chains for Jobs & Growth in the ECOWAS Region (MOVE)](https://www.giz.de/en/worldwide/108524.html) and
[FAIR Forward - Artificial Intelligence for All](https://www.bmz-digital.global/en/overview-of-initiatives/fair-forward/), which GIZ implements on
behalf of the German Federal Ministry for Economic Cooperation and Development (BMZ).
For detailed information regarding the dataset, we invite you to explore the accompanying datasheet available [here](https://drive.google.com/file/d/1viv-PtZC_j9S_K1mPl4R1lFRKxoFlR_M/view?usp=sharing).
This comprehensive resource offers a deeper understanding of the dataset's composition, variables, data collection methodologies, and other relevant details.
|
KaraAgroAI/CADI-AI
|
[
"task_categories:object-detection",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-sa-4.0",
"object detection",
"vision",
"region:us"
] |
2023-05-17T14:38:09+00:00
|
{"language": ["en"], "license": "cc-by-sa-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["object-detection"], "tags": ["object detection", "vision"], "extra_gated_heading": "Acknowledge license to accept the repository", "extra_gated_button_content": "Acknowledge license", "extra_gated_fields": {"I agree to attribute the creator of this repository": "checkbox"}}
|
2023-06-09T11:36:22+00:00
|
52d0032225bfe6d3ea61f64da9d065d6d8466125
|
# Dataset Card for WS353-semantics-sim-and-rel with ~2K entries.
### Dataset Summary
License: Apache-2.0. Contains a CSV listing word1, word2, their `connection score`, the type of connection, and the language.
### Original datasets are available here:
- https://leviants.com/multilingual-simlex999-and-wordsim353/
### Paper of original Dataset:
- https://arxiv.org/pdf/1508.00106v5.pdf
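A minimal reading sketch with pandas, assuming a local CSV whose columns mirror the description above (the file name and header names are assumptions and may differ in the actual release):
```python
import pandas as pd

# Assumed file name and columns: word1, word2, score, type, language.
df = pd.read_csv("ws-semantics-simnrel.csv")
english = df[df["language"] == "en"]
print(english.nlargest(5, "score")[["word1", "word2", "score"]])
```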
|
0x22almostEvil/ws-semantics-simnrel
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"language:ru",
"language:de",
"language:it",
"license:apache-2.0",
"semantics",
"arxiv:1508.00106",
"region:us"
] |
2023-05-17T14:38:22+00:00
|
{"language": ["en", "ru", "de", "it"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "tags": ["semantics"]}
|
2023-05-20T08:35:49+00:00
|
5ac3dbc66800d3d59be8b1254dcc953b9ac2b508
|
tvmalaysiaorg/TV6-Live
|
[
"license:bigscience-openrail-m",
"region:us"
] |
2023-05-17T14:39:03+00:00
|
{"license": "bigscience-openrail-m"}
|
2023-05-17T14:42:25+00:00
|
|
1debdaf9078b92bfcc56e87e11368bd6480a1296
|
# Dataset Card for "chrf-referenceless-salt-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Sunbird/chrf-referenceless-salt-train
|
[
"region:us"
] |
2023-05-17T14:39:06+00:00
|
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "chrf", "dtype": "float64"}, {"name": "hypothesis", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22291130, "num_examples": 119735}], "download_size": 14893536, "dataset_size": 22291130}}
|
2023-05-17T14:39:14+00:00
|
3176499ed1242e390c0ae88a209b96612d1c5815
|
## Privaseer Dataset
Hugging Face version of the [Privaseer](https://privaseer.ist.psu.edu/) dataset.
<pre>
@inproceedings{srinath-etal-2021-privacy,
title = "Privacy at Scale: Introducing the {P}riva{S}eer Corpus of Web Privacy Policies",
author = "Srinath, Mukund and
Wilson, Shomir and
Giles, C Lee",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.532",
doi = "10.18653/v1/2021.acl-long.532",
pages = "6829--6839",
abstract = "Organisations disclose their privacy practices by posting privacy policies on their websites. Even though internet users often care about their digital privacy, they usually do not read privacy policies, since understanding them requires a significant investment of time and effort. Natural language processing has been used to create experimental tools to interpret privacy policies, but there has been a lack of large privacy policy corpora to facilitate the creation of large-scale semi-supervised and unsupervised models to interpret and simplify privacy policies. Thus, we present the PrivaSeer Corpus of 1,005,380 English language website privacy policies collected from the web. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies, and it surpasses the aggregate of unique websites represented in all other publicly available privacy policy corpora combined. We describe a corpus creation pipeline with stages that include a web crawler, language detection, document classification, duplicate and near-duplicate removal, and content extraction. We employ an unsupervised topic modelling approach to investigate the contents of policy documents in the corpus and discuss the distribution of topics in privacy policies at web scale. We further investigate the relationship between privacy policy domain PageRanks and text features of the privacy policies. Finally, we use the corpus to pretrain PrivBERT, a transformer-based privacy policy language model, and obtain state of the art results on the data practice classification and question answering tasks.",}
</pre>
|
alzoubi36/privaseer
|
[
"license:gpl-3.0",
"region:us"
] |
2023-05-17T14:42:14+00:00
|
{"license": "gpl-3.0", "dataset_info": {"features": [{"name": "hash", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 17080868768, "num_examples": 2180300}], "download_size": 8133175578, "dataset_size": 17080868768}}
|
2023-06-21T11:32:56+00:00
|
18a4d9d5e9283f5efb5b7c2a4682d52360785e29
|
# :smiley: Analysis of happiness tweets
The [corpus](https://github.com/GIL-UNAM/TwitterHappiness/blob/main/Dataset.csv) for this project is a collection of 10,048 tweets obtained by searching for the tag #felicidad. The collected corpus was given to 3 volunteers, who were asked to label each tweet, according to their own judgment, as expressing **joy (A)**, **advertising (P)**, **congratulations (F)**, **advice (C)**, or **non-joy or sarcasm (N)**. Once labeling was finished, the tweets were filtered to keep those on which more than one label agreed; those where the opposite occurred were placed in a sixth category named **No Agreement (NA)**. Finally, the corpus was preprocessed by tokenizing, removing punctuation marks and hyperlinks, and stemming. All of the above can be found in the file [Pre-procesamiento](https://github.com/GIL-UNAM/TwitterHappiness/blob/main/Pre-procesamiento.py).
As part of the analysis, the file [Frecuencias Relativas](https://github.com/GIL-UNAM/TwitterHappiness/blob/main/Frecuencias%20Relativas.py) contains the code for obtaining word frequencies within each category and within the full corpus, as well as the relative frequencies of each category with respect to the full corpus.
Finally, the file [Sistemas de aprendizaje](https://github.com/GIL-UNAM/TwitterHappiness/blob/main/Sistemas%20de%20aprendizaje.py) shows the code for applying the learning systems **Naive Bayes (NB)**, **Logistic Regression (LR)**, **Random Forest (RF)**, and **Support Vector Machine (SVM)** with train-test sets in 3-fold stratification, obtaining an accuracy percentage and a score for each learning system on each fold (a rough sketch of this setup appears below).
The list of the main packages used to run the code can be found in the file [Pre-requisitos](https://github.com/GIL-UNAM/TwitterHappiness/blob/main/Pre-requisitos.md).
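As a rough illustration of the learning-systems setup described above (3-fold stratification over the four classifiers; the TF-IDF features and all parameters are assumptions, not the project code):
```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

classifiers = {
    "NB": MultinomialNB(),
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(),
    "SVM": LinearSVC(),
}

# texts: list of preprocessed tweets; labels: A/P/F/C/N/NA tags.
def evaluate(texts, labels):
    for name, clf in classifiers.items():
        model = make_pipeline(TfidfVectorizer(), clf)
        for fold, (tr, te) in enumerate(StratifiedKFold(n_splits=3).split(texts, labels)):
            model.fit([texts[i] for i in tr], [labels[i] for i in tr])
            acc = model.score([texts[i] for i in te], [labels[i] for i in te])
            print(f"{name} fold {fold}: accuracy {acc:.3f}")
```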
## :pencil: How to cite
## :neckbeard: Contributors
- Gemma Bel-Enguix, Instituto de Ingeniería - UNAM
- Helena Gómez Adorno, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas - UNAM
- Karla Mendoza Grageda, Facultad de Ciencias - UNAM
- Grigori Sidorov, Instituto Politécnico Nacional - UNAM
|
GIL-UNAM/TwitterHappiness
|
[
"region:us"
] |
2023-05-17T14:46:21+00:00
|
{}
|
2023-05-17T14:53:50+00:00
|
9e9c7ad0c3ca37a96edd4fd9de66b24d455bc8de
|
# Dataset Card for "national_speech_corpusv2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
casual/national_speech_corpusv2
|
[
"region:us"
] |
2023-05-17T14:54:03+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 541423622.36, "num_examples": 3538}], "download_size": 557460372, "dataset_size": 541423622.36}}
|
2023-05-17T15:30:36+00:00
|
ab81e54b3a328f34409c413297a2fd395ac95fa4
|
# Dataset Card for "flores200_devtest_mt5-600m-flores200-packed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hlillemark/flores200_devtest_mt5-600m-flores200-packed
|
[
"region:us"
] |
2023-05-17T14:59:25+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "source_lang", "dtype": "string"}, {"name": "target_lang", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "chrf_unreduced", "dtype": "string"}], "splits": [{"name": "devtest", "num_bytes": 743880583, "num_examples": 1000000}], "download_size": 520688518, "dataset_size": 743880583}}
|
2023-05-17T15:00:12+00:00
|
4aed3e7c379cf23900c0625d6348b7aac8d394fe
|
# Dataset Card for "37dd4157"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/37dd4157
|
[
"region:us"
] |
2023-05-17T15:15:05+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 188, "num_examples": 10}], "download_size": 1336, "dataset_size": 188}}
|
2023-05-17T15:15:06+00:00
|
9ef7e26ec7226b76216668b44465573dea92d243
|
# Dataset Card for "720c5d3f"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/720c5d3f
|
[
"region:us"
] |
2023-05-17T15:21:41+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 182, "num_examples": 10}], "download_size": 1341, "dataset_size": 182}}
|
2023-05-17T15:21:42+00:00
|
1b96fd986aacbd0d298733704c758e3628457795
|
# Dataset Card for "c852cf92"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/c852cf92
|
[
"region:us"
] |
2023-05-17T15:32:49+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 184, "num_examples": 10}], "download_size": 1338, "dataset_size": 184}}
|
2023-05-17T15:32:50+00:00
|
52ae997e9dabb356a401cbd43b98ea096bc38902
|
# Dataset Card for "2f97018c"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/2f97018c
|
[
"region:us"
] |
2023-05-17T15:34:00+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 188, "num_examples": 10}], "download_size": 1341, "dataset_size": 188}}
|
2023-05-17T15:34:01+00:00
|
adbc704e93a3863e35ee441224e69e210687705d
|
# Dataset Card for "test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
laurenmit/project_7_dataset_1500
|
[
"region:us"
] |
2023-05-17T15:39:43+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 660275.25, "num_examples": 1131}, {"name": "valid", "num_bytes": 109753.97612732096, "num_examples": 188}, {"name": "test", "num_bytes": 110337.77387267904, "num_examples": 189}], "download_size": 539376, "dataset_size": 880367.0}}
|
2023-05-17T15:52:38+00:00
|
ce353ca48e01613f322d93295a4dcb28de76a376
|
About this file
https://www.kaggle.com/datasets/yasserh/song-popularity-dataset
Humans have long been closely associated with songs and music, which can improve mood, decrease pain and anxiety, and create opportunities for emotional expression. Research suggests that music can benefit our physical and mental health in numerous ways.
Lately, multiple studies have been carried out to understand songs and their popularity based on certain factors; song samples are broken down and their parameters recorded in tabular form. Predicting song popularity is the main aim.
|
rjacquemin/tests-song
|
[
"region:us"
] |
2023-05-17T15:39:46+00:00
|
{}
|
2023-05-17T16:29:31+00:00
|
ff48b897e2f6049feb50c9d4ff41581c04792f0e
|
# Multilingual TEDx (Portuguese speech and transcripts)
**NOTE:** This dataset contains only the Portuguese portion of the mTEDx dataset, already processed and segmented into parts.
**Multilingual TEDx (mTEDx)** is a multilingual speech recognition and translation corpus to facilitate the training of ASR and SLT models in additional languages.
The corpus comprises audio recordings and transcripts from [TEDx Talks](https://www.ted.com/watch/tedx-talks) in 8 languages (Spanish, French, Portuguese, Italian, Russian, Greek, Arabic, German) with translations into up to 5 languages (English, Spanish, French, Portuguese, Italian).
The audio recordings are automatically aligned at the sentence level with their manual transcriptions and translations.
Each .tgz file contains two directories: `data` and `docs`. `docs` contains a README detailing the files provided in `data` and their structure.
Test sets for all [IWSLT 2021](https://iwslt.org/2021/multilingual) language pairs can be found in mtedx_iwslt2021.tgz.
For more information on the dataset please see the [dataset paper](https://arxiv.org/abs/2102.01757).
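A minimal loading sketch for this Hugging Face version (standard `datasets` audio access; the split names follow the dataset metadata, and the rest is generic usage rather than an official recipe):
```python
from datasets import load_dataset

ds = load_dataset("dominguesm/mTEDx-ptbr", split="validation")
sample = ds[0]
print(sample["transcription"])
print(sample["audio"]["sampling_rate"], len(sample["audio"]["array"]))
```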
Contact: Elizabeth Salesky, Matthew Wiesner. [[email protected], [email protected]](mailto:[email protected];[email protected];)
Citation: If you use the Multilingual TEDx corpus in your work, please cite the dataset paper:
```latex
@inproceedings{salesky2021mtedx,
title={Multilingual TEDx Corpus for Speech Recognition and Translation},
author={Elizabeth Salesky and Matthew Wiesner and Jacob Bremerman and Roldano Cattoni and Matteo Negri and Marco Turchi and Douglas W. Oard and Matt Post},
booktitle={Proceedings of Interspeech},
year={2021},
}
```
|
dominguesm/mTEDx-ptbr
|
[
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"language:pt",
"license:cc-by-nc-4.0",
"automatic-speech-recognition",
"audio-classification",
"Portuguese",
"ASR",
"arxiv:2102.01757",
"region:us"
] |
2023-05-17T15:52:33+00:00
|
{"language": ["pt"], "license": "cc-by-nc-4.0", "task_categories": ["automatic-speech-recognition", "audio-classification"], "pretty_name": "mTEDx PTBR", "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 109304535928.432, "num_examples": 90244}, {"name": "validation", "num_bytes": 1051506219.236, "num_examples": 1013}, {"name": "test", "num_bytes": 1226193261.48, "num_examples": 1020}], "download_size": 93176985982, "dataset_size": 111582235409.148}, "tags": ["automatic-speech-recognition", "audio-classification", "Portuguese", "ASR"]}
|
2024-02-11T12:28:59+00:00
|
8b75ce0838082d1bd57669283c8174ee86fa3dfc
|
# Dataset Card for "chai-examples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/chai-examples
|
[
"region:us"
] |
2023-05-17T16:01:05+00:00
|
{"dataset_info": {"features": [{"name": "bot_id", "dtype": "string"}, {"name": "system", "dtype": "string"}, {"name": "conversation", "list": [{"name": "from", "dtype": "string"}, {"name": "role_type", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 59958, "num_examples": 3}], "download_size": 35299, "dataset_size": 59958}}
|
2023-05-17T16:11:35+00:00
|
3c31729eb0c38dc17f0e91c399722e40a7212f2b
|
# Dataset Card for "dataset_with_ocr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Pratha1m/dataset_with_ocr
|
[
"region:us"
] |
2023-05-17T16:05:23+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "image", "sequence": {"sequence": {"sequence": "uint8"}}}, {"name": "answer", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "boxes", "sequence": {"sequence": "int64"}}], "splits": [{"name": "train", "num_bytes": 147847251, "num_examples": 904}, {"name": "test", "num_bytes": 30521871, "num_examples": 190}], "download_size": 37387817, "dataset_size": 178369122}}
|
2023-05-21T10:38:37+00:00
|
84b34fd6edbf9322416c3f335ed59fb798bdaab8
|
# Dataset Card for "c4_t5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hlillemark/c4_t5
|
[
"region:us"
] |
2023-05-17T16:12:28+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 739721593504, "num_examples": 364868892}, {"name": "validation", "num_bytes": 60051536, "num_examples": 30000}], "download_size": 376936706386, "dataset_size": 739781645040}}
|
2023-05-18T03:57:40+00:00
|
8e269036a7154faaeaa4497cc45d10b7c694c74f
|
A description of this dataset can be found at https://arxiv.org/abs/2305.07759
Copied from roneneldan/TinyStoriesInstruct
Modified with:
```
import ftfy.bad_codecs  # registers the 'sloppy-windows-1252' codec used below
from datasets import Dataset, DatasetDict

# Read the raw dumps, split them into individual stories, and strip whitespace.
train = open('./TinyStories-Instruct-train.txt', 'r', encoding='sloppy-windows-1252').read()
train = [l.strip() for l in train.split('<|endoftext|>')]

valid = open('./TinyStories-Instruct-valid.txt', 'r', encoding='sloppy-windows-1252').read()
valid = [l.strip() for l in valid.split('<|endoftext|>')]

dataset = DatasetDict({
    'train': Dataset.from_dict({'text': train}),
    'validation': Dataset.from_dict({'text': valid}),
})
dataset.save_to_disk('./TinyStories-Instruct')
```
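The saved dataset can then be reloaded directly (standard `datasets` usage, shown here as a usage note rather than part of the original script):
```python
from datasets import load_from_disk

dataset = load_from_disk('./TinyStories-Instruct')
print(dataset['train'][0]['text'][:200])
```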
|
skeskinen/TinyStories-Instruct-hf
|
[
"arxiv:2305.07759",
"region:us"
] |
2023-05-17T16:17:07+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2648754575, "num_examples": 2476533}, {"name": "validation", "num_bytes": 26745785, "num_examples": 25028}], "download_size": 1325495040, "dataset_size": 2675500360}}
|
2023-05-17T17:36:50+00:00
|
be1c10219c2c9bcbc442313cc6cbda64ae58a7ca
|
# Dataset Card for "rsicd_deduplicate_97"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Braddy/rsicd_deduplicate_97
|
[
"region:us"
] |
2023-05-17T16:22:09+00:00
|
{"dataset_info": {"features": [{"name": "filename", "dtype": "string"}, {"name": "captions", "sequence": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 449844757.25, "num_examples": 8734}, {"name": "test", "num_bytes": 60130512.375, "num_examples": 1093}, {"name": "valid", "num_bytes": 57307918.25, "num_examples": 1094}], "download_size": 528945035, "dataset_size": 567283187.875}}
|
2023-05-17T16:22:30+00:00
|
82fd6b1c587f185ede13bb2edff7dad36241249f
|
# Dataset Card for "news-sp500"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** [email protected]
### Dataset Summary
The news-sp500 dataset provides news articles related to companies in the S&P 500 index.
### Supported Tasks and Leaderboards
The dataset can be used for various natural language processing tasks such as text classification, sentiment analysis, information extraction, etc. It does not have a specific leaderboard associated with it.
### Languages
The dataset contains news articles in multiple languages.
## Dataset Structure
### Data Instances
The dataset consists of 1,563 data instances.
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- body (string): The main content of the news article.
- publisher (string): The name of the publisher or news agency.
- publish_time (timestamp[ns, tz=GMT]): A timestamp indicating the publication time of the news article in the GMT timezone.
- title (string): The title or headline of the news article.
- url (string): The URL or link to the original news article.
- uuid (string): A unique identifier for the news article.
### Data Splits
The dataset consists of a single split called train.
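A minimal loading sketch using the `datasets` library (the repo id below is the one this card is published under; adjust if it changes):
```python
from datasets import load_dataset

# Load the single train split described above.
ds = load_dataset("edarchimbaud/news-stocks", split="train")

# Each row follows the schema listed under Data Fields.
row = ds[0]
print(row["symbol"], row["publish_time"], row["title"])
```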
## Dataset Creation
### Curation Rationale
The news-sp500 dataset was created to provide a collection of news articles related to companies in the S&P 500 index for research and analysis purposes.
### Source Data
#### Initial Data Collection and Normalization
The data was collected from various online news sources and normalized for consistency.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The news-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The news-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, news-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset.
|
edarchimbaud/news-stocks
|
[
"region:us"
] |
2023-05-17T16:23:09+00:00
|
{"dataset_info": {"features": [{"name": "symbol", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "publisher", "dtype": "string"}, {"name": "publish_time", "dtype": "timestamp[ns, tz=GMT]"}, {"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "uuid", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 112563283, "num_examples": 22025}], "download_size": 55028670, "dataset_size": 112563283}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-11-21T05:06:42+00:00
|
5e877826c63d00ec32d0a93e1110cd764402e9b9
|
A description of this dataset can be found at https://arxiv.org/abs/2305.07759
Copied from roneneldan/TinyStories
Modified with:
```python
import ftfy.bad_codecs
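# the import above registers ftfy's "sloppy-windows-1252" codec used by open() below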
from datasets import Dataset, DatasetDict
train = open('./TinyStories-train.txt', 'r', encoding='sloppy-windows-1252').read()
train = train.split('<|endoftext|>')
train = [l.strip() for l in train]
valid = open('./TinyStories-valid.txt', 'r', encoding='sloppy-windows-1252').read()
valid = valid.split('<|endoftext|>')
valid = [l.strip() for l in valid]
dataset = DatasetDict({
'train': Dataset.from_dict({'text': train }),
'validation': Dataset.from_dict({'text': valid}),
})
dataset.save_to_disk('./TinyStories')
```
|
skeskinen/TinyStories-hf
|
[
"arxiv:2305.07759",
"region:us"
] |
2023-05-17T16:23:20+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1911420483, "num_examples": 2119719}, {"name": "validation", "num_bytes": 19306310, "num_examples": 21990}], "download_size": 1000775442, "dataset_size": 1930726793}}
|
2023-05-17T17:13:44+00:00
|
4168f371e5031312cd4a8d9efa936bdde82a64bf
|
# Dataset Card for "rsicd_deduplicate_99"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Braddy/rsicd_deduplicate_99
|
[
"region:us"
] |
2023-05-17T16:31:30+00:00
|
{"dataset_info": {"features": [{"name": "filename", "dtype": "string"}, {"name": "captions", "sequence": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 450123178.25, "num_examples": 8734}, {"name": "test", "num_bytes": 60150977.375, "num_examples": 1093}, {"name": "valid", "num_bytes": 57334082.25, "num_examples": 1094}], "download_size": 529003664, "dataset_size": 567608237.875}}
|
2023-05-17T16:31:52+00:00
|
0cb9de53c7a000790356e0e50293f58d4faad0a3
|
# Dataset Card for "rsicd_deduplicate_95"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Braddy/rsicd_deduplicate_95
|
[
"region:us"
] |
2023-05-17T16:42:51+00:00
|
{"dataset_info": {"features": [{"name": "filename", "dtype": "string"}, {"name": "captions", "sequence": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 449737330.25, "num_examples": 8734}, {"name": "test", "num_bytes": 60117169.375, "num_examples": 1093}, {"name": "valid", "num_bytes": 57297204.25, "num_examples": 1094}], "download_size": 528918987, "dataset_size": 567151703.875}}
|
2023-05-17T17:01:44+00:00
|
e083da216c146e5bf9e3dc425c453fdfd88ce80c
|
# Dataset Card for "fill50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lirus18/fill50k
|
[
"region:us"
] |
2023-05-17T16:47:58+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "canny", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 453988831.0, "num_examples": 50000}], "download_size": 0, "dataset_size": 453988831.0}}
|
2023-05-17T16:52:14+00:00
|
cc6346f09877b77a25b21db9d82eba47798193af
|
# AutoTrain Dataset for project: imagetest
## Dataset Description
This dataset has been automatically processed by AutoTrain for project imagetest.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<32x32 RGB PIL image>",
"feat_fine_label": 19,
"target": 11
},
{
"image": "<32x32 RGB PIL image>",
"feat_fine_label": 29,
"target": 15
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"feat_fine_label": "ClassLabel(names=['apple', 'aquarium_fish', 'baby', 'bear', 'beaver', 'bed', 'bee', 'beetle', 'bicycle', 'bottle', 'bowl', 'boy', 'bridge', 'bus', 'butterfly', 'camel', 'can', 'castle', 'caterpillar', 'cattle', 'chair', 'chimpanzee', 'clock', 'cloud', 'cockroach', 'couch', 'cra', 'crocodile', 'cup', 'dinosaur', 'dolphin', 'elephant', 'flatfish', 'forest', 'fox', 'girl', 'hamster', 'house', 'kangaroo', 'keyboard', 'lamp', 'lawn_mower', 'leopard', 'lion', 'lizard', 'lobster', 'man', 'maple_tree', 'motorcycle', 'mountain', 'mouse', 'mushroom', 'oak_tree', 'orange', 'orchid', 'otter', 'palm_tree', 'pear', 'pickup_truck', 'pine_tree', 'plain', 'plate', 'poppy', 'porcupine', 'possum', 'rabbit', 'raccoon', 'ray', 'road', 'rocket', 'rose', 'sea', 'seal', 'shark', 'shrew', 'skunk', 'skyscraper', 'snail', 'snake', 'spider', 'squirrel', 'streetcar', 'sunflower', 'sweet_pepper', 'table', 'tank', 'telephone', 'television', 'tiger', 'tractor', 'train', 'trout', 'tulip', 'turtle', 'wardrobe', 'whale', 'willow_tree', 'wolf', 'woman', 'worm'], id=None)",
"target": "ClassLabel(names=['aquatic_mammals', 'fish', 'flowers', 'food_containers', 'fruit_and_vegetables', 'household_electrical_devices', 'household_furniture', 'insects', 'large_carnivores', 'large_man-made_outdoor_things', 'large_natural_outdoor_scenes', 'large_omnivores_and_herbivores', 'medium_mammals', 'non-insect_invertebrates', 'people', 'reptiles', 'small_mammals', 'trees', 'vehicles_1', 'vehicles_2'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 50000 |
| valid | 10000 |
|
EveryPizza/autotrain-data-imagetest
|
[
"task_categories:image-classification",
"region:us"
] |
2023-05-17T17:28:36+00:00
|
{"task_categories": ["image-classification"]}
|
2023-05-17T18:27:01+00:00
|
cab32cc1102899b66bdfb660817dc364abd16c95
|
Audio files sampled at 48,000 Hz of an American male pronouncing the names of the Esperanto letters in three ways. Retroflex-r and trilled-r are included.
|
xekri/audio_letters_eo
|
[
"task_categories:automatic-speech-recognition",
"size_categories:n<1K",
"language:eo",
"license:cc-by-4.0",
"region:us"
] |
2023-05-17T17:44:20+00:00
|
{"language": ["eo"], "license": "cc-by-4.0", "size_categories": ["n<1K"], "task_categories": ["automatic-speech-recognition"]}
|
2023-05-17T18:49:54+00:00
|
f6408bd852147ad8edf2088094a662de52798c5a
|
yangwang825/marc-ja
|
[
"task_categories:text-classification",
"language:ja",
"region:us"
] |
2023-05-17T17:46:59+00:00
|
{"language": ["ja"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "positive", "1": "negative"}}}}]}}
|
2023-05-19T01:08:33+00:00
|
|
adcde0e2779dfd6a30f67b4b2c465d0b155f21f8
|
# zh-tw-llm-dev-sample-ta8k-d40d11-only_embeddings-tr_wiki_sg_alp-c6795a-c2048
This dataset is a part of the `zh-tw-llm-dev` project.
* Tokenizer: `zh-tw-llm-dev-tokenizer-a8k-d40d11`
* Built with: `translations`, `wikipedia`, `sharegpt`, `alpaca`
* Rows: `train` `500`, `test` `140`
* Max length: `2048`
* Full config:
```json
{"build_with": ["translations", "wikipedia", "sharegpt", "alpaca"], "preview_length": 256, "translations_settings": {"source_dataset": "zetavg/coct-en-zh-tw-translations-twp-300k", "lang_1_key": "en", "lang_2_key": "ch", "templates": ["English: {lang_1}\nChinese: {lang_2}", "Chinese: {lang_2}\nEnglish: {lang_1}"], "rows_limit": 100, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 10}, "wikipedia_settings": {"source_dataset": "zetavg/zh-tw-wikipedia", "exclude": [{"content_length_longer_than": 512}, {"match": "小行星", "in": "markdown", "in_range": [0, 40]}, {"match": "是中華人民共和國", "in": "markdown", "in_range": [0, 80]}], "rows_limit": 100, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 10}, "sharegpt_settings": {"source_dataset": "zetavg/ShareGPT-Processed", "train_on_inputs": false, "languages": [{"en": 0.4}, "zh_Hant"], "rows_limit": 100, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 10}, "alpaca_settings": {"source_dataset": "zetavg/traditional-chinese-alpaca-en-align", "template": "short", "train_on_inputs": false, "rows_limit": 100, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 10}}
```
|
zh-tw-llm-dv-dv/zh-tw-llm-dev-sample-ta8k-d40d11-only_embeddings-tr_wiki_sg_alp-c6795a-c2048
|
[
"region:us"
] |
2023-05-17T17:56:55+00:00
|
{"dataset_info": {"dataset_size": 5061937.0, "download_size": 1510086, "features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"dtype": "string", "name": "preview"}], "splits": [{"name": "train", "num_bytes": 3405190.0, "num_examples": 500}, {"name": "test", "num_bytes": 1656747.0, "num_examples": 140}]}}
|
2023-05-17T18:08:27+00:00
|
2f365fc26756d532620a56073a65eb6ffc865a2e
|
# Function Of Citation in Astrophysics Literature (FOCAL): Dataset and Task
*Can you explain why the authors made a given citation?*
This dataset was created for a [shared task](https://ui.adsabs.harvard.edu/WIESP/2023/shared_task_1) at [WIESP @ AACL-IJCNLP 2023](https://ui.adsabs.harvard.edu/WIESP/2023/).
## Dataset Description
Datasets are in JSON Lines format (each line is a json dictionary).
Each entry consists of a dictionary with the following keys:
- `"Identifier"`: unique string to identify the entry
- `"Paragraph"`: text string from an astrophysics paper
- `"Citation Text"`: list of strings forming the citation (most often a single string, but sometimes the citation text is split up)
- `"Citation Start End"`: list of integer pairs denoting where the citation starts and end in `"Paragraph"` (most often a single pair, sometimes the citation text is split up, if so follows the order in `"Citation Text"`)
- `"Functions Text"`: list of strings highlighting parts of the paragraph that explain the function of the citation
- `"Functions Label"`: list of strings with the label for each text element in `"Functions Text"` (in same order)
- `"Functions Start End"`: list of integer pairs denoting where the elements in `"Functions Text"` start and end in `"Paragraph"`(in same order)
start and end are defined by the character position in the `"Paragraph"` string.
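As a quick sanity check (a sketch, not part of the official tooling), the offsets can be verified by slicing the paragraph string; the assertion below should hold for well-formed entries:
```python
from datasets import load_dataset

dataset = load_dataset("adsabs/FOCAL", split="train")
entry = dataset[0]

# Each (start, end) pair should slice out the matching citation string.
for text, (start, end) in zip(entry["Citation Text"], entry["Citation Start End"]):
    assert entry["Paragraph"][start:end] == text
```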
## Instructions for Workshop Participants:
How to load the data using the Huggingface library:
```python
from datasets import load_dataset
dataset = load_dataset("adsabs/FOCAL")
```
How to load the data if you cloned the repository locally:
(assuming `./FOCAL-TRAINING.jsonl` is in the current directory, change as needed)
- python (as list of dictionaries):
```python
import json
with open("./FOCAL-TRAINING.jsonl", 'r') as f:
focal_training_from_json = [json.loads(l) for l in list(f)]
```
- into Huggingface (as a Huggingface Dataset):
```python
from datasets import Dataset
focal_training_from_json = Dataset.from_json(path_or_paths="./FOCAL-TRAINING.jsonl")
```
## File List
```
├── FOCAL-TRAINING.jsonl (2421 samples for training)
├── FOCAL-VALIDATION.jsonl (606 samples for validating your training methods)
├── FOCAL-TESTING.jsonl (821 samples for testing)
├── FOCAL-VALIDATION-NO-LABELS.jsonl (606 samples for validation without the labels. Used during the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/))
├── FOCAL-TESTING-NO-LABELS.jsonl (821 samples for testing without the labels. Used during the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/))
├── /scoring_scripts/score_focal_seqeval.py (scoring script used during the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/))
├── /scoring_scripts/score_focal_labels_only.py (scoring script used during the shared task of [WIESP-2023](https://ui.adsabs.harvard.edu/WIESP/2023/))
├── /data/*.parquet (files used when loading the dataset through Huggingface's API)
└── README.MD (this file)
```
Maintainer: Felix Grezes (ORCID: 0000-0001-8714-7774)
Data annotator: Tom Allen (ORCID: 0000-0002-5532-4809)
|
adsabs/FOCAL
|
[
"task_categories:token-classification",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"astronomy",
"region:us"
] |
2023-05-17T18:09:34+00:00
|
{"annotations_creators": ["expert-generated"], "language": ["en"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"], "tags": ["astronomy"], "dataset_info": {"features": [{"name": "Identifier", "dtype": "string"}, {"name": "Paragraph", "dtype": "string"}, {"name": "Citation Text", "sequence": "string"}, {"name": "Functions Text", "sequence": "string"}, {"name": "Functions Label", "sequence": "string"}, {"name": "Citation Start End", "sequence": {"sequence": "int64"}}, {"name": "Functions Start End", "sequence": {"sequence": "int64"}}], "splits": [{"name": "train", "num_bytes": 7096500, "num_examples": 2421}, {"name": "validation", "num_bytes": 1761751, "num_examples": 606}, {"name": "test", "num_bytes": 2512022, "num_examples": 821}], "download_size": 5649484, "dataset_size": 11370273}}
|
2023-10-18T18:15:03+00:00
|
4a00b88bf160eae934739c4a3fe41d1c248af8be
|
purav/animals
|
[
"license:mit",
"region:us"
] |
2023-05-17T18:14:49+00:00
|
{"license": "mit"}
|
2023-11-25T08:49:54+00:00
|
|
308d46fa36336c202b5bf7451236bee31b78551f
|
# zh-tw-llm-dev-sample-ta8k-d40d11-only_embeddings-tr_wiki_sg_alp-396867-c2048
This dataset is a part of the `zh-tw-llm-dev` project.
* Tokenizer: `zh-tw-llm-dev-tokenizer-a8k-d40d11`
* Built with: `translations`, `wikipedia`, `sharegpt`, `alpaca`
* Rows: `train` `500`, `test` `50`
* Max length: `2048`
* Full config:
```json
{"build_with": ["translations", "wikipedia", "sharegpt", "alpaca"], "preview_length": 256, "translations_settings": {"source_dataset": "zetavg/coct-en-zh-tw-translations-twp-300k", "lang_1_key": "en", "lang_2_key": "ch", "templates": ["English: {lang_1}\nChinese: {lang_2}", "Chinese: {lang_2}\nEnglish: {lang_1}"], "rows_limit": 100, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 10}, "wikipedia_settings": {"source_dataset": "zetavg/zh-tw-wikipedia-dev", "exclude": [{"content_length_longer_than": 512}, {"match": "小行星", "in": "markdown", "in_range": [0, 40]}, {"match": "是中華人民共和國", "in": "markdown", "in_range": [0, 80]}], "rows_limit": 100, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 10}, "sharegpt_settings": {"source_dataset": "zetavg/ShareGPT-Processed", "train_on_inputs": false, "languages": [{"en": 0.4}, "zh_Hant"], "rows_limit": 100, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 10}, "alpaca_settings": {"source_dataset": "zetavg/traditional-chinese-alpaca-en-align", "template": "short", "train_on_inputs": false, "rows_limit": 100, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 10}}
```
|
zh-tw-llm-dv-dv/zh-tw-llm-dev-sample-ta8k-d40d11-only_embeddings-tr_wiki_sg_alp-396867-c2048
|
[
"region:us"
] |
2023-05-17T18:22:40+00:00
|
{"dataset_info": {"dataset_size": 3836796.0, "download_size": 1208658, "features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"dtype": "string", "name": "preview"}], "splits": [{"name": "train", "num_bytes": 3555167.0, "num_examples": 500}, {"name": "test", "num_bytes": 281629.0, "num_examples": 50}]}}
|
2023-05-17T18:34:15+00:00
|
190b51ffe0ee8ac714ff8caa49db72de66ea2396
|
# Dataset Card for "GoogleNaturalQuestion_Indo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
genta-tech/GoogleNaturalQuestion_Indo
|
[
"region:us"
] |
2023-05-17T18:52:53+00:00
|
{"dataset_info": {"features": [{"name": "document_text", "dtype": "string"}, {"name": "question_text", "dtype": "string"}, {"name": "answer_text", "dtype": "string"}, {"name": "answer_index", "dtype": "string"}, {"name": "example_id", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 144067945, "num_examples": 307373}], "download_size": 78618056, "dataset_size": 144067945}}
|
2023-05-22T20:42:03+00:00
|
10e0eefb97093bfe59099a7d2f7380e96ca99d43
|
# Dataset Card for "ELI5_embedded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dhmeltzer/ELI5_embedded
|
[
"region:us"
] |
2023-05-17T19:10:25+00:00
|
{"dataset_info": {"features": [{"name": "q_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "selftext", "dtype": "string"}, {"name": "document", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "a_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "score", "dtype": "int32"}]}, {"name": "title_urls", "sequence": [{"name": "url", "dtype": "string"}]}, {"name": "selftext_urls", "sequence": [{"name": "url", "dtype": "string"}]}, {"name": "answers_urls", "sequence": [{"name": "url", "dtype": "string"}]}, {"name": "split", "dtype": "string"}, {"name": "title_body", "dtype": "string"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 2375028302, "num_examples": 558669}], "download_size": 2134837293, "dataset_size": 2375028302}}
|
2023-05-17T19:11:53+00:00
|
1c136524a8f0c314a4a882b6e9ffcc78e7bb6ed3
|
# Dataset Card for "Py150-processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
The original dataset is at https://www.sri.inf.ethz.ch/py150.
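A minimal loading sketch (assuming the repo id `Fraol/Py150-processed` and the fields listed in the dataset info):
```python
from datasets import load_dataset

# Splits: train / val / test.
ds = load_dataset("Fraol/Py150-processed", split="train")

sample = ds[0]
print(sample["repository_path"])
print(sample["code"][:200])  # first 200 characters of the source file
```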
## Citation Information
```
@article{raychev2016probabilistic,
  title={Probabilistic model for code with decision trees},
  author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
  journal={ACM SIGPLAN Notices},
  volume={51},
  number={10},
  pages={731--747},
  year={2016},
  publisher={ACM New York, NY, USA}
}
```
|
Fraol/Py150-processed
|
[
"region:us"
] |
2023-05-17T19:23:00+00:00
|
{"dataset_info": {"features": [{"name": "repository_path", "dtype": "string"}, {"name": "code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 726142896.0, "num_examples": 120000}, {"name": "val", "num_bytes": 90767862.0, "num_examples": 15000}, {"name": "test", "num_bytes": 90767862.0, "num_examples": 15000}], "download_size": 343675742, "dataset_size": 907678620.0}}
|
2023-05-19T22:58:41+00:00
|
cb277394ca78831680ee7fae5e6f4f813a30e426
|
# zh-tw-llm-dev-sample-ta8k-d40d11-only_embeddings-tr_wiki_sg_alp-c17ba7-c2048
This dataset is a part of the `zh-tw-llm-dev` project.
* Tokenizer: `zh-tw-llm-dev-tokenizer-a8k-d40d11`
* Built with: `translations`, `wikipedia`, `sharegpt`, `alpaca`
* Rows: `train` `500`, `test` `50`
* Max length: `2048`
* Full config:
```json
{"build_with": ["translations", "wikipedia", "sharegpt", "alpaca"], "sort_by": "length-desc", "preview_length": 256, "translations_settings": {"source_dataset": "zetavg/coct-en-zh-tw-translations-twp-300k", "lang_1_key": "en", "lang_2_key": "ch", "templates": ["English: {lang_1}\nChinese: {lang_2}", "Chinese: {lang_2}\nEnglish: {lang_1}"], "rows_limit": 100, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 10}, "wikipedia_settings": {"source_dataset": "zetavg/zh-tw-wikipedia-dev", "exclude": [{"content_length_longer_than": 512}, {"match": "小行星", "in": "markdown", "in_range": [0, 40]}, {"match": ",是中國", "in": "markdown", "in_range": [0, 20]}, {"match": "中華人民共和國", "in": "markdown", "in_range": [0, 20]}, {"match": "是中華人民共和國", "in": "markdown", "in_range": [0, 40]}], "rows_limit": 100, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 10}, "sharegpt_settings": {"source_dataset": "zetavg/ShareGPT-Processed", "train_on_inputs": false, "languages": [{"en": 0.4}, "zh_Hant"], "rows_limit": 100, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 10}, "alpaca_settings": {"source_dataset": "zetavg/traditional-chinese-alpaca-en-align", "template": "short", "train_on_inputs": false, "rows_limit": 100, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 10}}
```
|
zh-tw-llm-dv-dv/zh-tw-llm-dev-sample-ta8k-d40d11-only_embeddings-tr_wiki_sg_alp-c17ba7-c2048
|
[
"region:us"
] |
2023-05-17T19:29:35+00:00
|
{"dataset_info": {"dataset_size": 3875129.0, "download_size": 1187475, "features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"dtype": "string", "name": "preview"}, {"dtype": "int64", "name": "length"}], "splits": [{"name": "train", "num_bytes": 3593100.0, "num_examples": 500}, {"name": "test", "num_bytes": 282029.0, "num_examples": 50}]}}
|
2023-05-19T18:35:55+00:00
|
29523c4eca87fa6e436a5f9c8b8bc344e7755ed6
|
Rajjjj/SSM
|
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] |
2023-05-17T19:47:20+00:00
|
{"license": "bigscience-bloom-rail-1.0"}
|
2023-05-17T19:47:20+00:00
|
|
9cd95b8265b7ebc06167a90e609a83bb4baa9df7
|
# Dataset Card for "aac4766c"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/aac4766c
|
[
"region:us"
] |
2023-05-17T20:00:36+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 188, "num_examples": 10}], "download_size": 1336, "dataset_size": 188}}
|
2023-05-17T20:00:37+00:00
|
dfbd26306052623c5fbcc79dd6ac73cee81b8e6e
|
# Dataset Card for "b29e2786"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/b29e2786
|
[
"region:us"
] |
2023-05-17T20:00:39+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 188, "num_examples": 10}], "download_size": 1336, "dataset_size": 188}}
|
2023-05-17T20:00:40+00:00
|
cdc5cf0fffa663f66bdf9400d6a82c9d05b74869
|
# The 1st Scientific Figure Captioning (SciCap) Challenge 📖📊
Welcome to the 1st Scientific Figure Captioning (SciCap) Challenge! 🎉 This dataset contains approximately 400,000 scientific figure images sourced from various arXiv papers, along with their captions and relevant paragraphs. The challenge is open to researchers, AI/NLP/CV practitioners, and anyone interested in developing computational models for generating textual descriptions for visuals. 💻
*Challenge [homepage](http://SciCap.AI) 🏠*
## Challenge Overview 🌟
The SciCap Challenge will be hosted at ICCV 2023 in the 5th Workshop on Closing the Loop Between Vision and Language (October 2-3, Paris, France) 🇫🇷. Participants are required to submit the generated captions for a hidden test set for evaluation.
The challenge is divided into two phases:
- **Test Phase (2.5 months):** Use the provided training set, validation set, and public test set to build and test the models.
- **Challenge Phase (2 weeks):** Submit results for a hidden test set that will be released before the submission deadline.
Winning teams will be determined based on their results for the hidden test set 🏆. Details of the event's important dates, prizes, and judging criteria are listed on the challenge homepage.
## Dataset Overview and Download 📚
The SciCap dataset contains an expanded version of the [original SciCap](https://aclanthology.org/2021.findings-emnlp.277.pdf) dataset and includes figures and captions from arXiv papers in eight categories: Computer Science, Economics, Electrical Engineering and Systems Science, Mathematics, Physics, Quantitative Biology, Quantitative Finance, and Statistics 📊. Additionally, it covers data from ACL Anthology papers ([ACL-Fig](https://arxiv.org/pdf/2301.12293.pdf)).
You can download the dataset using the following command:
```python
from huggingface_hub import snapshot_download
snapshot_download(repo_id="CrowdAILab/scicap", repo_type='dataset')
```
_Merge all image split files into one_ 🧩
```
zip -F img-split.zip --out img.zip
```
The dataset schema is similar to that of the `mscoco` dataset:
- **images:** two separate folders - arXiv and ACL figures 📁
- **annotations:** JSON files containing the text information (filename, image id, figure type, OCR, mapped image id, captions, normalized captions, paragraphs, and mentions) 📝
## Evaluation and Submission 📩
You have to submit your generated captions in JSON format as shown below:
```json
[
{
"image_id": int,
"caption": "PREDICTED CAPTION STRING"
},
{
"image_id": int,
"caption": "PREDICTED CAPTION STRING"
}
...
]
```
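A minimal sketch of writing such a file (the ids and captions below are placeholders):
```python
import json

# Placeholder predictions; image_id values must match the test set ids.
predictions = [
    {"image_id": 1, "caption": "Validation accuracy versus training epochs."},
    {"image_id": 2, "caption": "Overview of the proposed captioning model."},
]

with open("submission.json", "w") as f:
    json.dump(predictions, f)
```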
Submit your results using this [challenge link](https://eval.ai/web/challenges/challenge-page/2012/overview) 🔗. Participants must register on [Eval.AI](http://Eval.AI) to access the leaderboard and submit results.
**Please note:** Participants should not use the original captions from the arXiv papers (termed "gold data") as input for their systems ⚠️.
## Technical Report Submission 🗒️
All participating teams must submit a 2-4 page technical report detailing their system, adhering to the ICCV 2023 paper template 📄. Teams have the option to submit their reports to either the archival or non-archival tracks of the 5th Workshop on Closing the Loop Between Vision and Language.
Good luck with your participation in the 1st SciCap Challenge! 🍀🎊
|
CrowdAILab/scicap
|
[
"license:cc-by-nc-sa-4.0",
"arxiv:2301.12293",
"region:us"
] |
2023-05-17T20:01:09+00:00
|
{"license": "cc-by-nc-sa-4.0"}
|
2023-08-20T19:00:14+00:00
|
54a8c6284e60c89a389983ff2c6032c956d27d41
|
# Dataset Card for "0f1659c6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/0f1659c6
|
[
"region:us"
] |
2023-05-17T20:01:40+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 184, "num_examples": 10}], "download_size": 1326, "dataset_size": 184}}
|
2023-05-17T20:01:41+00:00
|
8db5770251f266e2d02d8883623b4116c256f693
|
# Dataset Card for "2c4acff4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/2c4acff4
|
[
"region:us"
] |
2023-05-17T20:03:17+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 182, "num_examples": 10}], "download_size": 1339, "dataset_size": 182}}
|
2023-05-17T20:03:18+00:00
|
4f6d30facc8467a76f4e40a375e1e72a1951ebab
|
# Dataset Card for "44212031"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/44212031
|
[
"region:us"
] |
2023-05-17T20:03:20+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 182, "num_examples": 10}], "download_size": 1339, "dataset_size": 182}}
|
2023-05-17T20:03:21+00:00
|
81bb5b943600a1695992c7d99902c66a4084828c
|
# Dataset Card for Dataset Name
## Dataset Description
Old ChatGPT scrapes, the RAW version.
### Dataset Summary
This is a result of a colab in a virtual shed. Really old stuff, before Plus even. Everything was generated by the model itself.
I think this is from what we call "alpha" now? Might even be before alpha idfk.
### Supported Tasks and Leaderboards
See dataset for more info.
### Languages
English only iirc, might be some translations thrown in there.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
Not much data was actually curated; it is recommended to go over the data yourself and fix some answers.
### Source Data
#### Initial Data Collection and Normalization
First, user queries were generated, then the Assistant's answers.
#### Who are the source language producers?
OpenAI?
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
None. Z E R O.
### Discussion of Biases
Has some biases towards talking about OpenAI stuff and some weird-ish stuff. "NDA" stuff is missing.
### Other Known Limitations
Some of the queries contain answers, hence models trained on the data as-is will be fucked up. The raw data contains "today's date" and other stuff I didn't include in my Neo(X) finetune.
## Additional Information
### Dataset Curators
MrSteyk and old ChatGPT. RIP in pepperoni, you will be missed.
### Licensing Information
[More Information Needed]
### Citation Information
Don't
### Contributions
They know themselves, apart from OAI.
|
mrsteyk/opechatgpt-safe-r1
|
[
"task_categories:conversational",
"language:en",
"license:apache-2.0",
"chatgpt",
"openai",
"gpt35-alpha",
"region:us"
] |
2023-05-17T20:06:58+00:00
|
{"language": ["en"], "license": "apache-2.0", "task_categories": ["conversational"], "tags": ["chatgpt", "openai", "gpt35-alpha"]}
|
2023-05-17T20:33:44+00:00
|
d026c7f05dd0c2a533421fff0ee4fd17e928c682
|
devsamlak/chat-creator
|
[
"license:mit",
"region:us"
] |
2023-05-17T20:14:47+00:00
|
{"license": "mit"}
|
2023-05-17T20:23:00+00:00
|
|
ad83a01eb127861ea2a5944f6779d97749ac1f56
|
# Dataset Card for "28ac5cf1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/28ac5cf1
|
[
"region:us"
] |
2023-05-17T20:15:42+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 188, "num_examples": 10}], "download_size": 1341, "dataset_size": 188}}
|
2023-05-17T20:15:43+00:00
|
69060b5d9f638451e2a96622c357ecbae1bb7dda
|
# Dataset Card for "3de4ad84"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/3de4ad84
|
[
"region:us"
] |
2023-05-17T20:15:44+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 188, "num_examples": 10}], "download_size": 1341, "dataset_size": 188}}
|
2023-05-17T20:15:46+00:00
|
1cdc81742451b66b466437a904d0d3d614968401
|
I'm too lazy to fill in the dataset card template! Think of it like r1, but after NY - timestamp is XX-01-2023. This is not turbo at this point; it was before the 26th. This must be "alpha", I'm 99% sure.
Has the same problems; an additional one is missing greetings! "NDA" stuff is missing from this as well!
|
mrsteyk/openchatgpt-safe-r2
|
[
"task_categories:conversational",
"language:en",
"license:apache-2.0",
"chatgpt",
"openai",
"gpt35-alpha",
"region:us"
] |
2023-05-17T20:25:21+00:00
|
{"language": ["en"], "license": "apache-2.0", "task_categories": ["conversational"], "tags": ["chatgpt", "openai", "gpt35-alpha"]}
|
2023-05-17T20:31:04+00:00
|
ec3b33890e40ac691433a96cba8aa73480191dfd
|
GorkemPolat/LIMUC
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-05-17T20:34:19+00:00
|
{"license": "cc-by-4.0"}
|
2023-05-17T20:34:19+00:00
|
|
80925b956991564c7f4846d705546a7d008ce8e1
|
# Dataset Card for "deduplicated_dataset_400hrs_wer0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
linhtran92/deduplicated_dataset_400hrs_wer0
|
[
"region:us"
] |
2023-05-17T20:35:34+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "w2v2_transcription", "dtype": "string"}, {"name": "WER", "dtype": "int64"}, {"name": "sum", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 44901355753.77258, "num_examples": 493300}], "download_size": 44301883215, "dataset_size": 44901355753.77258}}
|
2023-05-18T17:26:30+00:00
|
6de6f352fab8fad2edc8ee4f7dd01cdf02593901
|
# Dataset Card for "arts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mooncakex/arts
|
[
"region:us"
] |
2023-05-17T21:02:18+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8308266698.9, "num_examples": 26134}], "download_size": 10177370938, "dataset_size": 8308266698.9}}
|
2023-05-17T23:21:29+00:00
|
b9f6d0c710556b8acb4124d0e9e3127010afa29d
|
# Dataset Card for "a4419c50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/a4419c50
|
[
"region:us"
] |
2023-05-17T21:06:03+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 188, "num_examples": 10}], "download_size": 1339, "dataset_size": 188}}
|
2023-05-17T21:06:04+00:00
|
96225161330f03abb438e3d8ac719761d06b2b9f
|
yangwang825/tnews
|
[
"task_categories:text-classification",
"language:en",
"region:us"
] |
2023-05-17T21:34:42+00:00
|
{"language": ["en"], "task_categories": ["text-classification"], "viewer": true, "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "news_story", "1": "news_culture", "2": "news_entertainment", "3": "news_sports", "4": "news_finance", "5": "news_house", "6": "news_car", "7": "news_edu", "8": "news_tech", "9": "news_military", "10": "news_travel", "11": "news_world", "12": "news_stock", "13": "news_agriculture", "14": "news_game"}}}}]}}
|
2023-05-18T07:20:45+00:00
|
|
f53769378bedc64314eecf07827c4ae83454bf85
|
# Dataset Card for "capstone-eng-hau-unclean-train-valid-0.1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
shreevigneshs/capstone-eng-hau-unclean-train-valid-0.1
|
[
"region:us"
] |
2023-05-17T21:51:27+00:00
|
{"dataset_info": {"features": [{"name": "eng", "dtype": "string"}, {"name": "hau", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 297029855.7060056, "num_examples": 2088994}, {"name": "validation", "num_bytes": 33003396.293994457, "num_examples": 232111}, {"name": "test", "num_bytes": 281899, "num_examples": 1012}], "download_size": 241607500, "dataset_size": 330315151.0}}
|
2023-05-17T22:10:59+00:00
|
60a91d4412183f0c09a0c0b3e32eb541a8adcdb5
|
KyonBS/AdultNishikata
|
[
"license:openrail",
"region:us"
] |
2023-05-17T21:51:32+00:00
|
{"license": "openrail"}
|
2023-05-17T21:52:05+00:00
|
|
55d045f3a39d6e6f6915bc31e9f8f1707c638c2f
|
# Dataset Card for "1706e1cd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/1706e1cd
|
[
"region:us"
] |
2023-05-17T21:53:01+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 188, "num_examples": 10}], "download_size": 1339, "dataset_size": 188}}
|
2023-05-17T21:53:02+00:00
|
d2f6e6321cb63960ebae5fbc874359903e8d7533
|
# Dataset Card for "28d5d641"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/28d5d641
|
[
"region:us"
] |
2023-05-17T22:03:44+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 190, "num_examples": 10}], "download_size": 1330, "dataset_size": 190}}
|
2023-05-17T22:03:45+00:00
|
70afa88ad4d25ce1402e76b6f6b10c00eb44e7fa
|
# Dataset Card for "hagrid-classification-512p-no-gesture-150k"
This dataset contains 153,735 training images from [HaGRID](https://github.com/hukenovs/hagrid) (HAnd Gesture Recognition Image Dataset) modified for image classification instead of object detection. The original dataset is 716 GB. I created this sample for a tutorial so readers can use the dataset in the free tiers of Google Colab and Kaggle Notebooks.
### Original Authors:
* [Alexander Kapitanov](https://www.linkedin.com/in/hukenovs)
* [Andrey Makhlyarchuk](https://www.linkedin.com/in/makhliarchuk)
* [Karina Kvanchiani](https://www.linkedin.com/in/kvanchiani)
### Original Dataset Links
* [GitHub](https://github.com/hukenovs/hagrid)
* [Kaggle Datasets Page](https://www.kaggle.com/datasets/kapitanov/hagrid)
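A minimal loading sketch with the `datasets` library (note the train split downloads roughly 3.8 GB of images):
```python
from datasets import load_dataset

ds = load_dataset("cj-mills/hagrid-classification-512p-no-gesture-150k", split="train")

example = ds[0]
# `image` decodes to a PIL image; `label` is a ClassLabel index.
print(example["image"].size, ds.features["label"].int2str(example["label"]))
```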
|
cj-mills/hagrid-classification-512p-no-gesture-150k
|
[
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] |
2023-05-17T22:30:20+00:00
|
{"language": ["en"], "license": "cc-by-sa-4.0", "size_categories": ["100K<n<1M"], "pretty_name": "HaGRID Classification 512p no_gesture 150k", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "call", "1": "dislike", "2": "fist", "3": "four", "4": "like", "5": "mute", "6": "no_gesture", "7": "ok", "8": "one", "9": "palm", "10": "peace", "11": "peace_inverted", "12": "rock", "13": "stop", "14": "stop_inverted", "15": "three", "16": "three2", "17": "two_up", "18": "two_up_inverted"}}}}], "splits": [{"name": "train", "num_bytes": 3805782529, "num_examples": 153735}], "download_size": 3808743954, "dataset_size": 3805782529}}
|
2023-05-18T05:21:04+00:00
|
95ef9a7fe2f9d0ec60ff3997992f7ec5abfb6cda
|
# Dataset Card for "us-breast-cancer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Yuchong/us-breast-cancer
|
[
"region:us"
] |
2023-05-17T22:40:30+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 42431652.0, "num_examples": 130}], "download_size": 10004141, "dataset_size": 42431652.0}}
|
2023-05-17T22:40:34+00:00
|