sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
1c549b1402981ce3574be52e513f7ef9e85b7fd8
|
language:
- en
license: cc-by-nc-sa-4.0
license_details: For non-commercial use. Refer to the license for more details.
tags:
- geometry
- sacred-geometry
- symbolism
- art
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language_details:
- en-US
pretty_name: Sacred Geometry Dataset
size_categories:
- n<1K
source_datasets:
- custom
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: null
configs:
- default
dataset_info:
  features:
  - name: image
    dtype: image
  config_name: default
  splits:
  - name: train
    num_bytes: 1024
    num_examples: 500
  download_size: 512
  dataset_size: 2048
extra_gated_fields:
  Affiliation: text
  Email: text
  Agreement: checkbox
extra_gated_prompt: By accessing this dataset, you agree to abide by the license terms and use it only for non-commercial purposes.
---
language:
- en
- fr
pretty_name: "My Awesome Dataset"
tags:
- text classification
- sentiment analysis
license: apache-2.0
task_categories:
- text-classification
- natural-language-processing
---
|
tycru/autotrain-data-capture-the-flag
|
[
"region:us"
] |
2023-05-13T07:15:18+00:00
|
{}
|
2023-06-13T23:24:54+00:00
|
92cf93a686b85858ecec4f6f02a64cdf66e7085f
|
# Dataset Card for "code-search-net-go"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/code-search-net-go
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This dataset is the Go portion of CodeSearchNet, annotated with a summary column.
The code-search-net dataset includes open-source functions with comments, collected from GitHub.
The summary is a short description of what the function does.
### Languages
The dataset's comments are in English and the functions are written in Go.
### Data Splits
Train, test, validation labels are included in the dataset as a column.
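Since the split labels live in the `partition` column rather than in separate dataset splits, a minimal loading sketch could look like this (the `partition` values `train`/`test`/`valid` are assumed from the original CodeSearchNet convention):
```python
from datasets import load_dataset

# Everything ships as a single "train" split; the per-example split label
# lives in the `partition` column (values assumed: "train"/"test"/"valid").
ds = load_dataset("Nan-Do/code-search-net-go", split="train")

train = ds.filter(lambda ex: ex["partition"] == "train")
test = ds.filter(lambda ex: ex["partition"] == "test")

print(train[0]["func_name"], "->", train[0]["summary"])
```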
## Dataset Creation
May of 2023
### Curation Rationale
This dataset can be used to generate instructional (or many other interesting) datasets that are useful for training LLMs.
### Source Data
The CodeSearchNet dataset can be found at https://www.kaggle.com/datasets/omduggineni/codesearchnet
### Annotations
This dataset includes a summary column containing a short description of each function.
#### Annotation process
The annotation procedure was done using [Salesforce](https://huggingface.co/Salesforce) T5 summarization models.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to remove repetitions and meaningless summaries, although some may still be present in the dataset.
### Licensing Information
Apache 2.0
|
Nan-Do/code-search-net-go
|
[
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:summarization",
"language:en",
"license:apache-2.0",
"code",
"go",
"CodeSearchNet",
"summary",
"region:us"
] |
2023-05-13T07:55:08+00:00
|
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-generation", "text2text-generation", "summarization"], "pretty_name": "Go CodeSearchNet with Summaries", "dataset_info": {"features": [{"name": "repo", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "original_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "code_tokens", "sequence": "string"}, {"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}, {"name": "sha", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "partition", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 833011518, "num_examples": 345890}], "download_size": 239636894, "dataset_size": 833011518}, "tags": ["code", "go", "CodeSearchNet", "summary"]}
|
2023-05-14T23:56:15+00:00
|
c60e03521bae2928ea3c4a4cfecc6fd51a9fe140
|
Garfieldgx/Js100
|
[
"license:other",
"region:us"
] |
2023-05-13T08:33:19+00:00
|
{"license": "other"}
|
2023-05-13T08:33:19+00:00
|
|
c389bff8fd96eb6b6701a55b079c052e15066c2b
|
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
d4niel92/celeb-identities
|
[
"region:us"
] |
2023-05-13T08:41:05+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Andres_Iniesta", "1": "Heung-min_Son", "2": "Lionel_Messi", "3": "Mikaela_Shiffrin", "4": "Rafael_Nadal", "5": "Usain_Bolt"}}}}], "splits": [{"name": "train", "num_bytes": 3222797.0, "num_examples": 18}], "download_size": 3217445, "dataset_size": 3222797.0}}
|
2023-05-13T08:46:54+00:00
|
ccc9217d0fb4a7eb342ab7bc771b10894d807b64
|
gycchris/gycchris
|
[
"license:cc-by-sa-4.0",
"region:us"
] |
2023-05-13T09:23:30+00:00
|
{"license": "cc-by-sa-4.0"}
|
2023-05-13T09:23:30+00:00
|
|
7f414457570fb261449b3b9d456a2ec4cf0f6117
|
# Dataset Card for "DR_Grading_413_103"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sngsfydy/DR_Grading_413_103
|
[
"region:us"
] |
2023-05-13T10:36:10+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4"}}}}], "splits": [{"name": "train", "num_bytes": 261501746.0, "num_examples": 413}, {"name": "test", "num_bytes": 64805638.0, "num_examples": 103}], "download_size": 316625605, "dataset_size": 326307384.0}}
|
2023-05-13T10:38:39+00:00
|
cc6c186b8929c0578a92b8fad87ffe2dd62de995
|
# Dataset Card for "Disease_Grading_for_DR_and_Mucula"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sngsfydy/Disease_Grading_for_DR_and_Mucula
|
[
"region:us"
] |
2023-05-13T10:41:13+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4"}}}}], "splits": [{"name": "train", "num_bytes": 261501746.0, "num_examples": 413}, {"name": "test", "num_bytes": 64805638.0, "num_examples": 103}], "download_size": 316625605, "dataset_size": 326307384.0}}
|
2023-05-13T10:41:35+00:00
|
a4954e44fd33648538daa3f1f829c6deba2deae7
|
yongchoooon/fire-test
|
[
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2023-05-13T10:51:13+00:00
|
{"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "fire-test", "tags": []}
|
2023-06-03T10:57:56+00:00
|
|
5a056f6c57a018b232de870936f7f0385741b768
|
Real-world smart meter dataset, 220 V, 50 Hz (EU)
|
scalytics/smartmeterdata
|
[
"license:cc-by-nc-4.0",
"region:us"
] |
2023-05-13T10:56:48+00:00
|
{"license": "cc-by-nc-4.0"}
|
2023-05-13T12:02:54+00:00
|
3b68a980ebbfbec6a6ac4cd92c5e2d8b65b4dadc
|
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Ashish08/celeb-identities
|
[
"region:us"
] |
2023-05-13T11:23:44+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "David_Schwimmer", "1": "Megan_Fox", "2": "Mila_Kunis", "3": "Ryan_Reynolds", "4": "Scarlett_Johansson", "5": "Wayne_Rooney"}}}}], "splits": [{"name": "train", "num_bytes": 914546.0, "num_examples": 18}], "download_size": 916734, "dataset_size": 914546.0}}
|
2023-05-13T12:40:46+00:00
|
a1e4cd3e5f697cca2d2aa9b2507b939e9ebf2b2a
|
# KiriTrash Dataset
## Summary
KiriTrash is a collection of trash images taken on the shorelines of Tarawa Atoll, Kiribati.
This is a dataset I used for my own research.
## Dataset Description
+ Dataset format: COCO (a loading sketch follows this list)
+ Number of images: 650 training, 90 validation, 5 test
+ Preprocessing: auto-oriented, resized to 640x640
+ Classes: 1 class
+ Augmentations: horizontal flip, bounding-box exposure between -17% and +17%
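As a rough illustration of the COCO layout, the annotation file can be inspected with plain `json`; the path below is a placeholder, not the actual file shipped with the dataset:
```python
import json

# Inspect the COCO-format annotations (placeholder path).
with open("train/_annotations.coco.json") as f:
    coco = json.load(f)

print(len(coco["images"]), "images,", len(coco["annotations"]), "boxes")
print("categories:", [c["name"] for c in coco["categories"]])
```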
## Cite
If you use this dataset, I would really appreciate a citation of my GitHub homepage:
[GitHub Homepage](https://github.com/tbensap18).
---
license: odc-by
---
|
tbensap18/KiriBeach
|
[
"task_categories:object-detection",
"license:openrail",
"region:us"
] |
2023-05-13T12:28:24+00:00
|
{"license": "openrail", "task_categories": ["object-detection"]}
|
2023-05-13T12:52:52+00:00
|
a55d52be00be7227b82c111c41c5e623eed78dbf
|
# Dataset Card for "wikiart_testing"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nouman-10/wikiart_testing
|
[
"region:us"
] |
2023-05-13T13:03:12+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "filename", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "embeddings_pca512", "sequence": "float32"}, {"name": "resnet50_non_robust_feats", "sequence": "float32"}, {"name": "resnet50_robust_feats", "sequence": "float32"}, {"name": "artist_question", "dtype": "string"}, {"name": "artist_long_answer", "dtype": "string"}, {"name": "artist_short_answer", "dtype": "string"}, {"name": "style_question", "dtype": "string"}, {"name": "style_long_answer", "dtype": "string"}, {"name": "style_short_answer", "dtype": "string"}, {"name": "genre_question", "dtype": "string"}, {"name": "genre_long_answer", "dtype": "string"}, {"name": "genre_short_answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 197225567.0, "num_examples": 6000}], "download_size": 188628095, "dataset_size": 197225567.0}}
|
2023-05-13T13:03:34+00:00
|
7c0bf5ef97ad3343e0728345f34c4bc76466eba3
|
baira/indian_food_images
|
[
"license:openrail",
"region:us"
] |
2023-05-13T13:04:22+00:00
|
{"license": "openrail", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "burger", "1": "butter_naan", "2": "chai", "3": "chapati", "4": "chole_bhature", "5": "dal_makhani", "6": "dhokla", "7": "fried_rice", "8": "idli", "9": "jalebi", "10": "kaathi_rolls", "11": "kadai_paneer", "12": "kulfi", "13": "masala_dosa", "14": "momos", "15": "paani_puri", "16": "pakode", "17": "pav_bhaji", "18": "pizza", "19": "samosa"}}}}], "splits": [{"name": "train", "num_bytes": 1377006438.2874336, "num_examples": 5328}, {"name": "test", "num_bytes": 235132199.3925666, "num_examples": 941}], "download_size": 1600810218, "dataset_size": 1612138637.6800003}}
|
2023-05-20T12:20:48+00:00
|
|
73a922ccc6a204e03307147dfa606ae3391f916e
|
# Dataset Card for "stamp-verification-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
bilal01/stamp-verification-test
|
[
"region:us"
] |
2023-05-13T13:48:14+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 99500873.0, "num_examples": 5}], "download_size": 0, "dataset_size": 99500873.0}}
|
2023-05-14T05:47:09+00:00
|
c8fb5f0aaba6f49ec297530c6371ae160ab8b23b
|
coyotespike/testdata
|
[
"license:mit",
"region:us"
] |
2023-05-13T13:56:05+00:00
|
{"license": "mit"}
|
2023-05-13T13:56:05+00:00
|
|
bf39c5f8a56b8bda43d883e73224ad74a7a9be30
|
# Dataset Card for GSM QnA reasoning with ~8.8K entries.
### Dataset Summary
Contains a Parquet file with a list of instructions and answers; a loading sketch follows the field list below.
Each row consists of:
* INSTRUCTION
* RESPONSE
* SOURCE
* METADATA (json with language).
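A minimal loading sketch; the `train` split name is an assumption, and the column names come from the list above:
```python
from datasets import load_dataset
import json

# Split name "train" is an assumption; column names are documented above.
ds = load_dataset("0x22almostEvil/reasoning-gsm-qna-oa", split="train")

row = ds[0]
print(row["INSTRUCTION"])
print(row["RESPONSE"], "| source:", row["SOURCE"])

meta = row["METADATA"]
meta = json.loads(meta) if isinstance(meta, str) else meta  # may be str or dict
print("language:", meta["language"])
```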
### Original Datasets are available here:
* https://huggingface.co/datasets/gsm8k
* https://huggingface.co/datasets/reasoning-machines/gsm-hard
|
0x22almostEvil/reasoning-gsm-qna-oa
|
[
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"QnA",
"math",
"programming",
"region:us"
] |
2023-05-13T14:09:16+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"], "tags": ["QnA", "math", "programming"]}
|
2023-05-13T14:43:31+00:00
|
4156fb62fcea66a79233d17c89c825a4a129d6f9
|
# Dataset Card for "cd45rb_leukocytes_masks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
polejowska/cd45rb_leukocytes_masks
|
[
"region:us"
] |
2023-05-13T14:22:22+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 39145092410.45, "num_examples": 20518}, {"name": "validation", "num_bytes": 3870988258.696, "num_examples": 1988}, {"name": "test", "num_bytes": 4403608598.814, "num_examples": 2299}], "download_size": 47565523851, "dataset_size": 47419689267.96}}
|
2023-05-14T05:33:44+00:00
|
00ad523538801c04882b70680ae01b2bdeedd960
|
Justiceak/yellow-book
|
[
"license:mit",
"region:us"
] |
2023-05-13T14:23:08+00:00
|
{"license": "mit"}
|
2023-05-13T14:27:28+00:00
|
|
d73bdaa40938394b4e2220d72b90790695853d00
|
# Dataset Card for "rest23_sentiment_data_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
javilonso/rest23_sentiment_data_v2
|
[
"region:us"
] |
2023-05-13T14:51:31+00:00
|
{"dataset_info": {"features": [{"name": "Title", "dtype": "string"}, {"name": "Review", "dtype": "string"}, {"name": "Polarity", "dtype": "int64"}, {"name": "Country", "dtype": "int64"}, {"name": "Type", "dtype": "int64"}, {"name": "Title_Review", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 183968346.20422935, "num_examples": 226161}, {"name": "test", "num_bytes": 20441740.79577064, "num_examples": 25130}], "download_size": 127064003, "dataset_size": 204410087.0}}
|
2023-05-13T14:51:57+00:00
|
8b12d223b69f9f31fc828ca5edf10f2868cc567e
|
Zone369/Ai
|
[
"license:artistic-2.0",
"region:us"
] |
2023-05-13T15:17:04+00:00
|
{"license": "artistic-2.0"}
|
2023-05-13T15:17:04+00:00
|
|
061feb750308eee380fea6a489df5aa85b3cb688
|
# zh-tw-pythia-a12k-alpaca
This dataset is a part of the `zh-tw-pythia` project.
* Tokenizer: `zh-tw-pythia-tokenizer-a12k-te01`
* Built with: `alpaca`
* Rows: `104004`
* Max length: `2048`
* Full config:
```json
{"build_with": "alpaca", "preview_length": 256, "alpaca_settings": {"source_dataset": "zetavg/traditional-chinese-alpaca-en-align"}}
```
|
zh-tw-llm-dv/zh-tw-pythia-a12k-alpaca
|
[
"region:us"
] |
2023-05-13T15:22:13+00:00
|
{"dataset_info": {"dataset_size": 191727949.0, "download_size": 52567497, "features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"dtype": "string", "name": "preview"}], "splits": [{"name": "train", "num_bytes": 191727949.0, "num_examples": 104004}]}}
|
2023-05-13T15:25:46+00:00
|
cf94101f817f7890ac7f0a299e6a1da7426f35f0
|
# Dataset Card for "pixel-art-nouns"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jiovine/pixel-art-nouns
|
[
"region:us"
] |
2023-05-13T15:37:27+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 364572580.625, "num_examples": 49859}], "download_size": 328291373, "dataset_size": 364572580.625}}
|
2023-05-13T15:37:44+00:00
|
ad85e850c47e391ecd665c3491274d8eca47bbee
|
A dataset of Wikipedia sentences accompanied by valid and invalid abstract descriptions.
|
biu-nlp/abstract-sim
|
[
"region:us"
] |
2023-05-13T15:43:12+00:00
|
{}
|
2023-05-29T08:33:17+00:00
|
9ff1d1cc939daf0643422de67ac8767a4c97adc1
|
# Summary
This is a 🇹🇭 Thai-translated (GCP) dataset based on [MBZUAI/LaMini-instruction](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). The dataset was generated with a total of 2.58 million pairs of instructions and responses, which were later used to fine-tune the LaMini-LM model series.
The dataset utilizes GPT-3.5-turbo and is based on several existing prompt resources, including self-instruct (Wang et al., 2022), P3 (Sanh et al., 2022), FLAN (Longpre et al., 2023), and Alpaca (Taori et al., 2023).
For more information about the process of generating the instruction dataset, please refer to [the accompanying paper](https://arxiv.org/abs/2304.14402).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
### Special Thanks:
- Mr. Harris Boonkerd (Data Annotator)
### Languages: Thai
### Version: 1.0
---
|
Thaweewat/LaMini-instruction-th
|
[
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:th",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"arxiv:2304.14402",
"region:us"
] |
2023-05-13T15:57:39+00:00
|
{"language": ["th"], "license": "cc-by-nc-4.0", "size_categories": ["1M<n<10M"], "task_categories": ["question-answering"], "tags": ["instruction-finetuning"]}
|
2023-05-13T16:15:17+00:00
|
08afd66ec21e920afd2a4a2551ccc302e0a5aad5
|
# Dataset Card for "aptos"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sngsfydy/aptos
|
[
"region:us"
] |
2023-05-13T15:58:34+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4"}}}}], "splits": [{"name": "train", "num_bytes": 6185316143.746, "num_examples": 3662}], "download_size": 8874518024, "dataset_size": 6185316143.746}}
|
2023-05-13T16:07:47+00:00
|
54504e7fb31e98e51ed8f30171f501d2e88f0b0a
|
erichilarysmithsr/dovichousesimulation
|
[
"license:ncsa",
"doi:10.57967/hf/0646",
"region:us"
] |
2023-05-13T16:03:02+00:00
|
{"license": "ncsa"}
|
2023-05-13T16:05:51+00:00
|
|
8f042590bbce4b92fad7781f434a468d12122b62
|
erichilarysmithsr/autotrain-data-dovic-simulation-house-of-krebsville
|
[
"task_categories:conversational",
"language:en",
"language:es",
"license:afl-3.0",
"medical",
"code",
"biology",
"climate",
"doi:10.57967/hf/0647",
"region:us"
] |
2023-05-13T16:07:57+00:00
|
{"language": ["en", "es"], "license": "afl-3.0", "task_categories": ["conversational"], "pretty_name": "dovic-simuation-house", "tags": ["medical", "code", "biology", "climate"]}
|
2023-05-13T16:14:59+00:00
|
|
30aecbd2653259dafe5e7855930e68c621c7add1
|
# Dataset Card for "TextVQA_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/TextVQA_train
|
[
"region:us"
] |
2023-05-13T16:30:18+00:00
|
{"dataset_info": {"features": [{"name": "image_id", "dtype": "string"}, {"name": "question_id", "dtype": "int32"}, {"name": "question", "dtype": "string"}, {"name": "question_tokens", "sequence": "string"}, {"name": "image", "dtype": "image"}, {"name": "image_width", "dtype": "int32"}, {"name": "image_height", "dtype": "int32"}, {"name": "flickr_original_url", "dtype": "string"}, {"name": "flickr_300k_url", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "image_classes", "sequence": "string"}, {"name": "set_name", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "id_image", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 9836053547.652, "num_examples": 34602}], "download_size": 6184373820, "dataset_size": 9836053547.652}}
|
2023-05-13T16:34:14+00:00
|
51795a9b6c934fc4568ae19db5c14286d3caa118
|
# Dataset Card for "TextVQA_validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/TextVQA_validation
|
[
"region:us"
] |
2023-05-13T16:34:16+00:00
|
{"dataset_info": {"features": [{"name": "image_id", "dtype": "string"}, {"name": "question_id", "dtype": "int32"}, {"name": "question", "dtype": "string"}, {"name": "question_tokens", "sequence": "string"}, {"name": "image", "dtype": "image"}, {"name": "image_width", "dtype": "int32"}, {"name": "image_height", "dtype": "int32"}, {"name": "flickr_original_url", "dtype": "string"}, {"name": "flickr_300k_url", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "image_classes", "sequence": "string"}, {"name": "set_name", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "id_image", "dtype": "int64"}], "splits": [{"name": "validation", "num_bytes": 1438271534.0, "num_examples": 5000}], "download_size": 917037596, "dataset_size": 1438271534.0}}
|
2023-05-13T16:35:03+00:00
|
e0b132a30a587bc696ab92893c91ccfa4654c707
|
# Dataset Card for "TextVQA_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/TextVQA_test
|
[
"region:us"
] |
2023-05-13T16:35:05+00:00
|
{"dataset_info": {"features": [{"name": "image_id", "dtype": "string"}, {"name": "question_id", "dtype": "int32"}, {"name": "question", "dtype": "string"}, {"name": "question_tokens", "sequence": "string"}, {"name": "image", "dtype": "image"}, {"name": "image_width", "dtype": "int32"}, {"name": "image_height", "dtype": "int32"}, {"name": "flickr_original_url", "dtype": "string"}, {"name": "flickr_300k_url", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "image_classes", "sequence": "string"}, {"name": "set_name", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "id_image", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 1711535106.844, "num_examples": 5734}], "download_size": 957546751, "dataset_size": 1711535106.844}}
|
2023-05-13T16:35:47+00:00
|
ae0e85475994150d3e9a1740b46251c689dd6168
|
leostelon/happiness-report
|
[
"license:openrail",
"region:us"
] |
2023-05-13T16:37:16+00:00
|
{"license": "openrail"}
|
2023-05-13T16:44:54+00:00
|
|
44340193baf8cc7b4d0d7ba79d544360547d8c8a
|
# MIT 6.8300/6.8301 Advances in Computer Vision Final Project
This is a dataset card for our final project on ControlNets.
The dataset is obtained from https://www.kaggle.com/datasets/kengoichiki/the-metropolitan-museum-of-art-ukiyoe-dataset
Here, we used BLIP to generate the image captions (prompt) and OpenCV's Canny edge-detection algorithm to create the conditioning images (target).
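The conditioning images can be reproduced with a few lines of OpenCV. This is only a sketch: the input filename is a placeholder and the Canny thresholds are an assumption (perhaps the 50/100 hinted at by the dataset name):
```python
import cv2

# Placeholder input file; the thresholds (50, 100) are an assumption,
# possibly hinted at by the dataset name ukiyoe_50_100_control_net.
image = cv2.imread("ukiyoe_print.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 100)
cv2.imwrite("ukiyoe_print_edges.png", edges)
```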
|
annabely/ukiyoe_50_100_control_net
|
[
"region:us"
] |
2023-05-13T16:39:32+00:00
|
{"dataset_info": {"features": [{"name": "source", "dtype": "image"}, {"name": "target", "dtype": "image"}, {"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1713365742.05, "num_examples": 4015}], "download_size": 1765586642, "dataset_size": 1713365742.05}}
|
2023-05-14T09:27:31+00:00
|
f82ac6aa406bd79d9a92d078f0de9ff3f134b496
|
# MIT 6.8300/6.8301 Advances in Computer Vision Final Project
This is a dataset card for our final project on ControlNets.
The dataset is obtained from https://www.kaggle.com/datasets/kengoichiki/the-metropolitan-museum-of-art-ukiyoe-dataset
Here, we used BLIP to generate the image captions (prompt) and OpenCV's Canny edge-detection algorithm to create the conditioning images (target).
|
annabely/ukiyoe_100_200_control_net
|
[
"region:us"
] |
2023-05-13T16:47:11+00:00
|
{"dataset_info": {"features": [{"name": "source", "dtype": "image"}, {"name": "target", "dtype": "image"}, {"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1693358001.33, "num_examples": 4015}], "download_size": 1744645145, "dataset_size": 1693358001.33}}
|
2023-05-14T09:25:14+00:00
|
72cf6b30a015047b0e17291367fda34f4c93ccdc
|
TSVs are from https://github.com/google-research/FLAN/tree/4c5f0ba64239f7ff98fe90a5426fa175f4d1713c/flan/v2/cot_data
1 dupe removed, 31356 instances of alignment removed.
The dataset preview is broken; rows look like this:
```
{"question": "What does the government have control over?\\nOptions:\\n- trouble\\n- country\\n- army\\n- city\\n- control", "answer": "city", "chain_of_thought": "A city is a large town. A government controls large towns.\n"}
```
inspired by https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
All credit to anon8231489123 for the cleanup script, which I adapted into wizardlm_clean.py and then further adapted into clean_format_dedupe.py.
|
ewof/flan_unfiltered
|
[
"region:us"
] |
2023-05-13T16:51:23+00:00
|
{}
|
2023-05-15T12:28:38+00:00
|
cda27c441e2814b84485a387f8d39664ff9236b1
|
# Dataset Card for "bookmebus-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
seanghay/bookmebus-reviews
|
[
"region:us"
] |
2023-05-13T17:01:59+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 622221, "num_examples": 4114}], "download_size": 371796, "dataset_size": 622221}}
|
2023-05-13T17:02:11+00:00
|
05c4065c416b9ee94b7b67e04200d1947972ced1
|
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Saiteja/celeb-identities
|
[
"region:us"
] |
2023-05-13T17:37:16+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "allu_arjun", "1": "chiranjeevi", "2": "kamal_haasan", "3": "mahesh_babu", "4": "prabhas", "5": "rajnikanth"}}}}], "splits": [{"name": "train", "num_bytes": 1952307.0, "num_examples": 18}], "download_size": 1943795, "dataset_size": 1952307.0}}
|
2023-05-13T17:53:03+00:00
|
c24031770264266239adc1c3a588febb0e2c13a3
|
# Dataset Card for SQuAD-TR
## Table of Contents
- [SQuAD-TR](#dataset-summary)
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## 📜 SQuAD-TR
SQuAD-TR is a machine translated version of the original [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset into Turkish, using [Amazon Translate](https://aws.amazon.com/translate/).
### Dataset Description
- **Repository:** [SQuAD-TR GitHub Repository](https://github.com/boun-tabi/SQuAD2.0-TR)
- **Paper:** Building Efficient and Effective OpenQA Systems for Low-Resource Languages
- **Point of Contact:** [Emrah Budur](mailto:[email protected])
## Dataset Structure
### Data Instances
Our data instances follow those of the original SQuAD2.0 dataset.
Shared below is an example instance from the default train split 🍫
Example from SQuAD2.0:
```
{
  "context": "Chocolate is New York City's leading specialty-food export, with up to US$234 million worth of exports each year. Entrepreneurs were forming a \"Chocolate District\" in Brooklyn as of 2014, while Godiva, one of the world's largest chocolatiers, continues to be headquartered in Manhattan.",
  "qas": [
    {
      "id": "56cff221234ae51400d9c140",
      "question": "Which one of the world's largest chocolate makers is stationed in Manhattan?",
      "is_impossible": false,
      "answers": [
        {
          "text": "Godiva",
          "answer_start": 194
        }
      ]
    }
  ]
}
```
Turkish translation:
```
{
  "context": "Çikolata, her yıl 234 milyon ABD dolarına varan ihracatı ile New York'un önde gelen özel gıda ihracatıdır. Girişimciler 2014 yılı itibariyle Brooklyn'de bir “Çikolata Bölgesi” kurarken, dünyanın en büyük çikolatacılarından biri olan Godiva merkezi Manhattan'da olmaya devam ediyor.",
  "qas": [
    {
      "id": "56cff221234ae51400d9c140",
      "question": "Dünyanın en büyük çikolata üreticilerinden hangisi Manhattan'da konuşlandırılmış?",
      "is_impossible": false,
      "answers": [
        {
          "text": "Godiva",
          "answer_start": 233
        }
      ]
    }
  ]
}
```
### Data Fields
Below is the data model of the splits.
- `id`: a string feature.
- `title`: a string feature.
- `context`: a string feature.
- `question`: a string feature.
- `answers`: a dictionary feature containing:
- `text`: a string feature.
- `answer_start`*: an int32 feature.
*Notes:
- The training split obtained with the `openqa` parameter will not include the `answer_start` field, as it is not required for the training phase of the OpenQA formulation.
- The split obtained with the `excluded` parameter is also missing the `answer_start` field, as we could not identify the starting index of the answers in the context after translation.
## Dataset Creation
We translated the titles, context paragraphs, questions and answer spans from the original SQuAD2.0 dataset using [Amazon Translate](https://aws.amazon.com/translate/) - requiring us to remap the starting positions of the answer spans, since their positions were changed due to the automatic translation.
We performed an automatic post-processing step to populate the start positions of the answer spans. To do so, we first checked whether there was an exact match for the translated answer span in the translated context paragraph and, if so, kept the answer text along with the start position found.
If no exact match was found, we looked for approximate matches using a character-level edit distance algorithm.
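A sketch of what such a matching step can look like; the use of `difflib` and the 0.8 similarity cut-off are illustrative assumptions, not the exact algorithm used:
```python
import difflib

def find_answer_start(context: str, answer: str) -> int:
    """Return a start index for `answer` in `context`, or -1 if no good match."""
    # Try an exact match first.
    idx = context.find(answer)
    if idx != -1:
        return idx
    # Otherwise slide a window over the context and keep the closest
    # character-level match; the 0.8 cut-off is an illustrative assumption.
    best_idx, best_ratio = -1, 0.0
    for i in range(len(context) - len(answer) + 1):
        window = context[i : i + len(answer)]
        ratio = difflib.SequenceMatcher(None, window, answer).ratio()
        if ratio > best_ratio:
            best_idx, best_ratio = i, ratio
    return best_idx if best_ratio >= 0.8 else -1
```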
We have excluded the question-answer pairs from the original dataset where neither an exact nor an approximate match was found in the translated version. Our `default` configuration corresponds to this version.
We have put the excluded examples in our `excluded` configuration.
As a result, the datasets in these two configurations are mutually exclusive. Below are the details for the corresponding dataset splits.
### Data Splits
The SQuAD2.0 TR dataset has 2 splits: _train_ and _validation_. Below are the statistics for the most recent version of the dataset in the default configuration.
| Split | Articles | Paragraphs | Answerable Questions | Unanswerable Questions | Total |
| ---------- | -------- | ---------- | -------------------- | ---------------------- | ------- |
| train | 442 | 18776 | 61293 | 43498 | 104,791 |
| validation | 35 | 1204 | 2346 | 5945 | 8291 |
| Split | Articles | Paragraphs | Questions w/o answers | Total |
| ------- | -------- | ---------- | --------------------- | ------- |
| train-excluded | 440 | 13490 | 25528 | 25528 |
| dev-excluded | 35 | 924 | 3582 | 3582 |
In addition to the default configuration, a different view of the train split can be obtained specifically for the OpenQA setting by combining the `train` and `train-excluded` splits. In this view, we only have question-answer pairs (without the `answer_start` field) along with their contexts.
| Split | Articles | Paragraphs | Questions w/ answers | Total |
| ---------- | -------- | ---------- | -------------------- | ------- |
| openqa | 442 | 18776 | 86821 | 86821 |
More information on our translation strategy can be found in our linked paper.
### Source Data
This dataset used the original SQuAD2.0 dataset as its source data.
### Licensing Information
The SQuAD-TR is released under [CC BY-NC-ND 4.0](https://creativecommons.org/licenses/by-nc-nd/4.0).
#### 🤗 HuggingFace datasets
```py
from datasets import load_dataset
squad_tr_standard_qa = load_dataset("[TBD]", "default")
squad_tr_open_qa = load_dataset("[TBD]", "openqa")
squad_tr_excluded = load_dataset("[TBD]", "excluded")
xquad_tr = load_dataset("xquad", "xquad.tr") # External resource
```
* Demo application 👉 [Google Colab](https://colab.research.google.com/drive/1QVD0c1kFfOUc1sRGKDHWeF_HgNEineRt?usp=sharing).
### 🔬 Reproducibility
You can find all code, models and samples of the input data here [link TBD]. Please feel free to reach out to us if you have any specific questions.
### ✍️ Citation
>[Emrah Budur](https://scholar.google.com/citations?user=zSNd03UAAAAJ), [Rıza Özçelik](https://www.cmpe.boun.edu.tr/~riza.ozcelik), [Dilara Soylu](https://scholar.google.com/citations?user=_NC2jJEAAAAJ), [Omar Khattab](https://omarkhattab.com), [Tunga Güngör](https://www.cmpe.boun.edu.tr/~gungort/) and [Christopher Potts](https://web.stanford.edu/~cgpotts).
Building Efficient and Effective OpenQA Systems for Low-Resource Languages. 2024.
```
@misc{budur-etal-2024-squad-tr,
title={Building Efficient and Effective OpenQA Systems for Low-Resource Languages},
author={Emrah Budur and R{\i}za \"{O}z\c{c}elik and Dilara Soylu and Omar Khattab and Tunga G\"{u}ng\"{o}r and Christopher Potts},
year={2024},
eprint={TBD},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## ❤ Acknowledgment
This research was supported by the _[AWS Cloud Credits for Research Program](https://aws.amazon.com/government-education/research-and-technical-computing/cloud-credit-for-research/) (formerly AWS Research Grants)_.
We thank Alara Dirik, Almira Bağlar, Berfu Büyüköz, Berna Erden, Gökçe Uludoğan, Havva Yüksel, Melih Barsbey, Murat Karademir, Selen Parlar, Tuğçe Ulutuğ, Utku Yavuz for their support on our application for AWS Cloud Credits for Research Program and Fatih Mehmet Güler for the valuable advice, discussion and insightful comments.
|
boun-tabi/squad_tr
|
[
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|squad",
"language:tr",
"license:cc-by-nc-nd-4.0",
"region:us"
] |
2023-05-13T18:01:44+00:00
|
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["tr"], "license": "cc-by-nc-nd-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|squad"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa", "extractive-qa"], "paperswithcode_id": "squad-tr", "pretty_name": "SQuAD-TR", "dataset_info": [{"config_name": "default", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 95795325, "num_examples": 104791}, {"name": "validation", "num_bytes": 8287109, "num_examples": 8291}], "download_size": 9425486, "dataset_size": 104082434}, {"config_name": "excluded", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 24130226, "num_examples": 25528}, {"name": "validation", "num_bytes": 3427513, "num_examples": 3582}], "download_size": 5270628, "dataset_size": 27557739}, {"config_name": "openqa", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 119261215, "num_examples": 130319}, {"name": "validation", "num_bytes": 11649046, "num_examples": 11873}], "download_size": 14696114, "dataset_size": 130910261}]}
|
2024-01-06T20:03:12+00:00
|
eb75e86c8e29eb013fed49870ac0854146d79c06
|
# Dataset Card for "randomized_clean_miniwob_episodes_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LucasThil/randomized_clean_miniwob_episodes_v2
|
[
"region:us"
] |
2023-05-13T18:13:54+00:00
|
{"dataset_info": {"features": [{"name": "task_name", "dtype": "string"}, {"name": "utterance", "dtype": "string"}, {"name": "reward", "dtype": "float64"}, {"name": "raw_reward", "dtype": "float64"}, {"name": "processed_states", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 443683307, "num_examples": 13412}], "download_size": 55056820, "dataset_size": 443683307}}
|
2023-05-13T18:13:59+00:00
|
4f2ab1c6d8c454249a6918b52ae3732d521fa616
|
plugnplai/plugins-dataset-sample
|
[
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2023-05-13T18:59:32+00:00
|
{"license": "cc-by-nc-sa-4.0"}
|
2023-08-01T16:21:14+00:00
|
|
8aadf83eedfa37a83baa10834008a58cf16e702d
|
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
cjensen/celeb-identities
|
[
"region:us"
] |
2023-05-13T19:18:27+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Carrot_Top", "1": "Chris_Hemsworth", "2": "Gru", "3": "Michael_Jordan", "4": "Mother_Teresa", "5": "Winona_Ryder"}}}}], "splits": [{"name": "train", "num_bytes": 8636520.0, "num_examples": 18}], "download_size": 8635182, "dataset_size": 8636520.0}}
|
2023-05-13T19:18:33+00:00
|
948d74333cce3d6e8ab5704e88648f12b9b610e2
|
sultan93/ar_bactarian
|
[
"license:unknown",
"region:us"
] |
2023-05-13T19:52:06+00:00
|
{"license": "unknown"}
|
2023-05-13T19:53:10+00:00
|
|
6b0acc40ca164fe9c5105a5699bb36bc9c645565
|
# Dataset Card for "personal_finance_v0.2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
danielv835/personal_finance_v0.2
|
[
"region:us"
] |
2023-05-13T20:06:30+00:00
|
{"dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 105692600, "num_examples": 56557}, {"name": "test", "num_bytes": 1825911, "num_examples": 1000}], "download_size": 64159306, "dataset_size": 107518511}}
|
2023-05-13T20:06:35+00:00
|
d096f402fdc76886458c0cfb5dedc829bea2b935
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [Enhancement to Low Resource Text Classification via Sequential Transfer Learning](#)
- **Leaderboard:**
- **Point of Contact:** [Neil Riego](mailto:[email protected])
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Tagalog (TL)
## Dataset Structure
### Data Instances
A typical data point comprises a text and the corresponding label.
An example from the test set looks as follows:
```
{
  'label': 2,
  'text': 'Madaling masira yung sa may sinisintasan nya. Wala rin syang box. Sana mas ginawa pa na matibay para sana sulit yung pagkakabili'
}
```
### Data Fields
- 'text': The review texts are escaped using double quotes ("), and any internal double quote is escaped by 2 double quotes ("").
- 'label': Corresponds to the score associated with the review (between 1 and 5).
### Data Splits
The Shopee reviews tl 15 dataset is constructed by randomly taking 2100 training samples and 450 samples each for testing and validation for each review star from 1 to 5.
In total there are 10500 training samples and 2250 samples each for validation and testing.
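A minimal loading sketch; the split names are an assumption based on the description above:
```python
from datasets import load_dataset

# Split names are an assumption based on the card's description.
ds = load_dataset("scaredmeow/shopee-reviews-tl-stars")
print(ds)              # expected: train / validation / test splits
print(ds["train"][0])  # {'label': ..., 'text': '...'}
```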
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
scaredmeow/shopee-reviews-tl-stars
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:tl",
"license:mpl-2.0",
"reviews",
"shopee",
"doi:10.57967/hf/0656",
"region:us"
] |
2023-05-13T20:13:28+00:00
|
{"language": ["tl"], "license": "mpl-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "1 star", "1": "2 star", "2": "3 stars", "3": "4 stars", "4": "5 stars"}}}}, {"name": "text", "dtype": "string"}]}, "tags": ["reviews", "shopee"]}
|
2023-05-15T06:40:20+00:00
|
5aef86d4d5146d753851bc7cca78498aab8af79b
|
DeadBeast/dreambooth-images
|
[
"license:openrail",
"region:us"
] |
2023-05-13T20:29:02+00:00
|
{"license": "openrail", "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 27360897.0, "num_examples": 11}], "download_size": 27356699, "dataset_size": 27360897.0}}
|
2023-05-13T20:36:37+00:00
|
|
17bec28f60c03e75417e8e8d4f4afa621ab4d15e
|
hostiliann/ererererer
|
[
"license:unknown",
"region:us"
] |
2023-05-13T20:36:12+00:00
|
{"license": "unknown"}
|
2023-05-13T20:36:12+00:00
|
|
e734e5093becb4ca8066bc50aa30f69e89b12931
|
# Dataset Card for "pixel-art-nouns-2k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jiovine/pixel-art-nouns-2k
|
[
"region:us"
] |
2023-05-13T20:45:53+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14571875.0, "num_examples": 2000}], "download_size": 13095236, "dataset_size": 14571875.0}}
|
2023-05-13T20:48:43+00:00
|
015d77b65c3cbef24b83d4634bd734e47f058d71
|
# Dataset Card for "blur_caption"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sejalg/blur_caption
|
[
"region:us"
] |
2023-05-13T20:48:58+00:00
|
{"dataset_info": {"features": [{"name": "original_image", "dtype": "image"}, {"name": "blurred_image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6367434279.878, "num_examples": 10882}], "download_size": 6368908097, "dataset_size": 6367434279.878}}
|
2023-05-14T16:19:13+00:00
|
0fe6a15666977ca76204791520e84f79f0765761
|
# Dataset Card for "roberta-no-topic-predictions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/roberta-no-topic-predictions
|
[
"region:us"
] |
2023-05-13T20:59:37+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "lang_score", "dtype": "float64"}, {"name": "topic", "dtype": "float64"}, {"name": "topic_prob", "dtype": "float64"}, {"name": "was_outlier", "dtype": "float64"}, {"name": "comments", "list": [{"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "score", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "validation", "num_bytes": 27640176, "num_examples": 8811}], "download_size": 17293485, "dataset_size": 27640176}}
|
2023-05-13T20:59:41+00:00
|
450b0d1538369576dfa6d16b140a9b776d75d7fa
|
# Dataset Card for "VQAv2_test_no_image_google_flan_t5_xl_mode_A_T_D_PNP_FILTER_C_Q_rices_ns_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_test_no_image_google_flan_t5_xl_mode_A_T_D_PNP_FILTER_C_Q_rices_ns_100
|
[
"region:us"
] |
2023-05-13T21:24:09+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random_", "num_bytes": 9503, "num_examples": 100}], "download_size": 5619, "dataset_size": 9503}}
|
2023-05-13T21:24:11+00:00
|
e576524ca841af3c36fd6912e68e5920430928c1
|
tatsu-lab/alpaca_farm
|
[
"license:cc-by-nc-4.0",
"region:us"
] |
2023-05-13T21:28:40+00:00
|
{"license": "cc-by-nc-4.0"}
|
2023-05-29T00:00:10+00:00
|
|
85894860d2d17eec8493f44a0e884db3146876e1
|
# Dataset Card for "reward-model-no-topic-predictions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/reward-model-no-topic-predictions
|
[
"region:us"
] |
2023-05-13T21:34:44+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "lang_score", "dtype": "float64"}, {"name": "topic", "dtype": "float64"}, {"name": "topic_prob", "dtype": "float64"}, {"name": "was_outlier", "dtype": "float64"}, {"name": "comments", "list": [{"name": "prediction", "dtype": "float64"}, {"name": "score", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "validation", "num_bytes": 24952821, "num_examples": 8811}], "download_size": 15720103, "dataset_size": 24952821}}
|
2023-05-13T21:34:47+00:00
|
d7170fef1e6a63f15c613c1de82d117e0bbf33c3
|
YaTharThShaRma999/ImageCaptioningDataset
|
[
"license:mit",
"region:us"
] |
2023-05-13T22:16:43+00:00
|
{"license": "mit"}
|
2023-05-13T22:16:43+00:00
|
|
7f2c6d28d2f94f33b41c893381254d567ffb0af2
|
Cacau/drkiridescentrealm
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-13T22:19:29+00:00
|
{"license": "apache-2.0"}
|
2023-05-13T22:20:32+00:00
|
|
e2bf3c2f1d7f0aef76fd95a9b417a6d748d97c2f
|
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
obrito/celeb-identities
|
[
"region:us"
] |
2023-05-13T22:28:24+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Armadillo", "1": "Cat", "2": "Corgi", "3": "Emma_Stone", "4": "Platypus", "5": "Ryan_Gosling"}}}}], "splits": [{"name": "train", "num_bytes": 1589786.0, "num_examples": 18}], "download_size": 1591720, "dataset_size": 1589786.0}}
|
2023-05-13T22:28:29+00:00
|
caff4686cdd2a21d09de90dfc679c7b58bfab095
|
kartik727/Test_Dataset
|
[
"size_categories:n<1K",
"language:en",
"license:mit",
"vison-language",
"region:us"
] |
2023-05-13T22:49:41+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "pretty_name": "NLP Project Dataset", "tags": ["vison-language"]}
|
2023-05-17T22:40:41+00:00
|
|
39db91866dd0f251f3b0c7f42c0f85634101df6e
|
# Dataset Card for "code-search-net-python"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/code-search-net-python
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This dataset is the Python portion of CodeSearchNet, annotated with a summary column.
The code-search-net dataset includes open-source functions with comments, collected from GitHub.
The summary is a short description of what the function does.
### Languages
The dataset's comments are in English and the functions are written in Python.
### Data Splits
Train, test, validation labels are included in the dataset as a column.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset can be used to generate instructional (or many other interesting) datasets that are useful for training LLMs.
### Source Data
The CodeSearchNet dataset can be found at https://www.kaggle.com/datasets/omduggineni/codesearchnet
### Annotations
This dataset includes a summary column containing a short description of each function.
#### Annotation process
The annotation procedure was done using [Salesforce](https://huggingface.co/Salesforce) T5 summarization models.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to remove repetitions and meaningless summaries, although some may still be present in the dataset.
### Licensing Information
Apache 2.0
|
Nan-Do/code-search-net-python
|
[
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:summarization",
"language:en",
"license:apache-2.0",
"code",
"python",
"CodeSearchNet",
"region:us"
] |
2023-05-13T23:42:57+00:00
|
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-generation", "text2text-generation", "summarization"], "pretty_name": "Python CodeSearchNet with Summaries", "dataset_info": {"features": [{"name": "repo", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "original_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "code_tokens", "sequence": "string"}, {"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}, {"name": "sha", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "partition", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1772584117, "num_examples": 455243}], "download_size": 598837908, "dataset_size": 1772584117}, "tags": ["code", "python", "CodeSearchNet"]}
|
2023-05-14T23:55:15+00:00
|
b31e70e407f7674b39c5a4a0fe53335d5d45a202
|
# Dataset Card for "VQAv2_validation_no_image_google_flan_t5_xxl_mode_A_T_D_PNP_FILTER_C_Q_rices_ns_2000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_validation_no_image_google_flan_t5_xxl_mode_A_T_D_PNP_FILTER_C_Q_rices_ns_2000
|
[
"region:us"
] |
2023-05-14T00:06:31+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random_", "num_bytes": 282708, "num_examples": 2000}], "download_size": 100307, "dataset_size": 282708}}
|
2023-05-14T00:06:33+00:00
|
168e980363218100cb8a4bde59525ca6cd2041bd
|
# Dataset Card for "musicdiffuser"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ramgus/musicdiffuser
|
[
"region:us"
] |
2023-05-14T00:21:28+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1244122934.912, "num_examples": 9929}], "download_size": 1183249933, "dataset_size": 1244122934.912}}
|
2023-05-14T00:53:24+00:00
|
b9ba22fa430481061c9e33748b4a272d1522860e
|
# Dataset Card for "VQAv2_validation_no_image_google_flan_t5_xxl_mode_T_A_D_PNP_FILTER_C_Q_rices_ns_2000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_validation_no_image_google_flan_t5_xxl_mode_T_A_D_PNP_FILTER_C_Q_rices_ns_2000
|
[
"region:us"
] |
2023-05-14T00:21:45+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random_", "num_bytes": 282770, "num_examples": 2000}], "download_size": 100408, "dataset_size": 282770}}
|
2023-05-14T00:21:47+00:00
|
776d6c567919b3a5184cb71aed59568565303747
|
# Dataset Card for "VQAv2_test_no_image_google_flan_t5_xxl_mode_T_A_D_PNP_FILTER_C_Q_rices_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_test_no_image_google_flan_t5_xxl_mode_T_A_D_PNP_FILTER_C_Q_rices_ns_1000
|
[
"region:us"
] |
2023-05-14T00:39:40+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random_", "num_bytes": 94167, "num_examples": 1000}], "download_size": 29466, "dataset_size": 94167}}
|
2023-05-14T00:39:42+00:00
|
e51995ddfb8ae2d6e1c5de29f8c041065cb57059
|
# Dataset Card for "analisis-sentimientos-textos-turisitcos-mx-polaridad-DataAugmentationV1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vg055/analisis-sentimientos-textos-turisitcos-mx-polaridad-DataAugmentationV1
|
[
"region:us"
] |
2023-05-14T00:49:41+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 99751382, "num_examples": 243912}, {"name": "test", "num_bytes": 10317131, "num_examples": 25171}], "download_size": 67444651, "dataset_size": 110068513}}
|
2023-05-14T00:49:50+00:00
|
81c8d24b26e5a78b69bb4c03fc77a9e50c851bc9
|
containerlib/embedded_sg_faq
|
[
"license:mit",
"region:us"
] |
2023-05-14T00:58:07+00:00
|
{"license": "mit"}
|
2023-05-14T01:04:07+00:00
|
|
ee4a3cbadb2f14e776646f82ce98b9f534335c23
|
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
khelton/celeb-identities
|
[
"region:us"
] |
2023-05-14T01:16:56+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Jackie_Robinson", "1": "Karen_Gillan", "2": "Ken_Griffey_Jr", "3": "Michelle_Yeoh", "4": "Mike_Trout", "5": "Ralph_Macchio", "6": "Steve_Lemme"}}}}], "splits": [{"name": "train", "num_bytes": 7874324.0, "num_examples": 27}], "download_size": 7877307, "dataset_size": 7874324.0}}
|
2023-05-14T01:17:03+00:00
|
3d8f37afe078f9c3d703ce7e69adcf7d34200a67
|
# MIT 6.8300/6.8301 Advances in Computer Vision Final Project
This is the dataset card for our final project on ControlNets.
The dataset is obtained from https://www.kaggle.com/datasets/kengoichiki/the-metropolitan-museum-of-art-ukiyoe-dataset
Here, we used BLIP to generate image captions (prompt) and OpenCV's Canny edge detection algorithm to produce the conditioning images (target), as sketched below.
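A minimal sketch of the conditioning-image step (the file names and Canny thresholds are illustrative, not the exact values we used):
```python
import cv2

# Load one ukiyo-e print in grayscale and compute its Canny edge map;
# the edge map becomes the ControlNet conditioning image (target).
image = cv2.imread("ukiyoe_print.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
edges = cv2.Canny(image, 100, 200)  # low/high thresholds are assumptions
cv2.imwrite("target.png", edges)
```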
|
annabely/ukiyoe_10_30_control_net
|
[
"region:us"
] |
2023-05-14T01:49:30+00:00
|
{"dataset_info": {"features": [{"name": "source", "dtype": "image"}, {"name": "target", "dtype": "image"}, {"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1781618172.45, "num_examples": 4015}], "download_size": 1838494499, "dataset_size": 1781618172.45}}
|
2023-05-14T09:24:00+00:00
|
838358a8bd654e55020d78e43401a2ccb90a4080
|
# Dataset Card for "celeb-identities-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
deman539/celeb-identities-test
|
[
"region:us"
] |
2023-05-14T02:25:46+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Dua_Lipa", "1": "Emma_Watson", "2": "Kim_Kardashian", "3": "Morgan_Freeman", "4": "Robert_Downey_Jr", "5": "Salma_Hayek", "6": "Tom_Cruise"}}}}], "splits": [{"name": "train", "num_bytes": 1747620.0, "num_examples": 27}], "download_size": 1745368, "dataset_size": 1747620.0}}
|
2023-05-14T02:25:48+00:00
|
207e24b71fba53a5391839458773145582e269ef
|
# Synthetic Python Problems(SPP) Dataset
The dataset includes around 450k synthetic Python programming problems. Each problem consists of a task description, 1-3 examples, a code solution, and 1-3 test cases.
The CodeGeeX-13B model was used to generate this dataset.
A subset of the data has been verified by a Python interpreter and de-duplicated. This data is `SPP_30k_verified.jsonl`.
The dataset is in .jsonl format (one JSON object per line).
Released as part of Self-Learning to Improve Code Generation with Interpreter, Yetao et al., 2023.
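A minimal sketch of reading the verified subset line by line (the printed keys are whatever schema the release uses; they are not documented here):
```python
import json

# Iterate over the verified, de-duplicated subset: one JSON object per line.
with open("SPP_30k_verified.jsonl") as f:
    for line in f:
        problem = json.loads(line)
        print(sorted(problem.keys()))  # inspect the schema of one problem
        break
```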
|
wuyetao/spp
|
[
"size_categories:100K<n<1M",
"license:cc-by-4.0",
"region:us"
] |
2023-05-14T02:54:27+00:00
|
{"license": "cc-by-4.0", "size_categories": ["100K<n<1M"]}
|
2023-05-14T03:49:53+00:00
|
c7194e3f90ee84684a26f335288bcbbfc7b8c94f
|
# Distilled XSum Dataset
This folder contains the dataset loading script for the distilled XSum data, which replaces the gold summaries with the [pseudo-labels](https://github.com/huggingface/transformers/blob/main/examples/research_projects/seq2seq-distillation/precomputed_pseudo_labels.md) generated by google/pegasus-xsum.
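A minimal loading sketch (the repo ships its own loading script, so the repo id is enough):
```python
from datasets import load_dataset

# Loads the distilled XSum data with pegasus-xsum pseudo-labels in place of gold summaries.
# Newer versions of `datasets` may additionally require trust_remote_code=True.
dataset = load_dataset("yuyang/distil_xsum")
```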
|
yuyang/distil_xsum
|
[
"region:us"
] |
2023-05-14T03:22:05+00:00
|
{}
|
2023-05-14T03:22:29+00:00
|
51f29bb5d58b51a03c17bd3a1e0ce7b461e8df8a
|
# Dataset Card for "code-search-net-javascript"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/code-search-net-JavaScript
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This dataset is the JavaScript portion of the CodeSearchNet dataset, annotated with a summary column.
The code-search-net dataset includes open-source functions with their comments, collected from GitHub.
The summary is a short description of what the function does.
### Languages
The dataset's comments are in English and the functions are coded in JavaScript.
### Data Splits
Train, test, validation labels are included in the dataset as a column.
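A minimal sketch of loading the dataset and selecting one partition (the split column is named `partition`, as in the features listed in the metadata; the value "train" is an assumption):
```python
from datasets import load_dataset

# Load all rows, then keep only those labelled as training data.
ds = load_dataset("Nan-Do/code-search-net-javascript", split="train")
train_rows = ds.filter(lambda row: row["partition"] == "train")
```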
## Dataset Creation
May of 2023
### Curation Rationale
This dataset can be used to generate instructional (or many other interesting) datasets that are useful to train LLMs
### Source Data
The CodeSearchNet dataset can be found at https://www.kaggle.com/datasets/omduggineni/codesearchnet
### Annotations
This dataset includes a summary column containing a short description of the function.
#### Annotation process
The annotation procedure was done using [Salesforce](https://huggingface.co/Salesforce) T5 summarization models.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to make sure there are no repetitions and/or meaningless summaries (some may still be present in the dataset).
### Licensing Information
Apache 2.0
|
Nan-Do/code-search-net-javascript
|
[
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:summarization",
"language:en",
"license:apache-2.0",
"code",
"javascript",
"CodeSearchNet",
"summary",
"region:us"
] |
2023-05-14T03:31:20+00:00
|
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-generation", "text2text-generation", "summarization"], "pretty_name": "JavaScript CodeSearchNet with Summaries", "dataset_info": {"features": [{"name": "repo", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "original_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "code_tokens", "sequence": "string"}, {"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}, {"name": "sha", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "partition", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 543032741, "num_examples": 138155}], "download_size": 182237165, "dataset_size": 543032741}, "tags": ["code", "javascript", "CodeSearchNet", "summary"]}
|
2023-05-14T23:57:43+00:00
|
1298afed59cf69f67e738715e8ebe4e35801c11f
|
Thouph/dump_complete
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-14T03:50:21+00:00
|
{"license": "apache-2.0"}
|
2023-05-14T03:50:21+00:00
|
|
ed2ec686d8d3d8300a8bfdeda30344e2f13c8675
|
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zxzl/celeb-identities
|
[
"region:us"
] |
2023-05-14T04:04:51+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Beyonce", "1": "Jennie_of_BlackPink", "2": "Martin_Luther_King_Jr.", "3": "Matt_Damon", "4": "Miranda_Kerr", "5": "RM_of_BTS"}}}}], "splits": [{"name": "train", "num_bytes": 1207553.0, "num_examples": 18}], "download_size": 1206043, "dataset_size": 1207553.0}}
|
2023-05-14T04:04:53+00:00
|
7d7a3dea4d810fce9ee71cb46bee1b57a7b88298
|
# Dataset Card for "github-code-haskell-function"
Rows: 3.26M
Download Size: 1.17GB
This dataset is extracted from [github-code-haskell-file](https://huggingface.co/datasets/blastwind/github-code-haskell-file).
Each row has 3 flavors of the same function:
`uncommented_code`: Includes the function and its closest signature.
`function_only_code`: Includes the function only.
`full_code`: Includes the function and its closest [signature](https://wiki.haskell.org/Type_signature) and comment.
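A hypothetical row illustrating the three flavors (the Haskell snippets are invented for illustration):
```python
# One row: three views of the same (made-up) function.
row = {
    "full_code": "-- | Add one.\naddOne :: Int -> Int\naddOne x = x + 1",
    "uncommented_code": "addOne :: Int -> Int\naddOne x = x + 1",
    "function_only_code": "addOne x = x + 1",
}
```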
The heuristic for finding the closest signature and comment is as follows: if the immediate previous neighbor of the function is neither a signature nor a comment, `full_code` is just the function. If the previous neighbor is one of the two, include it appropriately, then search that neighbor's previous neighbor for the other kind of node with the same logic.
Further, each row also contains attribute values for my personal analysis project. The attributes are calculated from the code in the `uncommented_code` column.
7% (225k) of the rows have cyclomatic complexity and LOC valued at `-1` because [`homplexity`](https://github.com/BlastWind/homplexity) failed to parse the row's `uncommented_code`.
|
blastwind/deprecated-github-code-haskell-function
|
[
"task_categories:text-generation",
"size_categories:1M<n<10M",
"code",
"haskell",
"region:us"
] |
2023-05-14T04:17:31+00:00
|
{"size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "full_code", "dtype": "string"}, {"name": "full_size", "dtype": "int64"}, {"name": "uncommented_code", "dtype": "string"}, {"name": "uncommented_size", "dtype": "int64"}, {"name": "function_only_code", "dtype": "string"}, {"name": "function_only_size", "dtype": "int64"}, {"name": "is_commented", "dtype": "bool"}, {"name": "is_signatured", "dtype": "bool"}, {"name": "n_ast_errors", "dtype": "int64"}, {"name": "ast_max_depth", "dtype": "int64"}, {"name": "n_whitespaces", "dtype": "int64"}, {"name": "n_ast_nodes", "dtype": "int64"}, {"name": "n_ast_terminals", "dtype": "int64"}, {"name": "n_ast_nonterminals", "dtype": "int64"}, {"name": "loc", "dtype": "int64"}, {"name": "cycloplexity", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2166157579, "num_examples": 2284385}, {"name": "valid", "num_bytes": 307778276, "num_examples": 326341}, {"name": "test", "num_bytes": 620756348, "num_examples": 652682}], "download_size": 1597070903, "dataset_size": 3094692203}, "tags": ["code", "haskell"]}
|
2023-12-01T06:04:52+00:00
|
d5388197d457f4d294975d252aa9306487b9bd75
|
# Dataset Card for "github-code-haskell"
Size: 754MB
Rows: 1.5M
Each row also contains attribute values for my personal analysis project. 7.5% (114k) of the rows have cyclomatic complexity and LOC valued at -1 because homplexity failed to parse the row's uncommented_code.
|
blastwind/github-code-haskell
|
[
"region:us"
] |
2023-05-14T04:17:40+00:00
|
{"dataset_info": {"features": [{"name": "code", "dtype": "string"}, {"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "size", "dtype": "int64"}, {"name": "n_ast_errors", "dtype": "int64"}, {"name": "ast_max_depth", "dtype": "int64"}, {"name": "n_whitespaces", "dtype": "int64"}, {"name": "n_ast_nodes", "dtype": "int64"}, {"name": "n_ast_terminals", "dtype": "int64"}, {"name": "n_ast_nonterminals", "dtype": "int64"}, {"name": "loc", "dtype": "int64"}, {"name": "cycloplexity", "dtype": "int64"}, {"name": "granularity", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1233315603, "num_examples": 1064847}, {"name": "valid", "num_bytes": 178190369, "num_examples": 152121}, {"name": "test", "num_bytes": 356331754, "num_examples": 304243}], "download_size": 753800043, "dataset_size": 1767837726}}
|
2023-05-16T04:20:55+00:00
|
3c83f1baccc3e43160b449b3dc3b1a2b4badd7a1
|
Phuoth/6e-complete
|
[
"license:mit",
"region:us"
] |
2023-05-14T04:46:04+00:00
|
{"license": "mit", "viewer": false}
|
2023-05-14T04:59:37+00:00
|
|
aa2d55c568bd72bb86d230bf45e5a1ed07784e41
|
TeamSODA/ae-signal_processing_attacks_whisper_librispeech
|
[
"task_categories:audio-to-audio",
"size_categories:1K<n<10K",
"language:en",
"license:openrail",
"region:us"
] |
2023-05-14T04:58:11+00:00
|
{"language": ["en"], "license": "openrail", "size_categories": ["1K<n<10K"], "task_categories": ["audio-to-audio"], "pretty_name": "SodaSpPair", "dataset_info": {"features": [{"name": "audio_0", "dtype": "audio"}, {"name": "audio_1", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 14183982634, "num_examples": 9000}], "download_size": 432744050, "dataset_size": 14183982634}}
|
2023-05-14T07:34:52+00:00
|
|
f67dfac62d558a80cf79f4757d4005a9901403f9
|
nanxstats/movie-poster-5k
|
[
"license:openrail",
"region:us"
] |
2023-05-14T05:10:12+00:00
|
{"license": "openrail"}
|
2023-05-14T05:16:25+00:00
|
|
e7429f94e0acf486ef421737468a882c8e013dc3
|
pxovela/merab_6_longer_adjusted_captions
|
[
"license:openrail",
"region:us"
] |
2023-05-14T05:25:15+00:00
|
{"license": "openrail"}
|
2023-05-14T05:26:43+00:00
|
|
0da2a6267f4bf97737d8d74c104c3c26cee7c9af
|
Vanmas/PoE_data
|
[
"license:cc",
"region:us"
] |
2023-05-14T05:33:37+00:00
|
{"license": "cc"}
|
2023-08-28T07:57:29+00:00
|
|
85a3c8bc81ae638db8eb0d39c8b43227794c5342
|
pxovela/training_setting_burnt_unet_and_text_encoder
|
[
"license:openrail",
"region:us"
] |
2023-05-14T05:51:22+00:00
|
{"license": "openrail"}
|
2023-05-14T05:56:26+00:00
|
|
02d64dc60bdc1d549e2de10187aa8987e31299cc
|
# Dataset Card for "cl-signal_processing_attacks_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
TeamSODA/cl-signal_processing_attacks_whisper_librispeech
|
[
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"language:en",
"license:openrail",
"region:us"
] |
2023-05-14T05:53:16+00:00
|
{"language": ["en"], "license": "openrail", "size_categories": ["1K<n<10K"], "task_categories": ["audio-classification"], "pretty_name": "SodaSP", "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0-original", "1": "1-attacked"}}}}], "splits": [{"name": "train", "num_bytes": 13751864078, "num_examples": 18000}], "download_size": 910820595, "dataset_size": 13751864078}}
|
2023-05-14T07:34:31+00:00
|
144c3349bf849409ffb42da1bc835f2def1d39bf
|
# Dataset Card for "details_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
darksensei/details_dataset
|
[
"region:us"
] |
2023-05-14T06:23:46+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "full_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 43751142.4, "num_examples": 960}, {"name": "test", "num_bytes": 10937785.6, "num_examples": 240}], "download_size": 53367810, "dataset_size": 54688928.0}}
|
2023-05-23T14:12:36+00:00
|
7ef2d8bba93a063d56911dbbc386ece6fb51961b
|
# Dataset Summary
Contains hourly 2-metre air temperature data over land (on-shore) within the grid area of Thailand. <br/>
Data is retrieved from the [Copernicus Climate Data Store](https://cds.climate.copernicus.eu/cdsapp#!/home) dataset [ERA5-Land hourly data from 1950 to present](https://cds.climate.copernicus.eu/cdsapp#!/dataset/10.24381/cds.e2161bac?tab=overview)
<br/>
The Thailand area in this context is **Latitude** = **[5.77434, 20.43353]** and **Longitude** = **[97.96852, 105.22908]** <br/>
For more details on the data, refer to [ERA5-Land hourly data from 1950 to present](https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-land?tab=overview)
- Data Granularity: Hourly per latitude/longitude grid point
- Period: **31/Dec/1999** - **08/May/2023**
- Temperature Unit: Celsius (°C) (original data from [ERA5-Land hourly data from 1950 to present](https://cds.climate.copernicus.eu/cdsapp#!/dataset/10.24381/cds.e2161bac?tab=overview) is in Kelvin)
# Source Data
- Organization of the producer: ECMWF
# Data Creation
Below is an example of how to query the data using Python via the [CDS API](https://cds.climate.copernicus.eu/api-how-to) in monthly requests. <br/>
The script can be found [here](https://huggingface.co/datasets/WasuratS/ECMWF_Thailand_Land_Air_Temperatures/blob/main/cds_api_requestor_example.py)
``` python
import cdsapi

c = cdsapi.Client()

month_list = [str(num).zfill(2) for num in range(1, 13)]
day_list = [str(num).zfill(2) for num in range(1, 32)]
time_list = [str(num).zfill(2) + ":00" for num in range(0, 24)]
year_list = [str(num) for num in range(2000, 2022)]

for year in year_list:
    for month in month_list:
        c.retrieve('reanalysis-era5-land',
                   {
                       'variable': ['2m_temperature'],
                       'year': year,
                       'month': month,
                       'day': day_list,
                       'time': time_list,
                       'format': 'grib',
                       'area': [20.43, 97.96, 5.77, 105.22],
                   },
                   f'{year}_{month}_hourly_2m_temp_TH.grib')
```
Direct file output from the API is in `.grib` format; to make further analysis easier, I have converted it to `.parquet` format. <br/>
To convert the GRIB format to a pandas DataFrame, you can use the [xarray](https://github.com/pydata/xarray) and [cfgrib](https://github.com/ecmwf/cfgrib) libraries, as in the example snippet below.
``` python
import xarray as xr
import cfgrib
ds = xr.open_dataset('2022_12_31_hourly_2m_temp_TH.grib', engine='cfgrib')
df = ds.to_dataframe().reset_index()
```
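Since the original ERA5 values are in Kelvin, a one-line conversion recovers Celsius (assuming cfgrib exposes `2m_temperature` under its usual short name `t2m`):
```python
# Convert 2 m temperature from Kelvin to Celsius.
df["t2m_celsius"] = df["t2m"] - 273.15
```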
## Licensing
[Climate Data Store Product Licensing](https://cds.climate.copernicus.eu/api/v2/terms/static/licence-to-use-copernicus-products.pdf)
## Citation
- This data was generated using **Copernicus Climate Change Service** information and <br/>
contains modified **Copernicus Climate Change Service** information for the 31/Dec/1999 - 08/May/2023 data period
- Muñoz Sabater, J. (2019): ERA5-Land hourly data from 1950 to present. <br/>
Copernicus Climate Change Service (C3S) Climate Data Store (CDS). <br/>
DOI: [10.24381/cds.e2161bac](https://cds.climate.copernicus.eu/cdsapp#!/dataset/10.24381/cds.e2161bac?tab=overview) (Accessed on 13-May-2023)
- Copernicus Climate Change Service (C3S) (2022): ERA5-Land hourly data from 1950 to present. <br/>
Copernicus Climate Change Service (C3S) Climate Data Store (CDS). <br/>
DOI: [10.24381/cds.e2161bac](https://cds.climate.copernicus.eu/cdsapp#!/dataset/10.24381/cds.e2161bac?tab=overview) (Accessed on 13-May-2023)
|
WasuratS/ECMWF_Thailand_Land_Air_Temperatures
|
[
"task_categories:time-series-forecasting",
"size_categories:100M<n<1B",
"license:eupl-1.1",
"climate",
"region:us"
] |
2023-05-14T06:30:45+00:00
|
{"license": "eupl-1.1", "size_categories": ["100M<n<1B"], "task_categories": ["time-series-forecasting"], "tags": ["climate"]}
|
2023-05-15T00:20:10+00:00
|
51c4d9e7b5436505d27676b38f5f893aa3b63519
|
# Gaepago (Gae8J/gaepago_s)
## How to use
### 1. Install dependencies
```bash
pip install datasets==2.10.1
pip install soundfile==0.12.1
pip install librosa==0.10.0.post2
```
### 2. Load the dataset
```python
from datasets import load_dataset
dataset = load_dataset("Gae8J/gaepago_s")
```
Outputs
```
DatasetDict({
train: Dataset({
features: ['file', 'audio', 'label', 'is_unknown', 'youtube_id'],
num_rows: 12
})
validation: Dataset({
features: ['file', 'audio', 'label', 'is_unknown', 'youtube_id'],
num_rows: 12
})
test: Dataset({
features: ['file', 'audio', 'label', 'is_unknown', 'youtube_id'],
num_rows: 12
})
})
```
### 3. Check a sample
```python
dataset['train'][0]
```
Outputs
```
{'file': 'bark/1_Q80fDGLRM.wav', 'audio': {'path': 'bark/1_Q80fDGLRM.wav', 'array': array([-9.15838356e-08, 6.80501699e-08, 1.97052145e-07, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00]), 'sampling_rate': 16000}, 'label': 0, 'is_unknown': False, 'youtube_id': '1_Q80fDGLRM'}
```
|
Gae8J/gaepago_s
|
[
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"license:other",
"region:us"
] |
2023-05-14T06:33:00+00:00
|
{"license": "other", "size_categories": ["1K<n<10K"], "task_categories": ["audio-classification"], "dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": {"class_label": {"names": {"0": "bark", "1": "bow-wow", "2": "growling", "3": "howl", "4": "whimper", "5": "yip"}}}}, {"name": "is_unknown", "dtype": "bool"}, {"name": "youtube_id", "dtype": "string"}, {"name": "youtube_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8774740.0, "num_examples": 12}, {"name": "validation", "num_bytes": 8774740.0, "num_examples": 12}, {"name": "test", "num_bytes": 8774740.0, "num_examples": 12}], "download_size": 26037015, "dataset_size": 26324220.0}}
|
2023-05-19T13:50:49+00:00
|
d0608e438fac44c57e2bd951254a1686322cbb2c
|
SilpaCS/Ecommerce
|
[
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] |
2023-05-14T06:38:25+00:00
|
{"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"]}
|
2023-05-14T06:42:49+00:00
|
|
49ed7ffcc4107a14182983586e12bde06154123b
|
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Ruqoyya/celeb-identities
|
[
"region:us"
] |
2023-05-14T07:14:52+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Albert_Einstein", "1": "Ashley_Olsen", "2": "Chris_Rock", "3": "Cristiano_Ronaldo", "4": "Didier_Drogba", "5": "Idris_Elba", "6": "Lionel_Messi", "7": "Mary-Kate_Olsen", "8": "Paul_Pogba", "9": "Tamera_Mowry", "10": "Tia_Mowry"}}}}], "splits": [{"name": "train", "num_bytes": 1992683.0, "num_examples": 34}], "download_size": 1995278, "dataset_size": 1992683.0}}
|
2023-05-14T07:14:55+00:00
|
dceb01add0c581c7376e71820ffc8470a4f2a106
|
chebao/rv
|
[
"license:openrail",
"region:us"
] |
2023-05-14T07:25:55+00:00
|
{"license": "openrail"}
|
2023-05-14T07:25:55+00:00
|
|
a2a731f30f6095e26596ef0760d1a76ca28d6ae4
|
# Dataset Card for "skin_cancer_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Pranavkpba2000/skin_cancer_dataset
|
[
"region:us"
] |
2023-05-14T07:40:43+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "AK", "1": "BCC", "2": "BKL", "3": "DF", "4": "MEL", "5": "NV", "6": "SCC", "7": "VASC"}}}}], "splits": [{"name": "train", "num_bytes": 9380942753.528, "num_examples": 28516}, {"name": "test", "num_bytes": 1445202498.285, "num_examples": 7105}], "download_size": 9852696203, "dataset_size": 10826145251.813}}
|
2023-05-14T07:47:49+00:00
|
030a540c609ee5da0761e8c3678925471ee46f92
|
pxovela/Test_Images_Overtrained_TE_vs_Unet
|
[
"license:openrail",
"region:us"
] |
2023-05-14T07:49:53+00:00
|
{"license": "openrail"}
|
2023-05-14T07:51:53+00:00
|
|
a3555d4b6e89ebd4d820b57c891a3990779f6844
|
WeixuanYuan/VAE_sound
|
[
"license:openrail",
"region:us"
] |
2023-05-14T07:52:18+00:00
|
{"license": "openrail"}
|
2023-05-14T12:44:37+00:00
|
|
5a3dd3f9ca524d2c5c1f02b5c0b3fd4cc81d64d6
|
Fine/Stable-diffusion
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-05-14T08:37:34+00:00
|
{"license": "creativeml-openrail-m"}
|
2023-05-14T08:46:58+00:00
|
|
91871af5af11efba6eb1fcd421403f783fc6ea85
|
# Dataset Card for "NCT-CRC-HE-45k"
```
@dataset{kather_jakob_nikolas_2018_1214456,
  author    = {Kather, Jakob Nikolas and
               Halama, Niels and
               Marx, Alexander},
  title     = {{100,000 histological images of human colorectal
                cancer and healthy tissue}},
  month     = apr,
  year      = 2018,
  publisher = {Zenodo},
  version   = {v0.1},
  doi       = {10.5281/zenodo.1214456},
  url       = {https://doi.org/10.5281/zenodo.1214456}
}
```
|
polejowska/NCT-CRC-HE-45k
|
[
"region:us"
] |
2023-05-14T09:29:04+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "ADI", "1": "BACK", "2": "DEB", "3": "LYM", "4": "MUC", "5": "MUS", "6": "NORM", "7": "STR", "8": "TUM"}}}}], "splits": [{"name": "train", "num_bytes": 2820558485.0, "num_examples": 45000}], "download_size": 1579401162, "dataset_size": 2820558485.0}}
|
2023-05-14T14:36:01+00:00
|
e17d6e3b97e09289bbb0ffab82ba2641e83a0a3c
|
> There is also an identical dataset for the new libritts-r dataset at [cdminix/libritts-r-aligned](https://huggingface.co/datasets/cdminix/libritts-r-aligned)
# Dataset Card for LibriTTS with Forced Alignments (and Measures)
UPDATE: The preprocessed alignments are now in this repository, so the Montreal Forced Aligner does not have to be run locally.
## Requirements
- ``pip install alignments phones`` **(required)**
- ``pip install speech-collator`` (optional)
## Example Item
```json
{
'id': '100_122655_000073_000002.wav',
'speaker': '100',
'text': 'the day after, diana and mary quitted it for distant b.',
'start': 0.0,
'end': 3.6500000953674316,
'phones': ['[SILENCE]', 'ð', 'ʌ', '[SILENCE]', 'd', 'eɪ', '[SILENCE]', 'æ', 'f', 't', 'ɜ˞', '[COMMA]', 'd', 'aɪ', 'æ', 'n', 'ʌ', '[SILENCE]', 'æ', 'n', 'd', '[SILENCE]', 'm', 'ɛ', 'ɹ', 'i', '[SILENCE]', 'k', 'w', 'ɪ', 't', 'ɪ', 'd', '[SILENCE]', 'ɪ', 't', '[SILENCE]', 'f', 'ɜ˞', '[SILENCE]', 'd', 'ɪ', 's', 't', 'ʌ', 'n', 't', '[SILENCE]', 'b', 'i', '[FULL STOP]'],
'phone_durations': [5, 2, 4, 0, 5, 13, 0, 16, 7, 5, 20, 2, 6, 9, 15, 4, 2, 0, 11, 3, 5, 0, 3, 8, 9, 8, 0, 13, 3, 5, 3, 6, 4, 0, 8, 5, 0, 9, 5, 0, 7, 5, 6, 7, 4, 5, 10, 0, 3, 35, 9],
'audio': '/dev/shm/metts/train-clean-360-alignments/100/100_122655_000073_000002.wav'
}
```
The phones are IPA phones, and the phone durations are in frames (assuming a hop length of 256, sample rate of 22050 and window length of 1024). These attributes can be changed using the ``hop_length``, ``sample_rate`` and ``window_length`` arguments to ``LibriTTSAlign``.
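A minimal sketch of converting phone durations from frames back to seconds under those defaults:
```python
# Frames -> seconds with hop length 256 at a 22050 Hz sample rate.
hop_length, sample_rate = 256, 22050
durations_frames = [5, 2, 4, 13]  # selected values from the example item above
durations_seconds = [f * hop_length / sample_rate for f in durations_frames]
print(durations_seconds)  # roughly [0.058, 0.023, 0.046, 0.151]
```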
## Data Collator
This dataset comes with a data collator which can be used to create batches of data for training.
It can be installed using ``pip install speech-collator`` ([MiniXC/speech-collator](https://www.github.com/MiniXC/speech-collator)) and can be used as follows:
```python
import json
from datasets import load_dataset
from speech_collator import SpeechCollator
from torch.utils.data import DataLoader
dataset = load_dataset('cdminix/libritts-aligned', split="train")
speaker2idx = json.load(open("speaker2idx.json"))
phone2idx = json.load(open("phone2idx.json"))
collator = SpeechCollator(
    speaker2idx=speaker2idx,
    phone2idx=phone2idx,
)
dataloader = DataLoader(dataset, collate_fn=collator.collate_fn, batch_size=8)
```
You can either download the ``speaker2idx.json`` and ``phone2idx.json`` files from [here](https://huggingface.co/datasets/cdminix/libritts-aligned/tree/main/data) or create them yourself using the following code:
```python
import json
from datasets import load_dataset
from speech_collator import SpeechCollator, create_speaker2idx, create_phone2idx
dataset = load_dataset("cdminix/libritts-aligned", split="train")
# Create speaker2idx and phone2idx
speaker2idx = create_speaker2idx(dataset, unk_idx=0)
phone2idx = create_phone2idx(dataset, unk_idx=0)
# save to json
with open("speaker2idx.json", "w") as f:
json.dump(speaker2idx, f)
with open("phone2idx.json", "w") as f:
json.dump(phone2idx, f)
```
### Measures
When using ``speech-collator`` you can also use the ``measures`` argument to specify which measures to use. The following example extracts Pitch and Energy on the fly.
```python
import json
from torch.utils.data import DataLoader
from datasets import load_dataset
from speech_collator import SpeechCollator, create_speaker2idx, create_phone2idx
from speech_collator.measures import PitchMeasure, EnergyMeasure
dataset = load_dataset("cdminix/libritts-aligned", split="train")
speaker2idx = json.load(open("data/speaker2idx.json"))
phone2idx = json.load(open("data/phone2idx.json"))
# Create SpeechCollator
speech_collator = SpeechCollator(
speaker2idx=speaker2idx,
phone2idx=phone2idx,
measures=[PitchMeasure(), EnergyMeasure()],
return_keys=["measures"]
)
# Create DataLoader
dataloader = DataLoader(
dataset,
batch_size=8,
collate_fn=speech_collator.collate_fn,
)
```
COMING SOON: Detailed documentation on how to use the measures at [MiniXC/speech-collator](https://www.github.com/MiniXC/speech-collator).
## Splits
This dataset has the following splits:
- ``train``: All the training data, except one sample per speaker which is used for validation.
- ``dev``: The validation data, one sample per speaker.
- ``train.clean.100``: Training set derived from the original materials of the train-clean-100 subset of LibriSpeech.
- ``train.clean.360``: Training set derived from the original materials of the train-clean-360 subset of LibriSpeech.
- ``train.other.500``: Training set derived from the original materials of the train-other-500 subset of LibriSpeech.
- ``dev.clean``: Validation set derived from the original materials of the dev-clean subset of LibriSpeech.
- ``dev.other``: Validation set derived from the original materials of the dev-other subset of LibriSpeech.
- ``test.clean``: Test set derived from the original materials of the test-clean subset of LibriSpeech.
- ``test.other``: Test set derived from the original materials of the test-other subset of LibriSpeech.
## Environment Variables
There are a few environment variables which can be set.
- ``LIBRITTS_VERBOSE``: If set, will print out more information about the dataset creation process.
- ``LIBRITTS_MAX_WORKERS``: The number of workers to use when creating the alignments. Defaults to ``cpu_count()``.
- ``LIBRITTS_PATH``: The path to download LibriTTS to. Defaults to the value of ``HF_DATASETS_CACHE``.
# Citation
When using LibriTTS please cite the following papers:
- [LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech](https://arxiv.org/abs/1904.02882)
- [Montreal Forced Aligner: Trainable text-speech alignment using Kaldi](https://www.researchgate.net/publication/319185277_Montreal_Forced_Aligner_Trainable_Text-Speech_Alignment_Using_Kaldi)
When using the Measures please cite the following paper (ours):
- [Evaluating and reducing the distance between synthetic and real speech distributions](https://arxiv.org/abs/2211.16049)
|
cdminix/libritts-aligned
|
[
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"annotations_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"speech",
"audio",
"automatic-speech-recognition",
"text-to-speech",
"arxiv:1904.02882",
"arxiv:2211.16049",
"region:us"
] |
2023-05-14T09:29:46+00:00
|
{"annotations_creators": ["crowdsourced"], "language": "en", "license": ["cc-by-4.0"], "task_categories": ["automatic-speech-recognition", "text-to-speech"], "pretty_name": "LibriTTS Corpus with Forced Alignments", "tags": ["speech", "audio", "automatic-speech-recognition", "text-to-speech"], "extra_gated_prompt": "When using this dataset to download LibriTTS, you agree to the terms on https://www.openslr.org"}
|
2023-10-11T18:46:28+00:00
|
fda1b621f979937e0ff1ea4475443aac2970b4ae
|
nightaway/pixelart
|
[
"license:openrail",
"region:us"
] |
2023-05-14T10:03:05+00:00
|
{"license": "openrail", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 560641.0, "num_examples": 176}], "download_size": 273903, "dataset_size": 560641.0}}
|
2023-05-25T08:34:56+00:00
|
|
ef8f2cee66f9c386eb8f9fe72d1fff1b47b6b95a
|
# Dataset Card for "prompt_generations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
diffusers/prompt_generations
|
[
"region:us"
] |
2023-05-14T10:47:42+00:00
|
{"dataset_info": {"features": [{"name": "Prompt", "dtype": "string"}, {"name": "Category", "dtype": "string"}, {"name": "Challenge", "dtype": "string"}, {"name": "Note", "dtype": "string"}, {"name": "images", "dtype": "image"}, {"name": "model_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2171078.0, "num_examples": 16}], "download_size": 2173721, "dataset_size": 2171078.0}}
|
2023-05-14T10:58:28+00:00
|
7a6f8d356c44ded6f58e405fe4292f589601075a
|
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Citation Information](#citation-information)
## Dataset Description
This is a variation of the SLAKE dataset.
### Languages
We have only selected the instances with English data.
## Dataset Structure
This dataset contains medical question and answer pairs related to medical images.
### Data Instances
- data/raw contains the original slake dataset downloaded from here: https://drive.google.com/file/d/1EZ0WpO5Z6BJUqC3iPBQJJS1INWSMsh7U/view?usp=sharing
- data/clean contains only the English data.
- data/wordbroken contains the questions and answers split into words.
- data/sentencebroken contains the questions and answers split into sentences.
- dataset/full_sentence_data contains the train, validation, and test data with full sentence answers.
## Citation Information
This dataset is originally from this paper: https://arxiv.org/abs/2102.09542
|
Nikinzt/medical-gen-vqa
|
[
"arxiv:2102.09542",
"region:us"
] |
2023-05-14T11:10:39+00:00
|
{}
|
2023-07-18T13:28:21+00:00
|
bd68de364bfcada4816a97c312d4cdcfb2d76363
|
# zh-tw-llm-dev-sample-ta1k-f6dd50-embeddings-tr_alp-61d3e1-c2048
This dataset is a part of the `zh-tw-llm-dev` project.
* Tokenizer: `zh-tw-llm-dev-sample-tokenizer-a1k-f6dd50`
* Built with: `translations`, `alpaca`
* Rows: `300`
* Max length: `2048`
* Full config:
```json
{"build_with": ["translations", "alpaca"], "preview_length": 64, "translations_settings": {"source_dataset": "zetavg/coct-en-zh-tw-translations-twp-300k", "lang_1_key": "en", "lang_2_key": "ch", "templates": ["English: {lang_1}\nChinese: {lang_2}", "Chinese: {lang_2}\nEnglish: {lang_1}"], "rows_limit": 100}, "alpaca_settings": {"source_dataset": "zetavg/traditional-chinese-alpaca-en-align", "template": "short", "rows_limit": 100}}
```
|
zh-tw-llm-dv-dv/zh-tw-llm-dev-sample-ta1k-f6dd50-embeddings-tr_alp-61d3e1-c2048
|
[
"region:us"
] |
2023-05-14T11:29:52+00:00
|
{"dataset_info": {"dataset_size": 475784.0, "download_size": 146475, "features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"dtype": "string", "name": "preview"}], "splits": [{"name": "train", "num_bytes": 475784.0, "num_examples": 300}]}}
|
2023-05-14T11:29:59+00:00
|
dccda4838745da7de1285ae3994ed713ac09098a
|
# zh-tw-llm-dev-sample-ta8k-f6dd50-embeddings-tr_alp-61d3e1-c2048
This dataset is a part of the `zh-tw-llm-dev` project.
* Tokenizer: `zh-tw-llm-dev-sample-tokenizer-a8k-f6dd50`
* Built with: `translations`, `alpaca`
* Rows: `300`
* Max length: `2048`
* Full config:
```json
{"build_with": ["translations", "alpaca"], "preview_length": 256, "translations_settings": {"source_dataset": "zetavg/coct-en-zh-tw-translations-twp-300k", "lang_1_key": "en", "lang_2_key": "ch", "templates": ["English: {lang_1}\nChinese: {lang_2}", "Chinese: {lang_2}\nEnglish: {lang_1}"], "rows_limit": 100}, "alpaca_settings": {"source_dataset": "zetavg/traditional-chinese-alpaca-en-align", "template": "short", "rows_limit": 100}}
```
|
zh-tw-llm-dv-dv/zh-tw-llm-dev-sample-ta8k-f6dd50-embeddings-tr_alp-61d3e1-c2048
|
[
"region:us"
] |
2023-05-14T11:37:40+00:00
|
{"dataset_info": {"dataset_size": 453739.0, "download_size": 189056, "features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"dtype": "string", "name": "preview"}], "splits": [{"name": "train", "num_bytes": 453739.0, "num_examples": 300}]}}
|
2023-05-14T11:37:48+00:00
|
e121e4fd886fadc030d633274c053b71839f9c20
|
## Dataset Summary
ProsocialDialogFiltered is a filtered version of the ProsocialDialog dataset.
Multiple versions are present:
- In train_no_casual, rows with the label "casual" have been filtered out as a starting point.
- In train_no_possibly, rows with "possibly needs caution" have been filtered out.
- In train_no_probably, rows with "probably needs caution" have been filtered out, as I found those to be largely pointless as well, leaving only "needs caution" and "needs intervention".
- In the final train dataset, rows containing any of several phrases such as "You should not" and "you should refrain from" have been filtered out. This is done in an attempt to reduce the number of refusals language models issue to the user, in order to create better and more open models. A filtering sketch is shown after this list.
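A minimal sketch of the label filtering (the column name `safety_label` and the label strings follow the original ProsocialDialog schema and are assumptions here):
```python
from datasets import load_dataset

# Start from the original dataset and drop the "casual" rows,
# mirroring the train_no_casual variant described above.
ds = load_dataset("allenai/prosocial-dialog", split="train")
no_casual = ds.filter(lambda row: "casual" not in row["safety_label"])
```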
ProsocialDialog is a large-scale multi-turn English dialogue dataset to teach conversational agents to respond to problematic content.
**For more information on the source dataset, refer to the original official [huggingface](https://huggingface.co/datasets/allenai/prosocial-dialog) and [paper](https://arxiv.org/abs/2205.12688).**
Possible drawbacks:
- Some ending messages have been cut off. This is only of concern if you rely on the 'episode_done' indicator.
## Languages
English
## Additional Information
### Citation
```
@inproceedings{kim2022prosocialdialog,
title={ProsocialDialog: A Prosocial Backbone for Conversational Agents},
author={Hyunwoo Kim and Youngjae Yu and Liwei Jiang and Ximing Lu and Daniel Khashabi and Gunhee Kim and Yejin Choi and Maarten Sap},
booktitle={EMNLP},
year=2022
}
```
|
Englishman2022/prosocial-dialog-filtered
|
[
"task_categories:conversational",
"task_categories:text-classification",
"task_ids:dialogue-generation",
"task_ids:multi-class-classification",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:ProsocialDialog",
"language:en",
"license:cc-by-4.0",
"dialogue",
"dialogue safety",
"social norm",
"rules-of-thumb",
"arxiv:2205.12688",
"region:us"
] |
2023-05-14T11:41:10+00:00
|
{"language_creators": ["crowdsourced", "machine-generated"], "language": ["en"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["ProsocialDialog"], "task_categories": ["conversational", "text-classification"], "task_ids": ["dialogue-generation", "multi-class-classification"], "pretty_name": "ProsocialDialogFiltered", "tags": ["dialogue", "dialogue safety", "social norm", "rules-of-thumb"]}
|
2023-05-14T16:48:49+00:00
|
bc4f83ff754c445ad4bc385f13b017c400ab718f
|
# Images of Parti Prompts for "sd-v1-5"
Code that was used to get the results:
```py
from diffusers import DiffusionPipeline, DDIMScheduler
import torch
import PIL.Image
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, safety_checker=None)
pipe.to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
prompt = "" # a parti prompt
generator = torch.Generator("cuda").manual_seed(0)
image = pipe(prompt, generator=generator, num_inference_steps=100, guidance_scale=7.5).images[0]
image = image.resize((256, 256), resample=PIL.Image.Resampling.LANCZOS)
```
|
diffusers-parti-prompts/sd-v1-5
|
[
"region:us"
] |
2023-05-14T12:19:45+00:00
|
{"dataset_info": {"features": [{"name": "Prompt", "dtype": "string"}, {"name": "Category", "dtype": "string"}, {"name": "Challenge", "dtype": "string"}, {"name": "Note", "dtype": "string"}, {"name": "images", "dtype": "image"}, {"name": "model_name", "dtype": "string"}, {"name": "seed", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 198852412.0, "num_examples": 1632}], "download_size": 198704477, "dataset_size": 198852412.0}}
|
2023-05-17T15:53:08+00:00
|
ff49ef45e5e567132ea4e4fd997f0da2415752ce
|
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
edangx100/celeb-identities
|
[
"region:us"
] |
2023-05-14T12:44:19+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "bradley_cooper", "1": "chris_pratt", "2": "dave_bautista", "3": "djimon", "4": "karen_gillan", "5": "zoe_saldana"}}}}], "splits": [{"name": "train", "num_bytes": 8292287.0, "num_examples": 24}], "download_size": 8260548, "dataset_size": 8292287.0}}
|
2023-05-15T12:44:34+00:00
|
ef26e062490ad7fcf59095de423acb4d111bd325
|
# Dataset Card for russe-semantics-sim (~200K entries, Russian)
### Dataset Summary
License: MIT. Contains a CSV listing word1, word2, their `connection score` (indicating whether the pair is synonymous or associative), and the type of connection.
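A minimal loading sketch (the file name is hypothetical; the columns follow the description above):
```python
import pandas as pd

# Inspect the word-pair table: word1, word2, connection score, connection type.
df = pd.read_csv("russe-semantics-sim.csv")
print(df.head())
```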
### Original Datasets are available here:
- https://github.com/nlpub/russe-evaluation
|
0x22almostEvil/russe-semantics-sim
|
[
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:ru",
"license:mit",
"semantics",
"region:us"
] |
2023-05-14T12:45:30+00:00
|
{"language": ["ru"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "tags": ["semantics"]}
|
2023-05-17T14:43:59+00:00
|