sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
35af63b9b26596f0b80c9a7b572e6d10a46eccec | # Dataset Card for "common_voice_12.0_Augmented"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Salama1429/common_voice_Arabic_12.0_Augmented | [
"region:us"
]
| 2022-12-24T10:31:44+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14306290182.938, "num_examples": 63546}, {"name": "test", "num_bytes": 316503630.559, "num_examples": 10433}], "download_size": 12163898712, "dataset_size": 14622793813.497}} | 2022-12-24T10:35:23+00:00 |
edda71e3982c76f2c6a6533287837723ccadeb3d | musicakamusic/Afro | [
"license:gpl-3.0",
"region:us"
]
| 2022-12-24T12:54:00+00:00 | {"license": "gpl-3.0"} | 2022-12-24T12:54:01+00:00 |
|
6d72f2acc6e41cacd5f9b88cb8b275ed0db9d166 | # Dataset Card for "dreambooth-hackathon-images-proteins"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jonathang/dreambooth-hackathon-images-proteins | [
"region:us"
]
| 2022-12-24T13:06:59+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3961830.0, "num_examples": 17}], "download_size": 3905517, "dataset_size": 3961830.0}} | 2022-12-24T13:07:03+00:00 |
8dea39dcc98f9f4f9577475b2060045f7aa0aacd | # Dataset Card for "sobotta-anatomical-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sanderland/sobotta-anatomical-dataset | [
"region:us"
]
| 2022-12-24T13:08:29+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 54613498.0, "num_examples": 14}], "download_size": 33366858, "dataset_size": 54613498.0}} | 2022-12-24T13:08:35+00:00 |
3d984f8e1cc4ac2f9aa9259f0364b2bb97de0cf8 | # Dataset Card for "dreambooth-hackathon-images-protein2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jonathang/dreambooth-hackathon-images-protein2 | [
"region:us"
]
| 2022-12-24T13:17:59+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3901067.0, "num_examples": 16}], "download_size": 3846228, "dataset_size": 3901067.0}} | 2022-12-24T13:18:16+00:00 |
28dddd7e435bac23f5fb3eb83acfa70fcfd13bd5 | # Dataset Card for "dreambooth-hackathon-images-protein3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jonathang/dreambooth-hackathon-images-protein3 | [
"region:us"
]
| 2022-12-24T13:25:06+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2000745.0, "num_examples": 11}], "download_size": 1946505, "dataset_size": 2000745.0}} | 2022-12-24T13:25:24+00:00 |
4e2ec897b62db1ca2704e432715f8beaeee1ab1c | # Dataset Card for "20NG_train10.8k_test3.6K_valid3.6k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pig4431/20NG_train10.8k_test3.6K_valid3.6k | [
"region:us"
]
| 2022-12-24T14:56:20+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 13917789.0, "num_examples": 11314}, {"name": "test", "num_bytes": 4175991.5, "num_examples": 3766}, {"name": "validate", "num_bytes": 4175991.5, "num_examples": 3766}], "download_size": 14342171, "dataset_size": 22269772.0}} | 2022-12-24T14:56:50+00:00 |
ff328a349f8b2ce89e1e23007a97cd8d68e00c05 | preprocessed data for LAVISH | genjib/LAVISHData | [
"region:us"
]
| 2022-12-24T15:12:12+00:00 | {} | 2022-12-24T15:58:34+00:00 |
83c0f91ecd03701df57fb8b54be81627ee743036 | # Dataset Card for "legal_corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | marcus2000/legal_corpus | [
"region:us"
]
| 2022-12-24T15:30:41+00:00 | {"dataset_info": {"features": [{"name": "0", "dtype": "string"}, {"name": "1", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 66404959, "num_examples": 1200}, {"name": "validation", "num_bytes": 32302991, "num_examples": 400}, {"name": "test", "num_bytes": 33181409, "num_examples": 427}], "download_size": 39180007, "dataset_size": 131889359}} | 2022-12-24T15:34:43+00:00 |
2e1aa8edbbc8365ebc5a3e84df3929c79ce002c0 | ratesc/fkm | [
"license:other",
"region:us"
]
| 2022-12-24T15:49:24+00:00 | {"license": "other"} | 2022-12-24T20:53:31+00:00 |
|
5be197c3dd388c8cd9263e1d45f95681775bfc75 | # Dataset Card for "clinic-credit_cards"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | fathyshalab/clinic-credit_cards | [
"region:us"
]
| 2022-12-24T16:48:27+00:00 | {"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22514.533333333333, "num_examples": 262}, {"name": "test", "num_bytes": 9710.466666666667, "num_examples": 113}], "download_size": 16877, "dataset_size": 32225.0}} | 2022-12-24T16:48:40+00:00 |
9435a1ac6b32ec74fc2c2984ad920bc58c382161 | msamogh/gpt-negocaht | [
"license:apache-2.0",
"region:us"
]
| 2022-12-24T19:51:07+00:00 | {"license": "apache-2.0"} | 2022-12-24T19:51:07+00:00 |
|
98689c4f101a0181743367b899b73ada9376a4d4 |
# Dataset Card for GPT-Negochat
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
## Dataset Description
- **Repository:** https://github.com/msamogh/GPT-NegoChat-Corpus
- **Point of Contact:** [email protected]
### Dataset Summary
The **GPT-Negochat** corpus is a modified version of the original Negochat corpus (https://aclanthology.org/L16-1501/), which contains negotiation dialogues between an Employer and a Candidate. The utterances in the original corpus were generated using a template-based NLG module and therefore sound robotic and, in general, unconvincing.
GPT-Negochat is the result of using GPT-3 to rewrite the original corpus so that the dialogues more closely resemble actual job-negotiation dialogues while retaining the original meaning of the utterances.
In addition to rephrasing the utterances, a small number of highly unrealistic dialogue segments were removed in GPT-Negochat without affecting the coherence of the surrounding dialogue.
### Supported Tasks and Leaderboards
- Dialogue Act Classification
- Offer Identification
- Agreement Tracking
### Languages
- English
## Dataset Structure
### Data Fields
Below is an excerpt containing two consecutive turns from a dialogue. The `input` field contains the utterance from the original Negochat corpus. The `augmented_input` field contains the same utterance rephrased using GPT-3.
```json
{
"role": "Candidate",
"input": "I want a position of project manager",
"output": [
{
"Offer": {
"Job Description": "Project Manager"
}
}
],
"augmented_input": "I'm interested in a project manager role."
},
{
"role": "Employer",
"input": "I do have programmer positions open with a strong potential to advance to project manager based on your performance.",
"output": [
{
"Offer": {
"Job Description": "Programmer"
}
}
],
"augmented_input": "We do have programmer roles available that could provide you with the opportunity to advance to project manager based on your performance. "
}
```
## Dataset Creation
### Curation Rationale
The original Negochat corpus is one of the few dialogue corpora containing turn-level annotations for offers, acceptances, and rejections in a negotiation dialogue.
However, the utterances in the corpus were generated using a template-based NLG system, which makes the dialogues unrealistic to the point of sounding robotic at times.
We wanted to make the utterances sound more like those from an actual negotiation dialogue in a job interview.
### Source Data
#### Initial Data Collection and Normalization
The original Negochat corpus can be found here: [https://github.com/vaskonov/negochat_corpus](https://github.com/vaskonov/negochat_corpus)
## Annotations
Since each utterance in GPT-Negochat was generated by rephrasing the original without changing its underlying meaning, the annotations are carried over directly from the original Negochat corpus. | msamogh/gpt-negochat | [
"license:apache-2.0",
"region:us"
]
| 2022-12-24T19:51:18+00:00 | {"license": "apache-2.0"} | 2022-12-24T20:03:35+00:00 |
f9427b44f7ba0676e57832132c3c287470063061 | gavrenkov/MLSGnome | [
"license:mit",
"region:us"
]
| 2022-12-24T23:38:56+00:00 | {"license": "mit"} | 2022-12-24T23:40:18+00:00 |
|
736df3da6261e1a0cd2bfe2532a98332dc919b9e | flamesbob/Line_style-Embedding | [
"license:creativeml-openrail-m",
"region:us"
]
| 2022-12-25T00:40:59+00:00 | {"license": "creativeml-openrail-m"} | 2022-12-25T00:41:21+00:00 |
|
bf3029b3b52c9e79940d40319ebb4192ab5d7c0d | # Summary
The dataset contains numbers in three formats:
* Numbers (base 10)
* Numbers as words
* Roman numerals
The dataset covers the range 1-4999. | vijaygkd/roman-numbers-text | [
"region:us"
]
| 2022-12-25T01:04:29+00:00 | {} | 2022-12-25T01:07:36+00:00 |
529d9fa1382412a631cbd7ce408aa98c28b13afb | # Dataset Card for "LLM_Description_Vocab_bloom_bigscience_bloom_downstream_tasks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/LLM_Description_Vocab_bloom_bigscience_bloom_downstream_tasks | [
"region:us"
]
| 2022-12-25T03:29:32+00:00 | {"dataset_info": {"features": [{"name": "vocab", "dtype": "string"}, {"name": "descriptions", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 658686, "num_examples": 3426}], "download_size": 373501, "dataset_size": 658686}} | 2022-12-25T03:29:36+00:00 |
35162e5aeeb22daccfc19de4993e12bfe8b4d530 |
# Dataset Card for Open-Domain Question Answering Wikipedia Corpora
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
## Dataset Description
### Dataset Summary
The Wikipedia corpus variants provided can serve as knowledge sources for question-answering systems based on a retriever–reader pipeline. These corpus variants and their corresponding experiments are described further in the paper entitled:
> Pre-Processing Matters! Improved Wikipedia Corpora for Open-Domain Question Answering.
## Dataset Structure
### Data Fields
The dataset consists of passages that have been segmented from Wikipedia articles.
For each passage, the following fields are provided:
- ```docid```: The passage id in the format X#Y, where passages from the same article share the same X and Y denotes the segment id within the article
- ```title```: The title of the article from where the passage comes
- ```text```: The text content of the passage
### Data Splits
There are 6 corpus variants in total:
- ```wiki-text-100w-karpukhin```: The original DPR Wikipedia corpus with non-overlapping passages, each 100 words long, from Karpukhin et al.,
> Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. [Dense Passage Retrieval for Open-Domain Question Answering](https://www.aclweb.org/anthology/2020.emnlp-main.550/). _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 6769-6781, 2020.
- ```wiki-text-100w-tamber```: Our replication of the above corpus
- ```wiki-text-6-3-tamber```: A corpus similar to the above, i.e., without tables, infoboxes, and lists, but segmented differently, with a passage size of 6 sentences and a stride of 3 sentences. Note that this means passages overlap.
- ```wiki-text-8-4-tamber```: Like wiki-text-6-3, but with a passage size of 8 sentences and a stride of 4 sentences.
- ```wiki-all-6-3-tamber```: A corpus with tables, infoboxes, and lists included with a passage size of 6 sentences and a stride of 3 sentences.
- ```wiki-all-8-4-tamber```: Like wiki-all-6-3, but with a passage size of 8 sentences and a stride of 4 sentences.
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
We start by downloading the full December 20, 2018 Wikipedia XML dump, ```enwiki-20181220-pages-articles.xml```, from the Internet Archive: https://archive.org/details/enwiki-20181220. This is then pre-processed with WikiExtractor: https://github.com/attardi/wikiextractor (modifying the code to include lists as desired and to replace any tables with the string "TABLETOREPLACE") and DrQA: https://github.com/facebookresearch/DrQA/tree/main/scripts/retriever (again modifying the code to not remove lists as desired).
We then apply the [pre-processing script](https://github.com/castorini/pyserini/blob/master/docs/experiments-wiki-corpora.md) we make available in [Pyserini](https://github.com/castorini/pyserini) to generate the different corpus variants.
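As a sketch of working with the ```docid``` convention described above (assuming ids of the form `X#Y`, e.g. `12#3`), a small helper might look like:

```python
def parse_docid(docid: str) -> tuple[str, int]:
    """Split a passage id of the form 'X#Y' into (article id X, segment id Y)."""
    article_id, segment = docid.rsplit("#", 1)
    return article_id, int(segment)

# Passages from the same article share the same X; Y is the segment index.
print(parse_docid("12#0"))  # ('12', 0)
print(parse_docid("12#1"))  # ('12', 1)
```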
| castorini/odqa-wiki-corpora | [
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
]
| 2022-12-25T03:47:21+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": [], "language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["question-answering", "text-retrieval"], "task_ids": ["open-domain-qa"], "pretty_name": "Open-Domain Question Answering Wikipedia Corpora", "tags": []} | 2023-01-05T21:32:51+00:00 |
52a7a7ca6528176c934bf7eee4472267196bf227 | shahp7575/chemex | [
"license:apache-2.0",
"region:us"
]
| 2022-12-25T03:59:27+00:00 | {"license": "apache-2.0"} | 2022-12-25T03:59:48+00:00 |
|
c5b5b9381889a2a810fa49028faba3353f2186cc | miroslawas/pepega | [
"license:unknown",
"region:us"
]
| 2022-12-25T09:21:23+00:00 | {"license": "unknown"} | 2022-12-25T09:22:02+00:00 |
|
544d9dddea0b6fc1755020da1b404300536d838a | PredatorAI/DBUIUX | [
"license:gpl-3.0",
"region:us"
]
| 2022-12-25T11:39:50+00:00 | {"license": "gpl-3.0"} | 2022-12-27T04:37:12+00:00 |
|
42bc7fb5ce166489dca5961ba5b14f9fc520d22b | grasshoff/lhc_sents | [
"license:bsd",
"region:us"
]
| 2022-12-25T13:31:21+00:00 | {"license": "bsd"} | 2022-12-25T14:08:48+00:00 |
|
ddd24029c66289429ca47b3813d5367690256e8e | # Dataset Card for "dreambooth-hackathon-images-mario-bg-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jonathang/dreambooth-hackathon-images-mario-bg-1 | [
"region:us"
]
| 2022-12-25T13:59:53+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 559875.0, "num_examples": 15}], "download_size": 523924, "dataset_size": 559875.0}} | 2022-12-25T14:00:04+00:00 |
855bfbc27798067480ebe537177acd13dbdb75a0 | # Dataset Card for "donut_check"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tsabar/donut_check | [
"region:us"
]
| 2022-12-25T14:59:13+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "letter", "1": "form", "2": "email", "3": "handwritten", "4": "advertisement", "5": "scientific report", "6": "scientific publication", "7": "specification", "8": "file folder", "9": "news article", "10": "budget", "11": "invoice", "12": "presentation", "13": "questionnaire", "14": "resume", "15": "memo"}}}}], "splits": [{"name": "train", "num_bytes": 19445096.284, "num_examples": 160}, {"name": "test", "num_bytes": 19445071.284, "num_examples": 160}], "download_size": 0, "dataset_size": 38890167.568}} | 2022-12-25T15:37:28+00:00 |
f7bd228608aed4228a5c2b50cb407d1e3d9ab4d9 | # Dataset Card for "rvl_cdip_10_examples_per_class_donut"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tsabar/rvl_cdip_10_examples_per_class_donut | [
"region:us"
]
| 2022-12-25T15:05:22+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "letter", "1": "form", "2": "email", "3": "handwritten", "4": "advertisement", "5": "scientific report", "6": "scientific publication", "7": "specification", "8": "file folder", "9": "news article", "10": "budget", "11": "invoice", "12": "presentation", "13": "questionnaire", "14": "resume", "15": "memo"}}}}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 18011328.0, "num_examples": 160}, {"name": "train", "num_bytes": 19396350.0, "num_examples": 160}], "download_size": 35234585, "dataset_size": 37407678.0}} | 2022-12-25T15:38:22+00:00 |
e15cb6c72ced2942908b04108710953456a9bfbc | Textual Inversion trained on Hades game art. Tested on Anything V3 model. Recommend to use words "cartoon","comic","realistic","dark outlines" in prompt to get better results.
| AgntPerseus/Hadesstl | [
"license:creativeml-openrail-m",
"region:us"
]
| 2022-12-25T16:34:53+00:00 | {"license": "creativeml-openrail-m"} | 2022-12-25T16:53:03+00:00 |
47bf9539caf69656bc76b583b840514bf8b062db | # Dataset Card for "coco-panoptic-val2017"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nielsr/coco-panoptic-val2017 | [
"region:us"
]
| 2022-12-25T16:56:03+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": "image"}, {"name": "segments_info", "list": [{"name": "id", "dtype": "int64"}, {"name": "category_id", "dtype": "int64"}, {"name": "iscrowd", "dtype": "int64"}, {"name": "bbox", "sequence": "int64"}, {"name": "area", "dtype": "int64"}]}, {"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 850795822.0, "num_examples": 5000}], "download_size": 849210800, "dataset_size": 850795822.0}} | 2022-12-25T17:26:14+00:00 |
cec8994185bf49d28d9e00c6c658a423b684f546 | nev/nto2023-hack-dataset | [
"license:isc",
"region:us"
]
| 2022-12-25T17:28:26+00:00 | {"license": "isc"} | 2023-01-07T01:29:18+00:00 |
|
286a84425c5fec7faa8d96c7a0b8257563cf3307 | successor/qrl-yt-metadata | [
"license:mit",
"region:us"
]
| 2022-12-25T18:53:17+00:00 | {"license": "mit"} | 2022-12-25T19:35:13+00:00 |
|
3399f7dcadd3a804dfee4f5c63698406dd8cf3c0 | # Dataset Card for "dreambooth-hackathon-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mjfang27/dreambooth-hackathon-images | [
"region:us"
]
| 2022-12-26T00:58:26+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 23442012.0, "num_examples": 16}], "download_size": 23419281, "dataset_size": 23442012.0}} | 2022-12-26T00:58:42+00:00 |
145febfdf1af7943d46f1f9adfd6de66bd453042 | darwintattoo/prueba | [
"license:afl-3.0",
"region:us"
]
| 2022-12-26T03:32:05+00:00 | {"license": "afl-3.0"} | 2022-12-26T03:33:43+00:00 |
|
b15916bc0ee9a35dc8cb83e6c83d242ddb9e453c | # Dataset Card for "rulibrispeech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bond005/rulibrispeech | [
"region:us"
]
| 2022-12-26T10:39:04+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11165185580.744, "num_examples": 54472}, {"name": "test", "num_bytes": 306649969.0, "num_examples": 1352}, {"name": "validation", "num_bytes": 321842480.0, "num_examples": 1400}], "download_size": 10689335725, "dataset_size": 11793678029.744}} | 2023-01-18T19:38:48+00:00 |
ca6a77f2f092d737ac6c212c51d8ebd66eb4a7bd | Taeyoung/testgg | [
"license:apache-2.0",
"region:us"
]
| 2022-12-26T11:00:50+00:00 | {"license": "apache-2.0"} | 2022-12-26T11:00:50+00:00 |
|
3a7bbd86c5a2055e32e415278b9da08da1cb96df | HuggingFace-CN-community/translation | [
"license:apache-2.0",
"region:us"
]
| 2022-12-26T12:41:09+00:00 | {"license": "apache-2.0"} | 2023-03-24T14:23:03+00:00 |
|
6a09d998616dc1f8b2fde1b8dc17456114383d31 | successor/qrl-yt-transcriptions | [
"license:mit",
"region:us"
]
| 2022-12-26T13:01:55+00:00 | {"license": "mit"} | 2022-12-26T13:02:26+00:00 |
|
ddcf45a2145f82ea1144cc4983142f715ed33cc1 | This dataset contains paragraphs tagged as relevant to soft skills or not. | ateffal/softskills | [
"license:mit",
"region:us"
]
| 2022-12-26T13:03:37+00:00 | {"license": "mit"} | 2023-04-05T17:15:12+00:00 |
59b8d27682dac1da89a3506622a773ff046cdc97 | vastream/dm | [
"license:apache-2.0",
"region:us"
]
| 2022-12-26T13:51:13+00:00 | {"license": "apache-2.0"} | 2022-12-26T13:52:21+00:00 |
|
00a72802b462725230c19b10106285072df9680a | # Dataset Card for "processed_bert_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | enpassant/processed_bert_dataset | [
"region:us"
]
| 2022-12-26T14:04:46+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 158400.0, "num_examples": 44}], "download_size": 30837, "dataset_size": 158400.0}} | 2022-12-26T14:26:44+00:00 |
49fa65494419053dd5401d686f337104a26fd6b5 |
# Dataset Card for tox21_srp53
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage: https://moleculenet.org/**
- **Repository: https://github.com/deepchem/deepchem/tree/master**
- **Paper: https://arxiv.org/abs/1703.00564**
### Dataset Summary
`tox21_srp53` is a dataset included in [MoleculeNet](https://moleculenet.org/). It is the p53 stress-response pathway activation (SR-p53) task from Tox21.
## Dataset Structure
### Data Fields
Each split contains:
* `smiles`: the [SMILES](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) representation of a molecule
* `selfies`: the [SELFIES](https://github.com/aspuru-guzik-group/selfies) representation of a molecule
* `target`: clinical trial toxicity (or absence of toxicity)
### Data Splits
The dataset is split 80/10/10 into train/valid/test using a scaffold split.
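As a quick sanity check of the 80/10/10 proportions (using the example counts from this card's split metadata; this is an illustration, not part of any official loader):

```python
# Example counts taken from the split metadata on this card.
splits = {"train": 6264, "validation": 783, "test": 784}
total = sum(splits.values())  # 7831 molecules overall

for name, n in splits.items():
    # Each split's share of the full dataset, roughly 80/10/10.
    print(f"{name}: {n / total:.1%}")
```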
### Source Data
#### Initial Data Collection and Normalization
The data was originally generated by the Pande Group at Stanford.
### Licensing Information
This dataset was originally released under the MIT license.
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.1703.00564,
doi = {10.48550/ARXIV.1703.00564},
url = {https://arxiv.org/abs/1703.00564},
author = {Wu, Zhenqin and Ramsundar, Bharath and Feinberg, Evan N. and Gomes, Joseph and Geniesse, Caleb and Pappu, Aneesh S. and Leswing, Karl and Pande, Vijay},
keywords = {Machine Learning (cs.LG), Chemical Physics (physics.chem-ph), Machine Learning (stat.ML), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Physical sciences, FOS: Physical sciences},
title = {MoleculeNet: A Benchmark for Molecular Machine Learning},
publisher = {arXiv},
year = {2017},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
Thanks to [@zanussbaum](https://github.com/zanussbaum) for adding this dataset. | zpn/tox21_srp53 | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"license:mit",
"bio",
"bio-chem",
"molnet",
"molecule-net",
"biophysics",
"arxiv:1703.00564",
"region:us"
]
| 2022-12-26T14:55:36+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["other"], "task_ids": [], "pretty_name": "tox21_srp53", "tags": ["bio", "bio-chem", "molnet", "molecule-net", "biophysics"], "dataset_info": {"features": [{"name": "smiles", "dtype": "string"}, {"name": "selfies", "dtype": "string"}, {"name": "target", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 1055437, "num_examples": 6264}, {"name": "test", "num_bytes": 223704, "num_examples": 784}, {"name": "validation", "num_bytes": 224047, "num_examples": 783}], "download_size": 451728, "dataset_size": 1503188}} | 2022-12-26T15:10:20+00:00 |
08268c2faabe4c9ec5db39d53708f8dc8a295c4f | marcoyang/pruned_transducer_stateless6_hubert_xtralarge_ll60k_finetune_ls960 | [
"license:apache-2.0",
"region:us"
]
| 2022-12-26T15:43:50+00:00 | {"license": "apache-2.0"} | 2022-12-26T15:43:51+00:00 |
|
71a989dcf6961ed4be10df81f20141f9dcf52f68 | # Dataset Card for "bkk-budget-ner-page"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | napatswift/bkk-budget-ner-page | [
"region:us"
]
| 2022-12-26T16:55:11+00:00 | {"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-ENTRY", "2": "I-ENTRY"}}}}], "splits": [{"name": "train", "num_bytes": 2455950.107936508, "num_examples": 472}, {"name": "test", "num_bytes": 822118.8920634921, "num_examples": 158}], "download_size": 377734, "dataset_size": 3278069.0}} | 2022-12-31T10:33:08+00:00 |
e72c04e8b324c928e65a48077c0e95b3ed6a2caf | spacemanidol/wikipedia-squad-KALE | [
"region:us"
]
| 2022-12-26T18:25:48+00:00 | {} | 2022-12-27T16:24:50+00:00 |
|
d44bc2bfdeae58b77bf87d6205b52b8ba62a3c31 |
# Dataset Card for ACLFig Dataset
<!-- ## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions) -->
## Dataset Description
- **Paper:**
- **Leaderboard:**
### Dataset Summary
The dataset contains 1758 labelled scientific figures, in PNG format, extracted from 890 ACL research papers.
The figures have been classified into 19 categories:
- Algorithms
- Architecture/Pipeline diagrams
- Bar charts
- Box Plots
- Confusion Matrix
- Graph
- Line Chart
- Maps
- Natural Images
- Neural Networks
- NLP rules/grammar
- Pie chart
- Scatter Plot
- Screenshots
- Tables
- Trees
- Pareto chart
- Venn Diagram
- Word Cloud
The scientific figures are in the `png` directory.
The `metadata` directory contains metadata extracted from the pdf along with scientific figures in json format.
Finally, the `scientific_figures.csv` file contains the following columns:
1. `sci_fig` : Scientific figure name
2. `caption`: Caption text
3. `inline_reference`: Scientific figure contexts mentioned in the research paper
4. `metadata`: metadata json filename
5. `label`: One of the 19 categories as described above.
6. `acl_paper_id`: Unique identifier assigned to each pdf by ACL
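A minimal sketch of reading these columns with the standard library (the row below is hypothetical, invented purely to illustrate the schema; the real rows ship in `scientific_figures.csv`):

```python
import csv
import io

# One hypothetical row illustrating the six columns described above.
data = io.StringIO(
    "sci_fig,caption,inline_reference,metadata,label,acl_paper_id\n"
    "fig_001.png,Example caption,See Figure 1,fig_001.json,Bar charts,P18-1001\n"
)

for row in csv.DictReader(data):
    print(row["sci_fig"], row["label"])  # fig_001.png Bar charts
```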
### Supported Tasks and Leaderboards
Multi-label classification
## Dataset Creation
The dataset was created using papers in ACL Anthology.
### Annotations
#### Annotation process
Approximately 2,000 images were manually labelled.
### Citation Information
TODO
### Contributions
Thanks to [@zebaKarishma](https://github.com/zebaKarishma), [@shauryr](https://github.com/shauryr) and [@KavyaPuranik](https://github.com/KavyaPuranik) for adding this dataset.
| citeseerx/ACL-fig | [
"task_categories:image-classification",
"task_ids:multi-label-image-classification",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
]
| 2022-12-26T18:28:49+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated", "found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-label-image-classification"], "pretty_name": "ACL-Fig", "tags": []} | 2023-01-04T12:52:12+00:00 |
01bcc51d26814e610d91eef4ecf39df687babc63 |
This dataset is a processed version of the Social Bias Inference Corpus (SBIC), including the text, annotators' demographics, and annotation-disagreement labels. <br>
Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br>
Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br>
Github repo: https://github.com/minnesotanlp/Quantifying-Annotation-Disagreement <br>
| RuyuanWan/SBIC_Disagreement | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|social_bias_frames",
"language:en",
"region:us"
]
| 2022-12-26T18:46:23+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["extended|social_bias_frames"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "RuyuanWan/SBIC_Disagreement", "tags": []} | 2022-12-26T22:07:09+00:00 |
500a8fd3383138a5efece6c6744028e3211a6cc0 |
This dataset is a processed version of the Social Chemistry 101 (SChem) dataset, including text and annotation disagreement labels. <br>
Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br>
Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br>
Github repo: https://github.com/minnesotanlp/Quantifying-Annotation-Disagreement <br>
Source Data: [Social Chemistry 101(Forbes et al. 2020)](https://github.com/mbforbes/social-chemistry-101) <br> | RuyuanWan/SChem_Disagreement | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:extended",
"language:en",
"region:us"
]
| 2022-12-26T19:56:21+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["extended"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "RuyuanWan/SChem_Disagreement", "tags": []} | 2022-12-26T22:03:28+00:00 |
8d5520fc4675fe37bd4b15271feb982a04c8f8ba |
This dataset is a processed version of the Dilemmas dataset, including text and annotation disagreement labels. <br>
Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br>
Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br>
Github repo: https://github.com/minnesotanlp/Quantifying-Annotation-Disagreement <br>
Source Data: [Scruples-dilemmas (Lourie, Bras, and Choi 2021)](https://github.com/allenai/scruples) <br> | RuyuanWan/Dilemmas_Disagreement | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:extended",
"language:en",
"region:us"
]
| 2022-12-26T21:21:21+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["extended"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "RuyuanWan/Dilemmas_Disagreement", "tags": []} | 2022-12-26T21:28:17+00:00 |
48fe35d8cd764209a087ee36823523c42119c866 |
This dataset is a processed version of the Dynamic Sentiment Analysis (DynaSent) dataset, including text and annotation disagreement labels. <br>
Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br>
Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br>
Github repo: https://github.com/minnesotanlp/Quantifying-Annotation-Disagreement <br>
Source Data: [Dynamic Sentiment Analysis Dataset(Potts et al. 2021)](https://github.com/cgpotts/dynasent) <br> | RuyuanWan/Dynasent_Disagreement | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:find",
"multilinguality:monolingual",
"source_datasets:extended",
"language:en",
"region:us"
]
| 2022-12-26T21:32:44+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["find"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["extended"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "RuyuanWan/Dynasent_Disagreement", "tags": []} | 2022-12-26T22:14:00+00:00 |
9527b141f5b9acda32e0df4f69040b746459ead5 |
This dataset is a processed version of the Stanford Politeness Corpus (Wikipedia), including text and annotation disagreement labels. <br>
Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br>
Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br>
Github repo: https://github.com/minnesotanlp/Quantifying-Annotation-Disagreement <br>
Source Data: [Wikipedia Politeness Corpus(Danescu-Niculescu-Mizil et al. 2013)](https://convokit.cornell.edu/documentation/wiki_politeness.html) <br>
| RuyuanWan/Politeness_Disagreement | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:extended",
"language:en",
"region:us"
]
| 2022-12-26T21:44:39+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["extended"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "RuyuanWan/Politeness_Disagreement", "tags": []} | 2022-12-26T22:21:56+00:00 |
bdb17e3672308890562fe8f5ebe5d07bc88d764a | # Dataset Card for "c4-10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | NeelNanda/c4-10k | [
"region:us"
]
| 2022-12-26T23:12:45+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "timestamp", "dtype": "timestamp[us]"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21970889, "num_examples": 10000}], "download_size": 13645542, "dataset_size": 21970889}} | 2022-12-26T23:12:52+00:00 |
82e9178fdd9e1f7ac93f81234aeedceec49bc8b4 | # Dataset Card for "c4-code-10k"
10K elements of C4 and 10K elements of code parrot clean (Python code).
Note that these are the datasets used to train my interpretability-friendly models, but the mixture here is *not* the same. Those models were trained on roughly 83% C4 and 17% Python code by tokens. This dataset has 10K strings of each, which by tokens is about 22M of code and 5M of C4 (code is longer and harder to compress!)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | NeelNanda/c4-code-20k | [
"region:us"
]
| 2022-12-26T23:22:53+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 101351288, "num_examples": 20000}], "download_size": 42778874, "dataset_size": 101351288}} | 2022-12-26T23:25:12+00:00 |
b9ef9f6084d66d3815ba74371ce41c04967cb531 | ixelszy/text_inversion | [
"license:creativeml-openrail-m",
"not-for-all-audiences",
"region:us"
]
| 2022-12-26T23:59:23+00:00 | {"license": "creativeml-openrail-m", "tags": ["not-for-all-audiences"]} | 2023-07-27T09:19:06+00:00 |
|
30d18ef25f976ac51a63b38874300a11416b121b | # Dataset Card for "wiki-10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | NeelNanda/wiki-10k | [
"region:us"
]
| 2022-12-27T00:22:16+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 222757944, "num_examples": 10000}], "download_size": 129077566, "dataset_size": 222757944}} | 2022-12-27T00:22:23+00:00 |
bcedc04b957a14ae24047f9f36051c78560f30e1 | # Dataset Card for "code-10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | NeelNanda/code-10k | [
"region:us"
]
| 2022-12-27T00:24:22+00:00 | {"dataset_info": {"features": [{"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "copies", "dtype": "string"}, {"name": "size", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "hash", "dtype": "int64"}, {"name": "line_mean", "dtype": "float64"}, {"name": "line_max", "dtype": "int64"}, {"name": "alpha_frac", "dtype": "float64"}, {"name": "autogenerated", "dtype": "bool"}, {"name": "ratio", "dtype": "float64"}, {"name": "config_test", "dtype": "bool"}, {"name": "has_no_keywords", "dtype": "bool"}, {"name": "few_assignments", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 81445605, "num_examples": 10000}], "download_size": 29955076, "dataset_size": 81445605}} | 2022-12-27T00:24:33+00:00 |
c72883b56186711f7dc90d5a2aaa276cdf74efef | MU-Kindai/datasets-for-JCSE | [
"license:mit",
"region:us"
]
| 2022-12-27T05:28:41+00:00 | {"license": "mit"} | 2022-12-28T07:49:50+00:00 |
|
d2c5a5ddd6cf7dcc4ac9b5e0d085184ae7594386 | . | fendiirfan/bocah-alam-chatbot | [
"region:us"
]
| 2022-12-27T08:42:04+00:00 | {} | 2022-12-27T08:57:15+00:00 |
48daa7aa5c05e82d947da852fa6cfdbbfa84a3aa | augsaksham/devrev | [
"license:apache-2.0",
"region:us"
]
| 2022-12-27T09:17:04+00:00 | {"license": "apache-2.0"} | 2022-12-27T13:35:38+00:00 |
|
d04d5fb63bd7ba5b38d2c1ec3a421f3dc781610b | drox1o1/bulk_deals | [
"license:openrail",
"region:us"
]
| 2022-12-27T09:28:28+00:00 | {"license": "openrail"} | 2022-12-27T09:28:51+00:00 |
|
ea71db75d05943973f46971c4ab95eb511ff835b | masoudmzb/Persian_Speech2Text | [
"license:mit",
"region:us"
]
| 2022-12-27T09:36:45+00:00 | {"license": "mit"} | 2022-12-27T09:36:45+00:00 |
|
808913993bc82bbdc40796edc7813955786dabee | {"2 (1).jpg": "boots", "2 (10).jpg": "boots", "2 (11).jpg": "boots", "2 (12).jpg": "boots", "2 (13).jpg": "boots", "2 (14).jpg": "boots", "2 (15).jpg": "boots", "2 (16).jpg": "boots", "2 (17).jpg": "boots", "2 (18).jpg": "boots", "2 (19).jpg": "boots", "2 (2).jpg": "boots", "2 (3).jpg": "boots", "2 (4).jpg": "boots", "2 (5).jpg": "boots", "2 (6).jpg": "boots", "2 (7).jpg": "boots", "2 (8).jpg": "boots", "2 (9).jpg": "boots", "1 (1).jpg": "heels", "1 (10).jpg": "heels", "1 (11).jpg": "heels", "1 (12).jpg": "heels", "1 (13).jpg": "heels", "1 (14).jpg": "heels", "1 (15).jpg": "heels", "1 (16).jpg": "heels", "1 (17).jpg": "heels", "1 (18).jpg": "heels", "1 (19).jpg": "heels", "1 (2).jpg": "heels", "1 (20).jpg": "heels", "1 (21).jpg": "heels", "1 (22).jpg": "heels", "1 (23).jpg": "heels", "1 (24).jpg": "heels", "1 (25).jpg": "heels", "1 (26).jpg": "heels", "1 (27).jpg": "heels", "1 (28).jpg": "heels", "1 (29).jpg": "heels", "1 (3).jpg": "heels", "1 (30).jpg": "heels", "1 (31).jpg": "heels", "1 (32).jpg": "heels", "1 (33).jpg": "heels", "1 (34).jpg": "heels", "1 (35).jpg": "heels", "1 (36).jpg": "heels", "1 (37).jpg": "heels", "1 (38).jpg": "heels", "1 (39).jpg": "heels", "1 (4).jpg": "heels", "1 (40).jpg": "heels", "1 (41).jpg": "heels", "1 (42).jpg": "heels", "1 (43).jpg": "heels", "1 (44).jpg": "heels", "1 (45).jpg": "heels", "1 (46).jpg": "heels", "1 (47).jpg": "heels", "1 (48).jpg": "heels", "1 (49).jpg": "heels", "1 (5).jpg": "heels", "1 (50).jpg": "heels", "1 (51).jpg": "heels", "1 (52).jpg": "heels", "1 (53).jpg": "heels", "1 (54).jpg": "heels", "1 (55).jpg": "heels", "1 (56).jpg": "heels", "1 (57).jpg": "heels", "1 (58).jpg": "heels", "1 (59).jpg": "heels", "1 (6).jpg": "heels", "1 (60).jpg": "heels", "1 (61).jpg": "heels", "1 (62).jpg": "heels", "1 (63).jpg": "heels", "1 (64).jpg": "heels", "1 (65).jpg": "heels", "1 (66).jpg": "heels", "1 (67).jpg": "heels", "1 (68).jpg": "heels", "1 (69).jpg": "heels", "1 (7).jpg": 
"heels", "1 (70).jpg": "heels", "1 (71).jpg": "heels", "1 (72).jpg": "heels", "1 (73).jpg": "heels", "1 (74).jpg": "heels", "1 (75).jpg": "heels", "1 (76).jpg": "heels", "1 (77).jpg": "heels", "1 (78).jpg": "heels", "1 (79).jpg": "heels", "1 (8).jpg": "heels", "1 (80).jpg": "heels", "1 (81).jpg": "heels", "1 (82).jpg": "heels", "1 (83).jpg": "heels", "1 (84).jpg": "heels", "1 (85).jpg": "heels", "1 (86).jpg": "heels", "1 (87).jpg": "heels", "1 (88).jpg": "heels", "1 (89).jpg": "heels", "1 (9).jpg": "heels"} | Franksking/Shoe | [
"region:us"
]
| 2022-12-27T10:05:37+00:00 | {} | 2022-12-27T10:11:17+00:00 |
8538c3ee7b5dcbdc9f119085c910bf6f96de93be | Configuration for Stable-Diffusion - Automatic 1111 | BlodyTraveler/automatic1111config | [
"region:us"
]
| 2022-12-27T10:50:41+00:00 | {} | 2022-12-28T08:24:14+00:00 |
0b56a4ed07743eea9113cfb001f1914fd9616d59 | Nix0n/AUTOMATIC1111_Change_output_folder | [
"license:openrail",
"region:us"
]
| 2022-12-27T11:29:05+00:00 | {"license": "openrail"} | 2022-12-27T11:29:30+00:00 |
|
c93456930b7ae826d75c2ab8fb38d64b7bd73f43 | # Dataset Card for "HebrewStageAndLyricsWithNewLines"
* Contains poems and stories from "New Stage" ("במה חדשה")
* Contains text lines from various Hebrew song lyrics
* Data contains new-line characters
* Generated from a text file in which different poems were separated using a double new-line character
* The script I made for converting the text file into a dataset is [available here](https://huggingface.co/datasets/Norod78/HebrewStageAndLyricsWithNewLines/blob/main/load_ds.py) | Norod78/HebrewStageAndLyricsWithNewLines | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"multilinguality:monolingual",
"language:he",
"region:us"
]
| 2022-12-27T12:14:25+00:00 | {"language": ["he"], "multilinguality": ["monolingual"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 12638465.341690589, "num_examples": 11113}, {"name": "train", "num_bytes": 240110370.6583094, "num_examples": 211129}], "download_size": 133520933, "dataset_size": 252748836.0}} | 2022-12-28T20:04:04+00:00 |
d4f745bd7f05c248f09374e561ec5a637f93fc51 | AlekseyKorshuk/crowdsourced-rlhf | [
"license:openrail",
"region:us"
]
| 2022-12-27T14:13:09+00:00 | {"license": "openrail"} | 2022-12-27T14:21:14+00:00 |
|
813243fb132c54ee12d1eb22791f584b803ce601 | # Pick a Pic
* We are periodically uploading (almost) all of the collected data from [pickapic.io](https://pickapic.io/).
* We have three different datasets:
* [Images dataset](https://huggingface.co/datasets/yuvalkirstain/PickaPic-images) - includes the images that were created as part of Pick a Pic.
* [Rankings dataset](https://huggingface.co/datasets/yuvalkirstain/PickaPic-rankings) - includes the rankings that users submitted in Pick a Pic.
* [Downloads dataset](https://huggingface.co/datasets/yuvalkirstain/PickaPic-downloads) - includes the images that users downloaded in Pick a Pic.
* Help us in creating the largest publicly available human-feedback for text-to-image dataset!
* You can reach us on [discord](https://discord.gg/qKEVkF85DT) or by [mail](mailto:[email protected]). | yuvalkirstain/PickaPic | [
"region:us"
]
| 2022-12-27T14:20:20+00:00 | {} | 2023-01-30T15:57:03+00:00 |
eaa7be77c802340cdf4ad991d3917410bb3559fc |
## Description
The Pixiv Niji Journey dataset is a collection of 9766 images with accompanying metadata, scraped from the online art platform Pixiv. The images were collected using the `gallery-dl` Python package, with the search term "nijijourney" on Pixiv. The collection period for the dataset was from November 6, 2022 to December 27, 2022.
The dataset is divided into two variants: `raw` and `preprocessed`. The `raw` variant contains the pure dataset resulting from the scraping of Pixiv, while the `preprocessed` variant contains the same dataset but with additional preprocessing steps applied. These preprocessing steps include converting the images from RGB to RGBA, labeling the dataset with captions using the BLIP tool, and providing Danbooru tags using the wd-v1-4-vit-tagger tool. The `preprocessed` variant has also been carefully cleaned and filtered to remove any low quality or irrelevant images.
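The RGB-to-RGBA conversion step mentioned above can be sketched with Pillow (the image here is a synthetic stand-in; real inputs come from the archive's JPG/PNG files):

```python
from PIL import Image

# Synthetic stand-in for an image from the archive.
img = Image.new("RGB", (64, 64), color=(255, 128, 0))

# Convert to RGBA, which adds a fully opaque alpha channel.
rgba = img.convert("RGBA")

print(rgba.mode, rgba.size)  # RGBA (64, 64)
```

For the actual dataset, the same `convert("RGBA")` call would be applied to each file opened with `Image.open(...)` before saving the preprocessed copy.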
The images in the dataset are in JPG and PNG format, and the metadata is provided in JSON format, while the preprocessed metadata is provided in `.txt` and `.caption` format. The metadata includes information about the images such as their captions, tags, and other metadata provided by Pixiv. The structure of the raw and preprocessed variants of the dataset is described in the `File Structure` section below.
The Pixiv Niji Journey dataset is primarily intended for use in machine learning tasks related to image classification and caption generation. It can also be used as a dataset for image generation models such as stable diffusion. However, users should be aware that the dataset may contain biases or limitations, such as the bias of the Pixiv platform or the specific search term used to collect the data.
## File Structure
The structure of the raw files is as follows:
```
nijijourney_pixiv_2022110620221222_raw.zip/
├╴nijijourney/
│ ├╴images.png
│ ├╴images.png.json
│ └╴...
```
while the structure of the preprocessed files is:
```
nijijourney_pixiv_2022110620221222_preprocessed.zip/
├╴dataset/
│ ├╴images.png
│ ├╴images.png.json
│ ├╴images.txt
│ ├╴images.caption
│ └╴...
├╴meta_cap.json
├╴meta_dd.json
├╴meta_clean.json
```
## Usage
- Access: the dataset is available for download from the Hugging Face dataset collection
- Format: the dataset is provided in ZIP format, with images in PNG format and metadata in JSON format
- Requirements: the dataset requires no specific requirements or dependencies for use
## Data Quality
- Number of images: 9766
- Image sizes: vary; images are in JPG and PNG format
- Class balance: the distribution of classes in the dataset is not known
- Quality: the dataset has been carefully cleaned and filtered to remove low quality or irrelevant images
## Limitations
While the Pixiv Niji Journey dataset has been carefully cleaned and preprocessed to ensure high quality and consistency, it is important to be aware of certain limitations and biases that may be present in the dataset. Some potential limitations of the dataset include:
- Bias of the Pixiv platform: Pixiv is an online art platform that may have its own biases in terms of the content that is available and the users who contribute to it. This could potentially introduce biases into the dataset.
- Search term bias: The dataset was collected using the search term "nijijourney" on Pixiv, which may have introduced biases into the dataset depending on the popularity and prevalence of this term on the platform.
- Limited scope: The dataset only includes images scraped from Pixiv, and therefore may not be representative of a wider range of images or artistic styles.
- Potential errors or inconsistencies in the metadata: While every effort has been made to ensure the accuracy of the metadata, there may be errors or inconsistencies present in the data.
It is important to be aware of these limitations and to consider them when using the Pixiv Niji Journey dataset for research or other purposes.
## License
The Pixiv Niji Journey dataset is made available under the terms of the AGPL-3.0 license. This license is a copyleft license that allows users to freely use, modify, and distribute the dataset, as long as any modified versions are also made available under the same terms.
Under the terms of the AGPL-3.0 license, users are allowed to:
- Use the dataset for any purpose, commercial or non-commercial
- Modify the dataset as needed for their purposes
- Distribute copies of the dataset, either modified or unmodified
However, users must also follow the following conditions:
- Any modified versions of the dataset must be made available under the same AGPL-3.0 license
- If the dataset is used to provide a service to others (such as through a website or API), the source code for the service must be made available to users under the AGPL-3.0 license
It is important to carefully review the terms of the AGPL-3.0 license and ensure that you understand your rights and obligations when using the Pixiv Niji Journey dataset.
## Citation
If you use this dataset in your work, please cite it as follows:
```
@misc{pixiv_niji_journey,
author = {Linaqruf},
title = {Pixiv Niji Journey},
year = {2022},
publisher = {Hugging Face},
url = {https://huggingface.co/datasets/Linaqruf/pixiv-niji-journey},
}
```
| Linaqruf/pixiv-niji-journey | [
"license:agpl-3.0",
"region:us"
]
| 2022-12-27T14:43:38+00:00 | {"license": "agpl-3.0"} | 2023-01-10T03:32:36+00:00 |
65fddc1c8440dea945ba3c140fb8b9ae37db9336 | Jonga5426/Jonga | [
"license:other",
"region:us"
]
| 2022-12-27T14:48:37+00:00 | {"license": "other"} | 2022-12-27T14:48:37+00:00 |
|
c1066fb4cbd28e291fc86825f58207bd80806559 | # MindBigData 2022 A Large Dataset of Brain Signals
> Supporting datasets for paper [ arXiv:2212.14746](https://arxiv.org/abs/2212.14746)
> There are 3 Main datasets with subdatasets:
>
**1.- MindBigData MNIST of Brain Digits**
> based on http://mindbigdata.com/opendb/index.html
> All datasets are split 80% train / 20% test (also class-proportional across the 11 classes)
> EEGs resampled to match each headset's original sampling rate
> Included headers.
> Simplified to contain only the label & EEG data, with columns named in the headers as ChannelName-SampleNum; e.g., for channel FP1 on MindWave: FP1-0, FP1-1, ..., FP1-1023, since there are 1024 samples.
> There are 4 subdatasets:
>
> For MindWave with 1 EEG Channel and 1024 samples x Channel
>
> For EPOC1 with 14 EEG Channels and 256 samples x Channel
>
> For Muse1 with 4 EEG Channels and 440 samples x Channel
>
> For Insight1 with 5 EEG Channels and 256 samples x Channel
>
**1.1.- MindBigData MNIST of Brain digits MindWave1**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_MNIST_MW
>
**1.2.- MindBigData MNIST of Brain digits EPOC1**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_MNIST_EP
**1.3.- MindBigData MNIST of Brain digits Muse1**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_MNIST_MU
**1.4.- MindBigData MNIST of Brain digits Insight1**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_MNIST_IN
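A minimal sketch of regrouping the ChannelName-SampleNum columns described above into per-channel sample lists (the header names, channel count, and values below are synthetic; real files have one label column followed by all channel columns):

```python
# Synthetic header/row for a 2-channel, 4-samples-per-channel example.
header = ["label", "FP1-0", "FP1-1", "FP1-2", "FP1-3", "FP2-0", "FP2-1", "FP2-2", "FP2-3"]
row = [7, 0.1, 0.2, 0.3, 0.4, -0.1, -0.2, -0.3, -0.4]

label = row[0]
grouped = {}
for name, value in zip(header[1:], row[1:]):
    # Split "FP1-0" into channel name "FP1" and sample index 0.
    channel, idx = name.rsplit("-", 1)
    grouped.setdefault(channel, []).append((int(idx), value))

# Sort each channel's samples by sample index and drop the indices.
eeg = {ch: [v for _, v in sorted(pairs)] for ch, pairs in grouped.items()}
print(label, eeg["FP1"])  # 7 [0.1, 0.2, 0.3, 0.4]
```

The same loop scales directly to the real column counts (e.g., 14 channels × 256 samples for EPOC1).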
**2.- MindBigData Imagenet of the Brain**
> based on http://mindbigdata.com/opendb/imagenet.html
> All datasets are split 80% train / 20% test (also class-proportional across all classes)
> EEGs resampled to match each headset's original sampling rate
> Included headers.
> Contains the label as the ILSVRC2013 category, a hot-encoded name list, the RGB pixel values of the image seen (resampled to 150 by 150 pixels), & EEG data with columns named in the headers as ChannelName-SampleNum.
> There are 2 subdatasets:
>
> One with the Insight 1 EEG signals at 384 samples per channel (5 channels)
>
> One with the Spectrogram image 64x64px instead of the EEG as described in the paper
>
**2.1.- MindBigData Imagenet of the Brain Insight1 EEG**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_Imagenet_IN
**2.2.- MindBigData Imagenet of the Brain Insight1 Spectrogram**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_Imagenet_IN_Spct
**3.- MindBigData Visual MNIST of Brain Digits**
> based on http://mindbigdata.com/opendb/visualmnist.html
> All datasets are split 80% train / 20% test (also class-proportional across the 11 classes)
> Included headers.
> Simplified to contain only the label, the original 28x28-pixel MNIST digit seen, & EEG data with columns named in the headers as ChannelName-SampleNum; e.g., for channel TP9 on Muse2: TP9-0, TP9-1, ..., TP9-511, since there are 512 samples.
> There are 3 subdatasets:
>
> For Muse2 with 5 EEG Channels, 3 PPG Channels, 3 ACC Channels & 3 GYR Channels and 512 samples x Channel
>
> For Cap64 with 64 EEG Channels and 400 samples x Channel
>
> For Cap64 with 64 EEG Channels and 400 samples x Channel but with Morlet png images as EEG outputs
>
**3.1.- MindBigData Visual MNIST of Brain digits Muse2**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_VisMNIST_MU2
**3.2.- MindBigData Visual MNIST of Brain digits Cap64**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_VisMNIST_Cap64
**3.3.- MindBigData Visual MNIST of Brain digits Cap64 Morlet**
https://huggingface.co/datasets/DavidVivancos/MindBigData2022_VisMNIST_Cap64_Morlet
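As a hedged sketch, one Visual MNIST row can be split into its three parts — label, the 28x28 digit pixels, and the EEG channels. The sizes below assume a Muse2-like layout (5 EEG channels × 512 samples) and ignore the extra PPG/ACC/GYR channels for simplicity; the row itself is synthetic:

```python
import numpy as np

n_pixels = 28 * 28  # the original MNIST digit seen by the subject
n_eeg = 5 * 512     # assumed: 5 EEG channels x 512 samples (Muse2-like)

# Synthetic row: [label, 784 pixel values, EEG samples].
row = np.concatenate([[3.0], np.zeros(n_pixels), np.random.randn(n_eeg)])

label = int(row[0])
image = row[1 : 1 + n_pixels].reshape(28, 28)
eeg = row[1 + n_pixels :].reshape(5, 512)

print(label, image.shape, eeg.shape)  # 3 (28, 28) (5, 512)
```

For the real CSVs, the channel/sample counts per headset listed above determine the reshape dimensions, and the extra sensor columns would need their own slices.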
| DavidVivancos/MindBigData2022 | [
"arxiv:2212.14746",
"region:us"
]
| 2022-12-27T16:01:18+00:00 | {} | 2023-01-07T10:18:30+00:00 |
ef6186fd8fe470cf26133653288adf131aeb99a2 | spacemanidol/msmarco-passage-KALE | [
"region:us"
]
| 2022-12-27T16:27:07+00:00 | {} | 2022-12-27T17:51:22+00:00 |
|
c9839408ac25896c2bc2ff8bcf4b3f97c3c6e47b | pushina/fotos | [
"license:openrail",
"region:us"
]
| 2022-12-27T16:37:54+00:00 | {"license": "openrail"} | 2022-12-27T16:39:53+00:00 |
|
f57c991695ec71f90c1beece802d61729f19fca0 | awacke1/BookComposer | [
"license:mit",
"region:us"
]
| 2022-12-27T16:48:06+00:00 | {"license": "mit"} | 2022-12-27T16:48:44+00:00 |
|
0817b7e7008f61c92e28e72772677f226f887a53 | # Disclaimer
*This is a hate speech dataset (in Arabic, French, and English).*
*It contains offensive content that does not reflect the opinions of the authors.*
# Dataset of our EMNLP 2019 Paper (Multilingual and Multi-Aspect Hate Speech Analysis)
For more details about our dataset, please check our paper:
```
@inproceedings{ousidhoum-etal-multilingual-hate-speech-2019,
    title = "Multilingual and Multi-Aspect Hate Speech Analysis",
    author = "Ousidhoum, Nedjma and
      Lin, Zizheng and
      Zhang, Hongming and
      Song, Yangqiu and
      Yeung, Dit-Yan",
    booktitle = "Proceedings of EMNLP",
    year = "2019",
    publisher = "Association for Computational Linguistics",
}
```
(You can preview our paper on https://arxiv.org/pdf/1908.11049.pdf)
## Clarification
The multi-labelled tasks are *the hostility type of the tweet* and the *annotator's sentiment*. (We kept labels on which at least two annotators agreed.)
## Taxonomy
In further experiments that involved binary classification tasks of the hostility/hate/abuse type, we considered single-labelled *normal* instances to be *non-hate/non-toxic* and all the other instances to be *toxic*.
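The binary relabelling described above (single-labelled *normal* becomes non-toxic, everything else toxic) can be sketched as follows. The column name follows the list below, but the sample rows and the multi-label string format are assumptions; check the CSVs for the exact values:

```python
import io

import pandas as pd

# Synthetic rows; the real files use the "tweet sentiment" column.
csv_text = """tweet,tweet sentiment
example one,normal
example two,offensive
example three,normal_offensive
"""

df = pd.read_csv(io.StringIO(csv_text))

# Single-labelled "normal" -> non-toxic (0); anything else -> toxic (1).
df["toxic"] = (df["tweet sentiment"] != "normal").astype(int)
print(df["toxic"].tolist())  # [0, 1, 1]
```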
## Dataset
Our dataset is composed of three CSV files sorted by language. They contain the tweets and the annotations described in our paper:
- the hostility type *(column: tweet sentiment)*
- hostility directness *(column: directness)*
- target attribute *(column: target)*
- target group *(column: group)*
- annotator's sentiment *(column: annotator sentiment)*
## Experiments
To replicate our experiments, please see https://github.com/HKUST-KnowComp/MLMA_hate_speech/blob/master/README.md
| nedjmaou/MLMA_hate_speech | [
"license:mit",
"arxiv:1908.11049",
"region:us"
]
| 2022-12-27T17:04:33+00:00 | {"license": "mit"} | 2022-12-28T11:24:32+00:00 |
0ef7a4214e44aeb1b8a7ca98c3a3f04a348be8b2 | # Dataset Card for "scalable_project"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | marvmk/scalable_project | [
"region:us"
]
| 2022-12-27T17:19:18+00:00 | {"dataset_info": {"features": [{"name": "Open", "dtype": "float64"}, {"name": "High", "dtype": "float64"}, {"name": "Low", "dtype": "float64"}, {"name": "Close", "dtype": "float64"}, {"name": "Volume", "dtype": "int64"}, {"name": "Inflation", "dtype": "float64"}, {"name": "CPI", "dtype": "float64"}, {"name": "Quarter_end", "dtype": "int64"}, {"name": "Date", "dtype": "timestamp[ns, tz=America/New_York]"}], "splits": [{"name": "train", "num_bytes": 359424, "num_examples": 4992}], "download_size": 0, "dataset_size": 359424}} | 2023-01-06T21:58:44+00:00 |
78ff54a5171d23c39c728d28759956c13725aee9 | spacemanidol/trec-dl2019-query-variation | [
"region:us"
]
| 2022-12-27T17:31:26+00:00 | {} | 2022-12-28T18:10:30+00:00 |
|
c2ae4cb9df2db03e6c78a16d8e3dc7b961130f21 |
This dataset is extracted from the Anime "Rent-A-Girlfriend" as posted on Kaggle by [xandercubbin](https://www.kaggle.com/datasets/xandercubbin/chizuru-ichinose).
Please refer to the `chizuru_dialog_dataset.ipynb` file to see how the dataset was pre-processed. | alexandreteles/chizuru-ichinose | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
]
| 2022-12-27T17:48:44+00:00 | {"language": ["en"], "license": "cc0-1.0", "multilinguality": ["monolingual"], "pretty_name": "chizuru", "language_bcp47": ["en-US"]} | 2022-12-27T17:53:53+00:00 |
ed97fae31a3f3de7aefb22af5ff5783c5763a777 | spacemanidol/wikipedia-trivia-KALE | [
"region:us"
]
| 2022-12-27T17:56:12+00:00 | {} | 2022-12-27T17:59:44+00:00 |
|
5cb66728f6e33f0bd8fce2015bb690c1cf1c4a3d | # TREC DL 2020 Query Variation | spacemanidol/trec-dl2020-query-variation | [
"region:us"
]
| 2022-12-27T17:57:35+00:00 | {} | 2022-12-28T18:11:23+00:00 |
8199ff7ae044380b10175ef3260fb3b4822107df | spacemanidol/scifacts-KALE | [
"region:us"
]
| 2022-12-27T17:57:57+00:00 | {} | 2022-12-27T17:58:38+00:00 |
|
e56aad2f9be461b98949bc18a70f6ee2949ebec7 |
# Albino Style Embedding / Textual Inversion
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/albino_style/resolve/main/showcase.png"/>
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"albino_style"```
Personally, I would recommend using my embeddings with a strength of 0.8, like ```"(albino_style:0.8)"```
I trained the embedding for two epochs, up to 6800 steps.
I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508"
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/albino_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"image-to-image",
"region:us"
]
| 2022-12-27T18:08:38+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/albino_style/resolve/main/showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false} | 2022-12-27T18:12:47+00:00 |
cc250e4a6c875d20a0d6e9badfbcb3cf39cd391f |
# Barbosa Style Embedding / Textual Inversion
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/barbosa_style/resolve/main/barbosa_showcase.png"/>
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"barbosa_style"```
Personally, I would recommend using my embeddings with a strength of 0.8, like ```"(barbosa_style:0.8)"```
I trained the embedding for two epochs, up to 8000 steps.
I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508"
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/barbosa_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"image-to-image",
"region:us"
]
| 2022-12-27T18:13:37+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/barbosa_style/resolve/main/barbosa_showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false} | 2022-12-27T18:17:03+00:00 |
e3c5457a9b60b00e6b2a3e4e783cd5f453d47a43 |
# Cyberware Style Embedding / Textual Inversion
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/cyberware_style/resolve/main/cyber_showcase.png"/>
## Usage
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"cyberware_style"```
Personally, I usually recommend using my embeddings with a strength of 0.8, but this time I would use it just as it is.
The embedding itself is based on the dataset given by Eppinette: https://huggingface.co/Eppinette/Cyberware
I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508"
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/cyberware_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"image-to-image",
"region:us"
]
| 2022-12-27T18:17:27+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/cyberware_style/resolve/main/cyber_showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false} | 2022-12-27T18:21:47+00:00 |
917d7062ee8722fa26c0554966884d814a64774a | # Dataset Card for "OxfordPets_facebook_opt_30b_LLM_Description_opt30b_downstream_tasks_ViT_L_14"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/OxfordPets_facebook_opt_30b_LLM_Description_opt30b_downstream_tasks_ViT_L_14 | [
"region:us"
]
| 2022-12-27T19:18:36+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 25933.0, "num_examples": 2}], "download_size": 30228, "dataset_size": 25933.0}} | 2022-12-27T19:18:40+00:00 |
a48fe7b9cacb11116b4ab66debf203549c8b75e5 |
# Dataset Card for OLM November/December 2022 Common Crawl
Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 15% of the November/December 2022 Common Crawl snapshot.
Note: `last_modified_timestamp` was parsed from whatever a website returned in its `Last-Modified` header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with `last_modified_timestamp`. | olm/olm-CC-MAIN-2022-49-sampling-ratio-olm-0.15114822547 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"language:en",
"pretraining",
"language modelling",
"common crawl",
"web",
"region:us"
]
| 2022-12-27T19:22:18+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": [], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "OLM November/December 2022 Common Crawl", "tags": ["pretraining", "language modelling", "common crawl", "web"]} | 2023-02-05T18:28:47+00:00 |
d0ad6b80c87bd819bda7003fca75afcea5272fca | # Dataset Card for "dreambooth-hackathon-images-sbob"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jonathang/dreambooth-hackathon-images-sbob | [
"region:us"
]
| 2022-12-27T19:34:51+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1488165.0, "num_examples": 4}], "download_size": 1489345, "dataset_size": 1488165.0}} | 2022-12-27T19:35:01+00:00 |
459ea11a4586ed02ae164beaf95de2b3e5c9396d |
# Hagikora
*Aka, Stripped photoshop.*
## FAQ:
Q: Can you remove the gated prompts?
A: No. Personally, I don't want any random person downloading the dataset only to find out it isn't suitable for them.
Q: Can you make a Zip file?
A: Yes.
Q: Filtering?
A: No filtering was done. All files are as-is and untouched. You probably want to run an aesthetic filter on the images or something like that. | KaraKaraWitch/Hagikora | [
"license:cc-by-nc-4.0",
"not-for-all-audiences",
"region:us"
]
| 2022-12-27T19:37:33+00:00 | {"license": ["cc-by-nc-4.0"], "pretty_name": "Hagikora", "tags": ["not-for-all-audiences"]} | 2024-01-19T18:33:36+00:00 |
77cdfbad7898c8eaf6e9587915e916436a048e2d | # Dataset Card for "base_code_review"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Dahoas/base_code_review | [
"region:us"
]
| 2022-12-27T19:41:07+00:00 | {"dataset_info": {"features": [{"name": "body", "dtype": "string"}, {"name": "comments", "list": [{"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "body", "dtype": "string"}]}, {"name": "answers", "list": [{"name": "body", "dtype": "string"}, {"name": "comments", "list": [{"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "body", "dtype": "string"}]}, {"name": "meta_data", "struct": [{"name": "CommentCount", "dtype": "string"}, {"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "ParentId", "dtype": "string"}, {"name": "Score", "dtype": "string"}]}]}, {"name": "meta_data", "struct": [{"name": "AcceptedAnswerId", "dtype": "string"}, {"name": "CommentCount", "dtype": "string"}, {"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "Tags", "sequence": "string"}, {"name": "Title", "dtype": "string"}]}, {"name": "question_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 729807089, "num_examples": 76003}], "download_size": 335610114, "dataset_size": 729807089}} | 2022-12-27T19:41:29+00:00 |
54f61a2ccd3c56818966986e214df8fd0b76f7dd | # Dataset Card for "dreambooth-hackathon-images-sbob-jpeg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jonathang/dreambooth-hackathon-images-sbob-jpeg | [
"region:us"
]
| 2022-12-27T19:41:49+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1028414.0, "num_examples": 4}], "download_size": 1018233, "dataset_size": 1028414.0}} | 2022-12-27T19:41:59+00:00 |
c9c623d82807d4cd68cfcdff019851ea0ef9249b | # Dataset Card for "doge"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | fabiochiu/doge | [
"region:us"
]
| 2022-12-27T19:48:36+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 451322.0, "num_examples": 5}], "download_size": 451958, "dataset_size": 451322.0}} | 2022-12-27T19:55:59+00:00 |
d6b2854fdfcf8a626f8f1ac7b569a5c2accf52a4 | # Dataset Card for "OxfordPets_Multimodal_Fatima_opt_175b_LLM_Description_opt175b_downstream_tasks_ViT_L_14"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/OxfordPets_Multimodal_Fatima_opt_175b_LLM_Description_opt175b_downstream_tasks_ViT_L_14 | [
"region:us"
]
| 2022-12-27T20:01:16+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 3482068.0, "num_examples": 100}], "download_size": 3458504, "dataset_size": 3482068.0}} | 2022-12-27T20:27:45+00:00 |
52c70f8ec561c3df37208c5d3ec026910cace849 | # Dataset Card for "dreambooth-hackathon-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | GV05/dreambooth-hackathon-images | [
"region:us"
]
| 2022-12-27T20:03:47+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 927160.0, "num_examples": 13}], "download_size": 923205, "dataset_size": 927160.0}} | 2022-12-27T20:04:01+00:00 |
c0a7d4e93c96530a6753118fa4a71148c9425b87 | # Dataset Card for "olm-december-2022-tokenized-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | olm/olm-december-2022-tokenized-512 | [
"region:us"
]
| 2022-12-27T20:14:46+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 86351663844, "num_examples": 27999891}], "download_size": 23243344520, "dataset_size": 86351663844}} | 2022-12-27T20:41:53+00:00 |
03c6bf31ce30383e0012167401908ac2f91c3c3f | https://colab.research.google.com/github/huggingface/diffusion-models-class/blob/main/hackathon/dreambooth.ipynb?authuser=2#scrollTo=c3defbc3-b9a3-40c7-87dc-61f897025dce | vukrosic/dreambooth-vuk-images | [
"region:us"
]
| 2022-12-27T20:18:22+00:00 | {} | 2022-12-28T20:28:01+00:00 |
9aadb4105b0e4e32c8514f272cac57c18fc98dc1 | # Dataset Card for "olm-december-2022-tokenized-1024"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | olm/olm-december-2022-tokenized-1024 | [
"region:us"
]
| 2022-12-27T21:41:08+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 86220997560, "num_examples": 14006010}], "download_size": 22866321750, "dataset_size": 86220997560}} | 2022-12-27T22:08:19+00:00 |
2d464151cb47742d4a7c724f41f4c44c10cc08cc | # Dataset Card for "dreambooth-hackathon-images-miko"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | davidaponte/dreambooth-hackathon-images-miko | [
"region:us"
]
| 2022-12-27T22:01:44+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 42574511.0, "num_examples": 14}], "download_size": 42573847, "dataset_size": 42574511.0}} | 2022-12-27T22:01:58+00:00 |
2d816e74320c835019ecc05f0078465eb90a1ed6 | Russian dataset for ELQ (Entity Linking for Questions) model (https://github.com/facebookresearch/BLINK/tree/main/elq) | GulPav/elqa_dataset | [
"task_categories:token-classification",
"language:ru",
"region:us"
]
| 2022-12-27T22:13:38+00:00 | {"language": ["ru"], "task_categories": ["token-classification"], "pretty_name": "Russian ELQ dataset"} | 2023-01-08T23:12:13+00:00 |
4eb37a7a2e5d19154cbf7923beb30bfbd51220d5 | New Russian dataset for ELQ (Entity Linking for Questions) model (https://github.com/facebookresearch/BLINK/tree/main/elq)
| GulPav/ru_elq_dataset | [
"task_categories:token-classification",
"language:ru",
"region:us"
]
| 2022-12-27T22:14:29+00:00 | {"language": ["ru"], "task_categories": ["token-classification"], "pretty_name": "Russian ELQ dataset"} | 2023-01-08T23:13:13+00:00 |
e496fcd86c3e49191778910783c02f899dfeca6c | breadlicker45/allwords | [
"license:other",
"region:us"
]
| 2022-12-27T22:18:00+00:00 | {"license": "other"} | 2022-12-27T22:18:08+00:00 |
|
2f317791067b2089a0a95dec5d299e7355b4d810 | breadlicker45/wizards-of-Waverly-place-scripts | [
"license:other",
"region:us"
]
| 2022-12-27T22:18:49+00:00 | {"license": "other"} | 2023-09-09T19:34:30+00:00 |
|
3327de626e1e48d7ce7a7d5135effd13e39a696d |
# Dataset Card for OASum Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Usage](#dataset-usage)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [OASum Dataset repository](https://github.com/tencent-ailab/OASum)
- **Paper:** [OASum: Large-Scale Open Domain Aspect-based Summarization](https://arxiv.org/pdf/2212.09233.pdf)
The OASum Dataset is an English-language dataset containing over 3.6M document, aspect, and summary triplets.
## Dataset Usage
You can load it directly with Hugging Face `datasets`.
``` python
from datasets import load_dataset
dataset = load_dataset("kqsong/OASum")
```
## Dataset Structure
### Data Instances
For each instance, there is a list of strings for the document, a list of strings for the summary, a string for the document title, a string for the aspect and a list of indices for the sentences in the corresponding section.
```json
{
"title": "Ker's WingHouse Bar & Grill",
"document":[
"After Clearwater, Florida chicken wing pioneering restaurant chain Hooters began rapidly expanding, Florida based, Canadian-born restaurant entrepreneur Ed Burnett saw the opportunity.",
"Burnett secured the rights to a closed restaurant (\"Knockers\") and opened \"The WingHouse\" restaurant at 7369 Ulmerton Road, Largo, Florida, a high traffic corridor.",
"He strategically selected the restaurant in between where people work (commercial real estate) and live (residential real estate), to appeal to the local lunch crowd and family dining crowd.",
"This flagship location proved to be a success soon after launching and is the model that the chain expanded on.",
"Burnett, looking to expand to additional locations, accepted a financing partner (Crawford Ker) during this time frame, to open additional locations and beyond.",
"Burnett's goal was to open 20 to 50 locations, and then sell the chain to a larger restaurant chain or investors.",
"Burnett would ultimately regret his choice of investor.","In 1992, Ker retired from the NFL and took a job selling cars at a local dealer.",
"In 1994, he invested half interest in a Largo, Florida wing restaurant called, \"Wing House\" that imitated Hooters.",
"The restaurant was always The Wing House, and the atmosphere was always toned down to make it more family friendly.",
"The restaurant did well and two additional locations were opened in the Tampa Bay area in the following three years.",
"Ker won a $1.2-million jury award from Hooters in late 2004, which had sued him for trademark violations for allegedly using their uniforms and decor.",
"After a three-week trial in which lawyers discussed hula hoops, surfboards, scrunchy socks, pantyhose, and something called \"vicarious sexual recreation\", the jury ruled that no trademark infringement existed and Hooters was penalized for their frivolous lawsuit.",
"Hooters appealed the decision, but in June, 2006, the 11th U.S. Circuit Court of Appeals in Atlanta upheld the verdict.",
"As of 2007, the company had 1,700 employees at 22 locations with revenue of nearly $60 million.",
"Ker attended, and the company participated in, the 2007 National Buffalo Wing Festival and placed first in the \"traditional x-hot sauce\" category and gained some national recognition.",
"On June 4, 2008 the company announced the launch of its national franchise program.",
"In mid-2008 the chain operated 19 locations in Florida and Texas and expected to add six franchises by the end of 2008, and 48 by 2011.",
"The initial focus was for franchises in the Southeastern US.",
"WingHouses feature several amenities that differ from other wing restaurants, including Hooters.",
"There is a full liquor bar in every store, sports memorabilia line the walls instead of NASCAR and most locations include a game room.",
"Super Bowl XLIII in Tampa, Florida attracted the rich and famous; WingHouse hosted three events to raise money for charity."
],
"aspect": "Opening",
"aspect_sents": [0,1,2,3,4,5,6,7,8,9,10],
"summary":[
"WingHouse Bar & Grill (formerly Ker\u2019s WingHouse Bar & Grill) is a restaurant chain based in Florida, created and founded by Ed Burnett, a Canadian restaurant entrepreneur.",
"After opening his first WingHouse location, Burnett sought out investors to open additional WingHouse locations.",
"Burnett accepted investor Crawford Ker (a former National Football League player) to assist financing the expansion."
]
}
```
The average token counts for the documents and summaries are provided below:
| Feature | Mean Token Count |
| ---------- | ---------------- |
| Document | 1,612 |
| Summary | 40 |
### Data Fields
- `title`: a string, containing the original Wikipedia title.
- `document`: a list of sentences, containing the original content in the Wikipedia sections except the first abstract section.
- `aspect`: a string, containing the section name and its parent section names.
- `aspect_sents`: a list of indices, representing the sentences in the `aspect` section.
- `summary`: a list of sentences, the corresponding aspect-based summary for the document.
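As a minimal sketch (with hypothetical field values, not an actual instance from the dataset), the `aspect_sents` indices can be used to select the aspect-relevant sentences from `document`:

```python
# Hypothetical instance illustrating the fields documented above.
example = {
    "document": ["Sentence zero.", "Sentence one.", "Sentence two."],
    "aspect": "Opening",
    "aspect_sents": [0, 2],
}

# `aspect_sents` holds indices into `document` for the aspect's section.
aspect_sentences = [example["document"][i] for i in example["aspect_sents"]]
print(aspect_sentences)  # ['Sentence zero.', 'Sentence two.']
```

In practice you would iterate over instances from `load_dataset("kqsong/OASum")` the same way.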
### Data Splits
The OASum dataset has 3 splits: _train_, _valid_, and _test_. Below are the statistics for the Version 1.0.0 of the dataset.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 3,523,986 |
| Validation | 111,578 |
| Test | 112,005 |
## Additional Information
### Licensing Information
The OASum Dataset version 1.0.0 is released under the [CC-BY-SA-3.0 License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
### Citation Information
```
@article{yang2022oasum,
title={Oasum: Large-scale open domain aspect-based summarization},
author={Yang, Xianjun and Song, Kaiqiang and Cho, Sangwoo and Wang, Xiaoyang and Pan, Xiaoman and Petzold, Linda and Yu, Dong},
journal={arXiv preprint arXiv:2212.09233},
year={2022}
}
``` | kqsong/OASum | [
"task_categories:summarization",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-sa-3.0",
"summarization",
"Wikipedia",
"arxiv:2212.09233",
"region:us"
]
| 2022-12-27T22:27:17+00:00 | {"language": ["en"], "license": "cc-by-sa-3.0", "size_categories": ["1M<n<10M"], "task_categories": ["summarization"], "tags": ["summarization", "Wikipedia"]} | 2023-07-03T20:02:23+00:00 |
a6b5f2c3249161f195ce0353019079f7e3ef5391 | sinsforeal/yuyu | [
"license:openrail",
"region:us"
]
| 2022-12-27T22:33:31+00:00 | {"license": "openrail"} | 2022-12-27T22:36:03+00:00 |
|
8c3ad1482e60300da2a0204fc194a4bf6283202d |
# Dpin Style Embedding / Textual Inversion
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/dpin_style/resolve/main/dpin_showcase.png"/>
## Usage
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"dpin_style"```
Personally, I would recommend using my embeddings with a strength of 0.8, like ```"(dpin_style:0.8)"```
I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508"
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/dpin_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"image-to-image",
"region:us"
]
| 2022-12-27T22:53:41+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/dpin_style/resolve/main/dpin_showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false} | 2022-12-27T23:01:15+00:00 |