sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
551995964c9f87c931846f4459033218280c37f3
|
# Dataset Card for "sam-coyo-2.5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mfidabel/sam-coyo-2.5k
|
[
"region:us"
] |
2023-05-03T19:45:02+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2299967269.632, "num_examples": 2736}], "download_size": 2357202624, "dataset_size": 2299967269.632}}
|
2023-05-03T19:47:08+00:00
|
cd104f9d94ff7911ce0fc164e38d7fbdefce5733
|
# Dataset Card for "wikipedia_sentence_level_en_de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
bjoernp/wikipedia_sentence_level_en_de
|
[
"region:us"
] |
2023-05-03T21:14:51+00:00
|
{"dataset_info": {"features": [{"name": "sentences", "dtype": "string"}, {"name": "de_sentences", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18768712971, "num_examples": 27736968}], "download_size": 11340576833, "dataset_size": 18768712971}}
|
2023-05-03T21:19:41+00:00
|
a5076850ba018ffadcbdf2955014f907cbd29a83
|
# Dataset Card for "genealogy_synthetic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
adzcai/genealogy_synthetic
|
[
"region:us"
] |
2023-05-03T21:15:28+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer0", "dtype": "string"}, {"name": "answer1", "dtype": "string"}, {"name": "answer2", "dtype": "string"}, {"name": "answer3", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3"}}}}], "splits": [{"name": "train", "num_bytes": 683054, "num_examples": 2816}, {"name": "test", "num_bytes": 677690, "num_examples": 2797}], "download_size": 415481, "dataset_size": 1360744}}
|
2023-05-03T22:07:03+00:00
|
c66fa58a7488348a226f9a6654a8df0dce526d55
|
shawt100/Shawt
|
[
"task_categories:text-generation",
"license:openrail",
"art",
"region:us"
] |
2023-05-03T21:39:51+00:00
|
{"license": "openrail", "task_categories": ["text-generation"], "pretty_name": "ShawtSanders", "tags": ["art"]}
|
2023-05-03T21:41:40+00:00
|
|
5d5a2485e9119deb5cd5e60baa1077a29f5fe4d5
|
Corresponding GitHub repo can be found here:
https://github.com/leap-stc/ClimSim
Read more: https://arxiv.org/abs/2306.08754.
|
LEAP/ClimSim_low-res_aqua-planet
|
[
"license:cc-by-4.0",
"arxiv:2306.08754",
"doi:10.57967/hf/0741",
"region:us"
] |
2023-05-03T21:47:42+00:00
|
{"license": "cc-by-4.0"}
|
2023-09-29T19:31:29+00:00
|
757c3dc142ce2a0bd31275cfb0027f5cb2d940d2
|
# Dataset Card for "genealogy_synthetic_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
adzcai/genealogy_synthetic_v2
|
[
"region:us"
] |
2023-05-03T21:47:50+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer0", "dtype": "string"}, {"name": "answer1", "dtype": "string"}, {"name": "answer2", "dtype": "string"}, {"name": "answer3", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3"}}}}], "splits": [{"name": "train", "num_bytes": 512290.5, "num_examples": 2112}, {"name": "test", "num_bytes": 170763.5, "num_examples": 704}], "download_size": 222511, "dataset_size": 683054.0}}
|
2023-05-03T21:49:18+00:00
|
8b0a5926a6730d3ba8c9a648a667f2b4b7a8b871
|
# Dataset Card for "genealogy_synthetic_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
adzcai/genealogy_synthetic_v3
|
[
"region:us"
] |
2023-05-03T22:07:22+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer0", "dtype": "string"}, {"name": "answer1", "dtype": "string"}, {"name": "answer2", "dtype": "string"}, {"name": "answer3", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3"}}}}], "splits": [{"name": "train", "num_bytes": 683054, "num_examples": 2816}, {"name": "test", "num_bytes": 677690, "num_examples": 2797}], "download_size": 0, "dataset_size": 1360744}}
|
2023-05-03T22:07:42+00:00
|
4932d08fca7a8dea91e0936685b4add4d6c26f79
|
emmadrex/emma_ncipated_ietg_ioas_001
|
[
"task_categories:summarization",
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2023-05-03T23:08:03+00:00
|
{"license": "cc-by-nc-sa-4.0", "task_categories": ["summarization"], "pretty_name": "Isometric Equilateral Triangle Grid: Impossible Objects and Structures"}
|
2023-05-03T23:39:14+00:00
|
|
beeedc0459fb9c964affb43a49155d894e702145
|
# MAP
An SQLite database of video URLs and captions/descriptions.
|
TempoFunk/map
|
[
"task_categories:text-to-image",
"task_categories:text-to-video",
"task_categories:video-classification",
"task_categories:image-classification",
"size_categories:1M<n<10M",
"language:en",
"license:agpl-3.0",
"region:us"
] |
2023-05-03T23:12:13+00:00
|
{"language": ["en"], "license": "agpl-3.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-to-image", "text-to-video", "video-classification", "image-classification"]}
|
2023-05-11T16:30:01+00:00
|
24a20e856dba07e64716aa8d8e545a951483424c
|
# Dataset Card for Review Helpfulness Prediction (RHP) Dataset
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [On the Role of Reviewer Expertise in Temporal Review Helpfulness Prediction](https://aclanthology.org/2023.findings-eacl.125/)
- **Leaderboard:**
### Dataset Summary
The success of e-commerce services is largely dependent on helpful reviews that aid customers in making informed purchasing decisions. However, some reviews may be spammy or biased, making it challenging to identify which ones are helpful. Current methods for identifying helpful reviews focus only on the review text, ignoring who posted the review and when it was posted. Additionally, helpfulness votes may be scarce for less popular products or recently submitted reviews. To address these challenges, we introduce a dataset and task for review helpfulness prediction that incorporates reviewers' attributes and the review date, and we build the dataset by scraping reviews from [TripAdvisor](https://www.tripadvisor.com/).
### Languages
English
## Loading the Dataset
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("tafseer-nayeem/review_helpfulness_prediction")
# Divide the dataset into train, test, and validation sets
train_dataset = dataset["train"]
test_dataset = dataset["test"]
validation_dataset = dataset["validation"]
print(f'Number of training samples: {len(train_dataset)}')
print(f'Number of testing samples: {len(test_dataset)}')
print(f'Number of validation samples: {len(validation_dataset)}')
```
**If the above code doesn't work due to changes in the Hugging Face datasets library**, download the `train.json`, `test.json`, and `validation.json` from the data directory and use the following alternative code:
```python
import json

def load_json(filename):
    with open(filename, 'r') as f:
        data = json.load(f)
    return data

# Load the data
train_data = load_json('train.json')
test_data = load_json('test.json')
validation_data = load_json('validation.json')
```
## Dataset Structure
### Data Instances
One example from the `test` split of the dataset is given below in JSON format.
```json
{
  "user_review_posted": 28,
  "user_total_helpful_votes": 78,
  "expertise": 0.013414038240254,
  "user_cities_visited": 89,
  "review_days": 0.39430449069003204,
  "helpful_class": 4,
  "review_text": "Had to see for myself. Over priced, bloviated, cheap. I am highly sensitive to mold, and it permeated the hotel. Sheets were damp, pipes blew hot air even when turned off. Considering all the hype, that's what this place is, all hype for too much money."
}
```
### Data Fields
- `user_review_posted`: An integer representing the number of reviews posted by the reviewer.
- `user_total_helpful_votes`: An integer representing the cumulative helpful votes received by the reviewer.
- `expertise`: A normalized floating point number representing the mean number of helpful votes received per review.
- `user_cities_visited`: An integer representing the number of cities visited by the reviewer.
- `review_days`: A normalized floating point number representing the relative age of a review in days.
- `helpful_class`: An integer representing the degree of helpfulness of a review.
- `review_text`: A string representing the review text.
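As a worked illustration of the `expertise` field: before normalization, it is simply the reviewer's mean helpful votes per review. The helper below is a hypothetical sketch (the `raw_expertise` name is ours, and the dataset applies an additional normalization described in the paper):

```python
def raw_expertise(total_helpful_votes: int, reviews_posted: int) -> float:
    """Mean helpful votes received per review posted (un-normalized).

    The dataset stores a normalized version of this quantity; this sketch
    stops at the raw mean.
    """
    return total_helpful_votes / max(reviews_posted, 1)

# For the example instance above: 78 helpful votes over 28 reviews,
# i.e. roughly 2.79 helpful votes per review before normalization.
print(raw_expertise(78, 28))
```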
### Data Splits
The following Table presents the summary of our dataset with train, validation, and test splits.
| | Train | Valid | Test |
|:---------------:|---------|--------|-------|
| Total #Samples | 145,381 | 8,080 | 8,080 |
| Avg. #Sentences | 7.82 | 7.8 | 7.81 |
| Avg. #Words | 152.37 | 152.25 | 148.9 |
## Dataset Creation
We build our dataset by scraping reviews from [TripAdvisor](https://www.tripadvisor.com). Out of 225,664 reviews retrieved, close to one third have no helpful votes. We filter out such reviews, which reduces the number of reviews to 161,541. We leverage a logarithmic scale to categorize the reviews based on the number of votes received. Specifically, we map the number of votes into five intervals, [1, 2), [2, 4), [4, 8), [8, 16), and [16, ∞), corresponding to helpfulness scores of 1, 2, 3, 4, and 5 respectively, where a higher score indicates a more helpful review. More details can be found in our [EACL 2023](https://aclanthology.org/2023.findings-eacl.125/) paper.
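The logarithmic bucketing above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code, and the function name is an assumption:

```python
import math

def helpfulness_class(votes: int) -> int:
    """Map a helpful-vote count to the 1-5 helpfulness score using the
    intervals [1, 2), [2, 4), [4, 8), [8, 16), [16, inf)."""
    if votes < 1:
        # Reviews with zero helpful votes are filtered out of the dataset.
        raise ValueError("expected at least one helpful vote")
    return min(int(math.log2(votes)) + 1, 5)

print([helpfulness_class(v) for v in (1, 2, 5, 9, 40)])  # [1, 2, 3, 4, 5]
```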
### Discussion of Ethics
Our data scraping process took ethical considerations into account: we throttled requests to an appropriate pace so as not to place undue load on the site's servers.
### Known Limitations
A limitation of our dataset is that we only work with reviews written in English. We therefore filter out reviews written in other languages, and we note the presence of code-switched reviews, in which reviewers alternate between two or more languages within a single review.
## Additional Information
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of these resources, or if they are relevant to your work, please cite [the paper](https://aclanthology.org/2023.findings-eacl.125/).
```
@inproceedings{nayeem-rafiei-2023-role,
title = "On the Role of Reviewer Expertise in Temporal Review Helpfulness Prediction",
author = "Nayeem, Mir Tafseer and
Rafiei, Davood",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.125",
pages = "1684--1692",
abstract = "Helpful reviews have been essential for the success of e-commerce services, as they help customers make quick purchase decisions and benefit the merchants in their sales. While many reviews are informative, others provide little value and may contain spam, excessive appraisal, or unexpected biases. With the large volume of reviews and their uneven quality, the problem of detecting helpful reviews has drawn much attention lately. Existing methods for identifying helpful reviews primarily focus on review text and ignore the two key factors of (1) who post the reviews and (2) when the reviews are posted. Moreover, the helpfulness votes suffer from scarcity for less popular products and recently submitted (a.k.a., cold-start) reviews. To address these challenges, we introduce a dataset and develop a model that integrates the reviewer{'}s expertise, derived from the past review history of the reviewers, and the temporal dynamics of the reviews to automatically assess review helpfulness. We conduct experiments on our dataset to demonstrate the effectiveness of incorporating these factors and report improved results compared to several well-established baselines.",
}
```
|
tafseer-nayeem/review_helpfulness_prediction
|
[
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-sa-4.0",
"Human-Centered NLP",
"Helpfulness Prediction",
"Review Helpfulness Prediction",
"User Review Analysis",
"Dataset",
"Review Helpfulness Prediction Dataset",
"doi:10.57967/hf/0613",
"region:us"
] |
2023-05-03T23:28:02+00:00
|
{"language": ["en"], "license": "cc-by-nc-sa-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "pretty_name": "Review Helpfulness Prediction (RHP) Dataset", "tags": ["Human-Centered NLP", "Helpfulness Prediction", "Review Helpfulness Prediction", "User Review Analysis", "Dataset", "Review Helpfulness Prediction Dataset"]}
|
2023-08-28T20:56:01+00:00
|
ee71f7dc625c6769e89271bac390dc3faa699fe6
|
# Dataset Card for "genfonts_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rcugarte/genfonts_data
|
[
"region:us"
] |
2023-05-03T23:54:59+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 68781330.95334162, "num_examples": 1376}, {"name": "test", "num_bytes": 3595956.254658385, "num_examples": 73}], "download_size": 69505808, "dataset_size": 72377287.208}}
|
2023-05-03T23:55:20+00:00
|
723da6f6c22c3941afee78e3dcdc327f2d7a1992
|
# Dataset Card for "pokemon_bulbapedia_descriptions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
matemato/pokemon_bulbapedia_3_sentence
|
[
"region:us"
] |
2023-05-04T00:40:41+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 100831984.0, "num_examples": 721}], "download_size": 83967282, "dataset_size": 100831984.0}}
|
2023-05-04T01:03:34+00:00
|
3c1ad976343a31d004bc93e590982f83d71a6a2e
|
brainer/KoreanApartmentDealData
|
[
"task_categories:tabular-classification",
"task_categories:tabular-regression",
"license:other",
"korea",
"apartment",
"region:us"
] |
2023-05-04T00:43:17+00:00
|
{"license": "other", "task_categories": ["tabular-classification", "tabular-regression"], "pretty_name": "Korean Apartment Deal Data", "tags": ["korea", "apartment"]}
|
2023-07-09T10:57:06+00:00
|
|
2256415bd85b14945ac717170274565adb2a2b2d
|
# Dataset Card for M3IT
Project Page: [M3IT](https://m3-it.github.io/)
## Dataset Description
- **Homepage: https://huggingface.co/datasets/MMInstruction/M3IT**
- **Repository: https://huggingface.co/datasets/MMInstruction/M3IT**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Languages
English and Chinese. Translations into 80 languages can be found at [M3IT-80](https://huggingface.co/datasets/MMInstruction/M3IT-80).
## Dataset Statistics
Our dataset compiles diverse classical vision-language tasks, including captioning,
visual question answering (VQA), visually conditioned generation, reasoning, and classification.
### Instruction Statistics
| Task | #Instructions |
|---------------------------|---------------|
| Image Captioning | 52 |
| Classification | 113 |
| Visual Question Answering | 95 |
| Knowledgeable Visual QA | 40 |
| Reasoning | 60 |
| Generation | 40 |
| Total | 400 |
### Task Statistics
| Task | Description | #Train | #Val | #Test |
|---------------------------|-----------------------------------------------------------------|---------|---------|---------|
| Image Captioning | Given an image, write a description for the image. | 679,087 | 41,462 | 27,499 |
| Classification | Given an image, classify the image into pre-defined categories. | 238,303 | 100,069 | 21,206 |
| Visual Question Answering | Given an image, answer a question relevant to the image. | 177,633 | 46,314 | 10,828 |
| Knowledgeable Visual QA   | Given an image, answer a question that requires outside knowledge. | 39,981 | 11,682 | 5,477 |
| Reasoning                 | Given an image, conduct reasoning over the image.               | 99,372  | 11,500  | 10,000  |
| Generation | Given an image, make compositions with certain requirements. | 145,000 | 11,315 | 17,350 |
| Chinese | CAP, CLS, VQA, and GEN tasks in Chinese. | 192,076 | 77,306 | 4,100 |
| Video | CAP, CLS, and VQA tasks on video-language datasets. | 20,868 | 7,542 | 9,294 |
| Multi-lingual | Translated tasks in 80 languages | 0 | 240,000 | 184,000 |
### Detailed Dataset Statistics
| Task | Dataset | #Train | #Val | #Test |
|---------------------------|------------------------------|---------|--------|--------|
| Image Captioning | `coco` | 566,747 | 25,010 | 25,010 |
| | `textcap` | 97,765 | 13,965 | 0 |
| | `image-paragraph-captioning` | 14,575 | 2,487 | 2,489 |
| Classification | `coco-goi` | 30,000 | 2,000 | 0 |
| | `coco-text` | 118,312 | 27,550 | 0 |
| | `imagenet` | 30,000 | 50,000 | 0 |
| | `coco-itm` | 30,000 | 5,000 | 5,000 |
| | `snli-ve` | 20,000 | 14,339 | 14,740 |
| | `mocheg` | 4,991 | 180 | 466 |
| | `iqa` | 5,000 | 1,000 | 1,000 |
| Visual Question Answering | `vqa-v2` | 30,000 | 30,000 | 0 |
| | `shapes` | 13,568 | 1,024 | 1,024 |
| | `docvqa` | 39,463 | 5,349 | 0 |
| | `ocr-vqa` | 11,414 | 4,940 | 0 |
| | `st-vqa` | 26,074 | 0 | 4,070 |
| | `text-vqa` | 27,113 | 0 | 5,734 |
| | `gqa` | 30,001 | 5,001 | 0 |
| Knowledgeable Visual QA | `okvqa` | 9,009 | 5,046 | 0 |
| | `a-okvqa` | 17,056 | 1,145 | 0 |
| | `science-qa` | 12,726 | 4,241 | 4,241 |
| | `viquae` | 1,190 | 1,250 | 1,236 |
| Reasoning | `clevr` | 30,000 | 2,000 | 0 |
| | `nlvr` | 29,372 | 2,000 | 0 |
| | `vcr` | 25,000 | 5,000 | 5,000 |
| | `visual-mrc` | 15,000 | 2,500 | 5,000 |
| | `winoground` | 0 | 0 | 800 |
| Generation | `vist` | 5,000 | 4,315 | 4,350 |
| | `visual-dialog` | 50,000 | 1,000 | 1,000 |
| | `multi30k` | 90,000 | 6,000 | 12,000 |
| Chinese | `fm-iqa` | 164,735 | 75,206 | 0 |
| | `coco-cn` | 18,341 | 1,000 | 1,000 |
| | `flickr8k-cn` | 6,000 | 1,000 | 1,000 |
| | `chinese-food` | 0 | 0 | 1,100 |
| | `mmchat` | 3,000 | 1,000 | 1,000 |
| Video | `ss` | 2,000 | 2,000 | 2,000 |
| | `ivqa` | 5,994 | 2,000 | 2,000 |
| | `msvd-qa` | 1,161 | 245 | 504 |
| | `activitynet-qa` | 3,200 | 1,800 | 800 |
| | `msrvtt` | 6,513 | 497 | 2,990 |
| | `msrvtt-qa` | 2,000 | 1,000 | 1,000 |
## Dataset Structure
### HuggingFace Login (Optional)
```python
# OR run huggingface-cli login
from huggingface_hub import login
hf_token = "hf_xxx" # TODO: set a valid HuggingFace access token for loading datasets/models
login(token=hf_token)
```
### Data Loading
```python
from datasets import load_dataset
ds_name = "coco" # change the dataset name here
dataset = load_dataset("MMInstruction/M3IT", ds_name)
```
### Data Splits
```python
from datasets import load_dataset
ds_name = "coco" # change the dataset name here
dataset = load_dataset("MMInstruction/M3IT", ds_name)
train_set = dataset["train"]
validation_set = dataset["validation"]
test_set = dataset["test"]
```
### Data Instances
```python
from datasets import load_dataset
from io import BytesIO
from base64 import b64decode
from PIL import Image

ds_name = "coco"  # change the dataset name here
dataset = load_dataset("MMInstruction/M3IT", ds_name)
train_set = dataset["train"]
for train_instance in train_set:
    instruction = train_instance["instruction"]  # str
    inputs = train_instance["inputs"]  # str
    outputs = train_instance["outputs"]  # str
    image_base64_str_list = train_instance["image_base64_str"]  # list of base64 str
    image_0 = Image.open(BytesIO(b64decode(image_base64_str_list[0])))
```
### Data Fields
```python
import datasets

features = datasets.Features(
    {
        "instruction": datasets.Value("string"),
        "inputs": datasets.Value("string"),
        "image_base64_str": [datasets.Value("string")],
        "outputs": datasets.Value("string"),
    }
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
| Task | Dataset [Citation] | Source |
|---------------------------|----------------------------------|------------------------------------------------------------------------------------|
| Image Captioning | `coco` [1] | [Source](https://cocodataset.org/#home) |
| | `textcap` [2] | [Source](https://textvqa.org/textcaps/) |
| | `image-paragraph-captioning` [3] | [Source](https://cs.stanford.edu/people/ranjaykrishna/im2p/index.html) |
| Classification | `coco-goi` [1] | [Source](https://cocodataset.org/#home) |
| | `coco-text` [4] | [Source](https://bgshih.github.io/cocotext/) |
| | `imagenet` [5] | [Source](https://www.image-net.org/) |
| | `coco-itm` [1] | [Source](https://cocodataset.org/#home) |
| | `snli-ve` [6] | [Source](https://github.com/necla-ml/SNLI-VE) |
| | `mocheg` [7] | [Source](https://github.com/VT-NLP/Mocheg) |
| | `iqa` [8] | [Source](https://github.com/icbcbicc/IQA-Dataset) |
| Visual Question Answering | `vqa-v2` [9] | [Source](https://visualqa.org/) |
| | `shapes` [10] | [Source](https://github.com/ronghanghu/n2nmn) |
| | `docvqa` [11] | [Source](https://www.docvqa.org/) |
| | `ocr-vqa` [12] | [Source](https://ocr-vqa.github.io/) |
| | `st-vqa` [13] | [Source](https://rrc.cvc.uab.es/?ch=11) |
| | `text-vqa` [14] | [Source](https://textvqa.org/) |
| | `gqa` [15] | [Source](https://cs.stanford.edu/people/dorarad/gqa/about.html) |
| Knowledgeable Visual QA | `okvqa` [16] | [Source](https://okvqa.allenai.org/) |
| | `a-okvqa` [17] | [Source](https://allenai.org/project/a-okvqa/home) |
| | `science-qa` [18] | [Source](https://scienceqa.github.io/) |
| | `viquae` [19] | [Source](https://github.com/PaulLerner/ViQuAE) |
| Reasoning | `clevr` [20] | [Source](https://cs.stanford.edu/people/jcjohns/clevr/) |
| | `nlvr` [21] | [Source](https://lil.nlp.cornell.edu/nlvr/) |
| | `vcr` [22] | [Source](https://visualcommonsense.com/) |
| | `visual-mrc` [23] | [Source](https://github.com/nttmdlab-nlp/VisualMRC) |
| | `winoground` [24] | [Source](https://huggingface.co/datasets/facebook/winoground) |
| Generation | `vist` [25] | [Source](https://visionandlanguage.net/VIST/) |
| | `visual-dialog` [26] | [Source](https://visualdialog.org/) |
| | `multi30k` [27] | [Source](https://github.com/multi30k/dataset) |
| Chinese | `fm-iqa` [28] | [Source](https://paperswithcode.com/dataset/fm-iqa) |
| | `coco-cn` [29] | [Source](https://github.com/li-xirong/coco-cn) |
| | `flickr8k-cn` [30] | [Source](https://github.com/li-xirong/flickr8kcn) |
| | `chinese-food` [31] | [Source](https://sites.google.com/view/chinesefoodnet) |
| | `mmchat` [32] | [Source](https://github.com/silverriver/MMChat) |
| Video | `ss` [33] | [Source](https://developer.qualcomm.com/software/ai-datasets/something-something) |
| | `ivqa` [34] | [Source](https://antoyang.github.io/just-ask.html) |
| | `msvd-qa` [35] | [Source](https://paperswithcode.com/dataset/msvd) |
| | `activitynet-qa` [36] | [Source](https://github.com/MILVLG/activitynet-qa) |
| | `msrvtt` [35] | [Source](https://paperswithcode.com/dataset/msr-vtt) |
| | `msrvtt-qa` [37] | [Source](https://paperswithcode.com/sota/visual-question-answering-on-msrvtt-qa-1) |
### Annotations
#### Annotation process
To build high-quality multimodal instruction datasets,
we rewrite various datasets into multimodal-to-text dialog format.
The annotation process includes four steps:
- (1) **Stage I: Instruction Writing**: writing instructions for each task;
- (2) **Stage II: Data Format Unification**: structuring images and texts into a unified schema;
- (3) **Stage III: Quality Check**: checking the overall dataset quality;
- (4) **Stage IV: Key Datasets Translation**: building multilingual sets.
#### Who are the annotators?
Eight authors of this work are employed as human annotators,
each of whom is a graduate student familiar with relevant literature.
## Additional Information
### Licensing Information
The content of each original dataset follows its original license.
For tasks with an unknown or custom license, we suggest checking the original project or contacting the dataset owner for detailed license information.
Our annotated instruction data is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```bibtex
@article{li2023m3it,
title={M$^3$IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning},
author={Lei Li and Yuwei Yin and Shicheng Li and Liang Chen and Peiyi Wang and Shuhuai Ren and Mukai Li and Yazheng Yang and Jingjing Xu and Xu Sun and Lingpeng Kong and Qi Liu},
journal={arXiv preprint arXiv:2306.04387},
year={2023}
}
```
### Contributions
M3IT is an open-source, large-scale Multi-modal, Multilingual Instruction Tuning dataset,
designed to enable the development of general-purpose multi-modal agents.
## References
- [1] Microsoft COCO: Common Objects in Context
- [2] TextCaps: a dataset for image captioning with reading comprehension
- [3] A Hierarchical Approach for Generating Descriptive Image Paragraphs
- [4] COCO-Text: Dataset and benchmark for text detection and recognition in natural images
- [5] Imagenet large scale visual recognition challenge
- [6] E-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks
- [7] End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models
- [8] Quantifying visual image quality: A Bayesian view
- [9] Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
- [10] Neural Module Networks
- [11] DocVQA: A dataset for vqa on document images
- [12] OCR-VQA: Visual Question Answering by Reading Text in Images
- [13] Scene Text Visual Question Answering
- [14] Towards VQA Models That Can Read
- [15] GQA: A new dataset for real-world visual reasoning and compositional question answering
- [16] OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge
- [17] A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge
- [18] Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
- [19] ViQuAE: a dataset for knowledge-based visual question answering about named entities
- [20] CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning
- [21] A Corpus of Natural Language for Visual Reasoning
- [22] From recognition to cognition: Visual Commonsense Reasoning
- [23] VisualMRC: Machine reading comprehension on document images
- [24] WinoGround: Probing vision and language models for visio-linguistic compositionality
- [25] Visual Storytelling
- [26] Visual Dialog
- [27] Multi30k: Multilingual english-german image descriptions
- [28] Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question
- [29] COCO-CN for cross-lingual image tagging, captioning, and retrieval
- [30] Adding Chinese Captions to Images
- [31] ChineseFoodNet: A large-scale image dataset for chinese food recognition
- [32] MMChat: Multi-Modal Chat Dataset on Social Media
- [33] The "Something Something" Video Database for Learning and Evaluating Visual Common Sense
- [34] Just Ask: Learning to answer questions from millions of narrated videos
- [35] Video Question Answering via Gradually Refined Attention over Appearance and Motion
- [36] ActivityNet-qa: A dataset for understanding complex web videos via question answering
- [37] MSR-VTT: A large video description dataset for bridging video and language
|
MMInstruction/M3IT
|
[
"task_categories:image-to-text",
"task_categories:image-classification",
"size_categories:1M<n<10M",
"language:en",
"language:zh",
"license:other",
"region:us"
] |
2023-05-04T00:43:31+00:00
|
{"language": ["en", "zh"], "license": "other", "size_categories": ["1M<n<10M"], "task_categories": ["image-to-text", "image-classification"]}
|
2023-11-24T08:23:25+00:00
|
54689aee8e5a717e292a6cb6233304290a00be21
|
# Dataset Card for "diffusiondb-masked-no-descriptors"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roborovski/diffusiondb-masked-no-descriptors
|
[
"region:us"
] |
2023-05-04T00:48:02+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}, {"name": "masked", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 457934422, "num_examples": 1819808}], "download_size": 170883933, "dataset_size": 457934422}}
|
2023-05-04T00:58:57+00:00
|
fb48a87b6305b3faed65212797fb295b4d0edf4b
|
# Dataset Card for "patacon-730"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
frncscp/patacon-730
|
[
"region:us"
] |
2023-05-04T00:50:38+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Patacon-False", "1": "Patacon-True"}}}}], "splits": [{"name": "train", "num_bytes": 114865007.0, "num_examples": 874}, {"name": "validation", "num_bytes": 18290064.0, "num_examples": 143}, {"name": "test", "num_bytes": 59447780.0, "num_examples": 442}], "download_size": 192218294, "dataset_size": 192602851.0}}
|
2023-05-04T00:51:07+00:00
|
5c18a19161ae6cab7e2ea0032d78cbabc0a8286f
|
# Dataset Card for "generadaisample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
woven/generadaisample
|
[
"region:us"
] |
2023-05-04T02:24:24+00:00
|
{"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "ad", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3125, "num_examples": 5}], "download_size": 7405, "dataset_size": 3125}}
|
2023-05-04T02:24:26+00:00
|
1c342a590032422f57ce08cf8dd6ed666d6f3cc9
|
Arris/predis-predis-faiss
|
[
"license:mit",
"region:us"
] |
2023-05-04T02:28:49+00:00
|
{"license": "mit"}
|
2023-05-09T02:40:00+00:00
|
|
1fda5a74c8cf661e1a90ff92f37be222dc94ff5c
|
- **URL**: https://huggingface.co/datasets/jeong2/avoid
- **Dataset url**: http://vi.kaist.ac.kr/avoid/avoid.html
- **Supplementary video link**: <!-- http://vi.kaist.ac.kr/avoid/sample.mp4 -->
- **Dataset embargo**: Our project page is available, but the full dataset will be released by the end of September.
- Since our dataset requires a large amount of storage (>500 GB, and >1 TB with additional data), we expect it will take a couple of months to set up a stable server.
**Data composition / structure**
```
train / val / test
├──obstacle_weather_Town01_01_01
├──|──obstacle_weather_Town01_01_01_route0_04_14_20_18_51
├──|──|──depth: depth map
├──|──|──label_raw: class, extent, position, yaw, num_points, distance, speed, brake, id, ego_matrix
├──|──|──lidar: LiDAR data
├──|──|──lidar_sem: semantic annotations for LiDAR data
├──|──|──measurements: x, y, theta, speed, target_speed, x_command, y_command, command, waypoints, steer, throttle, brake, weather, junction, vehicle_hazard, light_hazard, walker_hazard, stop_sign_hazard, angle, ego_matrix,
├──|──|──rgb: RGB image (left)
├──|──|──rgb_bev: BEV RGB image
├──|──|──rgb_pair: paired RGB image (Right)
├──|──|──semantics: semantic annotations for RGB image (left)
├──|──|──semantics_bev: semantic annotations for BEV RGB image
├──|──|──topdown: encoded topdown view
```
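A minimal sketch of walking this layout in Python, once the split directories exist locally after release. The `iter_routes` helper and the modality list are inferred from the tree above, not an official loader:

```python
from pathlib import Path

# Per-route sub-directories, inferred from the tree above.
MODALITIES = [
    "depth", "label_raw", "lidar", "lidar_sem", "measurements",
    "rgb", "rgb_bev", "rgb_pair", "semantics", "semantics_bev", "topdown",
]

def iter_routes(split_dir: str):
    """Yield (route_path, {modality: sorted file list}) for each recorded route."""
    for scenario in sorted(Path(split_dir).iterdir()):
        if not scenario.is_dir():
            continue
        for route in sorted(p for p in scenario.iterdir() if p.is_dir()):
            files = {
                m: sorted((route / m).iterdir())
                for m in MODALITIES
                if (route / m).is_dir()
            }
            yield route, files
```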
---
## License
The AVOID dataset is published under the *CC BY-NC-ND 4.0* license, and all code is published under the *Apache 2.0* license.
---
## Acknowledgement
To be updated
## Citation
```
@misc{
avoid,
title={avoid},
author={},
year={2023},
url={}
}
```
|
jeong2/avoid
|
[
"region:us"
] |
2023-05-04T03:04:33+00:00
|
{}
|
2023-06-05T09:37:54+00:00
|
d985237f7d8775cb5782c0f9bd6226cf7100d85a
|
# Dataset Card for "boolq_zh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
reaganjlee/boolq_zh
|
[
"region:us"
] |
2023-05-04T03:39:43+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "passage", "dtype": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "splits": [{"name": "validation", "num_bytes": 1694205, "num_examples": 3270}, {"name": "train", "num_bytes": 4954191, "num_examples": 9427}], "download_size": 4456268, "dataset_size": 6648396}}
|
2023-05-04T03:39:47+00:00
|
806d95dd10b9fe3a24649112ad942a20458e3ef0
|
antonovmaxim/kodIIm_14
|
[
"task_categories:image-classification",
"size_categories:1K<n<10K",
"license:mit",
"region:us"
] |
2023-05-04T04:13:12+00:00
|
{"license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["image-classification"]}
|
2023-05-06T12:16:03+00:00
|
|
0a2bd26d676f6112df652e954d56e74b8c286a77
|
# Dataset Card for "boolq_es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
reaganjlee/boolq_es
|
[
"region:us"
] |
2023-05-04T04:31:58+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "passage", "dtype": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "splits": [{"name": "train", "num_bytes": 4397871, "num_examples": 9427}, {"name": "validation", "num_bytes": 1520093, "num_examples": 3270}], "download_size": 3613558, "dataset_size": 5917964}}
|
2023-08-18T22:35:34+00:00
|
065ff65c03dc2ad58008e085fd449e5f4b46f7af
|
# Dataset Card for "rice-thermal-new_demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
flagship/rice-thermal-new_demo
|
[
"region:us"
] |
2023-05-04T05:16:22+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "RiceLeafs_BrownSpot", "1": "RiceLeafs_Healthy", "2": "RiceLeafs_Hispa", "3": "RiceLeafs_LeafBlast"}}}}], "splits": [{"name": "train", "num_bytes": 2607108.0, "num_examples": 354}, {"name": "test", "num_bytes": 944624.0, "num_examples": 129}], "download_size": 3511150, "dataset_size": 3551732.0}}
|
2023-05-04T05:16:30+00:00
|
2c5585f04d1defd9dc81a2784526152246cb3193
|
Alignment-Lab-AI/AILabAssistant
|
[
"license:mit",
"region:us"
] |
2023-05-04T05:37:50+00:00
|
{"license": "mit"}
|
2023-05-09T19:56:00+00:00
|
|
faf748016397d10100507b6ce0d5febe8d82b25a
|
# Dataset Card for "VQAv2_validation_no_image_google_flan_t5_small_mode_A_T_D_PNP_FILTER_C_Q_rices_ns_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_validation_no_image_google_flan_t5_small_mode_A_T_D_PNP_FILTER_C_Q_rices_ns_100
|
[
"region:us"
] |
2023-05-04T05:49:28+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "sequence": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random_", "num_bytes": 1021194, "num_examples": 100}], "download_size": 97050, "dataset_size": 1021194}}
|
2023-05-04T05:49:30+00:00
|
d3fbbaea6471aaf8f102feaf368e60d9cab5f833
|
# Dataset Card for "namu_wiki_512_char_seg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
korean-corpus/namu_wiki_512_char_seg
|
[
"region:us"
] |
2023-05-04T06:03:03+00:00
|
{"dataset_info": {"features": [{"name": "namespace", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "contributors", "sequence": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18511051031, "num_examples": 6232277}], "download_size": 4402958968, "dataset_size": 18511051031}}
|
2023-05-04T06:12:19+00:00
|
a0fe7523056a2c2d05830b54a35d39c84f1c7577
|
```bib
@misc{liu-etal-2023-afraid,
title = "We're Afraid Language Models Aren't Modeling Ambiguity",
author = "Alisa Liu and Zhaofeng Wu and Julian Michael and Alane Suhr and Peter West and Alexander Koller and Swabha Swayamdipta and Noah A. Smith and Yejin Choi",
month = apr,
year = "2023",
url = "https://arxiv.org/abs/2304.14399",
}
```
|
metaeval/ambient
|
[
"task_categories:text-classification",
"language:en",
"ambiguity",
"arxiv:2304.14399",
"region:us"
] |
2023-05-04T06:22:16+00:00
|
{"language": ["en"], "task_categories": ["text-classification"], "tags": ["ambiguity"]}
|
2023-05-04T13:37:42+00:00
|
88fb6115ee2ddb64aee4f0ddb8b4d93f20ebbebc
|
# Dataset Card for "evol_chat"
ChatML-formatted version of [Evol Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k)
|
sam-mosaic/evol_chat
|
[
"region:us"
] |
2023-05-04T06:49:14+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 146959707.60431653, "num_examples": 69756}, {"name": "test", "num_bytes": 632402.3357142857, "num_examples": 300}], "download_size": 71104381, "dataset_size": 147592109.9400308}}
|
2023-05-04T06:50:46+00:00
|
17adcb8bd45e0153ea44256f3c9f57519ffd3bc3
|
# Dataset Card for "BioDEX-ICSR-Abstract"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
BioDEX/BioDEX-ICSR-Abstract
|
[
"region:us"
] |
2023-05-04T07:32:54+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "fulltext", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "pmid", "dtype": "string"}, {"name": "fulltext_license", "dtype": "string"}, {"name": "title_normalized", "dtype": "string"}, {"name": "issue", "dtype": "string"}, {"name": "pages", "dtype": "string"}, {"name": "journal", "dtype": "string"}, {"name": "authors", "dtype": "string"}, {"name": "pubdate", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "affiliations", "dtype": "string"}, {"name": "medline_ta", "dtype": "string"}, {"name": "nlm_unique_id", "dtype": "string"}, {"name": "issn_linking", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "mesh_terms", "dtype": "string"}, {"name": "publication_types", "dtype": "string"}, {"name": "chemical_list", "dtype": "string"}, {"name": "keywords", "dtype": "string"}, {"name": "references", "dtype": "string"}, {"name": "delete", "dtype": "bool"}, {"name": "pmc", "dtype": "string"}, {"name": "other_id", "dtype": "string"}, {"name": "fulltext_processed", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 118045716, "num_examples": 8053}, {"name": "train", "num_bytes": 333640345, "num_examples": 32235}, {"name": "validation", "num_bytes": 82957309, "num_examples": 8059}], "download_size": 285101366, "dataset_size": 534643370}}
|
2023-05-04T07:42:14+00:00
|
cc8c0ebb47e53ec8e6b61ebe80d7a44e321d75ac
|
cahya/QATest
|
[
"license:mit",
"region:us"
] |
2023-05-04T07:36:40+00:00
|
{"license": "mit"}
|
2023-05-04T07:38:10+00:00
|
|
683917cb7cef10ed747ef4aa5ce1dd07e1dcc85c
|
miladfa7/Rice-Image-Dataset
|
[
"task_categories:image-classification",
"size_categories:10K<n<100K",
"rice-dataset",
"rice",
"dataset",
"vision",
"image-classification",
"region:us"
] |
2023-05-04T07:50:22+00:00
|
{"size_categories": ["10K<n<100K"], "task_categories": ["image-classification"], "tags": ["rice-dataset", "rice", "dataset", "vision", "image-classification"]}
|
2023-05-04T17:43:20+00:00
|
|
f025c917d9851717636f5f282095958afb192124
|
sbzl/Ss
|
[
"language:en",
"license:mit",
"region:us"
] |
2023-05-04T07:57:32+00:00
|
{"language": ["en"], "license": "mit"}
|
2023-05-04T08:51:09+00:00
|
|
12fec90924dcdc4a4bd7a4065a0374c10d760b3b
|
# Dataset Card for "TradXX"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
PaulineSanchez/TradXX
|
[
"region:us"
] |
2023-05-04T08:03:57+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["en", "fr"]}}}], "splits": [{"name": "train", "num_bytes": 262823, "num_examples": 3153}], "download_size": 131381, "dataset_size": 262823}}
|
2023-05-04T08:04:03+00:00
|
0864c1a11011fa60632c4f7b376bf27daf170277
|
# Dataset Card for "Trad_food"
- info: This dataset comes from the ANSES-CIQUAL 2020 Table in English, in XML format, found at https://www.data.gouv.fr/fr/datasets/table-de-composition-nutritionnelle-des-aliments-ciqual/ .
I made some minor changes to it so that it meets my needs (removed/added words to obtain exact translations, removed repetitions, etc.).
|
PaulineSanchez/Trad_food
|
[
"task_categories:translation",
"size_categories:1K<n<10K",
"language:fr",
"language:en",
"license:apache-2.0",
"food",
"nutrition",
"region:us"
] |
2023-05-04T08:05:05+00:00
|
{"language": ["fr", "en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["translation"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["en", "fr"]}}}], "splits": [{"name": "train", "num_bytes": 236482.35045987947, "num_examples": 2837}, {"name": "validation", "num_bytes": 26340.64954012052, "num_examples": 316}], "download_size": 165541, "dataset_size": 262823}, "tags": ["food", "nutrition"]}
|
2023-06-06T12:47:36+00:00
|
aa13de84e8e68fb8e3f32ca3bd5cd80b302c5334
|
# Dataset Card for "hhrlhf_evol_chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sam-mosaic/hhrlhf_evol_chatml
|
[
"language:en",
"region:us"
] |
2023-05-04T08:11:22+00:00
|
{"language": "en", "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 302247789, "num_examples": 217107}, {"name": "test", "num_bytes": 17609162, "num_examples": 16555}], "download_size": 139692649, "dataset_size": 319856951}}
|
2023-07-17T23:28:37+00:00
|
8acdaf7b7a298d6923c07ef7cdf1740d6fedaa99
|
# NorEval
NorEval is a self-curated dataset for evaluating instruction-following LLMs across nine categories: Language, Code, Mathematics, Classification, Communication & Marketing, Medical, General Knowledge, and Business Operations.
|
MasterThesisCBS/NorEval
|
[
"task_categories:text-generation",
"language:no",
"language:nb",
"license:cc-by-4.0",
"instruction-finetuning",
"region:us"
] |
2023-05-04T08:24:48+00:00
|
{"language": ["no", "nb"], "license": "cc-by-4.0", "task_categories": ["text-generation"], "pretty_name": "NB Alpaca Norwegian Bokm\u00e5l", "tags": ["instruction-finetuning"], "dataset_info": {"features": [{"name": "Category", "dtype": "string"}, {"name": "SubCategory", "dtype": "string"}, {"name": "Instruction", "dtype": "string"}, {"name": "Input", "dtype": "string"}, {"name": "Output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 101921, "num_examples": 288}], "download_size": 56767, "dataset_size": 101921}}
|
2023-05-05T11:26:30+00:00
|
69ccf09ce899b83c98e060918dea509a5589858a
|
blingBillie/first-dataset
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-04T08:53:08+00:00
|
{"license": "apache-2.0"}
|
2023-05-04T08:53:08+00:00
|
|
7fcee1e6b63cf60af4f01a74ec29a1c58b76b57f
|
Miracle-dz/newarxiv
|
[
"license:other",
"region:us"
] |
2023-05-04T08:57:11+00:00
|
{"license": "other"}
|
2023-05-04T08:57:46+00:00
|
|
486f2c3067e8ce798d3870e475db40f093460db1
|
sabman/maps-stablediffusion
|
[
"license:unknown",
"region:us"
] |
2023-05-04T09:21:39+00:00
|
{"license": "unknown"}
|
2023-05-04T09:21:39+00:00
|
|
91b0b6ca739d86d52f2c3ea86a7f3756bfc5da61
|
mathematicalmichael/city-transformers-data
|
[
"license:mit",
"region:us"
] |
2023-05-04T09:26:58+00:00
|
{"license": "mit"}
|
2023-05-04T09:26:58+00:00
|
|
9f264208386db5614c86d4b4a8a1059ed4b07016
|
# Dataset Card for "bert_pretrain"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gsgoncalves/bert_pretrain
|
[
"region:us"
] |
2023-05-04T10:00:07+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24500165181, "num_examples": 80462898}], "download_size": 14400389437, "dataset_size": 24500165181}}
|
2023-05-04T10:11:12+00:00
|
63715b94a8d18e7c016b3ad15693e7cc38edeba8
|
# A small rocket images dataset
|
MilkCool/rockets
|
[
"license:mit",
"region:us"
] |
2023-05-04T10:11:30+00:00
|
{"license": "mit"}
|
2023-05-04T11:10:55+00:00
|
52f8126defee515e5d84016d39953d0a6b27912d
|
# Dataset Card for "ontonotes5.0-pt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
arubenruben/ontonotes5.0-pt
|
[
"region:us"
] |
2023-05-04T10:24:32+00:00
|
{"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PERSON", "2": "I-PERSON", "3": "B-NORP", "4": "I-NORP", "5": "B-FAC", "6": "I-FAC", "7": "B-ORG", "8": "I-ORG", "9": "B-GPE", "10": "I-GPE", "11": "B-LOC", "12": "I-LOC", "13": "B-PRODUCT", "14": "I-PRODUCT", "15": "B-DATE", "16": "I-DATE", "17": "B-TIME", "18": "I-TIME", "19": "B-PERCENT", "20": "I-PERCENT", "21": "B-MONEY", "22": "I-MONEY", "23": "B-QUANTITY", "24": "I-QUANTITY", "25": "B-ORDINAL", "26": "I-ORDINAL", "27": "B-CARDINAL", "28": "I-CARDINAL", "29": "B-EVENT", "30": "I-EVENT", "31": "B-WORK_OF_ART", "32": "I-WORK_OF_ART", "33": "B-LAW", "34": "I-LAW", "35": "B-LANGUAGE", "36": "I-LANGUAGE"}}}}], "splits": [{"name": "train", "num_bytes": 16511400, "num_examples": 1898}, {"name": "validation", "num_bytes": 2417378, "num_examples": 279}, {"name": "test", "num_bytes": 1564609, "num_examples": 163}], "download_size": 0, "dataset_size": 20493387}}
|
2023-05-12T09:01:49+00:00
|
88fe550e8ba2d92113471264c78649f903b558d0
|
# Dataset Card for "my_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
SuperrWu/my_dataset
|
[
"region:us"
] |
2023-05-04T11:02:49+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8337027.0, "num_examples": 4}], "download_size": 7674122, "dataset_size": 8337027.0}}
|
2023-05-04T11:02:57+00:00
|
0931c929f9506ce9f92ba148b68187e2d6e44112
|
inumulaisk/getmmdocs
|
[
"license:openrail",
"region:us"
] |
2023-05-04T11:13:19+00:00
|
{"license": "openrail"}
|
2023-05-04T11:13:40+00:00
|
|
057e19239bc281ff618be29affafd32c8e7b1588
|
# Dataset Card for "aardman-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gigant/aardman-images
|
[
"region:us"
] |
2023-05-04T11:39:23+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1843267.0, "num_examples": 17}], "download_size": 1844923, "dataset_size": 1843267.0}}
|
2023-05-04T12:12:32+00:00
|
502e57de96fc3d906fdf3ac1561b370710465d3a
|
# Dataset Card for "test-windows"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
polinaeterna/test-windows
|
[
"region:us"
] |
2023-05-04T11:53:13+00:00
|
{"builder_config": {"data_files": [{"split": "train", "pattern": "data/train-*"}, {"split": "random", "pattern": "data/random-*"}]}, "dataset_info": {"features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 16000, "num_examples": 1000}, {"name": "random", "num_bytes": 1600, "num_examples": 100}], "download_size": 0, "dataset_size": 17600}}
|
2023-05-04T14:01:34+00:00
|
e8c0f8d879aa755f8137e058840778b24602097e
|
# Dataset Card for "counterfact-one"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
derpyplops/counterfact-one
|
[
"region:us"
] |
2023-05-04T11:54:15+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2637010, "num_examples": 43838}], "download_size": 1474580, "dataset_size": 2637010}}
|
2023-05-04T12:01:42+00:00
|
4c2a85d783044daf101e2c72b3265c5a251f5a3a
|
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `/content/drive/MyDrive/image_and_text` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal usage sketch (assumed, not verified for this checkpoint):
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("jinlee74/ddpm-butterflies-128")
image = pipeline().images[0]  # generate one sample
image.save("sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Tian7/ddpm-butterflies-128/tensorboard?#scalars)
|
jinlee74/ddpm-butterflies-128
|
[
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-05-04T12:31:41+00:00
|
{"language": "en", "license": "apache-2.0", "library_name": "diffusers", "tags": [], "datasets": "/content/drive/MyDrive/image_and_text", "metrics": []}
|
2023-05-04T12:33:13+00:00
|
1c37de329d274e7f2f0872748060c9f43d444aa4
|
⚠️ **WARNING : THIS VERSION OF THE DATASET IS MODIFIED IN FORMAT AND CONTENT FROM THE ORIGINAL DATASET AVAILABLE [HERE](https://quaerofrenchmed.limsi.fr/). NESTED ENTITIES HAVE BEEN REMOVED AND THIS DATASET ONLY RETAINS THE LARGEST OF NESTED ENTITIES. OVERALL, THIS CORRESPONDS TO 80% OF THE ENTITIES ANNOTATED IN THE ORIGINAL DATASET.** ⚠️
The QUAERO French Medical Corpus has been initially developed as a resource for named entity recognition and normalization [1]. It was then improved with the purpose of creating a gold standard set of normalized entities for French biomedical text, that was used in the CLEF eHealth evaluation lab [2][3].
A selection of MEDLINE titles and EMEA documents were manually annotated. The annotation process was guided by concepts in the Unified Medical Language System (UMLS):
1. Ten types of clinical entities, as defined by the following UMLS Semantic Groups (Bodenreider and McCray 2003) were annotated: Anatomy (ANAT), Chemical and Drugs (CHEM), Devices (DEVI), Disorders (DISO), Geographic Areas (GEOG), Living Beings (LIVB), Objects (OBJC), Phenomena (PHEN), Physiology (PHYS), Procedures (PROC).
2. The annotations were made in a comprehensive fashion, so that nested entities were marked, and entities could be mapped to more than one UMLS concept. In particular: (a) If a mention can refer to more than one Semantic Group, all the relevant Semantic Groups should be annotated. For instance, the mention “récidive” (recurrence) in the phrase “prévention des récidives” (recurrence prevention) should be annotated with the category “DISORDER” (CUI C2825055) and the category “PHENOMENON” (CUI C0034897); (b) If a mention can refer to more than one UMLS concept within the same Semantic Group, all the relevant concepts should be annotated. For instance, the mention “maniaques” (obsessive) in the phrase “patients maniaques” (obsessive patients) should be annotated with CUIs C0564408 and C0338831 (category “DISORDER”); (c) Entities which span overlaps with that of another entity should still be annotated. For instance, in the phrase “infarctus du myocarde” (myocardial infarction), the mention “myocarde” (myocardium) should be annotated with category “ANATOMY” (CUI C0027061) and the mention “infarctus du myocarde” should be annotated with category “DISORDER” (CUI C0027051)
For more details, please refer to [the official webpage](https://quaerofrenchmed.limsi.fr/).
In this format, each word of the sentence has an associated ner_tag, corresponding to the type of clinical entity, here is the mapping :
```
0: "O"
1: "DISO"
2: "PROC"
3: "ANAT"
4: "LIVB"
5: "CHEM"
6: "PHYS"
7: "PHEN"
8: "GEOG"
9: "DEVI"
10: "OBJC"
```
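As an illustrative sketch of how this mapping can be used (the `ID2LABEL` dictionary follows the list above; the function name and example are assumptions, not part of the official loader):

```python
# Hypothetical sketch: decoding integer ner_tags back to entity-type labels,
# using the mapping listed above.
ID2LABEL = {
    0: "O", 1: "DISO", 2: "PROC", 3: "ANAT", 4: "LIVB",
    5: "CHEM", 6: "PHYS", 7: "PHEN", 8: "GEOG", 9: "DEVI", 10: "OBJC",
}

def decode_tags(tag_ids):
    """Map a sequence of integer tags to their entity-type strings."""
    return [ID2LABEL[i] for i in tag_ids]

# Example: "infarctus du myocarde" annotated as a DISO mention (3 tokens)
print(decode_tags([1, 1, 1]))  # ['DISO', 'DISO', 'DISO']
```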
[1] Névéol A, Grouin C, Leixa J, Rosset S, Zweigenbaum P. The QUAERO French Medical Corpus: A Ressource for Medical Entity Recognition and Normalization. Fourth Workshop on Building and Evaluating Ressources for Health and Biomedical Text Processing - BioTxtM2014. 2014:24-30
[2] Névéol A, Grouin C, Tannier X, Hamon T, Kelly L, Goeuriot L, Zweigenbaum P. (2015) Task 1b of the CLEF eHealth Evaluation Lab 2015: Clinical Named Entity Recognition. CLEF 2015 Evaluation Labs and Workshop: Online Working Notes, CEUR-WS, September, 2015.
[3] Névéol A, Cohen, KB, Grouin C, Hamon T, Lavergne T, Kelly L, Goeuriot L, Rey G, Robert A, Tannier X, Zweigenbaum P. Clinical Information Extraction at the CLEF eHealth Evaluation lab 2016. CLEF 2016, Online Working Notes, CEUR-WS 1609.2016:28-42.
|
mnaguib/QuaeroFrenchMed
|
[
"task_categories:token-classification",
"language:fr",
"medical",
"region:us"
] |
2023-05-04T12:35:50+00:00
|
{"language": ["fr"], "task_categories": ["token-classification"], "tags": ["medical"]}
|
2023-09-13T19:01:06+00:00
|
6866796254c76dbb7bb51784584656c904d28c5a
|
metro1/databricks-custom-dataset
|
[
"license:cc-by-sa-4.0",
"region:us"
] |
2023-05-04T13:12:01+00:00
|
{"license": "cc-by-sa-4.0"}
|
2023-05-04T13:13:39+00:00
|
|
d91ea6f10e76336b99923e87d0c91746e4fa3f58
|
# Dataset Card for Dating historical color images
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://graphics.cs.cmu.edu/projects/historicalColor/
- **Repository:**
- **Paper:** https://doi.org/10.1007/978-3-642-33783-3_36
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
> We introduce the task of automatically estimating the age of historical color photographs. We suggest features which attempt to capture temporally discriminative information based on the evolution of color imaging processes over time and evaluate the performance of both these novel features and existing features commonly utilized in other problem domains on a novel historical image data set. For the challenging classification task of sorting historical color images into the decade during which they were photographed, we demonstrate significantly greater accuracy than that shown by untrained humans on the same data set. Additionally, we apply the concept of data-driven camera response function estimation to historical color imagery, demonstrating its relevance to both the age estimation task and the popular application of imitating the appearance of vintage color photography.
### Supported Tasks and Leaderboards
This dataset is intended for training models to predict the time period in which color photographs were taken. The task can be approached either as image classification or as image regression.
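As a sketch of the regression framing (the decade labels follow the class list in the dataset metadata; everything else here is an assumption, not part of the dataset itself):

```python
# Hypothetical sketch: mapping the dataset's decade class labels to a numeric
# target suitable for image regression. Class ids follow the order in the
# dataset metadata (1930s .. 1970s).
DECADES = ["1930s", "1940s", "1950s", "1960s", "1970s"]

def label_to_year(label_id: int) -> int:
    """Return the starting year of the decade for a given class id."""
    return 1930 + 10 * label_id

print(label_to_year(4))  # 1970
```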
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
There is a single training split since the original dataset doesn't define a train-test split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
> Beginning with a collection of approximately 230,000 Flickr images taken prior to 1980, we perform automated removal of monochromatic images. The remaining images are manually inspected to remove non-photographic content (e.g. scans of vintage artwork) and any remaining monochromatic images. Finally, a random subsampling and decimation is performed to create a dataset containing an equal number of historical color images for each decade (1,375 images total).
### Annotations
#### Annotation process
Annotations are based on metadata available on Flickr.
#### Who are the annotators?
It appears that annotations are sourced via Flickr, so the annotators are presumed to be those uploading the images to Flickr. This will include individuals as well as cultural heritage institutions.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
|
biglam/dating-historical-color-images
|
[
"task_categories:image-classification",
"size_categories:1K<n<10K",
"history ",
"lam",
"photography",
"region:us"
] |
2023-05-04T13:35:53+00:00
|
{"size_categories": ["1K<n<10K"], "task_categories": ["image-classification"], "pretty_name": "Dating Historical Color Images", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "1930s", "1": "1940s", "2": "1950s", "3": "1960s", "4": "1970s"}}}}], "splits": [{"name": "train", "num_bytes": 221261063, "num_examples": 1325}], "download_size": 222265856, "dataset_size": 221261063}, "tags": ["history ", "lam", "photography"]}
|
2023-05-05T15:22:09+00:00
|
02104f21ee621b42690c936bee8ad5fdb5e3fdf0
|
sin3768/sanaDS
|
[
"license:unknown",
"region:us"
] |
2023-05-04T13:43:45+00:00
|
{"license": "unknown"}
|
2023-06-18T15:49:44+00:00
|
|
493c07d2e7c8a74ee955a90ac23a3c18fa0e2552
|
Taken from https://www.fimfiction.net/user/116950/Fimfarchive. Contains MLP fanfics up to March 1, 2023, in EPUB format.
|
tekkithorse/Fimfarchive
|
[
"region:us"
] |
2023-05-04T13:56:12+00:00
|
{}
|
2023-05-04T15:54:44+00:00
|
8e69b58d729339cb7771133c70ee18b527fd4ae3
|
# Dataset Card for "pickapic_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pickapic-anonymous/pickapic_v1
|
[
"region:us"
] |
2023-05-04T13:56:19+00:00
|
{"dataset_info": {"features": [{"name": "are_different", "dtype": "bool"}, {"name": "best_image_uid", "dtype": "string"}, {"name": "caption", "dtype": "string"}, {"name": "created_at", "dtype": "timestamp[ns]"}, {"name": "has_label", "dtype": "bool"}, {"name": "image_0_uid", "dtype": "string"}, {"name": "image_0_url", "dtype": "string"}, {"name": "image_1_uid", "dtype": "string"}, {"name": "image_1_url", "dtype": "string"}, {"name": "jpg_0", "dtype": "binary"}, {"name": "jpg_1", "dtype": "binary"}, {"name": "label_0", "dtype": "float64"}, {"name": "label_1", "dtype": "float64"}, {"name": "model_0", "dtype": "string"}, {"name": "model_1", "dtype": "string"}, {"name": "ranking_id", "dtype": "int64"}, {"name": "user_id", "dtype": "int64"}, {"name": "num_example_per_prompt", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 193273338802, "num_examples": 583747}, {"name": "validation", "num_bytes": 5638295249, "num_examples": 17439}, {"name": "test", "num_bytes": 4621428929, "num_examples": 14073}, {"name": "validation_unique", "num_bytes": 178723392, "num_examples": 500}, {"name": "test_unique", "num_bytes": 178099641, "num_examples": 500}], "download_size": 202289409202, "dataset_size": 203889886013}}
|
2023-05-04T15:25:58+00:00
|
84373525f034e98d0a568c7a5e5c1518ef90a1ee
|
TryMore/n_grams_probability
|
[
"license:openrail",
"region:us"
] |
2023-05-04T14:00:18+00:00
|
{"license": "openrail"}
|
2023-05-14T12:41:54+00:00
|
|
8ef6cd98fe2d0f54d136913efb6bda94032cb2c4
|
# Dataset Card for "push_to_hub_single_config"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
polinaeterna/push_to_hub_single_config
|
[
"region:us"
] |
2023-05-04T14:01:47+00:00
|
{"builder_config": {"data_files": [{"split": "train", "pattern": "data/train-*"}, {"split": "random", "pattern": "data/random-*"}]}, "dataset_info": {"features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1600, "num_examples": 100}, {"name": "random", "num_bytes": 800, "num_examples": 50}], "download_size": 4042, "dataset_size": 2400}}
|
2023-05-04T14:01:51+00:00
|
23a7dc4c06d850b1247a3ff8ecdf946ac73d1aa5
|
# Dataset Card for "push_to_hub_many_configs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
polinaeterna/push_to_hub_many_configs
|
[
"region:us"
] |
2023-05-04T14:03:50+00:00
|
{"builder_configs": [{"config_name": "custom", "data_files": [{"split": "train", "pattern": "custom/train-*"}, {"split": "random", "pattern": "custom/random-*"}]}, {"config_name": "default", "data_files": [{"split": "train", "pattern": "data/train-*"}, {"split": "random", "pattern": "data/random-*"}]}], "dataset_info": [{"config_name": "custom", "features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1600, "num_examples": 100}, {"name": "random", "num_bytes": 160, "num_examples": 10}], "download_size": 3650, "dataset_size": 1760}, {"config_name": "default", "features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1600, "num_examples": 100}, {"name": "random", "num_bytes": 800, "num_examples": 50}], "download_size": 4042, "dataset_size": 2400}]}
|
2023-06-01T14:47:17+00:00
|
4eddd6742595d662d48f50aaeb144f1ee1d18b8e
|
# Dataset Card for "push_to_hub_singe_nondefault_config"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
polinaeterna/push_to_hub_singe_nondefault_config
|
[
"region:us"
] |
2023-05-04T14:04:53+00:00
|
{"dataset_info": {"config_name": "custom", "features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1600, "num_examples": 100}, {"name": "random", "num_bytes": 160, "num_examples": 10}], "download_size": 3650, "dataset_size": 1760}, "builder_config": {"config_name": "custom", "data_files": [{"split": "train", "pattern": "custom/train-*"}, {"split": "random", "pattern": "custom/random-*"}]}}
|
2023-05-04T14:04:57+00:00
|
9d4dc65ee3e824e66f8d09f035a6b73aa9c9b7de
|
Taken from the yuki archive data. Contains the html text from the /mlp/ board up until a few years ago. It sure would be interesting to have a dataset that's organized by each thread in a spreadsheet separated by row with the posts in the thread separated by column, with the post number in each cell.
|
tekkithorse/mlp-board-yuki-archive-html-text
|
[
"region:us"
] |
2023-05-04T14:06:55+00:00
|
{}
|
2023-05-04T15:18:07+00:00
|
17fa89e9659faa1a6744e86eeaaaae0bf7aeed5b
|
# Dataset Card for "pokemon_bulbapedia_descriptions_improved"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
matemato/pokemon_bulbapedia_all
|
[
"region:us"
] |
2023-05-04T14:16:54+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 101052207.0, "num_examples": 721}], "download_size": 84088630, "dataset_size": 101052207.0}}
|
2023-05-04T14:17:18+00:00
|
275e044ac3674ae1b1926b2275c90e75d1e70bc9
|
LLCaptainMorgan/Seidr_Image
|
[
"license:openrail",
"region:us"
] |
2023-05-04T14:31:22+00:00
|
{"license": "openrail"}
|
2023-05-04T14:31:22+00:00
|
|
a62745ae867396a1bc0f2e76148356f957840232
|
# AutoTrain Dataset for project: rwlv_summarizer
## Dataset Description
This dataset has been automatically processed by AutoTrain for project rwlv_summarizer.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_platform": "Yelp",
"feat_line_of_business": "RWLV",
"text": "I decided to come to Resorts World to grab some sushi on a Sunday afternoon. I was so glad to see the trash gone from the parking garage. The grounds outside the building were so much nicer than my first visit. Planters were finished and the place was clean. It looked good. All the employees that I encountered were just as nice and helpful as my first visit. Bathrooms were clean. Food was great! My only complaint is that I couldn't believe how hard it was to gamble 73 cents left on my ticket! I mean they really stick it to you here. Some of the machines minimum bets were some crazy friggin number like 78 cents. Oh well. Get those pennies Resorts World. I will be back to try more food and maybe next time I'll stick with the tables. Come see Vegas newest Casino if you can.",
"feat_reactions": 0.0,
"feat_ratings": 4,
"feat_sentiment_pys": "POS",
"feat_sentiment_vad": "POS",
"feat_sentiment_tb": "POS",
"feat_sentiment_rat": "POS",
"feat_sentiment_gpt": "POS",
"feat_contextual": "facilities",
"feat_intention": "compliment",
"feat_intention_refined": "compliment",
"feat_refined_gpt": "POS",
"target": "positive review of resorts world with improved parking and grounds, friendly",
"feat_emotion": "others"
},
{
"feat_platform": "Yelp",
"feat_line_of_business": "RWLV",
"text": "The check-in line is extremely long and at the Hilton they seem understaffed. We went to the pool today. Granted it is 103\u00b0 outside however the pool is freezing. There is such thing as too cold. I did however get a Coca-Cola for nine dollars. Yes nine dollars for one can of Coke.",
"feat_reactions": 7.0,
"feat_ratings": 2,
"feat_sentiment_pys": "NEU",
"feat_sentiment_vad": "POS",
"feat_sentiment_tb": "NEG",
"feat_sentiment_rat": "NEG",
"feat_sentiment_gpt": "NEG",
"feat_contextual": "price",
"feat_intention": "complaint",
"feat_intention_refined": "complaint",
"feat_refined_gpt": "NEG",
"target": "long check-in, understaffed, freezing pool, expensive",
"feat_emotion": "others"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_platform": "Value(dtype='string', id=None)",
"feat_line_of_business": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)",
"feat_reactions": "Value(dtype='float64', id=None)",
"feat_ratings": "Value(dtype='int64', id=None)",
"feat_sentiment_pys": "Value(dtype='string', id=None)",
"feat_sentiment_vad": "Value(dtype='string', id=None)",
"feat_sentiment_tb": "Value(dtype='string', id=None)",
"feat_sentiment_rat": "Value(dtype='string', id=None)",
"feat_sentiment_gpt": "Value(dtype='string', id=None)",
"feat_contextual": "Value(dtype='string', id=None)",
"feat_intention": "Value(dtype='string', id=None)",
"feat_intention_refined": "Value(dtype='string', id=None)",
"feat_refined_gpt": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)",
"feat_emotion": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1539 |
| valid | 385 |
|
joelorellana/autotrain-data-rwlv_summarizer
|
[
"task_categories:summarization",
"language:en",
"region:us"
] |
2023-05-04T14:32:04+00:00
|
{"language": ["en"], "task_categories": ["summarization"]}
|
2023-05-04T14:32:56+00:00
|
c752103ff5ecc5ffc558336e3b742da593c2476c
|
This document serves as an overview of the different mechanisms and areas of governance in the BigCode project.
It aims to support transparency by providing relevant information about choices that were made during the project to the broader public,
and to serve as an example of intentional governance of an open research project that future endeavors can leverage to shape their own approach.
The first section, **[Project Structure](https://huggingface.co/datasets/bigcode/governance-card#1-project-structure)**, covers the project organization, its stated goals and values, its internal decision processes, and its funding and resources.
The second section, **[Data and Model Governance](https://huggingface.co/datasets/bigcode/governance-card#2-data-and-model-governance)**, covers decisions relating to the questions of data subject consent, privacy, and model release.
# 1. Project Structure
## 1.a. Goals and Values
### Project Overview
BigCode is an open scientific collaboration working on the responsible development and use of large language models for code, aiming to empower the machine learning and open source communities through open governance.
Code LLMs enable the completion and synthesis of code, both from other code snippets and natural language descriptions, and can be used across a wide range of domains, tasks, and programming languages. These models can, for example, assist professional and citizen developers with building new applications.
One of the challenges typically faced by researchers working on code LLMs is the lack of transparency around the development of these systems. While a handful of papers on code LLMs have been published, they do not always give full insight into the development process, which hinders both external accountability and the ability of all but a few well funded research labs to meaningfully participate in shaping the technology.
BigCode is a community project jointly led by Hugging Face and ServiceNow. Both organizations committed research, engineering, ethics, governance, and legal resources to ensure that the collaboration runs smoothly and makes progress towards the stated goals. ServiceNow Research and Hugging Face have made their respective compute clusters available for large-scale training of the BigCode models, and Hugging Face hosts the datasets, models, and related applications from the community to make it easy for everyone to access and use.
An open invitation was extended to the global AI research community to join forces on the development of state-of-the-art code LLMs, with a focus on research topics such as:
* Constructing a representative evaluation suite for code LLMs, covering a diverse set of tasks and programming languages
* Developing new methods for faster training and inference of LLMs
* The legal, ethics, and governance aspects of code LLMs
The BigCode project is conducted in the spirit of open science. Datasets, models, and experiments are developed through collaboration and released with permissive licenses back to the community. All technical governance takes place within working groups and task forces across the community.
As code LLMs are developed with data from the open-source community, we believe open governance can help to ensure that these models are benefiting the larger AI community. We developed tools to give code creators agency over whether their source code is included in the training data, and to find approaches that give attribution to developers when models output near-copies of the training data contained in The Stack.
### Technical and Performance Goals
The overarching technical goal envisioned before the project was announced was to train and release a 12-billion parameter model that matches Codex as described in [this research paper](https://arxiv.org/abs/2107.03374). This model from OpenAI was never released and is only available as an API service (code-cushman-001), although it is not entirely clear whether the served model matches the one described in the paper. It has also been [suggested](https://thakkarparth007.github.io/copilot-explorer/posts/copilot-internals#other-random-tidbits) that this model is used in GitHub Copilot. Our original plan was to compare model performance on HumanEval and APPS, but along the way, we recognized the need for creating an extensive evaluation suite for Code LLMs.
The project ended up breaking the challenge into development phases, starting with the collection of permissively licensed repositories from GitHub. This initial phase was led by the ServiceNow team over several months prior to the official launch of BigCode. It involved inventorying active GitHub repository names, managing the effort to download those repositories, filtering to exclude large files and duplicates, and detecting the licenses used for each repository. This effort ultimately resulted in the creation of [The Stack](https://arxiv.org/abs/2211.15533), a source code dataset that marked the first milestone for the project.
Two cycles of model development were conducted by the BigCode community. The first cycle took place in November-December 2022, and culminated with the release of SantaCoder, a 1.1B parameter model trained on the Java, JavaScript, and Python code from The Stack. In the next cycle, which was held from January to April 2023, the community scaled up their efforts and trained 15.5B parameter models on 1T tokens from The Stack. The resulting StarCoder models either match or surpass the code-cushman-001 model on a variety of coding benchmarks.
### Social Impact Dimensions and Considerations
Technical goals and considerations of social impact go hand in hand, and participate equally in _responsible_ development of code LLMs. Within the BigCode project, this means that organizational and technical choices were jointly shaped by the pursuit of the performance goals outlined above and by a best-effort approach to accounting for their impact on various categories of external stakeholders. In particular, participants of the BigCode project focused on the following three dimensions of social impact:
* **Consent of data subjects**: the success of code LLMs depends on their training data, which is the product of the professional and volunteer work of software developers. Since training large models constitutes a novel use and transformation of this work, it poses new questions and challenges with respect to the wishes and rights of the developers.
* **Privacy**: investigations into the behaviors of previous large-scale code LLMs outlined privacy risks when the models can be prompted to generate private information contained in its training data. Addressing these risks was the focus of several related efforts in BigCode.
* **Software safety and security**: [recent work](https://arxiv.org/abs/2207.14157) has also shed light on different hazards that are unique to or exacerbated by code LLMs, including their dual use potential in facilitating malware generation or their likelihood of recommending code that includes security flaws.
We found that while these considerations did sometimes result in trade-offs between the performance goals and social impact concerns, they were more often better addressed by developing new technical and organizational tools, which we describe in the rest of this document and share as an important outcome of the BigCode project so they can be leveraged by future similar endeavors.
## 1.b. Organizers and Participants
### Inception
The idea for the BigCode Project came about in Utrecht during a discussion initiated by Harm de Vries (ServiceNow Research) with Thomas Wolf (Hugging Face). Inspired by the BigScience Project, Harm recognized Hugging Face's shared vision of developing open and responsible large language models for code, and approached Thomas to explore the idea of a jointly led open scientific collaboration with the global machine learning and open source communities. The visions were indeed aligned, and work got started to initiate the project.
A research collaboration agreement between ServiceNow and Hugging Face created the enabling framework for the project, and set out the terms for rallying the broader scientific community at large to work towards developing, training, exploring, and releasing large foundation models for code.
The scope of the collaboration covered the preparation of training data, including developing tools for downloading publicly accessible code data and for running license detectors, developing tools for filtering data sources based on approved licenses and file type, and to release this training data to the AI community.
The scope also covered the training of dense transformer models that adopt the mechanism of self-attention through the Megatron-LM architecture, training of retrieval augmented code generation models, and developing tools to diagnose instabilities arising from training transformer models at scale.
The collaboration would also prepare an evaluation suite with the help of the scientific community 1) to develop the tools and scripts needed to use existing program synthesis benchmarks such as HumanEval and CodexGlue, and 2) to construct and release openly available benchmarks that measure desirable capabilities of large multi-lingual code LLMs and tasks such as program synthesis, text2code, and code summarization.
Key milestones identified during the initial stages were focused on developing a community engagement plan, a first attempt at constructing an evaluation suite over multiple programming languages, investigating viable sources of data, and then training and releasing a 12B parameter model that matches Codex performance on HumanEval and APPS.
### Participants
BigCode is a research collaboration and is open to participants who
1. have a professional research background and
2. are able to commit time to the project.
In general, we expect applicants to be affiliated with a research organization (either in academia or industry) and to work on the technical, ethical, or legal aspects of LLMs for coding applications. Throughout the project, the community invited guest subject matter experts to participate in certain discussions, which increased the number of participants in chat channels beyond the number of researchers who had formally applied to participate in the research.
BigCode has 675 participants with 629 members across the research community (including from Hugging Face and ServiceNow) from 62 countries. The top 5 countries include USA (222), India (60), UK (36), Canada (35), and Germany (30). The community communicates across a total of 48 Slack channels, including Steering Committee (3 channels), Working Groups (7 channels), Task Forces (25 channels), and General Community (13 channels).
Everyone who joins the project is required to follow the [BigCode Code of Conduct](https://www.bigcode-project.org/docs/about/code_of_conduct/) and understand [how we manage intellectual property](https://www.bigcode-project.org/docs/about/ip/), and is encouraged to introduce themselves and to join any working group or task force that aligns with their own interests. If a group does not cover their interests, they are encouraged to pitch their ideas and to take a leadership role in a new working group or task force with the approval of the Steering Committee.
### Project Governance
The BigCode project is governed by a steering committee jointly led by Harm de Vries (ServiceNow) and Leandro von Werra (Hugging Face), and supported by a core team comprised of Raymond Li, Denis Kocetkov, and Sean Hughes from ServiceNow, and Carlos Muñoz Ferrandis, Loubna Ben Allal, and Thomas Wolf of Hugging Face. Through the course of the project, additional members were added to the core team, including Yacine Jernite, Armel Randy, Joel Lamy-Poirier.
The Steering Committee is effectively responsible for organizing and managing the project (including research strategy and publication goals), and provides oversight across all working groups. Decisions that cannot be addressed at the community level would be elevated to the lead of the Working Group for facilitated discussion, with further inputs and tie-breaker decision making by the Steering Committee as a last resort. Governance for the project is open, meaning that the BigCode project encourages anyone from the community to join any working group or task force of interest, and for them to engage and contribute to work and decision making in the group.
### Timeline, Milestones, and Community Events
The BigCode project was announced on [September 26, 2022](https://twitter.com/BigCodeProject/status/1574427555871875072?s=20). We shared the project's goal of training a state-of-the-art ~15B parameter language model for code on the [ServiceNow Research](https://www.servicenow.com/research/) in-house GPU cluster. With an adapted version of Megatron-LM, we planned to train the large model on distributed infrastructure.
On [October 6, 2022](https://youtu.be/8cUpsXIEbAo) ServiceNow and Hugging Face held a webinar with the BigCode Community to provide strategic direction for the project and research goals.
On [October 27, 2022](https://twitter.com/BigCodeProject/status/1585631176353796097?s=20) we introduced The Stack, a large dataset of more than 3 TB of permissively licensed source code. [Our paper](https://arxiv.org/abs/2211.15533) described the details of the dataset collection, presented a brief dataset analysis, and showed promising results on the HumanEval benchmark. Our experimental results show that near-deduplication is an important pre-processing step for achieving competitive results on text2code benchmarks. We released all permissively licensed files for 30 common programming languages, along with a near-deduplicated version.
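The near-deduplication step called out above can be illustrated with a small, self-contained sketch. The production pipeline used scalable MinHash-based matching over millions of files; the brute-force Jaccard comparison, 5-token shingles, and 0.85 threshold below are illustrative assumptions, not the actual BigCode implementation:

```python
def shingles(code: str, k: int = 5) -> set:
    """Return the set of k-token shingles (n-grams) for a code file."""
    tokens = code.split()
    return {" ".join(tokens[i:i + k]) for i in range(max(len(tokens) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def near_duplicates(files: dict, threshold: float = 0.85) -> list:
    """Flag pairs of files whose shingle similarity exceeds the threshold."""
    names = list(files)
    sigs = {name: shingles(files[name]) for name in names}
    pairs = []
    for i, x in enumerate(names):
        for y in names[i + 1:]:
            if jaccard(sigs[x], sigs[y]) >= threshold:
                pairs.append((x, y))
    return pairs
```

At TB scale, a locality-sensitive hashing index would replace the quadratic pairwise loop, which is what makes MinHash-style approaches practical for a corpus the size of The Stack.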
On [November 15, 2022](https://twitter.com/BigCodeProject/status/1592569651086905344?s=20) we introduced a new tool called “Am I in The Stack” that allows developers to check whether any data from their GitHub repositories is included in The Stack. We also introduced v1 of the BigCode Opt-Out process, which gives agency back to Developers by providing a way for them to request that their data be removed from the dataset.
On [November 23, 2022](https://twitter.com/LoubnaBenAllal1/status/1595457541592346634?s=20) Loubna Ben Allal shared details of our initial approach for how we planned to tackle the de-identification to remove personally identifiable information (PII) from The Stack.
On [November 29, 2022](https://twitter.com/BigCodeProject/status/1597589730425974786?s=20) we shared the Weights and Biases dashboards for our first models so that the broader community could follow along.
On [December 1, 2022](https://twitter.com/BigCodeProject/status/1598345535190179843?s=20) we released The Stack v1.1, with 358 programming languages included and more than double the data, going from 3TB to 6.4TB, with the help of the legal tech community, which identified 193 viable permissive source code license types. Before releasing v1.1 we also removed the first batch of repositories based on opt-out requests.
On [December 2, 2022](https://twitter.com/BigCodeProject/status/1598734387247550481?s=20) we held an in-person meetup alongside NeurIPS 2022 in New Orleans with more than 75 members of the BigCode community, where we were able to make the connections to foster greater awareness and understanding of the BigCode project.
On [December 9, 2022](https://twitter.com/BigCodeProject/status/1601133018714112000?s=20), a member of the BigCode community held a similar meetup at EMNLP 2022 in Abu Dhabi, another opportunity to raise awareness of BigCode and to discuss our project with the NLP research community.
On [December 12, 2022](https://twitter.com/BigCodeProject/status/1602372753008386049?s=20) we sent out another message to raise awareness of “Am I in The Stack” and to inform developers about the option to opt-out from the dataset.
On [December 14, 2022](https://youtu.be/Kh8yXfJJfU4) Hugging Face and ServiceNow held a second webinar with the BigCode Community to review progress and provide an update on plans for ongoing research towards the 15B parameter model.
On [December 22, 2022](https://twitter.com/BigCodeProject/status/1605958778330849281?s=20) we released [SantaCoder](https://huggingface.co/bigcode/santacoder), a 1.1B multilingual large language model for code that outperforms much larger open-source models on both left-to-right generation and infilling.
The SantaCoder models are licensed under an open & responsible AI model license (CodeML [OpenRAIL-M v0.1](https://huggingface.co/spaces/bigcode/license)). These are AI-specific licenses enabling free use and distribution of the model while setting specific use restrictions (e.g. malware generation). We published a [detailed technical report](https://arxiv.org/abs/2301.03988) that included details of all the key contributions to the development of the model.
On [February 1, 2023](https://twitter.com/utopiah/status/1620722505664319488?s=20) members of the BigCode core team were invited to meet with the European Parliament Innovation Lab. At this meeting we [shared details](https://twitter.com/utopiah/status/1620735424351322114?s=20) of the project and answered questions from members of the Lab. Engaging with policymakers and regulators is an important part of the journey to inform and educate key stakeholders from the broader AI ecosystem.
On [March 20, 2023](https://twitter.com/BigCodeProject/status/1637874705645584384?s=20) we announced The Stack v1.2, which included The Stack Issues, The Stack Metadata, and The Stack Commits. With this release, we simplified the opt-out process and also removed opt-outs from developers where the request was received by February 2023. Along with this release, we provided access to a dataset of GitHub issues totalling 54GB, and we applied the same opt-out mechanism to these issues. The GitHub issues dataset is more conversational and could be helpful to train models to be used as coding assistants.
On [April 13, 2023](https://twitter.com/harmdevries77/status/1646524056538316805?s=20), inspired by discussions in the training working group, Harm de Vries shared an analysis of Chinchilla scaling laws on how much additional compute is needed to create smaller LLMs. These insights suggest we have not reached the limit of training smaller models on more tokens - an important consideration for future research.
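The back-of-the-envelope arithmetic behind that kind of analysis is easy to reproduce. The sketch below uses the standard ~6·N·D FLOPs approximation for training compute and Chinchilla's rough 20-tokens-per-parameter rule of thumb; both are approximations, not figures taken from the analysis itself:

```python
def train_flops(params: float, tokens: float) -> float:
    """Standard approximation: ~6 FLOPs per parameter per training token."""
    return 6 * params * tokens

def chinchilla_optimal_tokens(params: float) -> float:
    """Chinchilla rule of thumb: roughly 20 training tokens per parameter."""
    return 20 * params

# StarCoderBase: ~15.5B parameters trained on 1T tokens, i.e. far past the
# compute-optimal ~310B tokens -- deliberately "over-trained" so that the
# resulting model is smaller and cheaper to serve.
n_params, n_tokens = 15.5e9, 1e12
print(f"training compute ~ {train_flops(n_params, n_tokens):.2e} FLOPs")
print(f"Chinchilla-optimal tokens ~ {chinchilla_optimal_tokens(n_params):.2e}")
```

Training well past the Chinchilla-optimal token count trades extra training compute for lower inference cost, which is the point raised about smaller models absorbing more tokens.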
On [May 4, 2023](https://twitter.com/BigCodeProject/status/1654174941976068119?s=20) BigCode announced StarCoder and StarCoderBase, two code LLMs trained on permissively licensed data from GitHub, including 80+ programming languages, git commits, GitHub issues, and Jupyter notebooks. Similar to [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/), StarCoderBase is a ~15B parameter model trained on 1 trillion tokens. On top of StarCoderBase, a variant called StarCoder was trained on an additional 35B tokens of Python.
### Supporting Resources and Funding
Understanding the costs of a project like BigCode can help ground conversations about the trade-offs involved in the development of code LLM technology more broadly, helping understand how various private and public institutions may participate in this development and allocate resources to maximize its overall benefits. We outline the major costs in terms of computation resources, human participation, and organization.
**Data collection**
ServiceNow handled the data collection effort, assembling a raw dataset of 5.28B files with a total size of 92 TB, and filtered it down to build The Stack.
**Compute and emissions**
We trained SantaCoder on the ServiceNow cluster using 96 Tesla V100 GPUs, and StarCoder on a Hugging Face GPU cluster with 512 A100 80GB GPUs distributed across 64 nodes.
We report the carbon footprint of training these models:
* SantaCoder: Based on the total number of GPU hours that training took (14,284) and an average power usage of 300W per GPU, this adds up to 4285 kWh of electricity consumed during the training process. Multiplied by the carbon intensity of the energy of the Montreal location (0.029 kgCO2e per kWh) and assuming an average Power Usage Effectiveness of 1.2, this results in 124 kg of CO2eq emitted.
* StarCoderBase: 320,256 GPU hours; 280W per GPU; 89671.68 kWh of electricity. Carbon intensity of the energy of the us-west-2 AWS location: 0.15495 kgCO2e per kWh; average Power Usage Effectiveness across AWS datacenters: 1.2. Total emissions: 16.68 tonnes of CO2eq.
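Both estimates follow the same simple formula: energy is GPU-hours times per-GPU power times the datacenter's Power Usage Effectiveness, and emissions are energy times the local grid's carbon intensity. A sketch that approximately reproduces the StarCoderBase figure from the numbers above:

```python
def training_emissions_kg(gpu_hours: float, watts_per_gpu: float,
                          kgco2_per_kwh: float, pue: float = 1.2) -> float:
    """Estimate training emissions in kg CO2eq.

    energy (kWh) = GPU-hours x per-GPU power (kW) x PUE
    emissions    = energy x grid carbon intensity (kgCO2eq per kWh)
    """
    energy_kwh = gpu_hours * (watts_per_gpu / 1000) * pue
    return energy_kwh * kgco2_per_kwh

# StarCoderBase: 320,256 GPU hours at 280 W, us-west-2 grid intensity
# 0.15495 kgCO2e/kWh, assumed PUE of 1.2.
print(training_emissions_kg(320_256, 280, 0.15495) / 1000)  # ~16.7 tonnes CO2eq
```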
**ServiceNow and Hugging Face employees working on BigCode**
The estimated time commitment from employees of the host institutions corresponds to 6 full-time employees for the duration of the project.
**Estimated volunteer hours across the project**
The time commitment from volunteers is harder to estimate given the large number of participants and the variety of time investments across phases and participants. At a minimum, we estimate overall time commitment from volunteers matched time commitment from employees of the host institutions.
**Community events and appreciation** ServiceNow and Hugging Face organized a community meetup that coincided with NeurIPS 2022 in New Orleans, USA. The budget for the event was approximately \$6,000 from ServiceNow Research for the venue and hospitality. Hugging Face also provided promotional items including stickers and t-shirts at the event, and sent complimentary BigCode-branded t-shirts to contributors named on the research paper.
**Data annotation** Hugging Face funded the data annotation services from Toloka, with a total outlay of $39,000 paid to crowd workers. Since this was a research project, Toloka provided free consulting and agreed to waive the fees for running the annotation tasks on their platform.
# 2. Data and Model Governance
## 2.a. Data Governance
### Data Collection and Management Plan
In the course of the BigCode project, we collected two main datasets. The primary training dataset is The Stack, which was obtained by gathering public code files, issues, and commits from GitHub. To collect Github repositories, we first extracted a list of repositories from [GHArchive](https://www.gharchive.org/) and subsequently cloned all of them using a large CPU cluster. We also used the data from GHArchive to extract the Github issues. The git commits were gathered from a public BigQuery service. Additionally, we collected a dataset of annotations of several kinds of private information on a subset of The Stack to support our privacy risk mitigation efforts.
The legal basis for data collection under fair use and with regards to GDPR and the corresponding case law are still evolving. In this context, the data collection and data management plans were carefully crafted with support from leading experts in the open source and legal tech community that participated in the Legal, Ethics, Governance Working Group in a best-effort approach to reflect current understandings of legal requirements for data collection and management.
**The Stack Dataset Access and Management** The StarCoder model was trained on The Stack v1.2, which exclusively contains 6.4TB of [permissively licensed](https://blueoakcouncil.org) data from GitHub repositories, processed from an original source dataset of 102TB. Access and management follow this schema:
* **What data can be accessed:** the 6.4TB of processed data can be accessed through the Hugging Face Hub, while the original 102TB are only accessible to the stewards of the project for the purposes of enabling the research and to support future internal and external requirements that may arise, for example to search the full dataset to recall licenses, determine code provenance, and attribution.
* **What are the conditions for accessing the data:** users are able to inspect the dataset via the Dataset Card and embedded Dataset Preview, but are required to agree to the [Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) for The Stack before being able to download it. This includes the requirements to 1) abide by the terms of original source code licenses, including attribution clauses when required (The Stack provides provenance information for each data point), 2) agree to update copies of The Stack to the most recent usable version specified [here](https://huggingface.co/datasets/bigcode/the-stack/discussions/7), and 3) include the Terms of Use and require users to agree to it if a copy is to be hosted, shared, or otherwise provided. As of May 3, 2023, The Stack had been downloaded 50,200 times.
* **How can a data subject request that their data be removed:** we provide an opt-out form that lets people opt out of having any code or text they put on GitHub be included in The Stack. Additionally, anyone who is concerned about specific data they have encountered in The Stack, for example relating to PII, malicious code, or code that has an incorrect license or attribution, can email contact@bigcode-project.org. At the time of the data processing for the StarCoder model training, 44 people had opted out of The Stack and associated repositories were removed.
* **How often is the data updated:** For as long as we are maintaining The Stack dataset, we will provide regular updates to the dataset to remove data that has been flagged since the last version. This includes data that has been opted out, and data that was flagged as containing PII, malicious code or using a non-permissive license since the previous release. The current plan is to update the dataset every 3 months, although the schedule may change based on the volume of requests received. If we are not in a position to continue maintaining the dataset, we plan to stop distributing it in its current format and update its terms of use to limit its range of applications further.
**PII Dataset Access and Management** In order to support our efforts to mitigate the risk that the model may leak private information, we selected 12,000 samples of code from The Stack and annotated them to detect PII using crowd-sourcing. The resulting dataset was used to train a PII detection model that we used to detect and then mask PII (Names, Emails, IP addresses, Keys, Passwords) from our StarCoder training dataset.
* **What data can be accessed:** the data is hosted as a gated dataset on the Hugging Face Hub. The dataset will be made available to researchers on a case-by-case basis for research projects that require access, in addition to the original team who developed the dataset.
* **What are the conditions for accessing the data:** researchers who want to access the dataset need to request access and be approved by the maintainers as well as agree with the dataset's Terms of Use
* **How can a data subject request that their data be removed:** as a derived dataset of The Stack, the PII dataset will be updated to reflect data that has been opted out from the source dataset.
* **How often is the data updated:** similarly, following The Stack terms of use, the PII Dataset will be updated as often as the Stack if some of the files it contains have been opted out.
### Consent of Data Subjects
**Between implicit and explicit consent** One of the goals of BigCode is to give developers agency over their source code and let them decide whether or not it can be used to develop and evaluate LLMs. Software developers typically rely on licenses to express how they want their work to be re-used; in particular, developers who choose Open Source licenses often do so because they want their code to be broadly re-used. This motivated us to start by selecting data from repositories that met the following criteria:
* The repository has an open source license attached - open source, while chosen for very different reasons by different people, typically indicates a willingness to have one's work reused or adapted
* The license does not have an attribution clause - attribution is a difficult technical problem for code LLMs. Since we cannot guarantee that the model will be used in a way that attributes its generations to specific training data in a way that satisfies the intent of the licensor, we chose to only keep licenses without an attribution clause
Selecting repositories based on licenses is only the first step, however, as many of these licenses were chosen before the recent developments in code LLMs. Thus, we complement this initial approach by also giving repository owners the ability to **opt out** of having their repositories included in The Stack. We see this approach as a meaningful step forward in improving the agency of data subjects in the development of code LLMs, and we present both the tools we developed to support it and its known limitations in the rest of this section.
**Technical tools to support opt-out** We developed a tool called [Am I in The Stack](https://hf.co/spaces/bigcode/in-the-stack) to help developers inspect The Stack dataset and see whether any of their repositories have been included and might be used for training LLMs. If that is the case, we show them a custom link that allows them to easily send a request through GitHub, in two clicks if they are already logged in. We chose to mirror the original platform governance by letting the repository owner decide whether code in a repository is included or not in the dataset. This also allows us to validate requests automatically, since the [GitHub username](https://www.bigcode-project.org/docs/about/the-stack/#what-data-can-i-request-be-removed-from-the-stack) must match the one used to submit the request. Validated requests and associated code pointers are stored so that the code does not appear in future versions of The Stack.
**Community feedback on the approach** In the period January-March 2023, members of the BigCode project conducted community research with individuals at specific organizations whose data is used in The Stack, namely [The Alan Turing Institute](https://turing.ac.uk) and [The Turing Way](https://the-turing-way.netlify.app/) as well as two open, international workshops [Open Data Day 2023](https://opendataday.org/events/2023/#designing-for-data-rights-in-the-ai-production-pipeline) and [Mozilla Festival 2023](https://schedule.mozillafestival.org/session/KAS9YF-1) with a session titled ‘Designing for Data Rights in the AI Production Pipeline’. These qualitative interviews and participatory co-design workshops included 50 participants primarily from North America and Europe with roles like research scientist, community manager, software engineer, and principal investigator (PI).
The outcomes from the community research can be summarized as follows: when it comes to governance of LLM datasets, participants feel that it is both **better to know** AND **better to have a choice**. Most participants had neutral to positive feelings about their permissively licensed data being used to train LLMs. While all had positive impressions of the "Am I in The Stack" tool, no one interviewed expressed a desire to actually opt out. The main takeaway seemed to be that participants found the most value in BigCode governance tools for their ability to raise awareness of data practices and to empower individuals and communities to take actions based on their specific needs. The co-created outputs can be viewed on this [MozFest Miro Board](https://miro.com/app/board/uXjVMeuvLR8=/?share_link_id=159151239611).
Additionally, during the first stage of the opt-out process, individuals **who chose to have their data removed from the Stack** were asked to specify the reasons for wanting their code to be excluded from the dataset. The responses revealed a few recurring themes, including:
* Preference for an opt-in approach instead of opt-out.
* Perception that it is unfair to use their code without compensation.
* Concerns about the current limitations of AI and the potential for model generations to be traced back to their work, resulting in potential legal liability.
* Belief that their code is of poor quality and unsuitable for AI training.
* Presence of PII in their code, which they do not wish to be publicly exposed.
The feedback form also revealed another limitation of the opt-out process. When code is licensed permissively or under a copy-left license, it can be duplicated to another repository, making it challenging to eliminate such copies if the copyright owner chooses to opt out. More work is necessary to create workable data control and consent mechanisms for the large-scale training data of LLMs.
### Private Information Handling
One significant concern with respect to privacy was the risk that the code LLM may generate private information found in its training data, including private tokens or passwords matched with identifiers or email addresses. Additionally, while users can (and have) requested that data be removed from The Stack dataset because it contains personal data, removing specific information from trained model weights after the fact remains an open technical challenge. In order to minimize this risk, we chose to apply automated PII redaction at the pre-processing stage during training.
Our first step toward automatic PII redaction was to create an annotated dataset for PII in code data, as we found that neither regular expression-based approaches nor existing commercial software for PII detection met our performance requirements.
In doing so, we aimed to balance the constraints of costs (fair compensation), time (the timing and time to complete the work was on the critical path for the project), and quality (to ensure that PII Detection Model training was not impacted).
While traditional data annotation services using salaried employees were considered, we decided to work with crowd-workers through Toloka after reviewing several service providers and their compensation practices, finding that most would not provide sufficient transparency and guarantees about worker compensation.
We selected pay and eligible countries of crowd-workers to ensure that 1. the absolute hourly wage was always higher than the US federal minimum wage (\$7.30), and 2. the hourly wage was equivalent to the highest state minimum wage in the US in terms of purchasing power parity (\$16.50 at the time of writing).
We engaged 1,399 crowd-workers across 35 countries in annotating a diverse dataset for PII in source code. Our PII detection model, trained on 22,950 secrets, achieves a 90% F1 score, surpassing regex-based tools, especially for secret keys. The PII annotations are available to approved individuals, and researchers and developers who are granted access are expected to uphold ethical standards and data protection measures. By making it accessible, our aim is to encourage further research and development of PII redaction technology.
Finally, we are also releasing **StarCoderData**, the pre-processed version of The Stack used to train the StarCoder model, which has its PII redacted using our model.
## 2.b. Model Governance
### Model Licensing
The model is released under an open and responsible AI model license agreement ([BigCode OpenRAIL-M](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)) which enables royalty free access and flexible use and sharing of it, while setting specific use restrictions for identified critical scenarios. Most importantly, the license agreement requires stakeholders wishing to share the model or a modified version of it: (i) to include the same set of use restrictions or a similar one in their legal agreements; (ii) to keep the model card and provide a similar one or one of better quality when sharing a modified version of the model (FAQ for the model license agreement [available here](https://www.bigcode-project.org/docs/pages/bigcode-openrail/)).
The BigCode OpenRAIL-M license agreement (i.e. the legal document itself) is available under a CC-BY-4.0 license. Therefore, any stakeholders can freely adopt the same license agreement for their models, or modify it for their specific AI artifacts. For more information about responsible AI licensing, please visit the RAIL Initiative webpage, [The Turing Way Handbook for ML researchers](https://the-turing-way.netlify.app/reproducible-research/licensing/licensing-ml.html) (Alan Turing Institute), or OECD AI [content](https://oecd.ai/en/wonk/rails-licenses-trustworthy-ai) on RAILs and trustworthy AI principles.
### Attribution Tool
With SantaCoder we released a tool for developers to check whether generated source code appears in The Stack; if so, the tool returns the likely matches with full attribution. We offer both a fast membership test to check whether code was part of the pretraining data and a full-text search tool. With StarCoder we are releasing a similar tool ([StarCoder Dataset Search](https://huggingface.co/spaces/bigcode/search)) enabling users to check the origin of the model output and respect any applicable licensing conditions.
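Under simplifying assumptions, the fast membership test can be sketched as an exact-match lookup over hashed, whitespace-normalized snippets; the released tools also support full-text and substring search, which this sketch omits.

```python
import hashlib

# Minimal sketch of a pretraining-membership test: hash normalized corpus
# snippets into a set, then probe generated code against it. This is an
# illustrative exact-match fast path, not the BigCode implementation.

def normalize(code: str) -> str:
    # collapse whitespace so trivial reformatting does not defeat the lookup
    return " ".join(code.split())

def build_index(corpus) -> set:
    return {hashlib.sha256(normalize(s).encode()).hexdigest() for s in corpus}

def in_pretraining_data(index: set, snippet: str) -> bool:
    return hashlib.sha256(normalize(snippet).encode()).hexdigest() in index
```

An exact-match index is fast and compact, but any non-whitespace edit defeats it, motivating the complementary full-text search tool.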
## Conclusion and Acknowledgements
### This is a living document
Please note that this is a living document that will evolve over time with the BigCode project.
The intention is to add more details about the project over time.
Please leave us comments in the Community if there are any questions or requests for more insights about the project governance.
Thank you for taking the time to read this document.
We hope it is useful.
Please give it a like to let us know it was helpful to you.
### Acknowledgments
The work presented in this card is the outcome of the efforts of many BigCode participants beyond the authors of the card.
Please refer to the published papers detailing this work for contributions,
e.g. [StarCoder](https://arxiv.org/abs/2305.06161),
[The Stack](https://huggingface.co/papers/2211.15533), and [SantaCoder](https://huggingface.co/papers/2301.03988).
### Cite As
```
@misc{bigcode_governance_card,
author = {Sean Hughes and
Harm de Vries and
Jennifer Robinson and
Carlos Muñoz Ferrandis and
Loubna Ben Allal and
Leandro von Werra and
Jennifer Ding and
Sebastien Paquet and
Yacine Jernite
},
title = {BigCode Governance Card},
booktitle = {BigCode},
year = {2023},
url = {https://doi.org/10.57967/hf/0635},
doi = {10.57967/hf/0635}
}
```
|
bigcode/governance-card
|
[
"license:cc-by-4.0",
"arxiv:2107.03374",
"arxiv:2211.15533",
"arxiv:2207.14157",
"arxiv:2301.03988",
"arxiv:2305.06161",
"doi:10.57967/hf/0635",
"region:us"
] |
2023-05-04T14:49:55+00:00
|
{"license": "cc-by-4.0"}
|
2023-05-25T06:25:57+00:00
|
f2eeb0478402d3d12f2fe6e78a10c4d83c70671a
|
Taken from https://www.kaggle.com/datasets/liury123/my-little-pony-transcript?select=pony_synopsis.csv. Contains the show script and various data inferred from it.
|
tekkithorse/mlp-show-scripts
|
[
"region:us"
] |
2023-05-04T14:52:43+00:00
|
{}
|
2023-05-21T22:35:44+00:00
|
ad21b2f1c1155299579152f7691bfffbb6b02c0f
|
acerbinky/autotrain-data-acerbinky
|
[
"language:en",
"license:artistic-2.0",
"doi:10.57967/hf/0609",
"region:us"
] |
2023-05-04T15:05:41+00:00
|
{"language": ["en"], "license": "artistic-2.0"}
|
2023-05-04T15:48:09+00:00
|
|
e2fc436cb5b29dfdee89317c20a62dbeb0d4cbeb
|
# Dataset Card for "test_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thanhduycao/test_1
|
[
"region:us"
] |
2023-05-04T15:10:40+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "w2v2_transcription", "dtype": "string"}, {"name": "WER", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1511494.0, "num_examples": 18}], "download_size": 0, "dataset_size": 1511494.0}}
|
2023-05-04T15:17:00+00:00
|
1a88b74dc2f833c81d821c5ee502ba6fb32bcf77
|
# Dataset Card for "Cyberpunk"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
SuperrWu/Cyberpunk
|
[
"region:us"
] |
2023-05-04T15:11:00+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8941897.0, "num_examples": 20}], "download_size": 0, "dataset_size": 8941897.0}}
|
2023-05-05T13:34:08+00:00
|
fec9b60abb07f00e47ae074b759a52ec2d6ae51f
|
# Dataset Card for "data-nonmembers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
JotDe/data-nonmembers
|
[
"region:us"
] |
2023-05-04T15:57:56+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2366233677.114, "num_examples": 18862}], "download_size": 2351059467, "dataset_size": 2366233677.114}}
|
2023-05-04T17:35:21+00:00
|
222ed6510edc65c6b6176864eaec3d3b5fa4014d
|
# Dataset Card for "aardman-images-w-prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gigant/aardman-images-w-prompts
|
[
"region:us"
] |
2023-05-04T16:09:33+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 1844694.0, "num_examples": 17}], "download_size": 1847874, "dataset_size": 1844694.0}}
|
2023-05-04T16:10:10+00:00
|
a0889e2e7e9dd1055b74f5460b034f320a1f1fcd
|
Qolor/Birdio
|
[
"region:us"
] |
2023-05-04T17:02:28+00:00
|
{}
|
2023-05-04T17:36:14+00:00
|
|
71b206ee83a9b56935d44016cbf6116c206d6d91
|
billyotieno/kq-customer-satisfaction-reviews
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-05-04T17:26:07+00:00
|
{"license": "cc-by-4.0"}
|
2023-05-04T17:26:07+00:00
|
|
d7a39376cf36d8f87a467b180a2fba511133ec13
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
keminglu/InstructOpenWiki
|
[
"task_categories:text-generation",
"size_categories:100M<n<1B",
"language:en",
"license:mit",
"region:us"
] |
2023-05-04T17:44:44+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["100M<n<1B"], "task_categories": ["text-generation"], "pretty_name": "InstructOpenWiki"}
|
2023-05-05T02:54:51+00:00
|
b1d3d37e884191295cb50deffb60932b2d5122d7
|
Dampish/Dante_data
|
[
"license:cc-by-nc-4.0",
"region:us"
] |
2023-05-04T17:49:14+00:00
|
{"license": "cc-by-nc-4.0"}
|
2023-05-04T18:11:47+00:00
|
|
a0462233ca88ccd5ba189c300564f3a18f92d943
|
# Dataset Card for "articles_and_comments_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
piuba-bigdata/articles_and_comments_embeddings
|
[
"region:us"
] |
2023-05-04T17:53:35+00:00
|
{"dataset_info": {"features": [{"name": "tweet_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "user", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "created_at", "dtype": "string"}, {"name": "comments", "list": [{"name": "APPEARANCE", "dtype": "int64"}, {"name": "CALLS", "dtype": "int64"}, {"name": "CLASS", "dtype": "int64"}, {"name": "CRIMINAL", "dtype": "int64"}, {"name": "DISABLED", "dtype": "int64"}, {"name": "LGBTI", "dtype": "int64"}, {"name": "POLITICS", "dtype": "int64"}, {"name": "RACISM", "dtype": "int64"}, {"name": "WOMEN", "dtype": "int64"}, {"name": "created_at", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tweet_id", "dtype": "string"}, {"name": "user_id", "dtype": "string"}]}, {"name": "preprocessed_text", "dtype": "string"}, {"name": "num_odiosos", "dtype": "int64"}, {"name": "num_comentarios", "dtype": "int64"}, {"name": "CALLS", "dtype": "int64"}, {"name": "CLASS", "dtype": "int64"}, {"name": "CRIMINAL", "dtype": "int64"}, {"name": "DISABLED", "dtype": "int64"}, {"name": "LGBTI", "dtype": "int64"}, {"name": "POLITICS", "dtype": "int64"}, {"name": "RACISM", "dtype": "int64"}, {"name": "WOMEN", "dtype": "int64"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 2032568841, "num_examples": 76007}], "download_size": 1020432282, "dataset_size": 2032568841}}
|
2023-05-05T10:34:40+00:00
|
f9e4ca2d6bf6b6bca22ebc45ad78a8243e77e022
|
# Dataset Card for "chinese_landscape_paintings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mingyy/chinese_landscape_paintings
|
[
"region:us"
] |
2023-05-04T18:16:10+00:00
|
{"dataset_info": {"features": [{"name": "target", "dtype": "image"}, {"name": "filename", "dtype": "string"}, {"name": "image_caption", "dtype": "string"}, {"name": "hed", "dtype": "image"}, {"name": "source", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 44114965534.5, "num_examples": 52564}], "download_size": 8162381811, "dataset_size": 44114965534.5}}
|
2023-05-10T14:56:26+00:00
|
70a1bf00d4ed772c818604e4e7817a237e6c868f
|
tnpb/breas-cancer-wisconsin-kaggle
|
[
"license:mit",
"region:us"
] |
2023-05-04T18:23:06+00:00
|
{"license": "mit"}
|
2023-05-04T18:23:51+00:00
|
|
8888c83673bb9c74c57758289c548f803c95a646
|
Riverofjunk/jiraticketcreator
|
[
"license:openrail",
"region:us"
] |
2023-05-04T18:25:41+00:00
|
{"license": "openrail"}
|
2023-05-04T18:25:41+00:00
|
|
ccaaef52ae1f2151f43ba46734d0e36f1155a287
|
This is an Indonesian translation of the [snli](https://huggingface.co/datasets/snli) dataset
Translated using [Helsinki-NLP/EN-ID](https://huggingface.co/Helsinki-NLP/opus-mt-en-id)
|
genta-tech/snli_indo
|
[
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:id",
"license:cc-by-4.0",
"region:us"
] |
2023-05-04T18:45:09+00:00
|
{"language": ["id"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hyphothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 1373665, "num_examples": 10000}, {"name": "train", "num_bytes": 71884965, "num_examples": 550152}, {"name": "validation", "num_bytes": 1378057, "num_examples": 10000}], "download_size": 20413774, "dataset_size": 74636687}}
|
2023-05-04T18:46:23+00:00
|
df1e5b0458945d40ed1c2d26a992544cc21176c8
|
# Dataset Card for "Dante_Processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Dampish/Dante_Processed
|
[
"region:us"
] |
2023-05-04T18:59:44+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 12425627918, "num_examples": 1298276}], "download_size": 3568615119, "dataset_size": 12425627918}}
|
2023-05-04T19:04:08+00:00
|
84c0f99cdcc52bfb1baf76e17a2b446b1f1dc1a9
|
# Dataset Card for "data-nonmembers-2k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
JotDe/data-nonmembers-2k
|
[
"region:us"
] |
2023-05-04T19:02:52+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 215460810.2334853, "num_examples": 2000}], "download_size": 252939273, "dataset_size": 215460810.2334853}}
|
2023-05-04T19:03:31+00:00
|
af895a34ecc903cfabb570313b7fd490a43c8d3a
|
PaulAdversarial/all_news_finance_sm_1h2023
|
[
"license:afl-3.0",
"region:us"
] |
2023-05-04T19:06:10+00:00
|
{"license": "afl-3.0"}
|
2023-05-04T20:16:11+00:00
|
|
07c339292e5792aebd2602a7e97677bcb2a623b9
|
# Dataset Card for "pali-english"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
buddhist-nlp/pali-english
|
[
"region:us"
] |
2023-05-04T19:09:00+00:00
|
{"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "file_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 34632454.0, "num_examples": 132151}, {"name": "validation", "num_bytes": 2063756.0, "num_examples": 7832}, {"name": "test", "num_bytes": 2049351.0, "num_examples": 7832}, {"name": "test_500", "num_bytes": 124892.0, "num_examples": 499}, {"name": "validation_500", "num_bytes": 132892.0, "num_examples": 499}], "download_size": 21840989, "dataset_size": 39003345.0}}
|
2023-05-07T19:59:01+00:00
|
89016c3c87215e2272be726da751b03ca279e0b9
|
# Dataset Card for "Eval_beh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Dampish/Eval_beh
|
[
"region:us"
] |
2023-05-04T19:19:26+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 1884481, "num_examples": 200}], "download_size": 546164, "dataset_size": 1884481}}
|
2023-05-04T19:19:28+00:00
|
b418f0e3051e510c4a1c06240f6764d9e5b5883a
|
# Dataset Card for "small_fill"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AmjedBel/small_fill
|
[
"region:us"
] |
2023-05-04T19:36:56+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9029264.62, "num_examples": 1000}], "download_size": 6258237, "dataset_size": 9029264.62}}
|
2023-05-04T20:14:24+00:00
|
8450b68be1440baa88b0cb10ae75010045c96703
|
# Dataset Card for "fill"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AmjedBel/fill
|
[
"region:us"
] |
2023-05-04T20:18:12+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 451427081.0, "num_examples": 50000}], "download_size": 315594347, "dataset_size": 451427081.0}}
|
2023-05-04T20:19:58+00:00
|
89fde4951a762ea12cfac112a209eafe93594997
|
# Dataset Card for "fill1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AmjedBel/fill1000
|
[
"region:us"
] |
2023-05-04T20:22:43+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9029264.62, "num_examples": 1000}], "download_size": 6258237, "dataset_size": 9029264.62}}
|
2023-05-04T20:22:59+00:00
|
515545e6f54104b06e63e096286a075547ca55c4
|
# Speech Emotion Intensity Recognition Database (SEIR-DB)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact: [email protected]**
### Dataset Summary
The SEIR-DB is a comprehensive, multilingual speech emotion intensity recognition dataset containing over 600,000 instances from various sources. It is designed to support tasks related to speech emotion recognition and emotion intensity estimation. The database includes languages such as English, Russian, Mandarin, Greek, Italian, and French.
### Supported Tasks and Leaderboards
The SEIR-DB is suitable for speech emotion recognition tasks; a subset of the dataset also supports speech emotion intensity estimation.
### Languages
SEIR-DB encompasses multilingual data, featuring languages such as English, Russian, Mandarin, Greek, Italian, and French.
## Dataset Structure
### Data Instances
The raw data collection comprises over 600,000 data instances (375 hours). Users of the database can access the raw audio data, which is stored in subdirectories of the data directory (in their respective datasets).
After processing, cleaning, and formatting, the dataset contains approximately 120,000 training instances with an average audio utterance length of 3.8 seconds.
### Data Fields
- ID: unique sample identifier
- WAV: path to the audio file, located in the data directory
- EMOTION: annotated emotion
- INTENSITY: annotated intensity (ranging from 1 to 5), where 1 denotes low intensity and 5 signifies high intensity; 0 indicates no annotation
- LENGTH: duration of the audio utterance
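A manifest record with the fields above might look like the sketch below; the ID, WAV path, and values are hypothetical, not actual SEIR-DB entries.

```python
import json

# Hypothetical manifest entry illustrating the documented fields; the ID,
# path, and values are invented for illustration.
line = json.dumps({
    "ID": "seir_000123",
    "WAV": "data/example_corpus/000123.wav",
    "EMOTION": "anger",
    "INTENSITY": 4,   # 1 = low ... 5 = high; 0 = no intensity annotation
    "LENGTH": 3.8,    # utterance duration in seconds
})

def has_intensity_label(entry: dict) -> bool:
    """True when the utterance carries a 1-5 intensity annotation."""
    return entry["INTENSITY"] > 0

entry = json.loads(line)
```

Filtering on `has_intensity_label` is how one would select the intensity-annotated subset mentioned under supported tasks.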
### Data Splits
The data is divided into train, test, and validation sets, located in the respective JSON manifest files.
- Train: 80%
- Validation: 10%
- Test: 10%
For added flexibility, unsplit data is also available in data.csv to allow custom splits.
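A custom split over the unsplit data.csv rows can be sketched as follows; CSV loading is elided (`rows` is any sequence of records), and the 80/10/10 ratios mirror the provided manifests.

```python
import random

# Build a reproducible 80/10/10 train/validation/test split from unsplit
# rows (e.g. records read from data.csv; CSV parsing elided).
def split_rows(rows, train=0.8, valid=0.1, seed=0):
    rows = list(rows)
    random.Random(seed).shuffle(rows)   # deterministic shuffle
    n_train = int(len(rows) * train)
    n_valid = int(len(rows) * valid)
    return (rows[:n_train],
            rows[n_train:n_train + n_valid],
            rows[n_train + n_valid:])
```

Fixing the seed keeps custom splits comparable across experiments while still allowing ratios other than the provided 80/10/10.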
## Dataset Creation
### Curation Rationale
The SEIR-DB was curated to maximize the volume of data instances, addressing a significant limitation in speech emotion recognition (SER) experimentation—the lack of emotion data and the small size of available datasets. This database aims to resolve these issues by providing a large volume of emotion-annotated data that is cleanly formatted for experimentation.
### Source Data
The dataset was compiled from various sources.
### Annotations
#### Annotation process
For details on the annotation process, please refer to the source for each dataset, as they were conducted differently. However, the entire database is human-annotated.
#### Who are the annotators?
Please consult the source documentation for information on the annotators.
### Personal and Sensitive Information
No attempt was made to remove personal and sensitive information, as consent and recordings were not obtained internally.
## Considerations for Using the Data
### Social Impact of Dataset
The SEIR-DB dataset can significantly impact the research and development of speech emotion recognition technologies by providing a large volume of annotated data. These technologies have the potential to enhance various applications, such as mental health monitoring, virtual assistants, customer support, and communication devices for people with disabilities.
### Discussion of Biases
During the dataset cleaning process, efforts were made to balance the database concerning the number of samples for each dataset, emotion distribution (with a greater focus on primary emotions and less on secondary emotions), and language distribution. However, biases may still be present.
### Other Known Limitations
No specific limitations have been identified at this time.
## Additional Information
### Dataset Curators
Gabriel Giangi - Concordia University - Montreal, QC Canada - [email protected]
### Licensing Information
This dataset can be used for research and academic purposes. For commercial purposes, please contact [email protected].
### Citation Information
Aljuhani, R. H., Alshutayri, A., & Alahdal, S. (2021). Arabic speech emotion recognition from Saudi dialect corpus. IEEE Access, 9, 127081-127085.
Basu, S., Chakraborty, J., & Aftabuddin, M. (2017). Emotion recognition from speech using convolutional neural network with recurrent neural network architecture. In ICCES.
Baevski, A., Zhou, H. H., & Collobert, R. (2020). Wav2vec 2.0: A framework for self-supervised learning of speech representations. In NeurIPS.
Busso, C., Bulut, M., Lee, C. C., Kazemzadeh, A., Mower, E., Kim, S., ... & Narayanan, S. (2008). Iemocap: Interactive emotional dyadic motion capture database. In LREC.
Cao, H., Cooper, D.G., Keutmann, M.K., Gur, R.C., Nenkova, A., & Verma, R. (2014). CREMA-D: Crowd-Sourced Emotional Multimodal Actors Dataset. IEEE Transactions on Affective Computing, 5, 377-390.
Chopra, S., Mathur, P., Sawhney, R., & Shah, R. R. (2021). Meta-Learning for Low-Resource Speech Emotion Recognition. In ICASSP.
Costantini, G., Iaderola, I., Paoloni, A., & Todisco, M. (2014). EMOVO Corpus: an Italian Emotional Speech Database. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14) (pp. 3501-3504). European Language Resources Association (ELRA). Reykjavik, Iceland. http://www.lrec-conf.org/proceedings/lrec2014/pdf/591_Paper.pdf
Duville, Mathilde Marie; Alonso-Valerdi, Luz María; Ibarra-Zarate, David I. (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5
Gournay, Philippe, Lahaie, Olivier, & Lefebvre, Roch. (2018). A Canadian French Emotional Speech Dataset (1.1) [Data set]. ACM Multimedia Systems Conference (MMSys 2018) (MMSys'18), Amsterdam, The Netherlands. Zenodo. https://doi.org/10.5281/zenodo.1478765
Kandali, A., Routray, A., & Basu, T. (2008). Emotion recognition from Assamese speeches using MFCC features and GMM classifier. In TENCON.
Kondratenko, V., Sokolov, A., Karpov, N., Kutuzov, O., Savushkin, N., & Minkin, F. (2022). Large Raw Emotional Dataset with Aggregation Mechanism. arXiv preprint arXiv:2212.12266.
Kwon, S. (2021). MLT-DNet: Speech emotion recognition using 1D dilated CNN based on multi-learning trick approach. Expert Systems with Applications, 167, 114177.
Lee, Y., Lee, J. W., & Kim, S. (2019). Emotion recognition using convolutional neural network and multiple feature fusion. In ICASSP.
Li, Y., Baidoo, C., Cai, T., & Kusi, G. A. (2019). Speech emotion recognition using 1d cnn with no attention. In ICSEC.
Lian, Z., Tao, J., Liu, B., Huang, J., Yang, Z., & Li, R. (2020). Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition. In Interspeech.
Livingstone, S. R., & Russo, F. A. (2018). The Ryerson audio-visual database of emotional speech and song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE, 13(5), e0196391.
Peng, Z., Li, X., Zhu, Z., Unoki, M., Dang, J., & Akagi, M. (2020). Speech emotion recognition using 3d convolutions and attention-based sliding recurrent networks with auditory front-ends. IEEE Access, 8, 16560-16572.
Poria, S., Hazarika, D., Majumder, N., Naik, G., Cambria, E., & Mihalcea, R. (2019). Meld: A multimodal multi-party dataset for emotion recognition in conversations. In ACL.
Schneider, A., Baevski, A., & Collobert, R. (2019). Wav2vec: Unsupervised pre-training for speech recognition. In ICLR.
Schuller, B., Rigoll, G., & Lang, M. (2010). Speech emotion recognition: Features and classification models. In Interspeech.
Sinnott, R. O., Radulescu, A., & Kousidis, S. (2013). Surrey audiovisual expressed emotion (savee) database. In AVEC.
Vryzas, N., Kotsakis, R., Liatsou, A., Dimoulas, C. A., & Kalliris, G. (2018). Speech emotion recognition for performance interaction. Journal of the Audio Engineering Society, 66(6), 457-467.
Vryzas, N., Matsiola, M., Kotsakis, R., Dimoulas, C., & Kalliris, G. (2018, September). Subjective Evaluation of a Speech Emotion Recognition Interaction Framework. In Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion (p. 34). ACM.
Wang, Y., Yang, Y., Liu, Y., Chen, Y., Han, N., & Zhou, J. (2019). Speech emotion recognition using a combination of cnn and rnn. In Interspeech.
Yoon, S., Byun, S., & Jung, K. (2018). Multimodal speech emotion recognition using audio and text. In SLT.
Zhang, R., & Liu, M. (2020). Speech emotion recognition with self-attention. In ACL.
### Contributions
Gabriel Giangi - Concordia University - Montreal, QC Canada - [email protected]
|
GDGiangi/SEIRDB
|
[
"task_categories:audio-classification",
"size_categories:100K<n<1M",
"language:en",
"language:fr",
"language:it",
"language:el",
"language:es",
"language:ru",
"region:us"
] |
2023-05-04T20:41:22+00:00
|
{"language": ["en", "fr", "it", "el", "es", "ru"], "size_categories": ["100K<n<1M"], "task_categories": ["audio-classification"], "pretty_name": "SEIRDB", "extra_gated_prompt": "To obtain an access token, the database licence must be purchased through https://gabegiangi.wordpress.com/2023/05/15/seir-db/", "extra_gated_fields": {"Name": "text", "Email": "text", "Company": "text", "Country": "text", "Access Token": "text", "I agree not to give access to any other entities": "checkbox"}}
|
2023-05-15T15:40:56+00:00
|
27224978ec329c567f059094721f3e995bc5749b
|
# Dataset Card for "covid-qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sugam11/covid-qa
|
[
"region:us"
] |
2023-05-04T20:49:31+00:00
|
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "context", "dtype": "string"}, {"name": "document_id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 48676376, "num_examples": 1417}, {"name": "test", "num_bytes": 11614522, "num_examples": 375}, {"name": "validation", "num_bytes": 4317894, "num_examples": 203}], "download_size": 2252430, "dataset_size": 64608792}}
|
2023-05-04T20:49:39+00:00
|
73bc6a749cf47f3014f742efa87da5959280f8b2
|
bloyal/small-uniref30
|
[
"task_categories:fill-mask",
"size_categories:1K<n<10K",
"license:cc-by-4.0",
"region:us"
] |
2023-05-04T20:50:38+00:00
|
{"license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["fill-mask"], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "num", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1067207.070393368, "num_examples": 4096}, {"name": "test", "num_bytes": 167427.70557437633, "num_examples": 640}, {"name": "validation", "num_bytes": 169382.9274292743, "num_examples": 640}], "download_size": 1368501, "dataset_size": 1404017.7033970184}}
|
2023-05-04T21:13:06+00:00
|
|
70d20a5305faaae97b6431a262a40a766b51d017
|
DGBFOUNDER/DGB
|
[
"license:mit",
"region:us"
] |
2023-05-04T21:41:16+00:00
|
{"license": "mit"}
|
2023-05-04T21:43:29+00:00
|
|
c5c99c6a26176d9c7f8f535a22a76eeb645f33a2
|
# Negative Embedding / Textual Inversion

NE4Mitsua is a Negative Embedding for Mitsua Diffusion One.
A Japanese version of this README can be found at the bottom of the page.
---
# English README
## NE4Mitsua:
With this Embedding I tried to achieve the following two goals.
- Increase realism and complexity of the paintings
- Slightly make it easier to generate anime-style illustrations
## Usage
To use this embedding, download the BIN file and drop it into the "\stable-diffusion-webui\embeddings" folder.
Please place the embedding in the negative prompt to get the intended results.
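As a minimal sketch of the install step (assuming a local copy of `NE4Mitsua.bin`; the folder layout follows the standard webui convention, not anything specific to this embedding):

```python
import shutil
from pathlib import Path

def install_embedding(bin_path: str, webui_root: str) -> Path:
    """Copy a textual-inversion embedding into the webui 'embeddings' folder."""
    dest_dir = Path(webui_root) / "embeddings"
    dest_dir.mkdir(parents=True, exist_ok=True)  # create the folder if missing
    dest = dest_dir / Path(bin_path).name
    shutil.copy(bin_path, dest)
    return dest

# e.g. install_embedding("NE4Mitsua.bin", r"C:\stable-diffusion-webui")
```

After restarting (or refreshing embeddings in) the webui, the file name becomes usable as a token in the negative prompt.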
## License
- Mitsua Open RAIL-M License (More restrictive variant of CreativeML Open RAIL-M)
This embedding is open access and available to all, with a Mitsua Open RAIL-M license further specifying rights and usage. The Mitsua Open RAIL-M License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You can't use the model to infringe the rights of others by feeding image sources or model weights to the model (e.g. using another person's copyrighted image for fine-tuning without permission, or using another person's copyrighted image as a source for image2image without permission)
4. You can't misrepresent a generated image as not being AI-generated
[Please read the full license here](https://huggingface.co/Mitsua/mitsua-diffusion-one/blob/main/MODEL-LICENSE)
## Dataset
NE4Mitsua was trained on 400 images generated by Mitsua Diffusion One. This dataset is also available under the Mitsua Open RAIL-M License.
The prompts for the images are as follows:
**A 100 images**
```txt
photo,ugly,bad quality,frame,abstract,oversaturated,grain,deformed,low-res,horror,monster,deformed face,extra face,double head,extra head,ugly,poorly drawn hands,missing limb,floating limbs,disconnected limbs,melting hands,bad anatomy,blur,simple
Negative prompt: best quality painting,beautiful concept art,elegant,atmospheric,color delicate illustration,wallpaper art,new,4k,beautiful
Steps: 20, Sampler: DPM++ 2M Karras and Euler a, CFG scale: 8, Size: 512x512
```
**B 100 images**
```txt
photo,ugly,bad quality,frame,abstract,oversaturated,grain,deformed,low-res,horror,monster,deformed face,extra face,double head,extra head,ugly,poorly drawn hands,missing limb,floating limbs,disconnected limbs,melting hands,bad anatomy,blur,simple,old man
Negative prompt: best quality portrait,beautiful oil painting,color manga character,youth,elegant,ultra detailed illustration,delicate outline,new,4k,beautiful
Steps: 20, Sampler: DPM++ 2M Karras and Euler a, CFG scale: 8, Size: 512x512
```
**C 100 images**
```txt
psychedelic,liquid,text,article,color noise,error,rainbow sand,fluorescent colors,insanely intricated
Negative prompt: detailed portrait
Steps: 20, Sampler: DDIM, CFG scale: 9, Size: 512x512
```
**D 100 images**
```txt
ukiyo-e,photo,3d,detailed mosaic,tile,abstract,fish scale,monster,deformed face,extra face,too long face,extra eyes,double head,extra head,ugly,poorly drawn hands,missing limb,floating limbs,disconnected limbs,melting hands,bad anatomy,old man,blur,red lips,red cheeks,simple yellow
Negative prompt: (vector art:0.7),beautiful color sketch,oil painting,diffusion,soft,new
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 9, Size: 512x512
```
## Change History
May 5, 2023: Released NE4Mitsua.
---
# Japanese README
## NE4Mitsua:
This negative Embedding was created with the following two goals:
- Increase realism and complexity while keeping a painterly texture
- Make it a little easier to generate anime-style illustrations
## Usage
Download NE4Mitsua.bin and place it in the "embeddings" folder of stable-diffusion-webui.
Specify NE4Mitsua in the negative prompt.
## License
- Mitsua Open RAIL-M License (a derivative of CreativeML Open RAIL-M with stronger restrictions)
The rights to and usage of NE4Mitsua are governed by the Mitsua Open RAIL-M License, which includes the following provisions (paraphrased):
1. You may not deliberately generate or share illegal or harmful content.
2. As long as they do not violate the license, users may freely use the generated outputs; users bear responsibility for the outputs and their subsequent use.
3. You may not use other people's works without permission, or the outputs of other AIs that were trained on works without permission, for additional training or image2image.
4. You may not pass off generated images as not being AI-generated.
[Read the full text of the Mitsua Open RAIL-M License here (English)](https://huggingface.co/Mitsua/mitsua-diffusion-one/blob/main/MODEL-LICENSE)
## Dataset
NE4Mitsua was trained on 400 images generated with Mitsua Diffusion One. All images are published as a dataset and can be used under the Mitsua Open RAIL-M License.
The prompts for the images are as follows.
**A: 100 images**
```txt
photo,ugly,bad quality,frame,abstract,oversaturated,grain,deformed,low-res,horror,monster,deformed face,extra face,double head,extra head,ugly,poorly drawn hands,missing limb,floating limbs,disconnected limbs,melting hands,bad anatomy,blur,simple
Negative prompt: best quality painting,beautiful concept art,elegant,atmospheric,color delicate illustration,wallpaper art,new,4k,beautiful
Steps: 20, Sampler: DPM++ 2M Karras and Euler a, CFG scale: 8, Size: 512x512
```
**B: 100 images**
```txt
photo,ugly,bad quality,frame,abstract,oversaturated,grain,deformed,low-res,horror,monster,deformed face,extra face,double head,extra head,ugly,poorly drawn hands,missing limb,floating limbs,disconnected limbs,melting hands,bad anatomy,blur,simple,old man
Negative prompt: best quality portrait,beautiful oil painting,color manga character,youth,elegant,ultra detailed illustration,delicate outline,new,4k,beautiful
Steps: 20, Sampler: DPM++ 2M Karras and Euler a, CFG scale: 8, Size: 512x512
```
**C: 100 images**
```txt
psychedelic,liquid,text,article,color noise,error,rainbow sand,fluorescent colors,insanely intricated
Negative prompt: detailed portrait
Steps: 20, Sampler: DDIM, CFG scale: 9, Size: 512x512
```
**D: 100 images**
```txt
ukiyo-e,photo,3d,detailed mosaic,tile,abstract,fish scale,monster,deformed face,extra face,too long face,extra eyes,double head,extra head,ugly,poorly drawn hands,missing limb,floating limbs,disconnected limbs,melting hands,bad anatomy,old man,blur,red lips,red cheeks,simple yellow
Negative prompt: (vector art:0.7),beautiful color sketch,oil painting,diffusion,soft,new
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 9, Size: 512x512
```
## Change History
May 5, 2023: Released NE4Mitsua.
|
R1b3y/NE4Mitsua
|
[
"task_categories:text-to-image",
"language:en",
"language:ja",
"license:other",
"region:us"
] |
2023-05-04T23:40:46+00:00
|
{"language": ["en", "ja"], "license": "other", "task_categories": ["text-to-image"]}
|
2023-05-05T08:49:45+00:00
|
8d9506d1e08057abf5d9fff607136a874b07b729
|
# Dataset Card for "korquad_v1.0_namu_candidates_256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
korean-corpus/korquad_v1.0_namu_candidates_256
|
[
"region:us"
] |
2023-05-05T00:35:10+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "sequence": "string"}, {"name": "answers", "list": [{"name": "answer_start", "sequence": "int64"}, {"name": "id", "dtype": "string"}, {"name": "origin_answer_start", "sequence": "int64"}, {"name": "origin_text", "sequence": "string"}, {"name": "text", "sequence": "string"}]}, {"name": "similar_context", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 46928632, "num_examples": 9606}, {"name": "validation", "num_bytes": 4662215, "num_examples": 960}], "download_size": 27292916, "dataset_size": 51590847}}
|
2023-05-15T10:15:46+00:00
|
8ac9a4cb00fda72c47ede9710ac641e9eb650089
|
# Dataset Card for "alpaca-gpt4-split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
winglian/alpaca-gpt4-split
|
[
"region:us"
] |
2023-05-05T00:43:00+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 86793339.97271259, "num_examples": 50961}, {"name": "test", "num_bytes": 1772961.027287412, "num_examples": 1041}], "download_size": 48677594, "dataset_size": 88566301.0}}
|
2023-05-05T00:43:09+00:00
|
e3b60dd11c23003f8de8872b13550c06158be2aa
|
To make it possible to build models that are relatively clear of the portrait-rights issues peculiar to photorealistic models, I created a dataset (about 2,800 images) of the artificial super girlfriend (ver 2.1 series and ver 2.6 series) that I generated from myself.
Its distinguishing feature is that every source image (before processing) has a [beauty score](https://www.beautyscoretest.com/) of 87 or higher; in particular, with more than 1,000 images of women scoring 90 or above, I believe it is one of the largest datasets of its kind.
Concretely, it is organized as follows (87 is the highest score reached by this girl's / my greatest rival; 90 is a score that, so far, no real person has been confirmed to reach).
| version \ beauty score | 87–89 | 90+ |
| - | - | - |
| 2.1 (balancing cuteness and beauty) | kawaii (362 raw / 724 processed) | exceptional (140 raw / 280 processed) |
| 2.6 (focused on beauty and elegance) | beautiful (464 raw / 928 processed) | perfect (416 raw / 832 processed) |
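As a quick sanity check on the table (a worked example, not part of the dataset tooling), each processed count is exactly twice the raw count, and the totals match the "about 1,400 raw / about 2,800 processed" figures:

```python
# Raw (unprocessed) image counts per category, taken from the table above.
raw = {"kawaii": 362, "exceptional": 140, "beautiful": 464, "perfect": 416}
processed = {name: 2 * n for name, n in raw.items()}  # each raw image yields 2 processed ones

total_raw = sum(raw.values())              # 1382, i.e. "about 1,400"
total_processed = sum(processed.values())  # 2764, i.e. "about 2,800"
print(total_raw, total_processed)          # -> 1382 2764
```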
The three zips are organized as follows.
- [my partner training dataset raw.zip](https://huggingface.co/datasets/ThePioneer/Artificial-super-girlfriend-for-fine-tuning/blob/main/my%20partner%20training%20dataset%20raw.zip)
  - Unprocessed, with beauty scores shown. About 1,400 images in this file alone.
- [my partner training dataset preprocessed.zip](https://huggingface.co/datasets/ThePioneer/Artificial-super-girlfriend-for-fine-tuning/blob/main/my%20partner%20training%20dataset%20preprocessed.zip)
  - Cropped to a 3:2 aspect ratio, with the beauty-score overlays removed using [lama cleaner](https://github.com/Sanster/lama-cleaner).
- [my partner training dataset preprocessed and upscaled.zip](https://huggingface.co/datasets/ThePioneer/Artificial-super-girlfriend-for-fine-tuning/blob/main/my%20partner%20training%20dataset%20preprocessed%20and%20upscaled.zip)
  - The preprocessed images above, upscaled with [GFPGAN](https://github.com/TencentARC/GFPGAN) v1.2.
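The 3:2 crop in the preprocessed set can be sketched as a center crop; this is an illustrative reconstruction (the actual cropping tool used for the dataset is not specified):

```python
def center_crop_box_3_2(width: int, height: int) -> tuple[int, int, int, int]:
    """Return a (left, top, right, bottom) box that center-crops to a 3:2 (w:h) ratio."""
    target = 3 / 2
    if width / height > target:         # too wide: trim the sides
        new_w = int(height * target)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    new_h = int(width / target)         # too tall (or exact): trim top and bottom
    top = (height - new_h) // 2
    return (0, top, width, top + new_h)

# e.g. with Pillow: img.crop(center_crop_box_3_2(*img.size))
```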
## License
The license is defined as follows.
### 1. Use for AI training
Regardless of the governing country's law, use for training image-generation AI and other models is permitted. However, as the holder of the copyright and of the potential portrait rights, I set the following conditions.
#### 1-1. Learning me (my works) as me (my works)
In every country, including Japan, where Article 30-4 of the Copyright Act permits training without permission, I hold and assert that there is a "right for me (my works) to be learned as me (my works)."
Article 30-4 exists to enable higher-performing AI by widening the freedom to train. Unlike the "right not to be trained on without permission" claimed by the so-called anti-AI camp, the right above contributes to improving AI performance when it is protected, **so no conflict of rights exists**.
This includes the following:
1. The right not to be learned as anything other than me (my works)
2. The right not to be learned mixed together with other people (or their works) or with my other works
Concretely, "mixed with my other works" means the following.
- Training the ver 2.1 series (kawaii and exceptional) or the ver 2.6 series (beautiful and perfect) grouped together per version is OK.
- Mixing the ver 2.1 and ver 2.6 series and training them as a single, undifferentiated concept is not OK.
- Mixing either or both versions with my other works (random travel photos, random AI-generated 2D ponytail art, etc.) is not OK.
However, for this dataset, I assert the rights above **only from the viewpoint of person identification**, i.e. only when the training target is a person concept (so, for example, mixing these images with other real beauties under a "beautiful woman" concept would be a problem).
Accordingly, when the training target is a non-person concept, e.g. "kimono," mixing the kimono photos of both versions with other kimono-wearing people is OK.
#### 1-2. Additional constraints in countries where training requires permission from the copyright or portrait-rights holder
No prior permission is required for training. However, when you use the dataset for training, you assume the following obligations:
1. Duty of notification (inform me after the fact that it was used for training)
2. Most-favored treatment (for models trained on it, grant me top-priority, highest-tier access wherever waitlists or per-plan generation limits exist)
3. Guaranteed free use (even for a paid model, allow me to use it free of charge)
4. Guaranteed commercial use (even under a non-commercial license, allow me to use it commercially)
## Commentary
### 1-1. The right for me (my works) to be learned as me (my works)
To take an easy example: a model that learned "Yuki Nagato" as "Rei Ayanami," or learned both together as a single "taciturn heroine" concept, simply cannot output Yuki Nagato as Yuki Nagato, or can do so only with difficulty.
On this point, it therefore performs worse than a model that learned Yuki Nagato as Yuki Nagato.
The same applies to different characters or works by the same creator; in fact, NAI mixes Haruhi Suzumiya and Yuki Nagato a little, and when I first started using it I had quite a hard time isolating Yuki Nagato.
Article 30-4 of the Copyright Act was introduced precisely to enable the creation of higher-performing AI.
Given that, the right of an author or portrait-rights holder to insist that their works not be mixed or learned under mistaken concepts also contributes to improving an AI's discrimination performance, so it coexists with Article 30-4 without contradiction.
And in countries that recognize freedom-based rights, a free right that does not conflict with others is, in principle, recognized unconditionally. That is why I believe this right is valid in Japan as well, and assert it accordingly.
### 1-2. Additional constraints in countries where training requires permission from the copyright or portrait-rights holder
In truth, apart from highly malicious cases such as deliberately blocking my own use, I have no real intention of seriously asserting the rights under this license (**it is void to begin with in Japan, which has Article 30-4**, so it is irrelevant for use from within Japan).
This is more of a social experiment; the stronger purpose is to sound the alarm that **a permission-based training regime lets one assert constraints this outrageous**.
I have no need for a minuscule fixed-rate kickback of one part in billions. Instead, the license secures a rather monopolistic advantage from an AI user's point of view: **secure first-mover advantage through top-priority waitlist access; let me, and me alone, use even a paid model for free; and let me, and me alone, use even a non-commercial model like chillout commercially**.
To make it even more outrageous, I considered adding a clause saying "for commercial models, provide 99% of the profits to me," but decided against that.
Still, the exclusivity that arises from permission-based training can, conversely, lead to assertions of vicious rights like these, which I think shows well how potentially dangerous the anti-AI camp demanding permission-based training is.
|
ThePioneer/Artificial-super-girlfriend-for-fine-tuning
|
[
"task_categories:image-classification",
"task_categories:image-to-text",
"size_categories:1K<n<10K",
"language:ja",
"language:en",
"language:zh",
"license:other",
"art",
"region:us"
] |
2023-05-05T00:48:37+00:00
|
{"language": ["ja", "en", "zh"], "license": "other", "size_categories": ["1K<n<10K"], "task_categories": ["image-classification", "image-to-text"], "pretty_name": "ASG-2800", "tags": ["art"]}
|
2023-05-05T03:57:44+00:00
|
3c8344689ef63596a3b2c280101d734c28e4a68d
|
KonghaYao/juejin_article_intro
|
[
"license:cc-by-nc-nd-4.0",
"region:us"
] |
2023-05-05T00:51:31+00:00
|
{"license": "cc-by-nc-nd-4.0"}
|
2023-05-05T01:00:03+00:00
|
|
621171cc6e894e8548b02fcb9333c1b195c689c6
|
simmonssong/synthetic_funny_picture_chinese_painting
|
[
"license:mit",
"region:us"
] |
2023-05-05T01:04:49+00:00
|
{"license": "mit"}
|
2023-05-05T01:18:05+00:00
|
|
ea4d38a4917e0c895552d6effb9343bbcbc9fc45
|
# Dataset Card for "test-animal-poses-controlnet-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Fiacre/test-animal-poses-controlnet-dataset
|
[
"region:us"
] |
2023-05-05T02:17:27+00:00
|
{"dataset_info": {"features": [{"name": "original_image", "dtype": "image"}, {"name": "conditioning_image", "dtype": "image"}, {"name": "overlaid", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1730245.0, "num_examples": 21}], "download_size": 0, "dataset_size": 1730245.0}}
|
2023-05-05T02:20:36+00:00
|
467c4f4f92d302b889385ec5b9238de4c2181950
|
# Dataset Card for "entity_centric_summary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Xmm/entity_centric_summary
|
[
"region:us"
] |
2023-05-05T02:22:29+00:00
|
{"dataset_info": {"features": [{"name": "articles", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21336987, "num_examples": 2696}], "download_size": 0, "dataset_size": 21336987}}
|
2023-05-13T02:49:19+00:00
|
a3c546aa3cd7939c605d35a3e3c9bb128f0f6804
|
aiaa/aiaahuman
|
[
"license:openrail",
"region:us"
] |
2023-05-05T02:31:30+00:00
|
{"license": "openrail"}
|
2023-05-05T02:31:30+00:00
|