sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---|
d0a766b1b2ad1d3e4ef39fa6faff628811a14041
|
# ORCHESTRA-simple-1M
GitHub: [nk2028/ORCHESTRA-dataset](https://github.com/nk2028/ORCHESTRA-dataset)
**中文簡介**
ORCHESTRA (c**O**mp**R**ehensive **C**lassical c**H**in**ES**e poe**TR**y d**A**taset) 是一個全面的古典中文詩歌的數據集,數據來自[搜韻網](https://sou-yun.cn/)。本數據集由 [nk2028](https://nk2028.shn.hk/) 進行格式轉換並發佈,希望透過公開高品質的古典中文詩歌數據,促進對古典中文詩歌及古典中文自然語言處理的研究。
ORCHESTRA-simple 是 ORCHESTRA 數據集的簡化格式,僅保留 `id`, `title`, `group_index`, `type`, `dynasty`, `author`, `content` 這 7 個欄位,而去除其他欄位,以簡化使用。
本資料集可用於大型語言模型的訓練。如欲作其他用途,請向數據提供者[搜韻網](https://sou-yun.cn/)諮詢。
**English Introduction**
ORCHESTRA (c**O**mp**R**ehensive **C**lassical c**H**in**ES**e poe**TR**y d**A**taset) is a comprehensive dataset of classical Chinese poetry, with data sourced from [SouYun Website](https://sou-yun.cn/). This dataset was converted and published by [nk2028](https://nk2028.shn.hk/), with the hope that by publicly releasing high-quality classical Chinese poetry data, it can promote research in classical Chinese poetry and natural language processing of classical Chinese.
ORCHESTRA-simple is a simplified format of the ORCHESTRA dataset, retaining only 7 fields: `id`, `title`, `group_index`, `type`, `dynasty`, `author`, and `content`, while removing all other fields to simplify usage.
This dataset can be used for training large language models. If you wish to use it for other purposes, please consult with the data provider, [SouYun Website](https://sou-yun.cn/).
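A minimal loading sketch, assuming the Hub repo id `Ayaka/ORCHESTRA-simple-1M` shown in this entry and a default `train` split (the split name is an assumption):
```python
from datasets import load_dataset

# Load the simplified dataset; "train" as the split name is an assumption.
ds = load_dataset("Ayaka/ORCHESTRA-simple-1M", split="train")

poem = ds[0]
# Only the 7 simplified fields are kept.
print(poem["dynasty"], poem["author"], poem["title"])
print(poem["content"])
```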
|
Ayaka/ORCHESTRA-simple-1M
|
[
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:zh",
"language:lzh",
"arts",
"poetry",
"region:us"
] |
2023-06-12T11:53:18+00:00
|
{"language": ["zh", "lzh"], "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "tags": ["arts", "poetry"]}
|
2023-06-12T13:01:47+00:00
|
f8dd4794b44cdf07444d47cd17d06ac87a5fbc54
|
ecosystems/keywords
|
[
"task_categories:text-classification",
"task_categories:summarization",
"size_categories:1M<n<10M",
"license:cc-by-4.0",
"region:us"
] |
2023-06-12T11:59:59+00:00
|
{"license": "cc-by-4.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-classification", "summarization"], "pretty_name": "Open Source Software descriptions and keywords"}
|
2023-06-12T12:35:04+00:00
|
|
bf80aaccff64dfe968a95c97560b45dc199a2ca0
|
tasksource/regset
|
[
"license:unknown",
"region:us"
] |
2023-06-12T12:05:23+00:00
|
{"license": "unknown"}
|
2023-06-12T12:09:19+00:00
|
|
6a22718141ab91a3a29b9c7b5f1625c57c887498
|
YangHao520/test
|
[
"license:openrail",
"region:us"
] |
2023-06-12T12:12:30+00:00
|
{"license": "openrail"}
|
2023-06-12T12:12:30+00:00
|
|
26572b6b46235f2a881c685b712a570a5a5e1f50
|
sloppysid/call_trans
|
[
"license:apache-2.0",
"region:us"
] |
2023-06-12T12:19:00+00:00
|
{"license": "apache-2.0"}
|
2023-06-12T12:19:34+00:00
|
|
582d195d16dbc1adc03fa4fe7bea2ad19bd66f29
|
# Dataset Card for "modeling_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Gae8J/modeling_v1
|
[
"region:us"
] |
2023-06-12T12:36:21+00:00
|
{"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": {"class_label": {"names": {"0": "howl", "1": "growling", "2": "bark", "3": "panting", "4": "whimper"}}}}, {"name": "is_unknown", "dtype": "bool"}, {"name": "youtube_id", "dtype": "string"}, {"name": "youtube_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 226151046.0, "num_examples": 322}, {"name": "validation", "num_bytes": 29638692.0, "num_examples": 40}, {"name": "test", "num_bytes": 28166844.0, "num_examples": 38}], "download_size": 264333043, "dataset_size": 283956582.0}}
|
2023-06-12T12:37:26+00:00
|
20fc76dc014490ffa9fcfd56bbac1bd312025a98
|
Macropodus/MWP-Instruct
|
[
"license:apache-2.0",
"region:us"
] |
2023-06-12T12:36:42+00:00
|
{"license": "apache-2.0"}
|
2023-06-12T12:40:16+00:00
|
|
2ade1fa26bdfba021f6ed76d5921bda475e3faca
|
# TextCaps in Vietnamese
This is the Vietnamese version of the [TextCaps dataset](https://textvqa.org/textcaps/). It has 109,765 image-caption pairs for training and 15,830 for validation. It was built using the Google Translate API. The Vietnamese version has almost the same metadata as the English one, but it does not include the following keys for each data point:
- `caption_tokens`
- `reference_tokens`
- `reference_strs`
- `image_classes`
In the English version, these keys are present in English. Because my main focus is `caption_str`, there is no Vietnamese version of them; I was limited by time and disk space.
I provide both English and Vietnamese .json files.
|
dinhanhx/TextCaps-vi
|
[
"task_categories:image-to-text",
"task_ids:image-captioning",
"language:vi",
"language:en",
"license:unknown",
"TextCaps",
"TextCaps-vi",
"region:us"
] |
2023-06-12T12:41:19+00:00
|
{"language": ["vi", "en"], "license": "unknown", "task_categories": ["image-to-text"], "task_ids": ["image-captioning"], "pretty_name": "TextCaps in Vietnamese", "source-datasets": ["TextCaps", "OpenImages"], "tags": ["TextCaps", "TextCaps-vi"]}
|
2023-06-17T22:33:07+00:00
|
a0d1b676fc686eee0f4f3d915ef1f6e02fc46adf
|
Demo to save data from a Space to a Dataset. Goal is to provide reusable snippets of code.
- Documentation: https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#scheduled-uploads
- Space: https://huggingface.co/spaces/Wauplin/space_to_dataset_saver/
- JSON dataset: https://huggingface.co/datasets/Wauplin/example-commit-scheduler-json
- Image dataset: https://huggingface.co/datasets/Wauplin/example-commit-scheduler-image
- Image (zipped) dataset: https://huggingface.co/datasets/Wauplin/example-space-to-dataset-image-zip
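A minimal sketch of the scheduled-upload pattern this demo illustrates, using the `CommitScheduler` helper from `huggingface_hub` covered in the linked documentation; the repo id, folder name, and interval below are illustrative assumptions:
```python
from pathlib import Path
from huggingface_hub import CommitScheduler

# Local folder whose contents are periodically committed to the dataset repo.
folder = Path("image_dataset")  # illustrative folder name
folder.mkdir(exist_ok=True)

scheduler = CommitScheduler(
    repo_id="username/example-space-to-dataset",  # hypothetical repo id
    repo_type="dataset",
    folder_path=folder,
    every=5,  # push every 5 minutes (illustrative)
)

# Files written under `folder` (e.g. by the Space's app) are uploaded in the background.
with scheduler.lock:
    (folder / "log.json").write_text('{"event": "demo"}')
```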
|
Wauplin/example-space-to-dataset-image-zip
|
[
"region:us"
] |
2023-06-12T12:41:40+00:00
|
{}
|
2024-01-22T08:29:09+00:00
|
bb0aee073d4706bfef18603b7f1565eaf524de78
|
lucadiliello/cc_news
|
[
"region:us"
] |
2023-06-12T12:43:26+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 413779125872, "num_examples": 149954415}], "download_size": 0, "dataset_size": 413779125872}}
|
2023-06-20T11:11:53+00:00
|
|
9689d7030cec0b9d9f8bdc4c5d11cc02931d3fcc
|
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
DuyOtaku/Genshin
|
[
"license:mit",
"region:us"
] |
2023-06-12T13:16:30+00:00
|
{"license": "mit", "title": "RVC Genshin Impact", "emoji": "\ud83c\udfa4", "colorFrom": "red", "colorTo": "purple", "sdk": "gradio", "sdk_version": "3.32.0", "app_file": "app.py", "pinned": true}
|
2023-06-12T13:23:25+00:00
|
77870b1309650877ffd7612fea942f706abe718b
|
Demo to save data from a Space to a Dataset. Goal is to provide reusable snippets of code.
- Documentation: https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#scheduled-uploads
- Space: https://huggingface.co/spaces/Wauplin/space_to_dataset_saver/
- JSON dataset: https://huggingface.co/datasets/Wauplin/example-space-to-dataset-json
- Image dataset: https://huggingface.co/datasets/Wauplin/example-space-to-dataset-image
- Image (zipped) dataset: https://huggingface.co/datasets/Wauplin/example-space-to-dataset-image-zip
|
Wauplin/example-space-to-dataset-json
|
[
"region:us"
] |
2023-06-12T13:18:54+00:00
|
{}
|
2024-02-17T14:37:40+00:00
|
2fd4b725174afaf2b7ba7a2cc39ea922eaefb373
|
# TextVQA in Vietnamese
This is the Google-translated version of [TextVQA](https://textvqa.org/) in Vietnamese. The process of building the Vietnamese version is as follows:
- In the en/ folder,
  - Download `TextVQA_0.5.1_train.json`, `TextVQA_0.5.1_val.json`.
  - By using the [set data structure](https://docs.python.org/3/tutorial/datastructures.html#sets), generate .txt files of unique texts: train_answer_list.txt, train_question_list.txt, val_answer_list.txt, val_question_list.txt.
- In the vi/ folder,
  - By translating the 4 en/ .txt files, generate train_answer_list.jsonl, train_question_list.jsonl, val_answer_list.jsonl, val_question_list.jsonl. In each entry of each file, the key is the original English text and the value is the translated Vietnamese text.
To load the Vietnamese version in your code, you need the original English version. Then just use the English text as a key to retrieve the Vietnamese value from the jsonl files. I provide both the English and Vietnamese versions.
Please refer to [this code](https://github.com/dinhanhx/velvet/blob/main/scripts/cherry_pick_textvqa.py) and then [this code](https://github.com/dinhanhx/velvet/blob/main/scripts/apply_translate_textvqa.py) to apply the translation.
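A minimal sketch, under the layout described above, of building an English-to-Vietnamese lookup from one of the vi/ .jsonl files; the assumption that each line is a one-entry JSON object mapping the English string to its Vietnamese translation follows the description above:
```python
import json

# Build {english_text: vietnamese_text} from a translated list file.
en2vi = {}
with open("vi/train_question_list.jsonl", encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)  # assumed form: {"<english text>": "<vietnamese text>"}
        en2vi.update(entry)

# Look up the Vietnamese question for an English question taken
# from the original TextVQA_0.5.1_train.json annotations.
english_question = "What is written on the sign?"  # illustrative example
print(en2vi.get(english_question, english_question))
```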
|
dinhanhx/TextVQA-vi
|
[
"task_categories:visual-question-answering",
"task_ids:visual-question-answering",
"language:vi",
"language:en",
"license:unknown",
"TextVQA",
"TextVQA-vi",
"region:us"
] |
2023-06-12T13:19:15+00:00
|
{"language": ["vi", "en"], "license": "unknown", "task_categories": ["visual-question-answering"], "task_ids": ["visual-question-answering"], "pretty_name": "TextVQA in Vietnamese", "source-datasets": ["TextVQA", "OpenImages"], "tags": ["TextVQA", "TextVQA-vi"]}
|
2023-09-21T09:26:46+00:00
|
7eecfcb6c8989c2ec6958a79bd8152d37e795915
|
# Dataset Card for "sam-controlnet-final-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
baptistecolle/sam-controlnet-final-test
|
[
"region:us"
] |
2023-06-12T13:25:31+00:00
|
{"dataset_info": {"features": [{"name": "conditioning_image", "dtype": "image"}, {"name": "image", "dtype": "image"}, {"name": "filepath", "dtype": "string"}, {"name": "sentids", "list": "int32"}, {"name": "filename", "dtype": "string"}, {"name": "imgid", "dtype": "int32"}, {"name": "split", "dtype": "string"}, {"name": "cocoid", "dtype": "int32"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62596425.0, "num_examples": 200}], "download_size": 62532095, "dataset_size": 62596425.0}}
|
2023-06-12T13:27:08+00:00
|
d851a70144e82de4e4ba7ac3c16d1ec487ee7802
|
Andrijan/self_improving
|
[
"license:other",
"region:us"
] |
2023-06-12T14:03:54+00:00
|
{"license": "other"}
|
2023-06-12T14:04:11+00:00
|
|
b39ea73cfccd56958e15d506bb5f5269989fddbe
|
# Dataset Card for "llm-tolkien"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Prashantbhatt20/llm-tolkien1
|
[
"region:us"
] |
2023-06-12T14:05:53+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 98352.0, "num_examples": 12}, {"name": "test", "num_bytes": 32784.0, "num_examples": 4}], "download_size": 57850, "dataset_size": 131136.0}}
|
2023-06-13T17:56:00+00:00
|
28fce883e924329040a5bf0e080f18891473f309
|
A continuation of [gpt4-1.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.1), with:
* over 1000 new coding instructions, along with several hundred prompts using `PLAINFORMAT` to *hopefully* allow non-markdown/backtick/verbose code generation
* nearly 4000 additional math/reasoning instructions, but this time using the ORCA style: "[prompt]. Explain like I'm five." / "Justify your logic", etc.
* several hundred roleplaying examples
* additional misc/general data
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
jondurbin/airoboros-gpt4-1.2
|
[
"license:cc-by-nc-4.0",
"region:us"
] |
2023-06-12T14:08:59+00:00
|
{"license": "cc-by-nc-4.0"}
|
2023-06-22T14:00:42+00:00
|
e141319e467cc5a13ebf9eea361bff902c9253bc
|
# Dataset Card for "liveqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
truehealth/liveqa
|
[
"region:us"
] |
2023-06-12T14:13:08+00:00
|
{"dataset_info": {"features": [{"name": "questionid", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "message", "dtype": "string"}, {"name": "focus", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "answerid", "dtype": "string"}, {"name": "pairid", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 888907, "num_examples": 635}], "download_size": 429730, "dataset_size": 888907}}
|
2023-06-12T17:47:46+00:00
|
d5253f5c21f8c3579d3bb2f7803d3ac758e25769
|
JunyaL/test
|
[
"license:unknown",
"region:us"
] |
2023-06-12T14:23:30+00:00
|
{"license": "unknown"}
|
2023-06-13T10:04:35+00:00
|
|
b88dbd19cbc16c70b5b161284e156466ec748389
|
# Dataset Card for "lotr-book"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Prashantbhatt20/lotr-book
|
[
"region:us"
] |
2023-06-12T14:38:51+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 137844, "num_examples": 1}], "download_size": 29849, "dataset_size": 137844}}
|
2023-06-26T13:42:45+00:00
|
03ab8a8e30f1029375bf3c267cf71d7c82e80baf
|
# Dataset Card for "processed_dwi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
deetsadi/processed_dwi
|
[
"region:us"
] |
2023-06-12T14:43:18+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "conditioning_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 15336901.0, "num_examples": 200}], "download_size": 0, "dataset_size": 15336901.0}}
|
2023-06-13T17:56:28+00:00
|
b8699ee15eb9824b4241de4958f28b19753756e2
|
# Dataset Card for "CIFAR10_test_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/CIFAR10_test_embeddings
|
[
"region:us"
] |
2023-06-12T14:54:18+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "id", "dtype": "int64"}, {"name": "vision_embeddings", "sequence": "float32"}], "splits": [{"name": "openai_clip_vit_large_patch14", "num_bytes": 53491580.0, "num_examples": 10000}], "download_size": 59803880, "dataset_size": 53491580.0}}
|
2023-06-12T14:54:30+00:00
|
91e850f2b712c037611ab18a3c4722996e7aabd3
|
# Dataset Card for "CIFAR10_train_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/CIFAR10_train_embeddings
|
[
"region:us"
] |
2023-06-12T14:56:43+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "id", "dtype": "int64"}, {"name": "vision_embeddings", "sequence": "float32"}], "splits": [{"name": "openai_clip_vit_large_patch14", "num_bytes": 267448310.0, "num_examples": 50000}], "download_size": 298997114, "dataset_size": 267448310.0}}
|
2023-06-12T14:57:21+00:00
|
02283c5473a09ae5c43427c5188da55e8feca57f
|
# Dataset Card for "abimages"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Zorigami/abimages
|
[
"region:us"
] |
2023-06-12T15:24:32+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 639313.0, "num_examples": 13}], "download_size": 639921, "dataset_size": 639313.0}}
|
2023-06-12T15:24:34+00:00
|
c4e896070e4aee022b6c9d355261c1cabdda8051
|
Edoh/manim_python
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-06-12T15:42:06+00:00
|
{"license": "creativeml-openrail-m"}
|
2023-06-12T16:01:54+00:00
|
|
904e7f6d4cca7cf9623fd1366fa8515badd8efdc
|
# Dataset Card for "Arabic_guanaco_oasst1"
This dataset is the openassistant-guanaco dataset, a subset of the Open Assistant dataset, translated into Arabic.
You can find the original dataset here: https://huggingface.co/datasets/timdettmers/openassistant-guanaco
Or the main dataset here: https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main
This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.
For further information, please see the main dataset.
License: Apache 2.0
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Ali-C137/Arabic_guanaco_oasst1
|
[
"size_categories:1K<n<10K",
"language:ar",
"license:apache-2.0",
"region:us"
] |
2023-06-12T16:25:00+00:00
|
{"language": ["ar"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20962143, "num_examples": 9846}, {"name": "test", "num_bytes": 1102534, "num_examples": 518}], "download_size": 10417464, "dataset_size": 22064677}}
|
2023-06-12T16:30:07+00:00
|
4af57243df272e5d9f52e35b0a2bfe3728a05499
|
hehehasd/Images
|
[
"region:us"
] |
2023-06-12T16:48:19+00:00
|
{}
|
2023-06-12T16:50:52+00:00
|
|
4a17849defacc60a6a47e21700e081b2de39215e
|
sdmattpotter/hftest61223
|
[
"license:mit",
"region:us"
] |
2023-06-12T17:08:30+00:00
|
{"license": "mit"}
|
2023-06-12T17:12:45+00:00
|
|
9cb24c5d48291930f8d29e9f50397e3acf09571e
|
winddude/IHOPv01
|
[
"license:apache-2.0",
"region:us"
] |
2023-06-12T17:17:34+00:00
|
{"license": "apache-2.0"}
|
2023-06-12T17:18:52+00:00
|
|
9e4b526bed31e93abfec228d485b7e991a390579
|
# Dataset Card for "pixel_glue_qqp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Nadav/pixel_glue_qqp
|
[
"region:us"
] |
2023-06-12T17:39:41+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 4725063877.25, "num_examples": 363846}, {"name": "validation", "num_bytes": 525056314.25, "num_examples": 40430}], "download_size": 5039025536, "dataset_size": 5250120191.5}}
|
2023-06-12T18:21:21+00:00
|
a07909a3e0c20d653fdd69872aee78234ac880f2
|
# curation-corpus
## Dataset Description
- **Homepage:** [https://github.com/CurationCorp/curation-corpus](https://github.com/CurationCorp/curation-corpus)
- **Repository:** [https://github.com/CurationCorp/curation-corpus](https://github.com/CurationCorp/curation-corpus)
## Source
Data from [the official repo](https://github.com/CurationCorp/curation-corpus), with downloaded news article content.
## Citation
```
@misc{curationcorpusbase:2020,
title={Curation Corpus Base},
author={Curation},
year={2020}
}
```
|
d0rj/curation-corpus
|
[
"task_categories:summarization",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"news",
"summarization",
"region:us"
] |
2023-06-12T18:22:21+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "paperswithcode_id": "curation-corpus", "pretty_name": "Curation Corpus for Abstractive Text Summarisation", "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "article_content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 127948910, "num_examples": 30455}], "download_size": 76620775, "dataset_size": 127948910}, "tags": ["news", "summarization"]}
|
2023-06-13T12:25:32+00:00
|
7b8464d660f8fe84324386d41218be629b42919b
|
# Dataset Card for "c4_vi"
Number of tokens: 14,998,688,762
|
vietgpt/c4_vi
|
[
"region:us"
] |
2023-06-12T18:24:23+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "perplexity", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 74501968937.28577, "num_examples": 16203296}], "download_size": 40109713280, "dataset_size": 74501968937.28577}}
|
2023-06-22T05:38:28+00:00
|
408407a232ae1a9b4bc81971cd8cd6b123263b07
|
# Dataset Card for the American Stories dataset
## Dataset Description
- **Homepage:** Coming Soon
- **Repository:** https://github.com/dell-research-harvard/AmericanStories
- **Paper:** Coming Soon
- **Point of Contact:** [email protected]
### Dataset Summary
The American Stories dataset is a collection of full article texts extracted from historical U.S. newspaper images. It includes nearly 20 million scans from the public domain Chronicling America collection maintained by the Library of Congress. The dataset is designed to address the challenges posed by complex layouts and low OCR quality in existing newspaper datasets.
It was created using a novel deep learning pipeline that incorporates layout detection, legibility classification, custom OCR, and the association of article texts spanning multiple bounding boxes. It employs efficient architectures specifically designed for mobile phones to ensure high scalability.
The dataset offers high-quality data that can be utilized for various purposes. It can be used to pre-train large language models and improve their understanding of historical English and world knowledge.
The dataset can also be integrated into retrieval-augmented language models, making historical information more accessible, including interpretations of political events and details about people's ancestors.
Additionally, the structured article texts in the dataset enable the use of transformer-based methods for applications such as detecting reproduced content. This significantly enhances accuracy compared to relying solely on existing OCR techniques.
The American Stories dataset serves as an invaluable resource for developing multimodal layout analysis models and other multimodal applications. Its vast size and silver quality make it ideal for innovation and research in this domain.
### Languages
English (en)
## Dataset Structure
The raw data in this repo contains compressed chunks of newspaper scans for each year. Each scan has its own JSON file named {scan_id}.json.
The data loading script takes care of the downloading, extraction, and parsing into outputs of two kinds:
+ Article-Level Output: The unit of the Dataset Dict is an associated article
+ Scan Level Output: The unit of the Dataset Dict is an entire scan with all the raw unparsed data
### Data Instances
Here are some examples of what the output looks like.
#### Article level
```
{
'article_id': '1_1870-01-01_p1_sn82014899_00211105483_1870010101_0773',
'newspaper_name': 'The weekly Arizona miner.',
'edition': '01', 'date': '1870-01-01',
'page': 'p1',
'headline': '',
'byline': '',
'article': 'PREyors 10 leaving San Francisco for Wash ington City, our Governor, A. r. K. Saford. called upon Generals Thomas and Ord and nt the carrying out of what (truncated)'
}
```
#### Scan level
```
{'raw_data_string': '{"lccn": {"title": "The Massachusetts spy, or, Thomas\'s Boston journal.", "geonames_ids": ["4930956"],....other_keys:values}
```
### Data Fields
#### Article Level
+ "article_id": Unique Id for an associated article
+ "newspaper_name": Newspaper Name
+ "edition": Edition number
+ "date": Date of publication
+ "page": Page number
+ "headline": Headline Text
+ "byline": Byline Text
+ "article": Article Text
#### Scan Level
"raw_data_string": Unparsed scan-level data that contains scan metadata from Library of Congress, all content regions with their bounding boxes, OCR text and legibility classification
### Data Splits
There are no train, test, or validation splits. Since the dataset has a massive number of units (articles or newspaper scans), we have split the data by year. Once the dataset is loaded,
instead of the usual way of accessing a split as `dataset["train"]`, specific years can be accessed using the syntax `dataset["year"]`, where year can be any year between 1774 and 1963 as long as there is at least one scan for that year.
The data loading script provides options to download both a subset of years and all years at a time.
### Accessing the Data
There are 4 config options that can be used to access the data depending upon the use-case.
```
from datasets import load_dataset

# Download data for the years 1809 and 1810 at the associated article level (Default)
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "subset_years",
                       year_list=["1809", "1810"])

# Download and process data for all years at the article level
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "all_years")

# Download and process data for 1809 at the scan level
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "subset_years_content_regions",
                       year_list=["1809"])

# Download and process data for all years at the scan level
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "all_years_content_regions")
```
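Building on the loading calls above, a minimal sketch of reading article-level records from a year split; the year key and field names follow the Data Splits and Data Fields sections:
```python
from datasets import load_dataset

dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "subset_years",
                       year_list=["1809"])

# Years act as split keys instead of train/validation/test.
articles = dataset["1809"]

for record in articles:
    # Field names as documented under "Data Fields" (Article Level).
    print(record["article_id"], record["newspaper_name"], record["date"])
    print(record["article"][:200])  # first 200 characters of the article text
    break
```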
## Dataset Creation
### Curation Rationale
The dataset was created to provide researchers with a large, high-quality corpus of structured and transcribed newspaper article texts from historical local American newspapers.
These texts provide a massive repository of information about topics ranging from political polarization to the construction of national and cultural identities to the minutiae of the daily lives of people's ancestors.
The dataset will be useful to a wide variety of researchers including historians, other social scientists, and NLP practitioners.
### Source Data
#### Initial Data Collection and Normalization
The dataset is drawn entirely from image scans in the public domain that are freely available for download from the Library of Congress's website.
We processed all images as described in the associated paper.
#### Who are the source language producers?
The source language was produced by people - by newspaper editors, columnists, and other sources.
### Annotations
#### Annotation process
Not Applicable
#### Who are the annotators?
Not Applicable
### Personal and Sensitive Information
Not Applicable
## Considerations for Using the Data
### Social Impact of Dataset
This dataset provides high-quality data that could be used for pre-training a large language model to achieve better understanding of historical English and historical world knowledge.
The dataset could also be added to the external database of a retrieval-augmented language model to make historical information - ranging from interpretations of political events to minutiae about the lives of people's ancestors - more widely accessible.
Furthermore, the structured article texts that it provides can facilitate the use of transformer-based methods for popular applications like detection of reproduced content, significantly improving accuracy relative to using the existing OCR.
It can also be used for innovating multimodal layout analysis models and other multimodal applications.
### Discussion of Biases
This dataset contains unfiltered content composed by newspaper editors, columnists, and other sources.
In addition to other potentially harmful content, the corpus may contain factual errors and intentional misrepresentations of news events.
All content should be viewed as individuals' opinions and not as a purely factual account of events of the day.
## Additional Information
### Dataset Curators
Melissa Dell (Harvard), Jacob Carlson (Harvard), Tom Bryan (Harvard), Emily Silcock (Harvard), Abhishek Arora (Harvard), Zejiang Shen (MIT), Luca D'Amico-Wong (Harvard), Quan Le (Princeton), Pablo Querubin (NYU), Leander Heldring (Kellogg School of Management)
### Licensing Information
The dataset has a CC-BY 4.0 license.
### Citation Information
Coming Soon
### Contributions
Coming Soon
|
dell-research-harvard/AmericanStories
|
[
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:text-retrieval",
"task_categories:summarization",
"task_categories:question-answering",
"size_categories:100M<n<1B",
"language:en",
"license:cc-by-4.0",
"social science",
"economics",
"news",
"newspaper",
"large language modeling",
"nlp",
"lam",
"doi:10.57967/hf/0757",
"region:us"
] |
2023-06-12T18:42:34+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["100M<n<1B"], "task_categories": ["text-classification", "text-generation", "text-retrieval", "summarization", "question-answering"], "pretty_name": "AmericanStories", "tags": ["social science", "economics", "news", "newspaper", "large language modeling", "nlp", "lam"]}
|
2023-09-08T17:33:32+00:00
|
b104ddf6af60fd528fd02587ec7d24270ecc1a21
|
# curation-corpus-ru
## Dataset Description
- **Repository:** [https://github.com/CurationCorp/curation-corpus](https://github.com/CurationCorp/curation-corpus)
Translated version of [d0rj/curation-corpus](https://huggingface.co/datasets/d0rj/curation-corpus) into Russian.
|
d0rj/curation-corpus-ru
|
[
"task_categories:summarization",
"language_creators:translated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:d0rj/curation-corpus",
"language:ru",
"license:cc-by-4.0",
"news",
"summarization",
"region:us"
] |
2023-06-12T18:49:36+00:00
|
{"language_creators": ["translated"], "language": ["ru"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["d0rj/curation-corpus"], "task_categories": ["summarization"], "pretty_name": "Curation Corpus (ru)", "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "article_content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 237436901.42479068, "num_examples": 30454}], "download_size": 116826702, "dataset_size": 237436901.42479068}, "tags": ["news", "summarization"]}
|
2023-06-13T12:31:27+00:00
|
0fa134110fc19d3226f5d7178792c89a39672631
|
# Dataset Card for "Imagenet1k_validation_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_50000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/Imagenet1k_validation_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_50000
|
[
"region:us"
] |
2023-06-12T19:05:07+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_laion_ViT_H_14_2B_simple_specific_rices", "num_bytes": 21188875, "num_examples": 50000}, {"name": "fewshot_0__Attributes_ViT_L_14_descriptors_text_davinci_003_full_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 22293724, "num_examples": 50000}], "download_size": 16302328, "dataset_size": 43482599}}
|
2023-06-13T05:32:06+00:00
|
1a46efc4d45d0e3756656695ed2994dfce9254fc
|
# Dataset Summary
CIRAL is a collection for cross-lingual information retrieval research across four (4) African languages. The collection comprises English queries and query-passage relevance judgements for passages in the African languages.
This dataset repo contains only the queries and relevance judgements. The corpus collection can be found [here](https://huggingface.co/datasets/CIRAL/ciral-corpus).
# Dataset Structure
1. To download the files: the queries can be found under `ciral-{lang}/topics` and are in `.tsv` format, with each line in the form:
```
qid\tquery
```
while the judgements are in the folder `ciral-{lang}/qrels`, with each file in the standard TREC format (a small parsing sketch follows the loading example below):
```
qid Q0 docid relevance
```
2. To access the dataset via `datasets`:
```
from datasets import load_dataset

ciral_dataset = load_dataset("ciral/ciral", "hausa")  # or swahili, somali, yoruba

for data in ciral_dataset['dev']:  # or 'testA' or 'testB'
    query_id = data['query_id']
    query = data['query']
    pos_qrels = data['positive_passages']
    neg_qrels = data['negative_passages']

    # To load test set A's pool judgments
    pools_pos_qrels = data['pools_positive_passages']
    pools_neg_qrels = data['pools_negative_passages']

    for qrel in pos_qrels:
        docid = qrel['docid']
        text = qrel['text']
```
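A small parsing sketch for the downloaded topics and qrels files in the formats shown in item 1 above; the concrete file names are assumptions, as the card does not list them:
```python
# Parse a topics .tsv file: each line is "qid\tquery".
queries = {}
with open("ciral-hausa/topics/topics.dev.tsv", encoding="utf-8") as f:  # hypothetical file name
    for line in f:
        qid, query = line.rstrip("\n").split("\t", 1)
        queries[qid] = query

# Parse a qrels file in the standard TREC format: "qid Q0 docid relevance".
qrels = {}
with open("ciral-hausa/qrels/qrels.dev.txt", encoding="utf-8") as f:  # hypothetical file name
    for line in f:
        qid, _, docid, relevance = line.split()
        qrels.setdefault(qid, {})[docid] = int(relevance)
```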
## Citation
```
@misc{CiralHfCite,
title = {{CIRAL: A Test Suite for {CLIR} in {A}frican Languages}},
author = {Mofetoluwa Adeyemi and
Akintunde Oladipo and
Xinyu Zhang and
David Alfonso-Hermelo and
Mehdi Rezagholizadeh and
Boxing Chen and
Jimmy Lin},
year = 2023,
url = {https://huggingface.co/datasets/CIRAL/ciral},
urldate = {2023-12-19}
}
```
|
CIRAL/ciral
|
[
"task_categories:text-retrieval",
"language:ha",
"language:so",
"language:sw",
"language:yo",
"license:apache-2.0",
"region:us"
] |
2023-06-12T19:06:09+00:00
|
{"language": ["ha", "so", "sw", "yo"], "license": "apache-2.0", "task_categories": ["text-retrieval"], "mutilinguality": ["multilingual"], "viewer": true}
|
2024-02-15T17:47:33+00:00
|
5b1a1917c7fbb33db978035921c67852a961af54
|
sdmattpotter/sdcc61223
|
[
"license:mit",
"region:us"
] |
2023-06-12T19:08:57+00:00
|
{"license": "mit"}
|
2023-06-12T19:11:13+00:00
|
|
0ae58db35c1b0f0bf562435811e8ef51e34e43eb
|
# Dataset Card for "multinose_test_controlnet_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
killah-t-cell/multinose_test_controlnet_dataset
|
[
"region:us"
] |
2023-06-12T19:48:04+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 411804.0, "num_examples": 9}], "download_size": 0, "dataset_size": 411804.0}}
|
2023-06-12T19:49:03+00:00
|
1da62a047acc141aa0a6e517e13bf42d69824453
|
# Dataset Card for "reddit-ah-dialog-annotations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Deojoandco/reddit-ah-dialog-annotations
|
[
"region:us"
] |
2023-06-12T19:49:12+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "num_comments", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "score", "dtype": "int64"}, {"name": "upvote_ratio", "dtype": "float64"}, {"name": "distinguished", "dtype": "string"}, {"name": "over_18", "dtype": "bool"}, {"name": "created_utc", "dtype": "int64"}, {"name": "comments", "list": [{"name": "body", "dtype": "string"}, {"name": "created_utc", "dtype": "float64"}, {"name": "distinguished", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "permalink", "dtype": "string"}, {"name": "score", "dtype": "int64"}]}, {"name": "best_num_comments", "dtype": "int64"}, {"name": "query", "dtype": "string"}, {"name": "dialog", "dtype": "string"}, {"name": "annotation_success", "dtype": "bool"}, {"name": "annotation_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 33847703, "num_examples": 2921}, {"name": "validation", "num_bytes": 3120682, "num_examples": 293}, {"name": "test", "num_bytes": 3377043, "num_examples": 292}], "download_size": 23040594, "dataset_size": 40345428}}
|
2023-06-12T19:49:38+00:00
|
e9d48a0edf45ec1b3b2128ca182193bf6e2e5404
|
xiaoyaoyou/WMT19
|
[
"license:openrail",
"region:us"
] |
2023-06-12T19:56:39+00:00
|
{"license": "openrail"}
|
2023-06-12T19:56:39+00:00
|
|
bf8e531bac9d19dcdefe9c07cf554dd31e48735e
|
# Dataset Card for "multinose_train_controlnet_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
killah-t-cell/multinose_train_controlnet_dataset
|
[
"region:us"
] |
2023-06-12T20:45:45+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2326065485.613, "num_examples": 44263}], "download_size": 2126832094, "dataset_size": 2326065485.613}}
|
2023-06-12T20:54:38+00:00
|
c53e7e726a9da19a369acd8cd0e046639fd0cbc7
|
Abhimanu/dd
|
[
"license:unknown",
"region:us"
] |
2023-06-12T20:56:46+00:00
|
{"license": "unknown"}
|
2023-06-12T20:56:46+00:00
|
|
e511e44c399829357a8c5a1396ccc9df39fdd517
|
steinhaug/onceUponAtimeInPornVille
|
[
"license:other",
"region:us"
] |
2023-06-12T21:19:56+00:00
|
{"license": "other"}
|
2024-02-17T02:57:13+00:00
|
|
a58347e636d5d51d92ce3a99fa50a99ab4b13610
|
# Dataset Card for "chai-experiment-v0-chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/chai-experiment-v0-chatml
|
[
"region:us"
] |
2023-06-12T21:45:43+00:00
|
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "conversation", "list": [{"name": "content", "dtype": "string"}, {"name": "do_train", "dtype": "bool"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1765802637.0, "num_examples": 322064}], "download_size": 909265481, "dataset_size": 1765802637.0}}
|
2023-06-14T16:22:04+00:00
|
2f14c26626092f0a52559c7fd54ad8b3495de155
|
# Dataset Card for "islamic_art"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
adhamelarabawy/islamic_art
|
[
"region:us"
] |
2023-06-12T22:08:53+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "similarity", "dtype": "float64"}, {"name": "img", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 558665361.6, "num_examples": 1292}, {"name": "test", "num_bytes": 139666340.4, "num_examples": 323}], "download_size": 698157582, "dataset_size": 698331702.0}}
|
2023-06-12T23:45:33+00:00
|
14028d26b1b30b6de45959fbf3a1a33ef53060fd
|
# Dataset Card for "Imagenet1k_validation_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_50000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/Imagenet1k_validation_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_50000
|
[
"region:us"
] |
2023-06-12T22:20:52+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_laion_ViT_H_14_2B_simple_specific_rices", "num_bytes": 21191760, "num_examples": 50000}, {"name": "fewshot_0__Attributes_ViT_L_14_descriptors_text_davinci_003_full_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 22301150, "num_examples": 50000}], "download_size": 16305421, "dataset_size": 43492910}}
|
2023-06-13T08:56:32+00:00
|
a5d16cdf77bbd17627c5bddc04109d3522983059
|
# Dataset Card for "chai-experiment-v1-chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/chai-experiment-v1-chatml
|
[
"region:us"
] |
2023-06-12T22:25:09+00:00
|
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "conversation", "list": [{"name": "content", "dtype": "string"}, {"name": "do_train", "dtype": "bool"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2519356815.0, "num_examples": 499663}], "download_size": 1321137823, "dataset_size": 2519356815.0}}
|
2023-06-14T16:04:30+00:00
|
c903f94b5bc01b835ddf7e53abdacf17b8d6c12d
|
# Dataset Card for "chai-experiment-v3-chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/chai-experiment-v3-chatml
|
[
"region:us"
] |
2023-06-12T23:17:05+00:00
|
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "conversation", "list": [{"name": "content", "dtype": "string"}, {"name": "do_train", "dtype": "bool"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2865527164.0, "num_examples": 670324}], "download_size": 1499912734, "dataset_size": 2865527164.0}}
|
2023-06-14T17:31:30+00:00
|
4ea93d45a24071fe6747a571b3371af741555dba
|
vwxyzjn/lm-human-preferences
|
[
"license:mit",
"region:us"
] |
2023-06-12T23:20:43+00:00
|
{"license": "mit"}
|
2023-09-01T01:02:15+00:00
|
|
b04fbf7e276df1d74d71dd91f443c7a7b4b5f83e
|
# Dataset Card for "ecommerce-faq-chatbot-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dltdojo/ecommerce-faq-chatbot-dataset
|
[
"region:us"
] |
2023-06-13T00:02:44+00:00
|
{"dataset_info": {"features": [{"name": "a_hant", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "q_hant", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28737, "num_examples": 79}], "download_size": 17499, "dataset_size": 28737}}
|
2023-06-13T04:50:52+00:00
|
03f5fde12eecbf283e344db49520b281b91154e6
|
# Dataset Card for "processed_demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ephmecx/processed_demo
|
[
"region:us"
] |
2023-06-13T00:44:35+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "seed", "dtype": "uint32"}, {"name": "step", "dtype": "uint16"}, {"name": "cfg", "dtype": "float32"}, {"name": "sampler", "dtype": "string"}, {"name": "width", "dtype": "uint16"}, {"name": "height", "dtype": "uint16"}, {"name": "user_name", "dtype": "string"}, {"name": "timestamp", "dtype": "timestamp[us, tz=UTC]"}, {"name": "image_nsfw", "dtype": "float32"}, {"name": "prompt_nsfw", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 707995291.0, "num_examples": 1000}], "download_size": 707533020, "dataset_size": 707995291.0}}
|
2023-06-13T00:45:22+00:00
|
765261d6578eec08294352d9e4cc55247f75524b
|
# Dataset Card for "dottamemes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dotta/dottamemes
|
[
"region:us"
] |
2023-06-13T00:59:19+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 19993249.0, "num_examples": 35}], "download_size": 0, "dataset_size": 19993249.0}}
|
2023-06-17T00:19:33+00:00
|
63b289c780988693218cb9fbad3f2c2464a681ef
|
# Dataset Card for "Synthetic-Salt-Luganda-13-6-23"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Sunbird/Synthetic-Salt-Luganda-13-6-23
|
[
"region:us"
] |
2023-06-13T01:00:55+00:00
|
{"dataset_info": {"features": [{"name": "audio", "sequence": {"sequence": "float32"}}, {"name": "sample_rate", "dtype": "int64"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8360315972, "num_examples": 25000}], "download_size": 8282006533, "dataset_size": 8360315972}}
|
2023-06-13T01:05:35+00:00
|
9238a8c0d15306223af9f785df7001e0d10a6f85
|
# Dataset Card for "pixel_glue_mnli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Nadav/pixel_glue_mnli
|
[
"region:us"
] |
2023-06-13T01:11:30+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2"}}}}], "splits": [{"name": "train", "num_bytes": 5503541554.25, "num_examples": 392702}, {"name": "validation", "num_bytes": 278770933.125, "num_examples": 19647}], "download_size": 5641852302, "dataset_size": 5782312487.375}}
|
2023-06-13T01:17:07+00:00
|
109ede6937a5820943158a3355ecd791debe7e90
|
CogniVerse/tpmify
|
[
"license:other",
"region:us"
] |
2023-06-13T02:59:49+00:00
|
{"license": "other"}
|
2023-06-13T03:13:26+00:00
|
|
e4313242a575093f57cc4c9f11eb09fb3bd2eed5
|
# Dataset Card for "Ashaar_diacritized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
arbml/Ashaar_diacritized
|
[
"region:us"
] |
2023-06-13T03:05:39+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1307555.018811609, "num_examples": 23481}, {"name": "test", "num_bytes": 72669.7883203079, "num_examples": 1305}, {"name": "valid", "num_bytes": 72669.7883203079, "num_examples": 1305}], "download_size": 6698907, "dataset_size": 1452894.5954522246}}
|
2023-06-13T03:05:52+00:00
|
52fb0dbf806e5fccc344258fd0640e6296c0858d
|
# Dataset Card for "spanish_attitude"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jorgeortizfuentes/spanish_attitude
|
[
"region:us"
] |
2023-06-13T03:26:20+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "prediction", "dtype": "null"}, {"name": "prediction_agent", "dtype": "null"}, {"name": "annotation", "list": [{"name": "end", "dtype": "int64"}, {"name": "label", "dtype": "string"}, {"name": "start", "dtype": "int64"}]}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "annotated", "struct": [{"name": "mentions", "list": [{"name": "capitalness", "dtype": "string"}, {"name": "chars_length", "dtype": "int64"}, {"name": "density", "dtype": "float64"}, {"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "tokens_length", "dtype": "int64"}, {"name": "value", "dtype": "string"}]}, {"name": "tags", "list": [{"name": "tag", "dtype": "string"}, {"name": "value", "dtype": "string"}]}]}, {"name": "predicted", "struct": [{"name": "mentions", "sequence": "null"}, {"name": "tags", "sequence": "null"}]}, {"name": "text_length", "dtype": "int64"}, {"name": "tokens", "list": [{"name": "capitalness", "dtype": "string"}, {"name": "char_end", "dtype": "int64"}, {"name": "char_start", "dtype": "int64"}, {"name": "custom", "dtype": "null"}, {"name": "idx", "dtype": "int64"}, {"name": "length", "dtype": "int64"}, {"name": "score", "dtype": "null"}, {"name": "tag", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "tokens_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 3791404, "num_examples": 801}], "download_size": 956149, "dataset_size": 3791404}}
|
2023-06-13T03:26:27+00:00
|
60ba07440f6662eedb7fe9993739eb73853a59b8
|
# Dataset Card for Novelupdates Webnovels
### Dataset Summary
This dataset contains information about webnovels from Novelupdates, a popular webnovel platform. It includes details such as novel ID, URL, title, associated names, cover image URL, show type, genres, tags, description, related series, recommendations, recommendation lists, rating, language, authors, artists, year, status, licensing information, translation status, publishers, release frequency, rankings, total reading list rank, and chapters.
### Supported Tasks and Leaderboards
The dataset can be used for various tasks such as text classification, zero-shot classification, and feature extraction. It currently does not have an established leaderboard.
### Languages
The dataset is primarily in English.
## Dataset Structure
### Data Instances
The dataset contains 14,713 data instances.
### Data Fields
The dataset includes the following fields:
- novel_id: integer
- url: string
- title: string
- associated_names: list of strings
- img_url: string
- showtype: string
- genres: list of strings
- tags: list of strings
- description: string
- related_series: struct
  - related_series: list of structs
    - title: string
    - url: string
  - total: integer
- recommendations: struct
  - recommendations: list of structs
    - recommended_user_count: integer
    - title: string
    - url: string
  - total: integer
- recommendation_lists: struct
  - list: list of structs
    - title: string
    - url: string
  - total: integer
- rating: string
- language: string
- authors: list of strings
- artists: list of strings
- year: string
- status_coo: string
- licensed: string
- translated: string
- publishers: list of strings
- en_pubs: list of strings
- release_frequency: string
- weekly_rank: string
- monthly_rank: string
- all_time_rank: string
- monthly_rank_reading_list: string
- all_time_rank_reading_list: string
- total_reading_list_rank: string
- chapters: struct
  - chapters: list of structs
    - title: string
    - url: string
  - total: integer
### Data Splits
The dataset includes two splits:
- Train: 11.8K examples
- Test: 2.94K examples
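A minimal loading sketch, assuming the Hub repo id `shhossain/webnovels` given at the end of this entry and the splits listed above:
```python
from datasets import load_dataset

novels = load_dataset("shhossain/webnovels")

sample = novels["train"][0]
# A few of the fields listed under "Data Fields".
print(sample["title"], sample["year"], sample["rating"])
print(sample["genres"])
print(sample["description"][:200])
```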
## Dataset Creation
### Curation Rationale
The dataset was curated to provide a comprehensive collection of webnovel information from Novelupdates for various text analysis tasks.
### Source Data
#### Initial Data Collection and Normalization
The initial data was collected from the Novelupdates website and normalized for consistency and structure.
#### Who are the source language producers?
The source language producers are the authors and publishers of the webnovels.
### Annotations
#### Annotation process
The dataset does not contain explicit annotations. It consists of the information available on the Novelupdates website.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
The dataset does not include any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
shhossain/webnovels
|
[
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"task_categories:feature-extraction",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] |
2023-06-13T04:06:28+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "zero-shot-classification", "feature-extraction"], "pretty_name": "Novelupdates Dataset", "dataset_info": {"features": [{"name": "novel_id", "dtype": "int64"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "associated_names", "sequence": "string"}, {"name": "img_url", "dtype": "string"}, {"name": "showtype", "dtype": "string"}, {"name": "genres", "sequence": "string"}, {"name": "tags", "sequence": "string"}, {"name": "description", "dtype": "string"}, {"name": "related_series", "struct": [{"name": "related_series", "list": [{"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "total", "dtype": "int64"}]}, {"name": "recommendations", "struct": [{"name": "recomendations", "list": [{"name": "recommended_user_count", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "total", "dtype": "int64"}]}, {"name": "recommendation_lists", "struct": [{"name": "list", "list": [{"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "total", "dtype": "int64"}]}, {"name": "rating", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "authors", "sequence": "string"}, {"name": "artists", "sequence": "string"}, {"name": "year", "dtype": "string"}, {"name": "status_coo", "dtype": "string"}, {"name": "licensed", "dtype": "string"}, {"name": "translated", "dtype": "string"}, {"name": "publishers", "sequence": "string"}, {"name": "en_pubs", "sequence": "string"}, {"name": "release_frequency", "dtype": "string"}, {"name": "weekly_rank", "dtype": "string"}, {"name": "monthly_rank", "dtype": "string"}, {"name": "all_time_rank", "dtype": "string"}, {"name": "monthly_rank_reading_list", "dtype": "string"}, {"name": "all_time_rank_reading_list", "dtype": "string"}, {"name": "total_reading_list_rank", "dtype": "string"}, {"name": "chapters", "struct": [{"name": "chapters", "list": [{"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "total", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 58948539.85115204, "num_examples": 11770}, {"name": "test", "num_bytes": 14739639.148847958, "num_examples": 2943}], "download_size": 22367283, "dataset_size": 73688179.0}}
|
2023-06-15T14:35:51+00:00
|
f5deaaed1096f2df9f1deb9dbff642fcf5f934ec
|
The official repository of the paper "Check Me If You Can: Detecting ChatGPT-Generated Academic Writing using CheckGPT".
|
julianzy/GPABenchmark
|
[
"region:us"
] |
2023-06-13T04:09:53+00:00
|
{}
|
2023-06-13T04:21:59+00:00
|
7080f15d095034953bee918ada80217df4e3cfb3
|
# Summary
`databricks-dolly-15k-uk` is an open source dataset based on [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) instruction-following dataset, but machine translated using [facebook/m2m100_1.2B](https://huggingface.co/facebook/m2m100_1.2B) model.
Tasks covered include brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
Expect this dataset not to be grammatically correct and to have the obvious pitfalls of machine translation.
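A minimal sketch of the kind of machine-translation step used to build this dataset, based on the `facebook/m2m100_1.2B` model named above; the actual translation script is not part of this card, so treat the snippet as illustrative:
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_1.2B")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")

# Translate an English instruction into Ukrainian.
tokenizer.src_lang = "en"
encoded = tokenizer("Give three tips for staying healthy.", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("uk"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```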
<details>
<summary>Original Summary</summary>
# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several
of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification,
closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Ukrainian
Version: 1.0
**Owner: Databricks, Inc.**
# Dataset Overview
`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language
models to exhibit the magical interactivity of ChatGPT.
Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including
the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using
information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly
instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the
types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors.
They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.
For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context`
field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.
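A minimal sketch, not part of the original card, of removing those bracketed citation numbers from the `context` field:
```python
import re

def strip_wiki_citations(text: str) -> str:
    """Remove bracketed Wikipedia citation markers such as [42]."""
    return re.sub(r"\[\d+\]", "", text)

print(strip_wiki_citations("Kyiv is the capital of Ukraine.[1][42]"))
# -> "Kyiv is the capital of Ukraine."
```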
# Intended Uses
While immediately valuable for instruction fine tuning large language models, as a corpus of human-generated instruction prompts,
this dataset also presents a valuable opportunity for synthetic data generation in the methods outlined in the Self-Instruct paper.
For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a
corpus of millions of examples of instructions in each of the respective InstructGPT categories.
Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to
restate each prompt or short responses, with the resulting text associated to the respective ground-truth sample. Such an approach might
provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from
these synthetic datasets.
# Dataset
## Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source,
human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT.
Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including
academic or commercial applications.
## Sources
- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization)
contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the
target passages.
## Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical
of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of
rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.
The annotation guidelines for each of the categories are as follows:
- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.
## Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private person’s personal identifiers or sensitive information.
## Language
American English
# Known Limitations
- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees
# License/Attribution
**Copyright (2023) Databricks, Inc.**
This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/
Copyright © Wikipedia editors and contributors.
</details>
|
robinhad/databricks-dolly-15k-uk
|
[
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:uk",
"license:cc-by-sa-3.0",
"arxiv:2203.02155",
"region:us"
] |
2023-06-13T04:36:45+00:00
|
{"language": ["uk"], "license": "cc-by-sa-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "summarization"]}
|
2023-06-13T04:43:34+00:00
|
2e0f20cf358b149ca21cd21d1e3d045d0360cc3c
|
EJinHF/SQuALITY_retrieve
|
[
"task_categories:summarization",
"language:en",
"region:us"
] |
2023-06-13T04:42:17+00:00
|
{"language": ["en"], "task_categories": ["summarization"]}
|
2023-06-13T04:51:19+00:00
|
|
f2da6014e9d03181d3adac961ad96562b949b47a
|
musiki/dwset
|
[
"license:other",
"region:us"
] |
2023-06-13T05:05:54+00:00
|
{"license": "other"}
|
2023-06-13T05:06:27+00:00
|
|
ec160a6405383a314ddb598bf050a2f1c145d773
|
This dataset is for a LLaMA-based chemistry condition generation model.
|
yuyuc/chem-llama-instruct
|
[
"license:openrail",
"region:us"
] |
2023-06-13T06:27:00+00:00
|
{"license": "openrail"}
|
2023-06-13T06:51:48+00:00
|
741d504ce982199eebb3b5d6a6348063bf66a01f
|
# Dataset Card for "KS_Ashare_announce_NER"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zhili1990/KS_Ashare_announce_NER
|
[
"region:us"
] |
2023-06-13T06:48:10+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "entity_list", "list": [{"name": "type", "sequence": "string"}, {"name": "argument", "dtype": "string"}, {"name": "start", "dtype": "int64"}, {"name": "end", "dtype": "int64"}]}, {"name": "kstext", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6730713, "num_examples": 9277}], "download_size": 2177402, "dataset_size": 6730713}}
|
2023-06-13T06:49:02+00:00
|
f93d9875abd0dcafc920fe2932aeb7683b85cd0d
|
BerMaker/test
|
[
"task_categories:text-classification",
"size_categories:n<1K",
"license:apache-2.0",
"code",
"art",
"region:us"
] |
2023-06-13T07:12:00+00:00
|
{"license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "tags": ["code", "art"]}
|
2023-06-13T08:44:54+00:00
|
|
ac513b0a0d0a917d7df3229952b220d158b56c20
|
# FaceSwap Extension - Automatic 1111 - Proof of Concept
This repository contains a quick and dirty proof of concept inspired by [roop](https://github.com/s0md3v/roop) and using [insightface](https://github.com/deepinsight/insightface), allowing you to swap faces in images generated by stable diffusion.
The main objective of this extension is to enable face swapping for single images in stable diffusion.
To ensure compatibility, this extension currently runs only on CPU. However, it can be easily ported to GPU for improved performance.

**Ethical viewpoint:** The primary goal of this extension is to ensure consistency in generated images through the capability of face swapping. It is essential to clarify that this extension does not incorporate censorship functionalities. Although censorship mechanisms can be implemented (as demonstrated by roop), they inherently remain vulnerable to bypassing when users have access to the source code. Consequently, it is crucial to exercise responsible usage of this extension and abstain from employing it for malicious purposes. We strongly emphasize the ethical application of this tool, urging respect for the privacy and consent of individuals when swapping faces in images. **Engaging in activities that may cause harm, violate privacy rights, or infringe upon the well-being of others is strictly discouraged.**
Furthermore, it is equally important to raise awareness among the general public about the existence of such tools and the ease with which deepfakes can be generated. As the technology advances, it becomes increasingly crucial for individuals to exercise critical thinking and skepticism when encountering media content. By fostering a culture of media literacy, we can collectively mitigate the potential negative impacts associated with the misuse of these tools and promote responsible use in the digital realm.
**In the event of violation of the legal and ethical requirements of the user's country or region, this code repository is exempt from liability**
## Install
To install the extension, follow these steps:
+ Clone the repository to your automatic 1111 extensions directory.
+ Download the pre-trained model used by "Roop" and place it in the models directory of this extension (/stable-diffusion-webui/extensions/sd-webui-faceswap/models/ or /stable-diffusion-webui/models/FaceSwap). The model file required is "inswapper_128.onnx". Mirrors are given in the roop project [installation guide](https://github.com/s0md3v/roop/wiki/1.-Installation).
On Windows, Microsoft Visual C++ 14.0 or greater is required. [During the install, make sure to include the Python and C++ packages.](https://github.com/s0md3v/roop/issues/153)
The inswapper_128.onnx model I use has the following sha1sum : 17a64851eaefd55ea597ee41e5c18409754244c5
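To check the file you downloaded against that checksum, here is a small Python sketch (the model path below is an assumption — adjust it to wherever you placed the file):
```python
import hashlib

# Adjust this path to the location where you placed the model
model_path = "stable-diffusion-webui/models/FaceSwap/inswapper_128.onnx"
expected = "17a64851eaefd55ea597ee41e5c18409754244c5"

sha1 = hashlib.sha1()
with open(model_path, "rb") as f:
    # Hash the file in chunks so it never has to fit in memory at once
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha1.update(chunk)

print("OK" if sha1.hexdigest() == expected else "Checksum mismatch!")
```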
**Use of the models must comply with the terms of their respective licenses (see [insightface repository](https://github.com/deepinsight/insightface/tree/master/python-package))**. No model will be directly provided or hosted here.
## Usage
To use the FaceSwap extension, follow these instructions:
1. In the face swap box, import an image containing a face.
2. Click "Activate" before generating.
3. Optionally, select the face number you wish to swap (from right to left) if multiple faces are detected in the image.
4. The resulting swapped face will be displayed.
5. If the quality is not satisfactory (and it is often quite average), you can try using the "Restore Face" feature or explore additional options in the "Extra" tab for further improvement. You can also select an upscaler from the menu. This will activate CodeFormer and the chosen upscaler (scale=1). The result may be satisfactory, but gives less control than the extra tab.
### Img2Img :
You can choose to activate the swap on the source image, on the generated image, or on both. Activating it on the source image allows you to start from a given base and apply the diffusion process to it.
Inpainting should work but only the masked part will be swapped.
## Credits
+ The developers of roop for their great work.
+ deepinsight for their insightface project, which offers a well-crafted library and models that have greatly enhanced the capabilities of this project.
|
ngocuong/Ghepmat
|
[
"region:us"
] |
2023-06-13T07:24:41+00:00
|
{}
|
2023-06-13T07:27:48+00:00
|
c58fc4125aeac5f16f40cc10d26307b3274f1b8c
|
Fast Lean Pro is a naturally occurring fatty acid found in meat and dairy products. This supplement is gaining popularity and has become widely regarded as a contender for the weight-loss miracle pill. Animal studies have indicated that ALA can reduce the activity of the enzyme AMP-activated protein kinase , located in your brain’s hypothalamus.
A whole lot of processes work together to stimulate weight loss. Losing weight could be done through exercises and some other lifestyle and dietary changes, but you fast-track the weight loss action with these weight loss supplements. Trust me, with bad cholesterol regulation, you are at the starting point of blasting away your fat. It is a significant source of daily dietary protein requirements.
Although green tea extract is usually well tolerated, it can cause stomach pain, constipation, and nausea. However, a more recent review of randomized control trials indicated that glucomannan did not appear to result in significant weight loss. Studies appear to be conflicting on whether glucomannan can aid in weight loss, however. It works by absorbing water in the gut, leading to a feeling of fullness that may prompt people to eat less.
Fast Lean Pro is low in calories, and high in antioxidants such as Epigallocatechin Gallate . Scientists at the University of Colorado found that the EGCG content in Fast Lean Pro is 137 times more than Chinese green tea. These antioxidants can help flush out toxins, boost immunity, and reduce the body’s inflammation, which helps prevent weight gain and accelerates weight loss. My general opinion on supplementation for weight loss is that one need not focus on substances or external things in order to achieve weight loss. Read about the 3-step plan, along with other science-backed weight loss tips, here.
Fast Lean Pro is an extract from a plant in the mint family, claimed to be effective for losing weight. According to another review study from 2012, Fast Lean Pro can make you lose about 3 lbs (1.3 kg) of weight, compared to a dummy pill .
As mentioned on the official website, this all-natural supplement actively flushes out unhealthy toxins and boosts liver health to ensure a natural and long-term weight loss solution. It also provides additional health benefits to keep users feeling fresh and energetic. As per the official website of Fast Lean Pro, this supplement contains the purest and most natural ingredients, all bottled together after excessive research to ensure effective results. The supplement's ingredients boost the body's metabolism and help improve organ functionality, especially the liver, which is the most affected organ due to fat.
https://www.supplementz.org/fast-lean-pro/
https://www.supplementz.org/lean-gene-reviews/
https://www.supplementz.org/lipoxin-comentarios/
https://www.supplementz.org/testo-ultra-precio/
https://www.supplementz.org/exipure-reviews/
https://www.supplementz.org/
Fast Lean Pro,
Fast Lean Pro Reviews,
Fast Lean Pro Ingredients,
Fast Lean Pro Benefits,
Fast Lean Pro Side Effects,
Fast Lean Pro Price,
Fast Lean Pro Where to Buy,
Fast Lean Pro Official Website
|
fastleanpro/FastLeanPro
|
[
"region:us"
] |
2023-06-13T07:47:18+00:00
|
{}
|
2023-06-13T07:49:33+00:00
|
f3f59bd298153c2079884b5cea9755f4a469d62d
|
<div align="center">
<img width="640" alt="manot/pothole-segmentation" src="https://huggingface.co/datasets/manot/pothole-segmentation/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['potholes', 'object', 'pothole', 'potholes']
```
### Number of Images
```json
{'valid': 157, 'test': 80, 'train': 582}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("manot/pothole-segmentation", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/abdulmohsen-fahad-f7pdw/road-damage-xvt2d/dataset/3](https://universe.roboflow.com/abdulmohsen-fahad-f7pdw/road-damage-xvt2d/dataset/3?ref=roboflow2huggingface)
### Citation
```
@misc{ road-damage-xvt2d_dataset,
title = { road damage Dataset },
type = { Open Source Dataset },
author = { abdulmohsen fahad },
howpublished = { \\url{ https://universe.roboflow.com/abdulmohsen-fahad-f7pdw/road-damage-xvt2d } },
url = { https://universe.roboflow.com/abdulmohsen-fahad-f7pdw/road-damage-xvt2d },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { jun },
note = { visited on 2023-06-13 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on June 13, 2023 at 8:47 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 819 images.
Potholes are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
|
manot/pothole-segmentation
|
[
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"region:us"
] |
2023-06-13T07:47:23+00:00
|
{"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface"]}
|
2023-06-13T09:20:28+00:00
|
483ae25dd99d31e1b6e0b4e545c503fa449a0549
|
# OK-VQA in multilang
These are Google-translated versions of [OK-VQA](https://okvqa.allenai.org/index.html) in multiple languages. Each language version lives in its own folder.
The process of building the Vietnamese version is as follows:
- In `en/` folder,
- From [OK-VQA](https://okvqa.allenai.org/index.html), obtain all json files: `mscoco_train2014_annotations.json`, `mscoco_val2014_annotations.json`, `OpenEnded_mscoco_train2014_questions.json`, `OpenEnded_mscoco_val2014_questions.json`.
- By using [set data structure](https://docs.python.org/3/tutorial/datastructures.html#sets), generate txt files of unique text: `train_answer_list.txt`, `train_question_list.txt`, `val_answer_list.txt`, `val_question_list.txt`.
- In `vi/` folder,
- By translating the four txt files from `en/`, generate `train_answer_list.jsonl`, `train_question_list.jsonl`, `val_answer_list.jsonl`, `val_question_list.jsonl`. In each entry of each file, the key is the original English text and the value is the translated Vietnamese text.
To load the Vietnamese version in your code, you need the original English version. Then use the English text as the key to retrieve the Vietnamese value from the jsonl files, as shown in the sketch below. I provide both the English and Vietnamese versions.
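A rough sketch of that lookup (assuming each line of the jsonl files is a small JSON object mapping English text to its Vietnamese translation):
```python
import json

def load_translation_map(jsonl_path):
    """Build an English -> Vietnamese dict from one of the vi/*.jsonl files."""
    mapping = {}
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            # each entry maps the original English text to its translation
            mapping.update(json.loads(line))
    return mapping

questions_vi = load_translation_map("vi/train_question_list.jsonl")

# Hypothetical English question taken from the original OK-VQA annotations
english_question = "What is the man holding?"
print(questions_vi.get(english_question, "<no translation found>"))
```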
Other languages (if added) shall follow the same process.
Current languages:
- en
- vi
There will be more if I have time.
Please refer to [this code](https://github.com/dinhanhx/velvet/blob/main/scripts/apply_translate_okvqa.py) to apply translation.
|
dinhanhx/OK-VQA-multilang
|
[
"task_categories:visual-question-answering",
"task_ids:visual-question-answering",
"language:vi",
"language:en",
"license:unknown",
"OK-VQA",
"OK-VQA-vi",
"region:us"
] |
2023-06-13T07:50:30+00:00
|
{"language": ["vi", "en"], "license": "unknown", "task_categories": ["visual-question-answering"], "task_ids": ["visual-question-answering"], "pretty_name": "OK-VQA in multilang", "source-datasets": ["OK-VQA", "COCO"], "tags": ["OK-VQA", "OK-VQA-vi"]}
|
2023-09-21T09:27:44+00:00
|
135a4107aa314ee3d9d26422950180943c077bea
|
# Dataset Card for "earrings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
imnaveenk/earrings
|
[
"region:us"
] |
2023-06-13T07:57:22+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 107545898.846, "num_examples": 1626}], "download_size": 91556390, "dataset_size": 107545898.846}}
|
2023-06-14T03:50:46+00:00
|
598ebdd332d97ce226b2b3b52ba7a411cc9959a6
|
GautamR/grievance_agri
|
[
"license:apache-2.0",
"region:us"
] |
2023-06-13T08:01:12+00:00
|
{"license": "apache-2.0"}
|
2023-09-12T12:25:57+00:00
|
|
c96ac0c31114efca5c00d895a8bd0ed371381f7d
|
# Grocery Shelves Dataset
## Facing is the process of arranging products on shelves and counters.
The dataset consists of labeled photographs of grocery store shelves.
The Grocery Shelves Dataset can be used to analyze and optimize product placement data, develop strategies for increasing product visibility, maximize the effectiveness of product placements, and increase sales.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=grocery-shelves-dataset) to discuss your requirements, learn about the price and buy the dataset.

# Dataset structure
- **img** - contains the original images of grocery store shelves
- **labels** - includes polyline labeling for the original images
- **annotations.xml** - contains coordinates of the polylines and labels, created for the original photo
# Data Format
Each image from `img` folder is accompanied by an XML-annotation in the `annotations.xml` file indicating the coordinates of the polylines for product placement. For each point, the x and y coordinates are provided.
### Attributes
- **is_flipped** - the product position (*true* if the product is flipped)
- **is_facing** - the product visibility (*true* if the product's cover is turned towards us and can be clearly seen)
# Example of XML file structure
.png?generation=1686606438563238&alt=media)
# Product Facing might be made in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=grocery-shelves-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
|
TrainingDataPro/grocery-shelves-dataset
|
[
"task_categories:image-segmentation",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"finance",
"region:us"
] |
2023-06-13T08:13:48+00:00
|
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["image-segmentation"], "tags": ["code", "finance"], "dataset_info": {"features": [{"name": "image_id", "dtype": "uint32"}, {"name": "image", "dtype": "image"}, {"name": "mask", "dtype": "image"}, {"name": "shapes", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37976724, "num_examples": 16}], "download_size": 37970716, "dataset_size": 37976724}}
|
2023-09-14T15:53:45+00:00
|
124270bd1633d7f67779691cd4314f863e09b0a0
|
# Dataset Card for "common_voice_13_0_validated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
RikRaes/common_voice_13_0_validated
|
[
"region:us"
] |
2023-06-13T08:24:24+00:00
|
{"dataset_info": {"features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accents", "dtype": "string"}, {"name": "variant", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "validated", "num_bytes": 3134671952.746, "num_examples": 86798}], "download_size": 2624065513, "dataset_size": 3134671952.746}}
|
2023-06-13T12:38:37+00:00
|
755089b9bb29d269ef2816c1db731dba9a592aa0
|
# Dataset Card for "dusha_extra_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
xbgoose/dusha_extra_data
|
[
"region:us"
] |
2023-06-13T08:36:38+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "emotion", "dtype": "string"}, {"name": "duration", "dtype": "float64"}, {"name": "speaker_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23518537494.784, "num_examples": 150352}, {"name": "test", "num_bytes": 2213372251.79, "num_examples": 14035}], "download_size": 21510259521, "dataset_size": 25731909746.574}}
|
2023-06-13T11:57:04+00:00
|
73886a2ff16f45ebb5fdb6ff2e0367e1043cca2f
|
Used for ImageClassificationSD.
Uses ZIP format.
LPX Modular.
Basic images, ranging from nano to HUGE models.
Models may also be classified by version.
|
UncoverAI/ImagesAnimal
|
[
"biology",
"region:us"
] |
2023-06-13T08:38:47+00:00
|
{"pretty_name": "TM ML", "tags": ["biology"]}
|
2023-07-05T01:17:34+00:00
|
a2979735ed1cbbe1433f18bd4cf6671d07543c08
|
# Dataset Card for "6d70c905"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/6d70c905
|
[
"region:us"
] |
2023-06-13T09:04:18+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 178, "num_examples": 10}], "download_size": 1336, "dataset_size": 178}}
|
2023-06-13T09:04:20+00:00
|
be6e0294ffdd525681d79af20fa95f2e775586f0
|
deepghs/anime_ch_hair_color
|
[
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:mit",
"art",
"region:us"
] |
2023-06-13T09:10:50+00:00
|
{"license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["image-classification"], "tags": ["art"]}
|
2023-06-14T02:45:42+00:00
|
|
258f73fb7f1473669abcb7e54e6d2d468c55d99e
|
# Dataset Card for "materials-blip"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
samurai-architects/materials-blip
|
[
"region:us"
] |
2023-06-13T09:12:43+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32878470.0, "num_examples": 10}], "download_size": 32881580, "dataset_size": 32878470.0}}
|
2023-06-13T09:12:49+00:00
|
1d3d1b2311d51f14e8059d8336573fa1aefa54c7
|
# Otter UBC Dataset Card
UBC is a dataset comprising entities (Proteins/Drugs) from Uniprot (U), BindingDB (B) and ChEMBL (C). It contains 6,207,654 triples.
<div align="center">
<img src="https://raw.githubusercontent.com/IBM/otter-knowledge/main/assets/neurips_ubc.png" alt="Overview of the creation of UBC"/>
</div>
## Dataset details
#### Uniprot
Uniprot comprises 573,227 proteins from SwissProt, which is the subset of manually curated entries within UniProt, including attributes with different modalities like the sequence (567,483 of them), full name, organism, protein family, description of its function, catalytic activity, pathways and its length. There are 38,665 edges of type target_of from Uniprot ids to both ChEMBL and Drugbank ids, and 196,133 interactant edges between Uniprot protein ids.
#### BindingDB
BindingDB consists of 2,656,221 data points, involving 1.2 million compounds and 9,000 targets. Instead of utilizing the affinity score, we generate a triple for each combination of drugs and proteins. In order to prevent any data leakage, we eliminate overlapping triples with the TDC DTI dataset. As a result, the dataset concludes with a total of 2,232,392 triples.
#### ChEMBL
ChEMBL comprises drug-like bioactive molecules; 10,261 ChEMBL ids with their corresponding SMILES were downloaded from OpenTargets, of which 7,610 have a *sameAs* link to Drugbank id molecules.
<div align="center">
<img src="https://raw.githubusercontent.com/IBM/otter-knowledge/main/assets/ubckg_example.jpg" alt="Example of UBC"/>
</div>
**Original datasets:**
- Uniprot: The UniProt Consortium. UniProt: the Universal Protein Knowledgebase in 2023. Nucleic Acids Research, 51(D1):D523–D531, 11 2022. ISSN 0305-1048. doi: 10.1093/nar/gkac1052. URL https://doi.org/10.1093/nar/gkac1052
- BindingDB: Tiqing Liu, Yuhmei Lin, Xin Wen, Robert N Jorissen, and Michael K Gilson. Bindingdb: a web-accessible database of experimentally determined protein–ligand binding affinities. Nucleic acids research, 35(suppl_1):D198–D201, 2007.
- ChemBL: Anna Gaulton, Louisa J. Bellis, A. Patricia Bento, Jon Chambers, Mark Davies, Anne Hersey, Yvonne Light, Shaun McGlinchey, David Michalovich, Bissan Al-Lazikani, and John P. Overington. ChEMBL: a large-scale bioactivity database for drug discovery. Nucleic Acids Research, 40(D1):D1100–D1107, 09 2011. ISSN 0305-1048. doi: 10.1093/nar/gkr777. URL https://doi.org/10.1093/nar/gkr777
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the dataset:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
**Models trained on Otter UBC**
- [ibm/otter_ubc_classifier](https://huggingface.co/ibm/otter_ubc_classifier)
- [ibm/otter_ubc_distmult](https://huggingface.co/ibm/otter_ubc_distmult)
- [ibm/otter_ubc_transe](https://huggingface.co/ibm/otter_ubc_transe)
|
ibm/otter_uniprot_bindingdb_chembl
|
[
"license:mit",
"arxiv:2306.12802",
"region:us"
] |
2023-06-13T09:17:15+00:00
|
{"license": "mit"}
|
2023-06-26T07:09:52+00:00
|
27223b44fa3119aab67a39b65358a453d88912b9
|
# Otter UB Dataset Card
UB is a dataset comprising entities (Proteins/Drugs) from Uniprot (U) and BindingDB (B).
## Dataset details
#### Uniprot
Uniprot comprises 573,227 proteins from SwissProt, which is the subset of manually curated entries within UniProt, including attributes with different modalities like the sequence (567,483 of them), full name, organism, protein family, description of its function, catalytic activity, pathways and its length. There are 38,665 edges of type target_of from Uniprot ids to both ChEMBL and Drugbank ids, and 196,133 interactant edges between Uniprot protein ids.
#### BindingDB
BindingDB consists of 2,656,221 data points, involving 1.2 million compounds and 9,000 targets. Instead of utilizing the affinity score, we generate a triple for each combination of drugs and proteins. In order to prevent any data leakage, we eliminate overlapping triples with the TDC DTI dataset. As a result, the dataset concludes with a total of 2,232,392 triples.
**Original datasets:**
- Uniprot: The UniProt Consortium. UniProt: the Universal Protein Knowledgebase in 2023. Nucleic Acids Research, 51(D1):D523–D531, 11 2022. ISSN 0305-1048. doi: 10.1093/nar/gkac1052. URL https://doi.org/10.1093/nar/gkac1052
- BindingDB: Tiqing Liu, Yuhmei Lin, Xin Wen, Robert N Jorissen, and Michael K Gilson. Bindingdb: a web-accessible database of experimentally determined protein–ligand binding affinities. Nucleic acids research, 35(suppl_1):D198–D201, 2007.
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the dataset:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
|
ibm/otter_uniprot_bindingdb
|
[
"license:mit",
"arxiv:2306.12802",
"region:us"
] |
2023-06-13T09:36:47+00:00
|
{"license": "mit"}
|
2023-06-26T07:10:09+00:00
|
8b5f4b07fa29cb046d75701e3aa7b9fa74471c84
|
# Otter PrimeKG Dataset Card
The Otter PrimeKG dataset contains 12,757,257 triples with Proteins, Drugs and Diseases. It includes protein sequences, SMILES and text.
## Dataset details
#### PrimeKG
PrimeKG (the Precision Medicine Knowledge Graph) integrates 20 biomedical resources; it describes 17,080 diseases with 4 million relationships. PrimeKG includes nodes describing Gene/Proteins (29,786) and Drugs (7,957 nodes). The Multimodal Knowledge Graph (MKG) that we built from PrimeKG contains 13 modalities and 12,757,300 edges (154,130 data properties and 12,603,170 object properties), including 642,150 edges describing interactions between proteins, 25,653 edges describing drug-protein interactions, and 2,672,628 describing interactions between drugs.
**Original dataset:**
- [GitHub Repo](https://zitniklab.hms.harvard.edu/projects/PrimeKG)
- Citation: Chandak, P., Huang, K. & Zitnik, M. Building a knowledge graph to enable precision medicine. Sci Data 10, 67 (2023). https://doi.org/10.1038/s41597-023-01960-3
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the dataset:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
**Models trained on Otter PrimeKG**
- [ibm/otter_primekg_classifier](https://huggingface.co/ibm/otter_primekg_classifier)
- [ibm/otter_primekg_distmult](https://huggingface.co/ibm/otter_primekg_distmult)
- [ibm/otter_primekg_transe](https://huggingface.co/ibm/otter_primekg_transe)
|
ibm/otter_primekg
|
[
"license:mit",
"arxiv:2306.12802",
"region:us"
] |
2023-06-13T09:39:12+00:00
|
{"license": "mit"}
|
2023-06-26T07:09:40+00:00
|
79c3cfa2a9da894e1ca84548f8963324404dfbdb
|
# Dataset Card for "databricks-dolly-15k-curated-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mrm8488/databricks-dolly-15k-curated-es
|
[
"region:us"
] |
2023-06-13T09:42:39+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "instruction_original_en", "dtype": "string"}, {"name": "context_original_en", "dtype": "string"}, {"name": "response_original_en", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "es", "num_bytes": 25902709, "num_examples": 15015}], "download_size": 16490137, "dataset_size": 25902709}}
|
2023-06-13T09:42:43+00:00
|
a76433f3aa0daa3bc35367e804f5af97568773a0
|
# Dataset Card for "DreamBook_Guanaco_Format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
n3rd0/DreamBook_Guanaco_Format
|
[
"region:us"
] |
2023-06-13T09:43:08+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2514729, "num_examples": 8548}, {"name": "test", "num_bytes": 301023, "num_examples": 949}], "download_size": 1066863, "dataset_size": 2815752}}
|
2023-06-13T09:43:33+00:00
|
9f8efac3bdfe1bdc0d3c1d997726bdf3056d886c
|
# Otter DUDe Dataset Card
Otter DUDe includes 1,452,568 instances of drug-target interactions.
## Dataset details
#### DUDe
DUDe comprises a collection of 22,886 active compounds and their corresponding affinities towards 102 targets. For our study, we utilized a preprocessed version of the DUDe, which includes 1,452,568 instances of drug-target interactions. To prevent any data leakage, we eliminated the negative interactions and the overlapping triples with the TDC DTI dataset. As a result, we were left with a total of 40,216 drug-target interaction pairs.
**Original dataset:**
- Citation: Samuel Sledzieski, Rohit Singh, Lenore Cowen, and Bonnie Berger. Adapting protein language models for rapid dti prediction. bioRxiv, pages 2022–11, 2022
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the dataset:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
**Models trained on Otter DUDe**
- [ibm/otter_dude_classifier](https://huggingface.co/ibm/otter_dude_classifier)
- [ibm/otter_dude_distmult](https://huggingface.co/ibm/otter_dude_distmult)
- [ibm/otter_dude_transe](https://huggingface.co/ibm/otter_dude_transe)
|
ibm/otter_dude
|
[
"license:mit",
"arxiv:2306.12802",
"region:us"
] |
2023-06-13T09:46:48+00:00
|
{"license": "mit"}
|
2023-06-26T07:10:15+00:00
|
345fe50d83a6b6a13f0fca1a597be2fd48bfb37a
|
ml-projects/clickbait-ml_dataset
|
[
"license:openrail",
"region:us"
] |
2023-06-13T10:14:26+00:00
|
{"license": "openrail"}
|
2023-06-13T10:35:03+00:00
|
|
5fc46c6101f259c4e9343a26257c7d5d310d6257
|
deepghs/anime_ch_eye_color
|
[
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:mit",
"art",
"region:us"
] |
2023-06-13T10:22:05+00:00
|
{"license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["image-classification"], "tags": ["art"]}
|
2023-06-14T03:05:54+00:00
|
|
471e57016c12c4a59599be12b224ba7071596994
|
# Dataset Card for Common Voice Corpus 6.1
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:[email protected])
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 9283 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 7335 validated hours in 60 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Assamese, Basque, Breton, Catalan, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, Finnish, French, Frisian, Georgian, German, Greek, Hakha Chin, Hindi, Hungarian, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Lithuanian, Luganda, Maltese, Mongolian, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Slovenian, Sorbian, Upper, Spanish, Swedish, Tamil, Tatar, Thai, Turkish, Ukrainian, Vietnamese, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_6_1", "en", use_auth_token=True)
def prepare_dataset(batch):
    """Function to preprocess the dataset with the .map method"""
    transcription = batch["sentence"]
    if transcription.startswith('"') and transcription.endswith('"'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]
    if transcription[-1] not in [".", "?", "!"]:
        # append a full-stop to sentences that do not end in punctuation
        transcription = transcription + "."
    batch["sentence"] = transcription
    return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
|
mariosasko/test_push_split
|
[
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] |
2023-06-13T10:53:13+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": {"ab": ["n<1K"], "ar": ["10K<n<100K"], "as": ["n<1K"], "br": ["10K<n<100K"], "ca": ["100K<n<1M"], "cnh": ["1K<n<10K"], "cs": ["10K<n<100K"], "cv": ["10K<n<100K"], "cy": ["10K<n<100K"], "de": ["100K<n<1M"], "dv": ["10K<n<100K"], "el": ["10K<n<100K"], "en": ["1M<n<10M"], "eo": ["10K<n<100K"], "es": ["100K<n<1M"], "et": ["10K<n<100K"], "eu": ["10K<n<100K"], "fa": ["100K<n<1M"], "fi": ["1K<n<10K"], "fr": ["100K<n<1M"], "fy-NL": ["10K<n<100K"], "ga-IE": ["1K<n<10K"], "hi": ["n<1K"], "hsb": ["1K<n<10K"], "hu": ["1K<n<10K"], "ia": ["1K<n<10K"], "id": ["10K<n<100K"], "it": ["100K<n<1M"], "ja": ["1K<n<10K"], "ka": ["1K<n<10K"], "kab": ["100K<n<1M"], "ky": ["10K<n<100K"], "lg": ["1K<n<10K"], "lt": ["1K<n<10K"], "lv": ["1K<n<10K"], "mn": ["10K<n<100K"], "mt": ["10K<n<100K"], "nl": ["10K<n<100K"], "or": ["1K<n<10K"], "pa-IN": ["1K<n<10K"], "pl": ["100K<n<1M"], "pt": ["10K<n<100K"], "rm-sursilv": ["1K<n<10K"], "rm-vallader": ["1K<n<10K"], "ro": ["1K<n<10K"], "ru": ["10K<n<100K"], "rw": ["1M<n<10M"], "sah": ["1K<n<10K"], "sl": ["1K<n<10K"], "sv-SE": ["10K<n<100K"], "ta": ["10K<n<100K"], "th": ["10K<n<100K"], "tr": ["10K<n<100K"], "tt": ["10K<n<100K"], "uk": ["10K<n<100K"], "vi": ["1K<n<10K"], "vot": ["n<1K"], "zh-CN": ["10K<n<100K"], "zh-HK": ["10K<n<100K"], "zh-TW": ["10K<n<100K"]}, "source_datasets": ["extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "paperswithcode_id": "common-voice", "pretty_name": "Common Voice Corpus 6.1", "language_bcp47": ["ab", "ar", "as", "br", "ca", "cnh", "cs", "cv", "cy", "de", "dv", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy-NL", "ga-IE", "hi", "hsb", "hu", "ia", "id", "it", "ja", "ka", "kab", "ky", "lg", "lt", "lv", "mn", "mt", "nl", "or", "pa-IN", "pl", "pt", "rm-sursilv", "rm-vallader", "ro", "ru", "rw", "sah", "sl", "sv-SE", "ta", "th", "tr", "tt", "uk", "vi", "vot", "zh-CN", "zh-HK", "zh-TW"], "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."}
|
2023-06-18T15:09:06+00:00
|
d4db40f3ff8c1c2304e0aeb5306c9019b963bef7
|
# Dataset Card for "wikitextcopy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dderr/wikitextcopy
|
[
"region:us"
] |
2023-06-13T10:54:22+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1270947, "num_examples": 4358}, {"name": "train", "num_bytes": 10918118, "num_examples": 36718}, {"name": "validation", "num_bytes": 1134123, "num_examples": 3760}], "download_size": 7371282, "dataset_size": 13323188}}
|
2023-06-13T10:54:41+00:00
|
21d832ec9839f74c57237a9b11d93033ee79b7d1
|
# Dataset Card for "gdquest"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lumenwrites/gdquest
|
[
"region:us"
] |
2023-06-13T11:08:10+00:00
|
{"dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 64598455.82826131, "num_examples": 3161}, {"name": "test", "num_bytes": 7279448.685738685, "num_examples": 352}], "download_size": 66859575, "dataset_size": 71877904.514}}
|
2023-06-13T12:17:08+00:00
|
9520ea1618a316094c8c7d76ae4d9d949092f736
|
# Dataset Card for "gdquest-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lumenwrites/gdquest-test
|
[
"region:us"
] |
2023-06-13T11:09:46+00:00
|
{"dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 155132.0, "num_examples": 14}], "download_size": 165791, "dataset_size": 155132.0}}
|
2023-06-13T11:27:02+00:00
|
bb0514cf34f19ef01d899f291206a0f7c790a811
|
Thouph/Text2Video1
|
[
"license:mit",
"region:us"
] |
2023-06-13T11:36:06+00:00
|
{"license": "mit", "viewer": false}
|
2023-08-14T08:44:31+00:00
|
|
667c5af6f647af55691bc5ffac71f7cab5a40275
|
patrickvonplaten/bella_ciao
|
[
"license:cc-by-nc-nd-4.0",
"region:us"
] |
2023-06-13T11:38:37+00:00
|
{"license": "cc-by-nc-nd-4.0"}
|
2023-06-13T11:39:33+00:00
|
|
cfc23c83ab4c6c043ce3687c5d68fd3ff098d52f
|
# CC3M Image-Text Embeddings
- `images_part{1-3}.txt` are text files with base64-encoded images.
- `texts.txt` is a text file with captions for images.
- `images.{model_name}.fbin` is a binary file with {model_name} image embeddings.
- `images.{model_name}.usearch` is a binary file with a serialized USearch image index which contains `images.{model_name}.fbin`.
- `texts.{model_name}.fbin` is a binary file with {model_name} text embeddings.
- `texts.{model_name}.usearch` is a binary file with a serialized USearch text index which contains `texts.{model_name}.fbin`.
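A minimal sketch for inspecting the raw files (it assumes one base64-encoded image per line in `images_part1.txt` and that the lines of `texts.txt` are aligned with the images by index — both are assumptions; Pillow is required for decoding):
```python
import base64
import io

from PIL import Image

with open("images_part1.txt") as f_img, open("texts.txt") as f_txt:
    image_b64 = f_img.readline().strip()
    caption = f_txt.readline().strip()

# Decode the base64 payload into a PIL image
image = Image.open(io.BytesIO(base64.b64decode(image_b64)))
print(image.size, caption)
```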
|
unum-cloud/ann-cc-3m
|
[
"license:apache-2.0",
"region:us"
] |
2023-06-13T11:40:44+00:00
|
{"license": "apache-2.0"}
|
2023-08-25T19:36:08+00:00
|
e1f14d0eb790c1a5d6d2c9e7accee79e17509bd1
|
<div align="center">
<img width="640" alt="manot/pothole-segmentation2" src="https://huggingface.co/datasets/manot/pothole-segmentation2/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['pothole']
```
### Number of Images
```json
{'valid': 133, 'test': 66, 'train': 466}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("manot/pothole-segmentation2", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/gurgen-hovsepyan-mbrnv/pothole-detection-gilij/dataset/2](https://universe.roboflow.com/gurgen-hovsepyan-mbrnv/pothole-detection-gilij/dataset/2?ref=roboflow2huggingface)
### Citation
```
@misc{ pothole-detection-gilij_dataset,
title = { pothole-detection Dataset },
type = { Open Source Dataset },
author = { Gurgen Hovsepyan },
howpublished = { \\url{ https://universe.roboflow.com/gurgen-hovsepyan-mbrnv/pothole-detection-gilij } },
url = { https://universe.roboflow.com/gurgen-hovsepyan-mbrnv/pothole-detection-gilij },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { jun },
note = { visited on 2023-06-13 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on June 13, 2023 at 12:48 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 665 images.
Pothole are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
|
manot/pothole-segmentation2
|
[
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"region:us"
] |
2023-06-13T11:48:56+00:00
|
{"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface"]}
|
2023-06-13T11:49:33+00:00
|
7d820e8fbe009dea19f4f4c1060503b661b0695e
|
ibrohim8828/Fiqih
|
[
"license:unknown",
"region:us"
] |
2023-06-13T11:52:05+00:00
|
{"license": "unknown"}
|
2023-06-13T11:53:58+00:00
|
|
4d93caf60b9018f4d4bcc597ebc1e262c7a74928
|
# AutoTrain Dataset for project: aniaitokenclassification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project aniaitokenclassification.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"tokens": [
"I",
" booked",
"a",
" flight",
"to",
"London."
],
"tags": [
4,
2,
2,
5,
2,
1
]
},
{
"tokens": [
"Apple",
"Inc.",
"is",
"planning",
"to",
"open",
"a",
"new",
"store",
"in",
"Paris."
],
"tags": [
3,
3,
2,
2,
2,
2,
2,
2,
2,
2,
1
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"tags": "Sequence(feature=ClassLabel(names=['COMPANY', 'LOC', 'O', 'ORG', 'PER', 'THING'], id=None), length=-1, id=None)"
}
```
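A small sketch for mapping the integer tags back to their label names (it assumes the dataset can be loaded directly by its Hub ID and exposes the `ClassLabel` sequence described above):
```python
from datasets import load_dataset

ds = load_dataset("KrishnAI7/autotrain-data-aniaitokenclassification", split="train")

# The "tags" feature is a Sequence of ClassLabel, so int2str maps ids to names
tag_feature = ds.features["tags"].feature

sample = ds[0]
for token, tag_id in zip(sample["tokens"], sample["tags"]):
    print(token, tag_feature.int2str(tag_id))
```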
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 23 |
| valid | 6 |
|
KrishnAI7/autotrain-data-aniaitokenclassification
|
[
"task_categories:token-classification",
"language:en",
"region:us"
] |
2023-06-13T12:03:49+00:00
|
{"language": ["en"], "task_categories": ["token-classification"]}
|
2023-06-13T12:04:25+00:00
|
7de61b064297d75bce16be38a420b9fe5293af50
|
# URLs of images containing faces from Common Crawl

## Download images
Select only the first face from each image and keep faces with a minimum size of 40 pixels:
```python
from datasets import load_dataset
def filter_bbox(bbox, min_size=40):
    x1, x2, y1, y2 = bbox
    return x2 - x1 >= min_size and y2 - y1 >= min_size
ds = load_dataset('atom-in-the-universe/cc-faces-150k')
ds = ds.map(lambda sample: {'faces': sample['faces'][0]})
ds = ds.filter(lambda sample: filter_bbox(sample['faces']))
ds.to_parquet('cc_faces.parquet')
```
## Download using img2dataset
Install Vanga's fork of img2dataset:
```bash
pip install img2dataset git+https://github.com/vanga/img2dataset.git
```
Python script:
```python
from img2dataset import download
import os
output_dir = os.path.abspath("bench")
download(
    processes_count=16,
    thread_count=32,
    url_list="cc_faces.parquet",
    image_size=256,
    output_folder=output_dir,
    output_format="files",
    input_format="parquet",
    url_col="url",
    caption_col="alt",
    enable_wandb=True,
    number_sample_per_shard=1000,
    distributor="multiprocessing",
    box_col="faces",
)
```
|
atom-in-the-universe/cc-faces-150k
|
[
"license:apache-2.0",
"region:us"
] |
2023-06-13T12:07:28+00:00
|
{"license": "apache-2.0"}
|
2023-06-13T12:30:20+00:00
|
94b030e4818bea1a348c95c86794f66a330bf586
|
# AutoTrain Dataset for project: language_model
## Dataset Description
This dataset has been automatically processed by AutoTrain for project language_model.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"source": "Aceste onsule sunt denumite Teritoriile de Nord.",
"target": "Aceste insule sunt denumite Teritoriile de Nord."
},
{
"source": "Care este pozi\u0163ia noastr\u0103dde plecare?",
"target": "Care este pozi\u0163ia noastr\u0103 de plecare?"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"source": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2398 |
| valid | 600 |
|
pintileipetru/autotrain-data-language_model
|
[
"task_categories:translation",
"region:us"
] |
2023-06-13T12:57:41+00:00
|
{"task_categories": ["translation"]}
|
2023-06-13T12:58:39+00:00
|
c7e90d978173d8eca014ce19ad6c9516c0b74c68
|
WStark/dataset
|
[
"license:mit",
"region:us"
] |
2023-06-13T13:02:11+00:00
|
{"license": "mit"}
|
2023-06-13T13:02:11+00:00
|
|
eb1f6b3ba860085a0201718ebff484c0c695183e
|
# Dataset Card for "beaches"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
shape-ai/beaches
|
[
"region:us"
] |
2023-06-13T13:02:43+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1566551.0, "num_examples": 1}], "download_size": 1568255, "dataset_size": 1566551.0}}
|
2023-06-14T10:39:47+00:00
|