| sha | text | id | tags | created_at | metadata | last_modified |
|---|---|---|---|---|---|---|
2d2ac49777cae06ef01b14a5f4359989b25ed8f7
|
# Dataset Card for "VQAv2_sample_validation_benchmarks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_benchmarks
|
[
"region:us"
] |
2023-04-18T22:52:11+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 200179, "num_examples": 10}], "download_size": 87070, "dataset_size": 200179}}
|
2023-04-18T22:52:13+00:00
|
ec4d344e9e2f3586f7593bbf4a9ae6e37c461774
|
Ok SO, as usual we don't have time to test these, though there is a chance many of the poses have been tested - the images are in the grids, and we've included sample images for the face landmarks.
...Don't mock us LOL, we literally found a face landmark demo on huggingface and went nuts making dumb faces for y'all. "DEPICTS AN ACTUAL PERSON" is because some of the data is AI generated and some of it is realistic people used for poses -- like our face.
License Restrictions:
Don't resell any of the sample images in the model card.
What you do with the data beyond that is literally not up to us, though we'd prefer you don't "SELL" the poses.
We're not liable for any copyright infringement or any dirty nasty things you make with the poses XD
We're also not liable if Easy Diffusion tells you that you can merge a single image into 20GB models, because you can't - if it tells you that you can, then you're gullible LOL.
These packs are copied over from civit as a backup :3
Feel free to use.
|
Duskfallcrew/Pose_Packs
|
[
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:creativeml-openrail-m",
"controlnet",
"poses",
"region:us"
] |
2023-04-18T23:01:46+00:00
|
{"language": ["en"], "license": "creativeml-openrail-m", "size_categories": ["1K<n<10K"], "task_categories": ["text-to-image"], "pretty_name": "Duskfall Pose Packs", "tags": ["controlnet", "poses"]}
|
2023-04-18T23:06:43+00:00
|
51c9f5cd7ea38e8d549fbee3a8993f5c52eb9441
|
Poem dataset to be used with instruction fine tuning
|
checkai/instruction-poems
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-04-18T23:36:02+00:00
|
{"license": "cc-by-4.0"}
|
2023-04-19T02:02:09+00:00
|
b4da8f1a8c38da2968c86f6df62227caedfe6d3c
|
# Dataset Card for "youtube_dataset_locfuho"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
quocanh34/youtube_dataset_locfuho
|
[
"region:us"
] |
2023-04-18T23:48:12+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "w2v2_transcription", "dtype": "string"}, {"name": "WER", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1594831.8648648649, "num_examples": 18}], "download_size": 1512591, "dataset_size": 1594831.8648648649}}
|
2023-04-18T23:48:17+00:00
|
88efd261ae9e8b5c50d40f25da2a49465679e89f
|
# Dataset Card for "SheepsLAIONSquare"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
GreeneryScenery/SheepsLAIONSquare
|
[
"region:us"
] |
2023-04-19T00:28:52+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "square_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 27470879234.0, "num_examples": 29000}], "download_size": 27459163664, "dataset_size": 27470879234.0}}
|
2023-04-19T05:58:04+00:00
|
6ecc458b7219ddac2ad188ea64f2b9b9751f55c4
|
# ***** BN-HTRd Splitted Dataset for Experimentation *****
# <u>Original Dataset:</u> "BN-HTRd: A Benchmark Dataset for Document Level Offline Bangla Handwritten Text Recognition (HTR)"
Link: https://data.mendeley.com/datasets/743k6dm543
### Description
We introduce a new dataset for offline Handwritten Text Recognition (HTR) from images of Bangla scripts comprising words, lines, and document-level annotations. The BN-HTRd dataset is based on the BBC Bangla News corpus - which acted as ground truth texts for the handwritings. Our dataset contains a total of 786 full-page images collected from 150 different writers. With a staggering 108,147 instances of handwritten words, distributed over 13,867 lines and 23,115 unique words, this is currently the 'largest and most comprehensive dataset' in this field. We also provided the bounding box annotations (YOLO format) for the segmentation of words/lines and the ground truth annotations for full-text, along with the segmented images and their positions. The contents of our dataset came from a diverse news category, and annotators of different ages, genders, and backgrounds, having variability in writing styles. The BN-HTRd dataset can be adopted as a basis for various handwriting classification tasks such as end-to-end document recognition, word-spotting, word/line segmentation, and so on.
The statistics of the original dataset are given below:
Number of writers = 150\
Total number of images = 786\
Total number of lines = 14,383\
Total number of words = 108,181\
Total number of unique words = 23,115\
Total number of punctuation = 7,446\
Total number of characters = 574,203\
### Steps to reproduce
See the Paper: https://arxiv.org/abs/2206.08977
#### Paper Information for Citation
```bibtex
@misc{https://doi.org/10.48550/arxiv.2206.08977,
  doi = {10.48550/ARXIV.2206.08977},
  url = {https://arxiv.org/abs/2206.08977},
  author = {Rahman, Md. Ataur and Tabassum, Nazifa and Paul, Mitu and Pal, Riya and Islam, Mohammad Khairul},
  title = {BN-HTRd: A Benchmark Dataset for Document Level Offline Bangla Handwritten Text Recognition (HTR) and Line Segmentation},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
shaoncsecu/BN-HTRd_Splitted
|
[
"task_categories:image-segmentation",
"task_categories:image-to-text",
"size_categories:10K<n<100K",
"language:bn",
"license:cc-by-4.0",
"Handwriting Recognition",
"Document Imaging",
"Annotation",
"Image Segmentation",
"Bengali Language",
"Word Spotting",
"arxiv:2206.08977",
"doi:10.57967/hf/0546",
"region:us"
] |
2023-04-19T00:43:08+00:00
|
{"language": ["bn"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["image-segmentation", "image-to-text"], "pretty_name": "BN-HTRd Splitted Dataset for Experimentation", "tags": ["Handwriting Recognition", "Document Imaging", "Annotation", "Image Segmentation", "Bengali Language", "Word Spotting"]}
|
2023-04-19T01:11:52+00:00
|
fdbe9224b51c528a87c3d79cf2e506b46b3e5ace
|
nezhazheng/myspider
|
[
"license:cc-by-sa-4.0",
"region:us"
] |
2023-04-19T01:04:13+00:00
|
{"license": "cc-by-sa-4.0", "dataset_info": {"features": [{"name": "db_id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "query_toks", "sequence": "string"}, {"name": "query_toks_no_value", "sequence": "string"}, {"name": "question_toks", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 630265, "num_examples": 1001}, {"name": "validation", "num_bytes": 663747, "num_examples": 1001}], "download_size": 238230, "dataset_size": 1294012}}
|
2023-04-19T01:14:50+00:00
|
|
c1a7d846f25b9654be9b4026dbc4c7243a718b9a
|
# AutoTrain Dataset for project: xx
## Dataset Description
This dataset has been automatically processed by AutoTrain for project xx.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_db_id": "department_management",
"target": "SELECT count(*) FROM head WHERE age > 56",
"source": "How many heads of the departments are older than 56 ?",
"feat_query_toks": [
"SELECT",
"count",
"(",
"*",
")",
"FROM",
"head",
"WHERE",
"age",
">",
"56"
],
"feat_query_toks_no_value": [
"select",
"count",
"(",
"*",
")",
"from",
"head",
"where",
"age",
">",
"value"
],
"feat_question_toks": [
"How",
"many",
"heads",
"of",
"the",
"departments",
"are",
"older",
"than",
"56",
"?"
]
},
{
"feat_db_id": "department_management",
"target": "SELECT name , born_state , age FROM head ORDER BY age",
"source": "List the name, born state and age of the heads of departments ordered by age.",
"feat_query_toks": [
"SELECT",
"name",
",",
"born_state",
",",
"age",
"FROM",
"head",
"ORDER",
"BY",
"age"
],
"feat_query_toks_no_value": [
"select",
"name",
",",
"born_state",
",",
"age",
"from",
"head",
"order",
"by",
"age"
],
"feat_question_toks": [
"List",
"the",
"name",
",",
"born",
"state",
"and",
"age",
"of",
"the",
"heads",
"of",
"departments",
"ordered",
"by",
"age",
"."
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_db_id": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)",
"source": "Value(dtype='string', id=None)",
"feat_query_toks": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"feat_query_toks_no_value": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"feat_question_toks": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1001 |
| valid | 1001 |
|
nezhazheng/autotrain-data-xx
|
[
"task_categories:translation",
"region:us"
] |
2023-04-19T01:16:36+00:00
|
{"task_categories": ["translation"]}
|
2023-04-19T01:19:51+00:00
|
6561fb3804bd929685e8a8e6ea7ce6035788172a
|
# Dataset Card for "MATH_Instruction_Format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
alpayariyak/MATH_Instruction_Format
|
[
"region:us"
] |
2023-04-19T01:27:44+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9836383, "num_examples": 12500}], "download_size": 4859969, "dataset_size": 9836383}}
|
2023-04-19T01:27:52+00:00
|
5bf2ce03f6f0e3f4e9e14df3e5af5b2915e2f332
|
# Dataset Card for "sandhi-split-long-2018"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chronbmm/sandhi-split-long-2018
|
[
"region:us"
] |
2023-04-19T03:23:08+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "unsandhied", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 58896572, "num_examples": 109152}, {"name": "validation", "num_bytes": 6548762, "num_examples": 12128}, {"name": "test", "num_bytes": 6548762, "num_examples": 12128}, {"name": "test_500", "num_bytes": 273816, "num_examples": 500}, {"name": "validation_500", "num_bytes": 273816, "num_examples": 500}], "download_size": 46961402, "dataset_size": 72541728}}
|
2023-04-19T03:23:26+00:00
|
3748feab5aec895c8ec3a4ae03ce382a654b633d
|
We are excited to share the release of the HumanMOD dataset, unveiled in our [AMCIS 2023 paper ](https://aisel.aisnet.org/amcis2023/sig_aiaa/sig_aiaa/3).
Wang, Kanlun; Fu, Zhe; Zhou, Lina; and Zhang, Dongsong, "How Does User Engagement Support Content Moderation? A Deep Learning-based Comparative Study" (2023). AMCIS 2023 Proceedings. 3.
https://aisel.aisnet.org/amcis2023/sig_aiaa/sig_aiaa/3
Dataset Summary:
- The data collection was limited to public online communities to comply with the platform's privacy policy.
- We used the [Pushshift Reddit API](https://reddit-api.readthedocs.io/en/latest/) to scrape posts from 40 subreddits daily across four different domains from August 24 to October 28, 2022, resulting in 104,674 posts.
- To enhance the ecological validity of the study findings, we used the [PRAW API](https://praw.readthedocs.io/en/stable/) to perform a second round of data collection two months later, validating whether each post's content had been moderated.
- Thereafter, we used a snowballing approach to collect the corresponding comments on all the posts.
- The metadata includes post content, post time, comment content, comment time, karma score, etc.
- We set a threshold for the minimum number of comments to 2 and an upper bound for the number of direct comments to 15 to facilitate the extraction of graph-based structural information.
- The final dataset consists of 8,511 moderated posts and another 8,511 not moderated posts.
- All the posts were commented on, with a total of 148,344 comments.
Data Fields for HumanMOD_Posts dataset:
- Reddit_ID: the unique identifiers for Reddit posts, which serve as the foreign keys bridging to the HumanMOD_Comments dataset.
- Subreddits: the names of subreddits
- Titles: the titles of Reddit posts
- Body: post content, which is an extended description of a post
- Author: the authors of posts
- URLs: the web addresses of Reddit posts
- Labels: (0) -- The post is not moderated; (1) -- The post is moderated by moderators.
Data Fields for HumanMOD_Comments dataset:
- Parent ID: the unique identifiers for parent comments
- Comment ID: the unique identifiers for child comments, the ones that reply to the parent comments
- Comment Body: the content of child comments
- Score: the karma scores of child comments
- Author: the authors of child comments
- Post ID: the specific Reddit post identifiers to which the child comment should be associated, which serve as the foreign keys bridging to the HumanMOD_Posts dataset.
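The two files link on this foreign key. A minimal sketch of the join (the column names below are taken from the field descriptions above and may not match the released files exactly):

```python
import pandas as pd

# Toy frames mirroring the fields described above.
posts = pd.DataFrame({
    "Reddit_ID": ["abc", "def"],
    "Labels": [1, 0],  # 1 = moderated, 0 = not moderated
})
comments = pd.DataFrame({
    "Post ID": ["abc", "abc", "def"],
    "Comment Body": ["first reply", "second reply", "third reply"],
})

# Attach each comment to its parent post's moderation label.
merged = comments.merge(posts, left_on="Post ID", right_on="Reddit_ID")
print(merged["Labels"].tolist())  # [1, 1, 0]
```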
|
SamW/HumanMOD
|
[
"task_categories:text-classification",
"size_categories:10M<n<100M",
"language:en",
"region:us"
] |
2023-04-19T03:46:45+00:00
|
{"language": ["en"], "size_categories": ["10M<n<100M"], "task_categories": ["text-classification"], "pretty_name": "HumanMOD (AMCIS 2023)"}
|
2023-07-08T00:38:54+00:00
|
c1879f19c2c28377a0781a28278e42dddb8cf1d4
|
# TempoFunk S(mall)Dance
10k samples of metadata and encoded latents & prompts of videos themed around **dance**.
## Data format
- Video frame latents
- Numpy arrays
- 120 frames, 512x512 source size
- Encoded shape (120, 4, 64, 64)
- CLIP (openai) encoded prompts
- Video description (as seen in metadata)
- Encoded shape (77, 768)
- Video metadata as JSON (description, tags, categories, source URLs, etc.)
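As a rough sanity check on the shapes above (the array construction here is illustrative; it does not reflect the repository's actual file layout):

```python
import numpy as np

# Shapes documented above: 120 frames of 4x64x64 latents per clip,
# and a CLIP text embedding of shape (77, 768).
latents = np.zeros((120, 4, 64, 64), dtype=np.float32)
prompt_emb = np.zeros((77, 768), dtype=np.float32)

# One clip's latents occupy 120 * 4 * 64 * 64 * 4 bytes = 7.5 MiB in float32.
print(latents.nbytes / 2**20)  # 7.5
```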
|
TempoFunk/tempofunk-sdance
|
[
"task_categories:text-to-video",
"task_categories:text-to-image",
"task_categories:video-classification",
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"license:agpl-3.0",
"region:us"
] |
2023-04-19T04:08:11+00:00
|
{"language": ["en"], "license": "agpl-3.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-to-video", "text-to-image", "video-classification", "image-classification"]}
|
2023-05-07T06:38:48+00:00
|
628e39792fe5a1fc1b4977d83fff19cd4de683cf
|
# AutoTrain Dataset for project: test
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "\u4ea4\u6613\u5185\u5bb9\uff1a\u516c\u53f8\u63a7\u80a1\u5b50\u516c\u53f8\u9ec4\u5c71\u592a\u5e73\u6e56\u6587\u5316\u65c5\u6e38\u6709\u9650\u516c\u53f8\u62df\u5411\u516c\u53f8\u63a7\u80a1\u80a1\u4e1c\u9ec4\u5c71\u65c5\u6e38\u96c6\u56e2\u6709\u9650\u516c\u53f8\u501f\u6b3e3,887\u4e07\u5143\uff0c\u501f\u6b3e\u5e74\u5229\u7387\u4e3a3.51%\uff0c\u501f\u6b3e\u671f\u9650\u4e3a\u81ea\u501f\u6b3e\u8d44\u91d1\u5230\u8d26\u4e4b\u65e5\u8d77\uff08\u542b\u5f53\u65e5\uff09\u81f32036\u5e749\u670822\u65e5\u3002 ",
"question": "\u8d22\u52a1\u8d44\u52a9\u91d1\u989d\u662f\u591a\u5c11\uff1f",
"answers.text": [
"3,887\u4e07\u5143"
],
"answers.answer_start": [
45
]
},
{
"context": "\u6839\u636e\u4e1a\u52a1\u53d1\u5c55\u9700\u8981\uff0c\u4e2d\u7267\u5b9e\u4e1a\u80a1\u4efd\u6709\u9650\u516c\u53f8\uff08\u4ee5\u4e0b\u7b80\u79f0\u201c\u516c\u53f8\u201d\uff09\u53ca\u5168\u8d44\uff08\u63a7\u80a1\uff09\u4f01\u4e1a\uff08\u4ee5\u4e0b\u7b80\u79f0\u201c\u6240\u5c5e\u4f01\u4e1a\u201d\uff09\u5411\u516c\u53f8\u63a7\u80a1\u80a1\u4e1c\u4e2d\u56fd\u7267\u5de5\u5546\uff08\u96c6\u56e2\uff09\u603b\u516c\u53f8\uff08\u4ee5\u4e0b\u7b80\u79f0\u201c\u4e2d\u7267\u603b\u516c\u53f8\u201d\uff09\u7533\u8bf7\u8d22\u52a1\u8d44\u52a9\uff0c2014\u5e74\u5e74\u5ea6\u5185\u5411\u5176\u652f\u4ed8\u7684\u5229\u606f\u4e0d\u8d85\u8fc71,500\u4e07\u5143\u3002\u8d22\u52a1\u8d44\u52a9\u7684\u5229\u7387\u6c34\u5e73\u4e0d\u9ad8\u4e8e\u4e2d\u56fd\u4eba\u6c11\u94f6\u884c\u89c4\u5b9a\u7684\u540c\u671f\u8d37\u6b3e\u57fa\u51c6\u5229\u7387\u3002\u6309\u7167\u300a\u4e0a\u6d77\u8bc1\u5238\u4ea4\u6613\u6240\u80a1\u7968\u4e0a\u5e02\u89c4\u5219\u300b\u53ca\u76f8\u5173\u89c4\u5b9a\uff0c\u5173\u8054\u4ea4\u6613\u91d1\u989d\u4ee5\u5b9e\u9645\u652f\u4ed8\u7ed9\u4e2d\u7267\u603b\u516c\u53f8\u7684\u5229\u606f\u91d1\u989d\u5c65\u884c\u76f8\u5e94\u7a0b\u5e8f\u3002 \u4e2d\u7267\u603b\u516c\u53f8\u4e3a\u516c\u53f8\u7b2c\u4e00\u5927\u80a1\u4e1c\uff0c\u6839\u636e\u300a\u4e0a\u6d77\u8bc1\u5238\u4ea4\u6613\u6240\u80a1\u7968\u4e0a\u5e02\u89c4\u5219\u300b\u7684\u89c4\u5b9a\uff0c\u4e2d\u7267\u603b\u516c\u53f8\u4e3a\u516c\u53f8\u7684\u5173\u8054\u6cd5\u4eba\uff0c\u516c\u53f8\u5411\u5176\u7533\u8bf7\u8d22\u52a1\u8d44\u52a9\u884c\u4e3a\u6784\u6210\u5173\u8054\u4ea4\u6613\uff0c\u6545\u6b64\u4e8b\u9879\u4f5c\u4e3a\u5173\u8054\u4ea4\u6613\u5355\u72ec\u516c\u544a\u3002 ",
"question": "\u8d22\u52a1\u8d44\u52a9\u91d1\u989d\u662f\u591a\u5c11\uff1f",
"answers.text": [
"1,500\u4e07\u5143"
],
"answers.answer_start": [
107
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 4 |
| valid | 1 |
|
horsemanhector/autotrain-test-50698120978
|
[
"region:us"
] |
2023-04-19T04:55:31+00:00
|
{}
|
2023-04-19T06:20:22+00:00
|
3d116fe4c3d7bb1efd536339c52fba1bb8dc45a2
|
# Dataset Card for "hpqa_generator_input"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/hpqa_generator_input
|
[
"region:us"
] |
2023-04-19T05:03:54+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 260134640, "num_examples": 72340}, {"name": "validation", "num_bytes": 65033660, "num_examples": 18085}, {"name": "test", "num_bytes": 26624784, "num_examples": 7404}], "download_size": 25644638, "dataset_size": 351793084}}
|
2023-04-19T05:04:03+00:00
|
4b12277dc906869ec6e0552ac9d3975df9a7aa12
|
Kodytek P, Bodzas A and Bilik P. A large-scale image dataset of wood surface defects for automated vision-based quality control processes [version 2; peer review: 2 approved]. F1000Research 2022, 10:581 (https://doi.org/10.12688/f1000research.52903.2)
Bounding boxes only, semantic maps
All images are 2800 x 1024 pixels (width x height)
Images were compressed using:
```python
from PIL import Image

# src_path / dst_path are placeholders for the original and compressed files
img = Image.open(src_path)
img.save(dst_path, format="JPEG", optimize=True, quality=50)
```
Bounding boxes converted to YOLO format.
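For reference, YOLO-format boxes store normalized center coordinates and box sizes. A sketch of the conversion for these 2800 x 1024 images (the helper name is ours, not part of the dataset):

```python
def to_yolo(x_min, y_min, x_max, y_max, img_w=2800, img_h=1024):
    """Convert absolute pixel corners to YOLO format:
    (x_center, y_center, width, height), each normalized to [0, 1]."""
    x_center = (x_min + x_max) / 2 / img_w
    y_center = (y_min + y_max) / 2 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return x_center, y_center, width, height

# A box covering the top-left quadrant of a 2800x1024 image:
print(to_yolo(0, 0, 1400, 512))  # (0.25, 0.25, 0.5, 0.5)
```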
TODO: loader script, preview, semantic maps
|
iluvvatar/wood_surface_defects
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-04-19T05:26:17+00:00
|
{"license": "cc-by-4.0", "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "objects", "list": [{"name": "bb", "sequence": "float64"}, {"name": "label", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2216186843.984, "num_examples": 20276}], "download_size": 2202918677, "dataset_size": 2216186843.984}}
|
2023-04-22T11:52:45+00:00
|
8bdebda72cda9401479bfcb3198e7ed306e48ba4
|
# Dataset Card for "codegen"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
skar02/codegen
|
[
"region:us"
] |
2023-04-19T05:30:22+00:00
|
{"dataset_info": {"features": [{"name": "story", "dtype": "string"}, {"name": "code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6711, "num_examples": 5}], "download_size": 12099, "dataset_size": 6711}}
|
2023-04-19T05:30:25+00:00
|
559ab9f24ce4621e3f839ef05e1075b6c4f125ee
|
# Dataset Card for Dataset Name
GitHub Repository: https://github.com/thu-coai/Safety-Prompts
Paper: https://arxiv.org/abs/2304.10436
|
thu-coai/Safety-Prompts
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:apache-2.0",
"arxiv:2304.10436",
"region:us"
] |
2023-04-19T05:41:55+00:00
|
{"language": ["zh"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "Safety-Prompts"}
|
2023-08-25T14:02:51+00:00
|
7e2a0b6b1fc11444a02009b0fc9a509f613b2002
|
[Original dataset] - This dataset is a Romanian translation of the [gsm8k] dataset.
[Original dataset]: <https://huggingface.co/datasets/gsm8k>
[gsm8k]: <https://huggingface.co/datasets/gsm8k>
|
BlackKakapo/gsm8k-ro
|
[
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:n<8K",
"language:ro",
"license:mit",
"region:us"
] |
2023-04-19T05:42:36+00:00
|
{"language": ["ro"], "license": "mit", "size_categories": ["n<8K"], "task_categories": ["question-answering", "text2text-generation"]}
|
2023-04-19T06:29:21+00:00
|
4a202b5883966f7c75eb4b1695f5f12f1a38dfe6
|
# Dataset Card for "unscramble_GPT3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lighteval/GPT3_unscramble
|
[
"region:us"
] |
2023-04-19T06:33:20+00:00
|
{"dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "completion", "dtype": "string"}], "splits": [{"name": "mid_word_1_anagrams", "num_bytes": 271516, "num_examples": 10000}, {"name": "mid_word_2_anagrams", "num_bytes": 282654, "num_examples": 10000}, {"name": "cycle_letters_in_word", "num_bytes": 282654, "num_examples": 10000}, {"name": "random_insertion_in_word", "num_bytes": 353981, "num_examples": 10000}, {"name": "reversed_words", "num_bytes": 282654, "num_examples": 10000}], "download_size": 1131195, "dataset_size": 1473459}}
|
2023-04-19T06:33:33+00:00
|
769e9da1d8785db800db16afffbe60c59a53dd2e
|
# Dataset Card for "SheepsWikiScribble"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
GreeneryScenery/SheepsWikiScribble
|
[
"region:us"
] |
2023-04-19T07:36:39+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "scribble_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 6460723420.5, "num_examples": 20650}], "download_size": 6459596095, "dataset_size": 6460723420.5}}
|
2023-04-19T10:35:00+00:00
|
4119e1da6dabba83cb57bcdcc0aa9f79a5cd017a
|
abir18/codebert_suggestions
|
[
"license:mit",
"region:us"
] |
2023-04-19T07:48:21+00:00
|
{"license": "mit"}
|
2023-04-19T07:49:48+00:00
|
|
09acc69f798540e19cd69ee8518077a8cf4caae5
|
fiatrete/dan-used-apps
|
[
"license:mit",
"region:us"
] |
2023-04-19T07:50:59+00:00
|
{"license": "mit"}
|
2023-04-25T06:44:43+00:00
|
|
5a6819b22cfd02b010dcb74cb8e5ff66998a4684
|
notsobad9527/chinese-joke
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-19T08:46:21+00:00
|
{"license": "apache-2.0"}
|
2023-04-19T08:47:51+00:00
|
|
d4a37fd0729013021dcd1d5bef6a172c0f18b914
|
# Dataset Card for "hagrid-mediapipe-hands"
This dataset is designed to train a ControlNet on human hands. It includes hand landmarks detected by MediaPipe (for more information, see https://developers.google.com/mediapipe/solutions/vision/hand_landmarker).
The source image data is from the [HaGRID dataset](https://github.com/hukenovs/hagrid); we used a modified version from Kaggle (https://www.kaggle.com/datasets/innominate817/hagrid-classification-512p) to build this dataset. There are 507,050 samples in total, and the image resolution is 512x512.
### Generate Mediapipe annotation
We use the script below to generate hand landmarks; you should download the `hand_landmarker.task` model file first. For more information, please refer to [this guide](https://developers.google.com/mediapipe/solutions/vision/hand_landmarker).
```python
import mediapipe as mp
from mediapipe import solutions
from mediapipe.framework.formats import landmark_pb2
from mediapipe.tasks import python
from mediapipe.tasks.python import vision
from PIL import Image
import cv2
import numpy as np


def draw_landmarks_on_image(rgb_image, detection_result):
    hand_landmarks_list = detection_result.hand_landmarks
    handedness_list = detection_result.handedness
    annotated_image = np.zeros_like(rgb_image)

    # Loop through the detected hands to visualize.
    for idx in range(len(hand_landmarks_list)):
        hand_landmarks = hand_landmarks_list[idx]
        handedness = handedness_list[idx]

        # Draw the hand landmarks.
        hand_landmarks_proto = landmark_pb2.NormalizedLandmarkList()
        hand_landmarks_proto.landmark.extend([
            landmark_pb2.NormalizedLandmark(x=landmark.x, y=landmark.y, z=landmark.z)
            for landmark in hand_landmarks
        ])
        solutions.drawing_utils.draw_landmarks(
            annotated_image,
            hand_landmarks_proto,
            solutions.hands.HAND_CONNECTIONS,
            solutions.drawing_styles.get_default_hand_landmarks_style(),
            solutions.drawing_styles.get_default_hand_connections_style())
    return annotated_image


# Create a HandLandmarker object.
base_options = python.BaseOptions(model_asset_path='hand_landmarker.task')
options = vision.HandLandmarkerOptions(base_options=base_options, num_hands=2)
detector = vision.HandLandmarker.create_from_options(options)

# Load the input image.
image = np.asarray(Image.open("./test.png"))
image = mp.Image(image_format=mp.ImageFormat.SRGB, data=image)

# Detect hand landmarks from the input image.
detection_result = detector.detect(image)

# Draw the detection result and save it.
annotated_image = draw_landmarks_on_image(image.numpy_view(), detection_result)
cv2.imwrite("ann.png", cv2.cvtColor(annotated_image, cv2.COLOR_RGB2BGR))
```
|
Vincent-luo/hagrid-mediapipe-hands
|
[
"region:us"
] |
2023-04-19T08:58:39+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 111989279184.95, "num_examples": 507050}], "download_size": 112032639870, "dataset_size": 111989279184.95}}
|
2023-05-26T08:28:36+00:00
|
3f93610b09a08463c73a978e8a5009360f1a691b
|
This is a clone of the Trump Twitter Archive Kaggle dataset found here: https://www.kaggle.com/datasets/headsortails/trump-twitter-archive
|
fschlatt/trump-tweets
|
[
"language:en",
"license:cc0-1.0",
"region:us"
] |
2023-04-19T09:35:29+00:00
|
{"language": ["en"], "license": "cc0-1.0", "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "is_retweet", "dtype": "bool"}, {"name": "is_deleted", "dtype": "bool"}, {"name": "device", "dtype": "string"}, {"name": "favorites", "dtype": "int64"}, {"name": "retweets", "dtype": "int64"}, {"name": "datetime", "dtype": "timestamp[s]"}, {"name": "is_flagged", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 10593265, "num_examples": 56571}], "download_size": 0, "dataset_size": 10593265}}
|
2023-04-19T10:41:59+00:00
|
d6891e7feec9ec8f6c4178ba12ad48d57f37d292
|
# Dataset Card for "processed_train_coco"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gokuls/processed_train_coco
|
[
"region:us"
] |
2023-04-19T09:45:00+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "pixel_values", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 60520900000, "num_examples": 100000}], "download_size": 18447379186, "dataset_size": 60520900000}}
|
2023-04-19T16:16:18+00:00
|
3dd41894dd0ba6f161b713d196d6e86e12edafa1
|
```
python -m spacy download en_core_web_sm
```
Titles:
```
jq -s '.[].title' raw/dict.jsonl
```
returns
- [x] "English"
- [ ] "English One Million"
- [x] "American English"
- [x] "British English"
- [x] "English Fiction"
- [ ] "Chinese (simplified)"
- [x] "French"
- [x] "German"
- [ ] "Hebrew"
- [ ] "Italian"
- [x] "Russian"
- [x] "Spanish"
Spellcheck:
https://pypi.org/project/pyspellchecker/
```
English - ‘en’
Spanish - ‘es’
French - ‘fr’
Portuguese - ‘pt’
German - ‘de’
Russian - ‘ru’
Arabic - ‘ar’
```
Sets now:
- [x] "English" - en
- [x] "Spanish" - es
- [x] "French" - fr
- [x] "German" - de
- [x] "Russian" - ru
|
gustawdaniel/ngram-google-2012
|
[
"license:cc-by-3.0",
"region:us"
] |
2023-04-19T09:45:39+00:00
|
{"license": "cc-by-3.0"}
|
2023-04-21T03:48:47+00:00
|
5081a14a98688e1604cb26a7654086a6bc6e36dc
|
# Car Models 3778
A comprehensive collection of 193k car images and metadata.

The Car Models 3778 Dataset is a comprehensive collection of around `193k` images across `3778` car model variants, obtained entirely through web scraping of the autoevolution.com website. Each model variant contains between 20 and 200 images at 512x512, offering a diverse range of high-quality images collected from a single reliable source.
The accompanying `.csv` file contains `44` columns of information about the cars and the images that belong to them, making the data easy to access and utilize. The information in the .csv file includes make, model, year, body type, engine type, transmission, and fuel type, among other specifications. The file also includes the image filenames and directories, providing quick access to the corresponding image data.
Some images may be missing because badly formatted files were deleted after resizing. Despite the missing images, this dataset still provides a rich and diverse collection of car images that can be used for various machine learning tasks, such as image classification, object detection, and segmentation.
In conclusion, the Car Models 3778 Dataset is a reliable and comprehensive collection of high-quality car images and associated metadata, obtained through web scraping of the autoevolution.com website. The dataset is well-suited for use in a wide range of machine learning tasks, making it a valuable resource for researchers and practitioners in the computer vision field.
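As an illustration, the metadata can be filtered with pandas (the column names below are assumptions based on the description above; check the actual `.csv` header):

```python
import pandas as pd

# Toy rows standing in for the real 44-column metadata file.
df = pd.DataFrame({
    "make": ["BMW", "Audi", "BMW"],
    "model": ["M3", "A4", "X5"],
    "year": [2018, 2020, 2019],
})

# Select recent cars of one make, e.g. to build a class-specific subset.
bmws = df[(df["make"] == "BMW") & (df["year"] >= 2019)]
print(bmws["model"].tolist())  # ['X5']
```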
---
license: other
---
|
Unit293/car_models_3887
|
[
"region:us"
] |
2023-04-19T09:49:34+00:00
|
{}
|
2023-04-20T11:29:49+00:00
|
39d9170249212f09ac42528ab04c484a654724a3
|
# Dataset Card for RVL-CDIP
## Extension
The data loader supports loading easyOCR files together with the images.
These files are not included under '../data', but they are available upon request via email <[email protected]>.
## Table of Contents
- [Dataset Card for RVL-CDIP](#dataset-card-for-rvl-cdip)
- [Extension](#extension)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [The RVL-CDIP Dataset](https://www.cs.cmu.edu/~aharley/rvl-cdip/)
- **Repository:**
- **Paper:** [Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval](https://arxiv.org/abs/1502.07058)
- **Leaderboard:** [RVL-CDIP leaderboard](https://paperswithcode.com/dataset/rvl-cdip)
- **Point of Contact:** [Adam W. Harley](mailto:[email protected])
### Dataset Summary
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given document into one of 16 classes representing document types (letter, form, etc.). The leaderboard for this task is available [here](https://paperswithcode.com/sota/document-image-classification-on-rvl-cdip).
### Languages
All the classes and documents use English as their primary language.
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image': <PIL.TiffImagePlugin.TiffImageFile image mode=L size=754x1000 at 0x7F9A5E92CA90>,
'label': 15
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing a document.
- `label`: an `int` classification label.
<details>
<summary>Class Label Mappings</summary>
```json
{
"0": "letter",
"1": "form",
"2": "email",
"3": "handwritten",
"4": "advertisement",
"5": "scientific report",
"6": "scientific publication",
"7": "specification",
"8": "file folder",
"9": "news article",
"10": "budget",
"11": "invoice",
"12": "presentation",
"13": "questionnaire",
"14": "resume",
"15": "memo"
}
```
</details>
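When working with the raw integer labels, the mapping above can be applied directly. A minimal sketch, with the class list copied from the table rather than read from the dataset's `ClassLabel` feature:

```python
# Class-id -> name mapping for RVL-CDIP, copied from the table above.
RVL_CDIP_LABELS = [
    "letter", "form", "email", "handwritten", "advertisement",
    "scientific report", "scientific publication", "specification",
    "file folder", "news article", "budget", "invoice",
    "presentation", "questionnaire", "resume", "memo",
]

def id2label(label_id: int) -> str:
    """Return the human-readable class name for an integer label."""
    return RVL_CDIP_LABELS[label_id]

print(id2label(15))  # "memo", matching the sample instance above
```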
### Data Splits
| |train|test|validation|
|----------|----:|----:|---------:|
|# of examples|320000|40000|40000|
The dataset was split in proportions similar to those of ImageNet.
- 320000 images were used for training,
- 40000 images for validation, and
- 40000 images for testing.
## Dataset Creation
### Curation Rationale
From the paper:
> This work makes available a new labelled subset of the IIT-CDIP collection, containing 400,000
document images across 16 categories, useful for training new CNNs for document analysis.
### Source Data
#### Initial Data Collection and Normalization
The same as in the IIT-CDIP collection.
#### Who are the source language producers?
The same as in the IIT-CDIP collection.
### Annotations
#### Annotation process
The same as in the IIT-CDIP collection.
#### Who are the annotators?
The same as in the IIT-CDIP collection.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was curated by the authors - Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis.
### Licensing Information
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
### Citation Information
```bibtex
@inproceedings{harley2015icdar,
title = {Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval},
author = {Adam W Harley and Alex Ufkes and Konstantinos G Derpanis},
  booktitle = {International Conference on Document Analysis and Recognition ({ICDAR})},
year = {2015}
}
```
### Contributions
Thanks to [@dnaveenr](https://github.com/dnaveenr) for adding this dataset.
|
jordyvl/rvl_cdip_easyocr
|
[
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|iit_cdip",
"language:en",
"license:other",
"arxiv:1502.07058",
"region:us"
] |
2023-04-19T09:51:31+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|iit_cdip"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "paperswithcode_id": "rvl-cdip", "pretty_name": "RVL-CDIP-EasyOCR", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "letter", "1": "form", "2": "email", "3": "handwritten", "4": "advertisement", "5": "scientific report", "6": "scientific publication", "7": "specification", "8": "file folder", "9": "news article", "10": "budget", "11": "invoice", "12": "presentation", "13": "questionnaire", "14": "resume", "15": "memo"}}}}, {"name": "words", "sequence": "string"}, {"name": "boxes", "sequence": {"sequence": "int32"}}]}}
|
2023-10-20T17:43:34+00:00
|
c2edd1a4e163f852bd2dff365af433e1b264cee9
|
# Dataset Card for "BioDEX-ICSR"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
BioDEX/BioDEX-ICSR
|
[
"region:us"
] |
2023-04-19T10:10:45+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "fulltext", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "pmid", "dtype": "string"}, {"name": "fulltext_license", "dtype": "string"}, {"name": "title_normalized", "dtype": "string"}, {"name": "issue", "dtype": "string"}, {"name": "pages", "dtype": "string"}, {"name": "journal", "dtype": "string"}, {"name": "authors", "dtype": "string"}, {"name": "pubdate", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "affiliations", "dtype": "string"}, {"name": "medline_ta", "dtype": "string"}, {"name": "nlm_unique_id", "dtype": "string"}, {"name": "issn_linking", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "mesh_terms", "dtype": "string"}, {"name": "publication_types", "dtype": "string"}, {"name": "chemical_list", "dtype": "string"}, {"name": "keywords", "dtype": "string"}, {"name": "references", "dtype": "string"}, {"name": "delete", "dtype": "bool"}, {"name": "pmc", "dtype": "string"}, {"name": "other_id", "dtype": "string"}, {"name": "safetyreportid", "dtype": "int64"}, {"name": "fulltext_processed", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 155748936, "num_examples": 3628}, {"name": "train", "num_bytes": 374859364, "num_examples": 9624}, {"name": "validation", "num_bytes": 96385392, "num_examples": 2407}], "download_size": 337571954, "dataset_size": 626993692}}
|
2023-05-30T14:20:25+00:00
|
87c6fb297198df6d44f2d144b235adc01490ab5c
|
BlackKakapo/recipes-ro
|
[
"task_categories:text2text-generation",
"size_categories:n<1K",
"language:ro",
"license:apache-2.0",
"region:us"
] |
2023-04-19T10:19:12+00:00
|
{"language": ["ro"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["text2text-generation"]}
|
2023-04-19T10:22:10+00:00
|
|
7e7b5e068019dcd80d96eb0baff3c35dd79d2473
|
kekemlp/congo-twitter-leopards
|
[
"license:mit",
"region:us"
] |
2023-04-19T10:42:32+00:00
|
{"license": "mit"}
|
2023-04-19T10:42:32+00:00
|
|
0bd4a0c0e6764c569ff5a7a26b02ed728e48822b
|
DanteWu/CBC_Material
|
[
"license:afl-3.0",
"region:us"
] |
2023-04-19T10:48:12+00:00
|
{"license": "afl-3.0"}
|
2023-04-19T10:48:12+00:00
|
|
12dd3b778d8c37e9da1d906c079784547298d885
|
# Dataset Card for "whisper-jax-test-files"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sanchit-gandhi/whisper-jax-test-files
|
[
"region:us"
] |
2023-04-19T10:49:16+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 271658381.0, "num_examples": 2}], "download_size": 113444578, "dataset_size": 271658381.0}}
|
2023-04-19T11:07:08+00:00
|
c7e424cb2277d880853e03730be9c2a0e3c0a7db
|
# Dataset Card for "SMM2-levels-discrete"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
valashir/SMM2-levels-discrete
|
[
"region:us"
] |
2023-04-19T11:26:46+00:00
|
{"dataset_info": {"features": [{"name": "level", "sequence": {"sequence": {"sequence": "uint8"}}}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 838493867, "num_examples": 7991}], "download_size": 9691556, "dataset_size": 838493867}}
|
2023-04-20T11:23:34+00:00
|
9e7dd8cf28ecf1a897bfd686bc8a1af433f5775d
|
# Dataset Card for "sst-sentiment-explainability"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
arthur001122/sst-sentiment-explainability
|
[
"region:us"
] |
2023-04-19T11:28:46+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "POSITIVE"}}}}], "splits": [{"name": "train", "num_bytes": 350.6666666666667, "num_examples": 4}, {"name": "test", "num_bytes": 175.33333333333334, "num_examples": 2}], "download_size": 4042, "dataset_size": 526.0}}
|
2023-04-19T11:28:48+00:00
|
6d880782a6185fd546ba085e483e18ca54f7f35d
|
# Dataset Card for "assin_por_Latn_to_eng_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ruanchaves/assin_por_Latn_to_eng_Latn
|
[
"region:us"
] |
2023-04-19T11:40:23+00:00
|
{"dataset_info": {"features": [{"name": "sentence_pair_id", "dtype": "int64"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "relatedness_score", "dtype": "float32"}, {"name": "entailment_judgment", "dtype": {"class_label": {"names": {"0": "NONE", "1": "ENTAILMENT", "2": "PARAPHRASE"}}}}, {"name": "__language__", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 993418, "num_examples": 5000}, {"name": "test", "num_bytes": 777672, "num_examples": 4000}, {"name": "validation", "num_bytes": 198351, "num_examples": 1000}], "download_size": 0, "dataset_size": 1969441}}
|
2023-04-22T18:11:54+00:00
|
4d502cfabcd2f3e360b618dbc498d32b6e13f44b
|
# Dataset Card for "assin2_por_Latn_to_spa_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ruanchaves/assin2_por_Latn_to_spa_Latn
|
[
"region:us"
] |
2023-04-19T11:40:37+00:00
|
{"dataset_info": {"features": [{"name": "sentence_pair_id", "dtype": "int64"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "relatedness_score", "dtype": "float32"}, {"name": "entailment_judgment", "dtype": {"class_label": {"names": {"0": "NONE", "1": "ENTAILMENT"}}}}, {"name": "__language__", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 926284, "num_examples": 6500}, {"name": "test", "num_bytes": 359987, "num_examples": 2448}, {"name": "validation", "num_bytes": 71410, "num_examples": 500}], "download_size": 0, "dataset_size": 1357681}}
|
2023-04-22T18:11:58+00:00
|
9143eb732ed5df52fb127ee25673c6f36282c6ab
|
# Dataset Card for "hatebr_por_Latn_to_spa_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ruanchaves/hatebr_por_Latn_to_spa_Latn
|
[
"region:us"
] |
2023-04-19T11:41:00+00:00
|
{"dataset_info": {"features": [{"name": "instagram_comments", "dtype": "string"}, {"name": "offensive_language", "dtype": "bool"}, {"name": "offensiveness_levels", "dtype": "int32"}, {"name": "antisemitism", "dtype": "bool"}, {"name": "apology_for_the_dictatorship", "dtype": "bool"}, {"name": "fatphobia", "dtype": "bool"}, {"name": "homophobia", "dtype": "bool"}, {"name": "partyism", "dtype": "bool"}, {"name": "racism", "dtype": "bool"}, {"name": "religious_intolerance", "dtype": "bool"}, {"name": "sexism", "dtype": "bool"}, {"name": "xenophobia", "dtype": "bool"}, {"name": "offensive_&_non-hate_speech", "dtype": "bool"}, {"name": "non-offensive", "dtype": "bool"}, {"name": "specialist_1_hate_speech", "dtype": "bool"}, {"name": "specialist_2_hate_speech", "dtype": "bool"}, {"name": "specialist_3_hate_speech", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 426153, "num_examples": 4480}, {"name": "validation", "num_bytes": 94951, "num_examples": 1120}, {"name": "test", "num_bytes": 120538, "num_examples": 1400}], "download_size": 0, "dataset_size": 641642}}
|
2023-04-22T18:12:11+00:00
|
8bd17f118591207dd26a1dbea1ec42864c7ec25f
|
# Dataset Card for "rerelem_por_Latn_to_spa_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ruanchaves/rerelem_por_Latn_to_spa_Latn
|
[
"region:us"
] |
2023-04-19T11:41:11+00:00
|
{"dataset_info": {"features": [{"name": "docid", "dtype": "string"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "same_text", "dtype": "bool"}, {"name": "__language__", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1137478, "num_examples": 2226}, {"name": "validation", "num_bytes": 379879, "num_examples": 701}, {"name": "test", "num_bytes": 410261, "num_examples": 805}], "download_size": 0, "dataset_size": 1927618}}
|
2023-04-22T18:12:38+00:00
|
a1d26390737a3942f85739d32a9c657763d2ad7b
|
# Dataset Card for "reli-sa_por_Latn_to_spa_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ruanchaves/reli-sa_por_Latn_to_spa_Latn
|
[
"region:us"
] |
2023-04-19T11:41:22+00:00
|
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "book", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "sentence_id", "dtype": "int64"}, {"name": "unique_review_id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1833644, "num_examples": 7875}, {"name": "validation", "num_bytes": 323687, "num_examples": 1348}, {"name": "test", "num_bytes": 673218, "num_examples": 3288}], "download_size": 0, "dataset_size": 2830549}}
|
2023-04-22T18:12:49+00:00
|
2f44582cf7c1005ce94d1b185af43c00d5e00963
|
# Dataset Card for "assin_por_Latn_to_spa_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ruanchaves/assin_por_Latn_to_spa_Latn
|
[
"region:us"
] |
2023-04-19T11:41:34+00:00
|
{"dataset_info": {"features": [{"name": "sentence_pair_id", "dtype": "int64"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "relatedness_score", "dtype": "float32"}, {"name": "entailment_judgment", "dtype": {"class_label": {"names": {"0": "NONE", "1": "ENTAILMENT", "2": "PARAPHRASE"}}}}, {"name": "__language__", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1052463, "num_examples": 5000}, {"name": "test", "num_bytes": 820108, "num_examples": 4000}, {"name": "validation", "num_bytes": 210810, "num_examples": 1000}], "download_size": 0, "dataset_size": 2083381}}
|
2023-04-22T18:13:06+00:00
|
cb48ea2047fdbfa01d10f18179ff34b7a1943129
|
# Dataset Card for "porsimplessent_por_Latn_to_spa_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ruanchaves/porsimplessent_por_Latn_to_spa_Latn
|
[
"region:us"
] |
2023-04-19T11:41:52+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int32"}, {"name": "production_id", "dtype": "int32"}, {"name": "level", "dtype": "string"}, {"name": "changed", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "sentence_text_from", "dtype": "string"}, {"name": "sentence_text_to", "dtype": "string"}, {"name": "__language__", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2285502, "num_examples": 4976}, {"name": "validation", "num_bytes": 652413, "num_examples": 1446}, {"name": "test", "num_bytes": 776229, "num_examples": 1697}], "download_size": 0, "dataset_size": 3714144}}
|
2023-04-22T18:13:25+00:00
|
0d4ab446e39c16e58bca475eec27a001ef772fa3
|
# Dataset Card for "faquad-nli_por_Latn_to_spa_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ruanchaves/faquad-nli_por_Latn_to_spa_Latn
|
[
"region:us"
] |
2023-04-19T11:42:54+00:00
|
{"dataset_info": {"features": [{"name": "document_index", "dtype": "int32"}, {"name": "document_title", "dtype": "string"}, {"name": "paragraph_index", "dtype": "int32"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "label", "dtype": "int32"}, {"name": "__language__", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 914711, "num_examples": 3128}, {"name": "validation", "num_bytes": 197365, "num_examples": 731}, {"name": "test", "num_bytes": 210232, "num_examples": 650}], "download_size": 0, "dataset_size": 1322308}}
|
2023-04-22T18:12:01+00:00
|
132c9ef3b9eb30c2fffa727fb0e1131bf8c0d252
|
# Dataset Card for "faquad-nli_por_Latn_to_eng_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ruanchaves/faquad-nli_por_Latn_to_eng_Latn
|
[
"region:us"
] |
2023-04-19T11:43:11+00:00
|
{"dataset_info": {"features": [{"name": "document_index", "dtype": "int32"}, {"name": "document_title", "dtype": "string"}, {"name": "paragraph_index", "dtype": "int32"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "label", "dtype": "int32"}, {"name": "__language__", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 826409, "num_examples": 3128}, {"name": "validation", "num_bytes": 183166, "num_examples": 731}, {"name": "test", "num_bytes": 191949, "num_examples": 650}], "download_size": 0, "dataset_size": 1201524}}
|
2023-04-22T18:13:21+00:00
|
0192d70951115256f866c848361d5d6ee900723a
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
KumbiWako/all_data
|
[
"task_categories:text-classification",
"language:om",
"region:us"
] |
2023-04-19T12:05:00+00:00
|
{"language": ["om"], "task_categories": ["text-classification"]}
|
2023-04-19T12:23:35+00:00
|
c2e1b8791afbb3d8314dcb390ad53d93eed78a89
|
# Dataset Card for "OIG-small-chip2-ko"
- 210282 items
- Original Dataset: OIG-small-chip2 dataset from https://laion.ai/blog/oig-dataset/
- Translated by Google Translate API
example
```
{
"user": "Is there a good way to clean up my credit report?\n\n",
"chip2": "That depends on why your credit score is low. Would you like to share more details about your situation?",
"index": 210272,
"user_translated": "내 신용 보고서를 정리하는 좋은 방법이 있습니까?\n\n",
"chip2_translated": "신용 점수가 낮은 이유에 따라 다릅니다. 귀하의 상황에 대해 더 자세히 알려주시겠습니까?"
}
```
|
heegyu/OIG-small-chip2-ko
|
[
"size_categories:100K<n<1M",
"language:ko",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-04-19T12:12:25+00:00
|
{"language": ["ko", "en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"]}
|
2023-04-19T12:25:41+00:00
|
47529cbc3bb1ebdf9cf644206e6a8a17e5522f3f
|
# Kasugano Sora (春日野穹) Voice Dataset
The dataset is extracted from *Yosuga no Sora* (缘之空) and *Haruka na Sora* (悠之空), with some inappropriate audio clips removed.
## Dataset Description
### Yosuga no Sora
### Haruka na Sora
### Hiroko Taguchi (credited as 宫村宫子) Songs
## Disclaimer
The contents of this project are for learning and exchange purposes only. Commercial use, as well as any other illegal activity or activity against public order and morals, is strictly prohibited. Please delete within 24 hours!
|
PorYoung/Kasugano-Sora
|
[
"license:mit",
"region:us"
] |
2023-04-19T12:13:59+00:00
|
{"license": "mit"}
|
2023-04-19T13:29:04+00:00
|
86f753bef9c40f884767e92b647a0c64225d921d
|
# Dataset Card for "resume-zh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
misitetong/resume-zh
|
[
"region:us"
] |
2023-04-19T12:25:43+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "sequence": "string"}, {"name": "ner", "list": [{"name": "index", "sequence": "int64"}, {"name": "type", "dtype": "string"}]}, {"name": "word", "sequence": {"sequence": "int64"}}], "splits": [{"name": "train", "num_bytes": 2921172, "num_examples": 3819}, {"name": "dev", "num_bytes": 323927, "num_examples": 463}, {"name": "test", "num_bytes": 358752, "num_examples": 477}], "download_size": 382144, "dataset_size": 3603851}}
|
2023-04-19T12:27:28+00:00
|
e905fbcc6c39aab697e29e5f831d2279f428b9c9
|
bodonodon/colabunny
|
[
"license:afl-3.0",
"region:us"
] |
2023-04-19T12:39:47+00:00
|
{"license": "afl-3.0"}
|
2023-04-19T12:40:10+00:00
|
|
7c419a70840ec46d62c980d4b09e9080f2aab166
|
iamketan25/alpaca-instructions-dataset
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-19T12:52:41+00:00
|
{"license": "apache-2.0"}
|
2023-04-19T12:53:11+00:00
|
|
f91a9ee7fa0de40dba89d992446f0ed1970a4568
|
# Dataset Card for "segundo_harem_selective"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
arubenruben/segundo_harem_selective
|
[
"region:us"
] |
2023-04-19T13:29:40+00:00
|
{"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PESSOA", "2": "I-PESSOA", "3": "B-ORGANIZACAO", "4": "I-ORGANIZACAO", "5": "B-LOCAL", "6": "I-LOCAL", "7": "B-TEMPO", "8": "I-TEMPO", "9": "B-VALOR", "10": "I-VALOR"}}}}], "splits": [{"name": "train", "num_bytes": 1235128, "num_examples": 117}], "download_size": 262690, "dataset_size": 1235128}}
|
2023-04-19T13:29:43+00:00
|
f7e3dc2b7df6074fd2f7a4886b32c892ae3e165c
|
# Dataset Card for "segundo_harem_default"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
arubenruben/segundo_harem_default
|
[
"region:us"
] |
2023-04-19T13:33:06+00:00
|
{"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PESSOA", "2": "I-PESSOA", "3": "B-ORGANIZACAO", "4": "I-ORGANIZACAO", "5": "B-LOCAL", "6": "I-LOCAL", "7": "B-TEMPO", "8": "I-TEMPO", "9": "B-VALOR", "10": "I-VALOR", "11": "B-ABSTRACCAO", "12": "I-ABSTRACCAO", "13": "B-ACONTECIMENTO", "14": "I-ACONTECIMENTO", "15": "B-COISA", "16": "I-COISA", "17": "B-OBRA", "18": "I-OBRA", "19": "B-OUTRO", "20": "I-OUTRO"}}}}], "splits": [{"name": "train", "num_bytes": 1276015, "num_examples": 117}], "download_size": 276069, "dataset_size": 1276015}}
|
2023-04-19T13:33:09+00:00
|
b08c3e0a3b3e2a516b551753b3a35f456f7e7258
|
# Dataset Card for "mr"
## Dataset Description
Movie review dataset from SentEval.
## Data Fields
- `sentence`: Complete sentence expressing an opinion about a film.
- `label`: Sentiment of the opinion, either "negative" (0) or "positive" (1).
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mattymchen/mr
|
[
"task_categories:text-classification",
"task_ids:sentiment-classification",
"language:en",
"region:us"
] |
2023-04-19T13:44:35+00:00
|
{"language": ["en"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 1352524, "num_examples": 10662}], "download_size": 883903, "dataset_size": 1352524}}
|
2023-04-19T14:20:03+00:00
|
7e02161bfd0a200d760d5a973e5f6b8f6455355d
|
# Dataset Card for "xorder_205"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vhug/xorder_205
|
[
"region:us"
] |
2023-04-19T13:45:58+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 70477788.0, "num_examples": 30}], "download_size": 0, "dataset_size": 70477788.0}}
|
2023-04-19T13:59:06+00:00
|
3415a1772935c44b6738ba4bcf14b30b92eb9978
|
# Dataset Card for "cr"
## Dataset Description
Product review dataset from SentEval.
## Data Fields
- `sentence`: Complete sentence expressing an opinion about a product.
- `label`: Sentiment of the opinion, either "negative" (0) or "positive" (1).
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mattymchen/cr
|
[
"task_categories:text-classification",
"task_ids:sentiment-classification",
"language:en",
"region:us"
] |
2023-04-19T13:57:36+00:00
|
{"language": ["en"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 408668, "num_examples": 3775}], "download_size": 244814, "dataset_size": 408668}}
|
2023-04-19T14:18:09+00:00
|
ae0e3d835bbf047e38f6b189d1b9081ab4452857
|
# AutoTrain Dataset for project: cancer-lakera
## Dataset Description
This dataset has been automatically processed by AutoTrain for project cancer-lakera.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<600x450 RGB PIL image>",
"feat_image_id": "ISIC_0024329",
"feat_lesion_id": "HAM_0002954",
"target": 0,
"feat_dx_type": "histo",
"feat_age": 75.0,
"feat_sex": "female",
"feat_localization": "lower extremity"
},
{
"image": "<600x450 RGB PIL image>",
"feat_image_id": "ISIC_0024372",
"feat_lesion_id": "HAM_0005389",
"target": 0,
"feat_dx_type": "histo",
"feat_age": 70.0,
"feat_sex": "male",
"feat_localization": "lower extremity"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"feat_image_id": "Value(dtype='string', id=None)",
"feat_lesion_id": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['actinic_keratoses', 'basal_cell_carcinoma', 'benign_keratosis-like_lesions'], id=None)",
"feat_dx_type": "Value(dtype='string', id=None)",
"feat_age": "Value(dtype='float64', id=None)",
"feat_sex": "Value(dtype='string', id=None)",
"feat_localization": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1200 |
| valid | 150 |
|
Lakera/autotrain-data-cancer-lakera
|
[
"task_categories:image-classification",
"region:us"
] |
2023-04-19T13:59:00+00:00
|
{"task_categories": ["image-classification"]}
|
2023-04-19T14:06:49+00:00
|
88302514bb17800b645f24f9488dbc180ecb67e2
|
# People's Daily (人民日报, 1946-2023) Dataset
The dataset is part of CialloCorpus, available at https://github.com/prnake/CialloCorpus
|
Papersnake/people_daily_news
|
[
"license:cc0-1.0",
"region:us"
] |
2023-04-19T14:09:28+00:00
|
{"license": "cc0-1.0"}
|
2024-01-19T07:46:27+00:00
|
1222ed29ec3d690f60ea6786f48b30955912d970
|
# Dataset Card for "nepali-to-newari-english-alphabet-translation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Unspoiled-Egg/nepali-to-newari-english-alphabet-translation
|
[
"task_categories:translation",
"size_categories:n<1K",
"language:ne",
"doi:10.57967/hf/0555",
"region:us"
] |
2023-04-19T14:10:30+00:00
|
{"language": ["ne"], "size_categories": ["n<1K"], "task_categories": ["translation"], "pretty_name": "n", "dataset_info": {"features": [{"name": "translation", "struct": [{"name": "ne", "dtype": "string"}, {"name": "new", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 14830, "num_examples": 299}], "download_size": 11107, "dataset_size": 14830}}
|
2023-04-23T10:05:46+00:00
|
a93230a8ca8dfd95823d326319c1faf8b257f164
|
# Dataset Card for "8000-java"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DavidMOBrien/8000-java
|
[
"region:us"
] |
2023-04-19T14:11:33+00:00
|
{"dataset_info": {"features": [{"name": "before", "dtype": "string"}, {"name": "after", "dtype": "string"}, {"name": "repo", "dtype": "string"}, {"name": "type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 722488653.5318879, "num_examples": 441596}, {"name": "test", "num_bytes": 90311899.73405604, "num_examples": 55200}, {"name": "valid", "num_bytes": 90311899.73405604, "num_examples": 55200}], "download_size": 323537982, "dataset_size": 903112452.9999999}}
|
2023-04-19T14:14:06+00:00
|
456ffaf26a0a571db20c60566dd8ee8336a73177
|
# Dataset Card for "animesfw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ioclab/animesfw
|
[
"region:us"
] |
2023-04-19T14:24:32+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "tags", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 968422627084.875, "num_examples": 3969879}], "download_size": 4471804726, "dataset_size": 968422627084.875}}
|
2023-04-24T13:10:44+00:00
|
5da72c4d376f7a7d7101fe2975c91cf52a39c370
|
# Dataset Card for "natural-instruction-195"
## Dataset Description
NaturalInstruction task 195.
In this task, you are given text from tweets. Your task is to classify the given tweet text into one of two categories: 1) positive or 2) negative, based on its content.
## Data Fields
- `text`: Tweet text.
- `label`: Sentiment of the text, either "negative" (0) or "positive" (1).
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
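For illustration, the integer labels documented above could be decoded as follows. This is a minimal sketch: the field names `text` and `label` come from the card, while the helper `decode_example` and the explicit mapping are assumptions based on the description.

```python
# Map the integer sentiment labels documented above to their names.
# 0 -> "negative", 1 -> "positive" (mapping taken from the card's field description).
LABEL_NAMES = {0: "negative", 1: "positive"}

def decode_example(example: dict) -> dict:
    """Return a copy of the example with a human-readable label name attached."""
    return {**example, "label_name": LABEL_NAMES[example["label"]]}

sample = {"text": "What a great day!", "label": 1}
print(decode_example(sample)["label_name"])  # -> positive
```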
|
mattymchen/natural-instruction-195
|
[
"task_categories:text-classification",
"task_ids:sentiment-classification",
"language:en",
"region:us"
] |
2023-04-19T15:09:31+00:00
|
{"language": ["en"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 549795, "num_examples": 6500}], "download_size": 388442, "dataset_size": 549795}}
|
2023-04-20T08:34:15+00:00
|
0c9ba3e3150e51bbc44bf6ffe10629a0d6faeaf8
|
# Dataset Card for "natural-instruction-050"
## Dataset Description
NaturalInstruction task 050.
You are given a sentence and a question in the input. If the information provided in the sentence is enough to answer the question, label "Yes"; otherwise label "No". Do not use any facts other than those provided in the sentence. There are only two valid responses: Yes and No.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mattymchen/natural-instruction-050
|
[
"region:us"
] |
2023-04-19T15:21:07+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 847078, "num_examples": 5912}], "download_size": 443131, "dataset_size": 847078}}
|
2023-04-20T08:43:27+00:00
|
94c7998045390064573a038b54225ddda2e7e044
|
SoulAbi/Yelp-Dataset
|
[
"language:en",
"license:openrail",
"region:us"
] |
2023-04-19T15:38:03+00:00
|
{"language": ["en"], "license": "openrail"}
|
2023-04-19T15:40:00+00:00
|
|
d5b7a4e67e71ed010224b9f2336f3d6834da5276
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
Sanath369/Telugu_sentiment_sentences
|
[
"size_categories:n<1K",
"language:te",
"region:us"
] |
2023-04-19T15:52:06+00:00
|
{"language": ["te"], "size_categories": ["n<1K"], "pretty_name": "Telugu language movie review data"}
|
2023-10-08T07:47:53+00:00
|
7849df70fa7bcc810993f33ca4c0ad4f96ba1feb
|
# Dataset Card for "pokemon5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
WUYONGF/pokemon5
|
[
"region:us"
] |
2023-04-19T16:11:01+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 254925.0, "num_examples": 5}], "download_size": 234406, "dataset_size": 254925.0}}
|
2023-04-20T03:50:38+00:00
|
a7bf838f7a5edfa016e1e5ca9fb05283a69ae13e
|
Russian GLaDOS voice lines with links to audio hosted at https://i1.theportalwiki.net/,
parsed from https://theportalwiki.com/wiki/GLaDOS_voice_lines/ru
|
TeraSpace/glados_ru_lines
|
[
"language:ru",
"license:mit",
"region:us"
] |
2023-04-19T16:27:05+00:00
|
{"language": ["ru"], "license": "mit"}
|
2023-10-18T21:17:49+00:00
|
512ce9b633ce5bf9b859d91dbd58307a8ac33c16
|
# Dataset Card for "processed_eval_coco"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gokuls/processed_eval_coco
|
[
"region:us"
] |
2023-04-19T16:35:59+00:00
|
{"dataset_info": {"features": [{"name": "image_path", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "pixel_values", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "validation", "num_bytes": 3026780000, "num_examples": 5000}], "download_size": 920275832, "dataset_size": 3026780000}}
|
2023-04-19T16:39:45+00:00
|
3ef3e2f30d0f933fdfdabcf34701231987cc6af1
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_benchmarks_partition_0
|
[
"region:us"
] |
2023-04-19T16:36:17+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 63, "num_examples": 2}], "download_size": 1388, "dataset_size": 63}}
|
2023-04-19T18:44:40+00:00
|
82419d0e586265833ee17fed5b17340777172db5
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_benchmarks_partition_1
|
[
"region:us"
] |
2023-04-19T16:36:17+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 63, "num_examples": 2}], "download_size": 1388, "dataset_size": 63}}
|
2023-04-19T18:44:40+00:00
|
3907e6510f69467ba0e43c18e6819db7b3a774d9
|
# Dataset Card for "rmh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
stoddur/rmh
|
[
"region:us"
] |
2023-04-19T16:55:17+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15286850153, "num_examples": 5643208}], "download_size": 9354218561, "dataset_size": 15286850153}}
|
2023-04-19T17:54:03+00:00
|
8a3858328feccc729dd25968170b2144c203e844
|
# Dataset Card for "big-bigbio-ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rntc/big-bigbio-ner
|
[
"region:us"
] |
2023-04-19T16:55:29+00:00
|
{"dataset_info": {"features": [{"name": "answer", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "ner_tags", "sequence": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "types", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 796468363, "num_examples": 169113}], "download_size": 156028850, "dataset_size": 796468363}}
|
2023-04-19T16:57:16+00:00
|
6eb7ed1c50b3cdacddad855fecf23753cbd82ff1
|
Hyttenak/storytimeline
|
[
"license:unknown",
"region:us"
] |
2023-04-19T18:29:05+00:00
|
{"license": "unknown"}
|
2023-04-19T19:35:11+00:00
|
|
b66bfe2cf4187b64b0faf0d6c996b2bff70d1f41
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_benchmarks_partition_5
|
[
"region:us"
] |
2023-04-19T18:39:28+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 55, "num_examples": 2}], "download_size": 1356, "dataset_size": 55}}
|
2023-04-19T18:44:38+00:00
|
47e45ed627e67c5871f3fe31cd465c5384a3e0ba
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_benchmarks_partition_6
|
[
"region:us"
] |
2023-04-19T18:39:28+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 58, "num_examples": 2}], "download_size": 1368, "dataset_size": 58}}
|
2023-04-19T18:44:38+00:00
|
82b32dc9f686ed98c307e65c6b251aefdb35fa70
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_benchmarks_partition_3
|
[
"region:us"
] |
2023-04-19T18:39:28+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 58, "num_examples": 2}], "download_size": 1368, "dataset_size": 58}}
|
2023-04-19T18:44:38+00:00
|
4d1145af1ca4b59a2a735dcf115b90665ad435b1
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_benchmarks_partition_4
|
[
"region:us"
] |
2023-04-19T18:39:28+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39, "num_examples": 2}], "download_size": 1292, "dataset_size": 39}}
|
2023-04-19T18:44:38+00:00
|
339a3f1d176cda7539b4b6389d92576bf15a3b87
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_benchmarks_partition_2
|
[
"region:us"
] |
2023-04-19T18:39:28+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39, "num_examples": 2}], "download_size": 1292, "dataset_size": 39}}
|
2023-04-19T18:44:37+00:00
|
976c96b478d0dd763261abba9e7c82b98929dda6
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_sample_validation_benchmarks_partition_7
|
[
"region:us"
] |
2023-04-19T18:39:29+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 27, "num_examples": 2}], "download_size": 1242, "dataset_size": 27}}
|
2023-04-19T18:44:38+00:00
|
a5c6ad271749cab8351bda7e9909b3f99cc452f2
|
# Dataset Card for "test_lambda"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/test_lambda
|
[
"region:us"
] |
2023-04-19T18:43:15+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "abyssinian", "1": "american bulldog", "2": "american pit bull terrier", "3": "basset hound", "4": "beagle", "5": "bengal", "6": "birman", "7": "bombay", "8": "boxer", "9": "british shorthair", "10": "chihuahua", "11": "egyptian mau", "12": "english cocker spaniel", "13": "english setter", "14": "german shorthaired", "15": "great pyrenees", "16": "havanese", "17": "japanese chin", "18": "keeshond", "19": "leonberger", "20": "maine coon", "21": "miniature pinscher", "22": "newfoundland", "23": "persian", "24": "pomeranian", "25": "pug", "26": "ragdoll", "27": "russian blue", "28": "saint bernard", "29": "samoyed", "30": "scottish terrier", "31": "shiba inu", "32": "siamese", "33": "sphynx", "34": "staffordshire bull terrier", "35": "wheaten terrier", "36": "yorkshire terrier"}}}}, {"name": "species", "dtype": {"class_label": {"names": {"0": "Cat", "1": "Dog"}}}}, {"name": "id", "dtype": "int64"}, {"name": "clip_tags_ViT_L_14", "sequence": "string"}, {"name": "blip_caption", "dtype": "string"}, {"name": "LLM_Description_gpt3_downstream_tasks_ViT_L_14", "sequence": "string"}, {"name": "clip_tag_ViT_L_14_specific", "dtype": "string"}, {"name": "clip_tags_ViT_L_14_ensemble_specific", "dtype": "string"}, {"name": "clip_tags_ViT_L_14_simple_specific", "dtype": "string"}, {"name": "LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14", "sequence": "string"}, {"name": "clip_tags_ViT_L_14_with_openai_classes", "sequence": "string"}, {"name": "clip_tags_ViT_L_14_wo_openai_classes", "sequence": "string"}, {"name": "Attributes_ViT_L_14_text_davinci_003", "sequence": "string"}, {"name": "Attributes_ViT_L_14_text_davinci_003_full", "sequence": "string"}, {"name": "Attributes_ViT_L_14_text_davinci_003_oxfordpets", "sequence": "string"}, {"name": "clip_tags_ViT_B_16_simple_specific", "dtype": "string"}, {"name": 
"clip_tags_ViT_B_16_ensemble_specific", "dtype": "string"}, {"name": "clip_tags_ViT_B_32_simple_specific", "dtype": "string"}, {"name": "clip_tags_ViT_B_32_ensemble_specific", "dtype": "string"}, {"name": "Attributes_ViT_L_14_descriptors_text_davinci_003_full_validate", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 419597453.0, "num_examples": 3669}], "download_size": 413002721, "dataset_size": 419597453.0}}
|
2023-04-19T18:43:54+00:00
|
34d287b1a67853560b7e1534c65a19875f36db6e
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2_sample_validation_benchmarks_partition_4
|
[
"region:us"
] |
2023-04-19T18:46:12+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39, "num_examples": 2}], "download_size": 0, "dataset_size": 39}}
|
2023-04-19T19:58:56+00:00
|
7e199ba42e6c1ec65001850be78099bc0209d33c
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2_sample_validation_benchmarks_partition_2
|
[
"region:us"
] |
2023-04-19T18:46:12+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39, "num_examples": 2}], "download_size": 0, "dataset_size": 39}}
|
2023-04-19T19:58:56+00:00
|
84bae955a6ba68e071e1531cf151d67f14fd5eb5
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2_sample_validation_benchmarks_partition_6
|
[
"region:us"
] |
2023-04-19T18:46:12+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 58, "num_examples": 2}], "download_size": 0, "dataset_size": 58}}
|
2023-04-19T19:58:56+00:00
|
8c90f5acd25e93a224c7716815edcb85b4adc89d
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2_sample_validation_benchmarks_partition_3
|
[
"region:us"
] |
2023-04-19T18:46:12+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 58, "num_examples": 2}], "download_size": 0, "dataset_size": 58}}
|
2023-04-19T19:58:56+00:00
|
6a26d599ec24fe632a57de0efa9235288b8b9834
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2_sample_validation_benchmarks_partition_5
|
[
"region:us"
] |
2023-04-19T18:46:12+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 55, "num_examples": 2}], "download_size": 0, "dataset_size": 55}}
|
2023-04-19T19:58:56+00:00
|
bc9d6e0ecfef597d286674939ade09367b6f6d24
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2_sample_validation_benchmarks_partition_0
|
[
"region:us"
] |
2023-04-19T18:46:12+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 63, "num_examples": 2}], "download_size": 0, "dataset_size": 63}}
|
2023-04-19T19:58:56+00:00
|
fbde1aff05c239502b82f506e62b7d29c12c16bb
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2_sample_validation_benchmarks_partition_7
|
[
"region:us"
] |
2023-04-19T18:46:12+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 27, "num_examples": 2}], "download_size": 0, "dataset_size": 27}}
|
2023-04-19T19:58:56+00:00
|
7cb4010d437e829a6df781363ac9f5e391d73aa9
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2_sample_validation_benchmarks_partition_1
|
[
"region:us"
] |
2023-04-19T18:46:12+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 63, "num_examples": 2}], "download_size": 0, "dataset_size": 63}}
|
2023-04-19T19:58:56+00:00
|
3bda016f561ca28ab6da041cae60475391c198a2
|
Phonecharger/WLAagreement
|
[
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:summarization",
"task_categories:feature-extraction",
"task_categories:table-question-answering",
"task_categories:automatic-speech-recognition",
"task_categories:sentence-similarity",
"task_categories:fill-mask",
"size_categories:10K<n<100K",
"language:en",
"license:openrail",
"region:us"
] |
2023-04-19T19:07:06+00:00
|
{"language": ["en"], "license": "openrail", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "conversational", "summarization", "feature-extraction", "table-question-answering", "automatic-speech-recognition", "sentence-similarity", "fill-mask"], "pretty_name": "Co470"}
|
2023-04-19T19:10:31+00:00
|
|
3101f144886c29717ec99c899b22c758cecea302
|
QuickWire/TestSet-CB
|
[
"license:mit",
"region:us"
] |
2023-04-19T19:21:27+00:00
|
{"license": "mit"}
|
2023-04-24T21:57:59+00:00
|
|
fb4dcf6dd5ce22be5371b87c81b52147f6f88403
|
garcianacho/Scripts
|
[
"license:bsd",
"region:us"
] |
2023-04-19T19:27:53+00:00
|
{"license": "bsd"}
|
2023-04-19T19:30:23+00:00
|
|
db5ff83cea830b252f50b50894946f58751f524e
|
# Dataset Card for "COCO2014-Qrels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
justram/COCO2014-Qrels
|
[
"region:us"
] |
2023-04-19T19:32:37+00:00
|
{"dataset_info": {"features": [{"name": "text_id", "dtype": "int64"}, {"name": "Q0", "dtype": "string"}, {"name": "image_id", "dtype": "int64"}, {"name": "rel", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 17002410, "num_examples": 566747}, {"name": "val", "num_bytes": 750300, "num_examples": 25010}, {"name": "test", "num_bytes": 750300, "num_examples": 25010}], "download_size": 5515654, "dataset_size": 18503010}}
|
2023-04-19T19:32:54+00:00
|
8a18a7e7791837a9bb7b0169b5cfaac5d300eeaa
|
# Dataset Card for "COCO2014-Captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
justram/COCO2014-Captions
|
[
"region:us"
] |
2023-04-19T19:33:17+00:00
|
{"dataset_info": {"features": [{"name": "text_id", "dtype": "int64"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 36551702, "num_examples": 566747}, {"name": "val", "num_bytes": 1610843, "num_examples": 25010}, {"name": "test", "num_bytes": 1610345, "num_examples": 25010}], "download_size": 21814166, "dataset_size": 39772890}}
|
2023-04-19T19:33:40+00:00
|
58d5b38e42eda6537da901f621bd1fb91d730d6f
|
# Dataset Card for "COCO2014-Images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
justram/COCO2014-Images
|
[
"region:us"
] |
2023-04-19T19:38:00+00:00
|
{"dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "val", "num_bytes": 244670060.0, "num_examples": 5000}, {"name": "test", "num_bytes": 243891680.0, "num_examples": 5000}, {"name": "train", "num_bytes": 5538368056.125, "num_examples": 113287}], "download_size": 6027072598, "dataset_size": 6026929796.125}}
|
2023-04-19T20:19:41+00:00
|
62e7d6069459e95c8795a94df7a54ce7d5d677da
|
yash1811/news_summaries
|
[
"license:mit",
"region:us"
] |
2023-04-19T19:41:41+00:00
|
{"license": "mit"}
|
2023-04-19T21:00:36+00:00
|
|
ddd755a77b8d48c21a76bfe54873d307c2cd48da
|
# Dataset Card for "punctuation-mec-bert-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tiagoblima/punctuation-mec-bert-v2
|
[
"region:us"
] |
2023-04-19T19:51:19+00:00
|
{"dataset_info": {"features": [{"name": "text_id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "labels", "sequence": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "sent_text", "dtype": "string"}, {"name": "tag", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3086823, "num_examples": 2247}], "download_size": 493652, "dataset_size": 3086823}}
|
2023-04-21T20:20:41+00:00
|
d7b3208525869d458df3c64cdfc8c8f8538fa84b
|
luizfsjunior/idades
|
[
"license:unknown",
"region:us"
] |
2023-04-19T19:55:04+00:00
|
{"license": "unknown"}
|
2023-04-19T19:55:04+00:00
|
|
3e5d0572e283a46963e843043cb733f66da4168b
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_global_8_loca_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2_sample_validation_benchmarks_partition_global_8_loca_0
|
[
"region:us"
] |
2023-04-19T20:32:45+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 48, "num_examples": 1}], "download_size": 0, "dataset_size": 48}}
|
2023-04-20T02:06:36+00:00
|
168187ac9a57d4fb9acd60080a17ed58d76048ae
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_global_12_loca_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2_sample_validation_benchmarks_partition_global_12_loca_4
|
[
"region:us"
] |
2023-04-19T20:32:45+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24, "num_examples": 1}], "download_size": 0, "dataset_size": 24}}
|
2023-04-20T02:06:37+00:00
|
5e44f96741b33f12986744ce0a35f5fd7e157b96
|
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_global_15_loca_7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2_sample_validation_benchmarks_partition_global_15_loca_7
|
[
"region:us"
] |
2023-04-19T20:32:46+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14, "num_examples": 1}], "download_size": 0, "dataset_size": 14}}
|
2023-04-20T02:06:37+00:00
|