sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
7e13029312aefc044fc580abc65b87b591596004
|
# Dataset Card for "bart_tokenized_data_bpe_byte_level"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ZurabDz/bart_tokenized_data_bpe_byte_level
|
[
"region:us"
] |
2023-04-17T15:35:22+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 704227420, "num_examples": 2708567}], "download_size": 437699986, "dataset_size": 704227420}}
|
2023-04-17T15:38:00+00:00
|
eb60093c2d0375ebced8711d43464ab4ebe2bc85
|
# Dataset Card for "tib_w_slides"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gigant/tib_w_slides
|
[
"region:us"
] |
2023-04-17T15:35:39+00:00
|
{"dataset_info": {"features": [{"name": "doi", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "video_url", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "genre", "dtype": "string"}, {"name": "release_year", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "contributors", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "transcript", "dtype": "string"}, {"name": "transcript_segments", "sequence": [{"name": "id", "dtype": "int32"}, {"name": "seek", "dtype": "int32"}, {"name": "start", "dtype": "float32"}, {"name": "end", "dtype": "float32"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "int32"}, {"name": "temperature", "dtype": "float32"}, {"name": "avg_logprob", "dtype": "float32"}, {"name": "compression_ratio", "dtype": "float32"}, {"name": "no_speech_prob", "dtype": "float32"}]}, {"name": "keyframes", "sequence": [{"name": "slide", "dtype": "image"}, {"name": "frames", "sequence": "int32"}, {"name": "timestamp", "sequence": "float32"}]}], "splits": [{"name": "train", "num_bytes": 121040849.0, "num_examples": 9}], "download_size": 120461997, "dataset_size": 121040849.0}}
|
2023-04-17T15:35:59+00:00
|
0f5409104c4ac786e5bb366e7c4057dfd3b439df
|
# Distil Whisper: Common Voice 13
This is a variant of the [Common Voice 13](https://huggingface.co/datasets/mozilla_foundation/common_voice_13) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/mozilla_foundation/common_voice_13).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/common_voice_13_0", "en")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/common_voice_13_0", "en", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc0-1.0.
|
distil-whisper/common_voice_13_0
|
[
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc0-1.0",
"region:us"
] |
2023-04-17T15:51:15+00:00
|
{"language": ["en"], "license": "cc0-1.0", "task_categories": ["automatic-speech-recognition"], "-pretty_name": "Common Voice 13"}
|
2023-09-25T09:30:13+00:00
|
1020089fce7c6d8a579959383c166547cbfc5c66
|
# Dataset Card for "sst5-mapped-extreme"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jacobthebanana/sst5-mapped-extreme
|
[
"region:us"
] |
2023-04-17T15:51:48+00:00
|
{"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 450820, "num_examples": 4004}, {"name": "test", "num_bytes": 119474, "num_examples": 1067}, {"name": "validation", "num_bytes": 60494, "num_examples": 533}], "download_size": 413936, "dataset_size": 630788}}
|
2023-04-18T18:51:29+00:00
|
1f453f96cdf275f0330fb2f241c148684869ac82
|
# Dataset Card for "sst5_mapped_grouped"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jacobthebanana/sst5_mapped_grouped
|
[
"region:us"
] |
2023-04-17T15:59:03+00:00
|
{"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 979714, "num_examples": 8544}, {"name": "test", "num_bytes": 253524, "num_examples": 2210}, {"name": "validation", "num_bytes": 127148, "num_examples": 1101}], "download_size": 890192, "dataset_size": 1360386}}
|
2023-04-18T18:54:38+00:00
|
febb9c0eec62adfcd63c9076761122cae49c6c56
|
The dataset was created using the web scraping script available [here](https://github.com/vanessasml/web-scrapper).
The ENISA website includes a section of publicly available news on cyber risk.
The dataset includes the following information: publication_date (datetime.date), title (str), summary (str), body (str).
# Credits
Full credits to ENISA. This dataset serves only to facilitate the exploration of ML models for cyber risk.
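For reference, a minimal loading sketch with the Hugging Face `datasets` library (the split name `train` is an assumption; the card does not document a loading script):
```python
from datasets import load_dataset

# Split name "train" is assumed; adjust if the hosted files use a different split.
news = load_dataset("Vanessasml/enisa_cyber_news_dataset", split="train")

article = news[0]
# Fields documented above: publication_date, title, summary, body
print(article["publication_date"], article["title"])
print(article["summary"])
```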
|
Vanessasml/enisa_cyber_news_dataset
|
[
"region:us"
] |
2023-04-17T16:03:54+00:00
|
{}
|
2023-04-17T16:13:58+00:00
|
2a251ba6c660312be4d7a145c37c044b47074afa
|
# Dataset Card for April 2023 Polish Wikipedia
Wikipedia dataset containing cleaned articles of Polish language.
The dataset has been built from the Wikipedia dump (https://dumps.wikimedia.org/)
using the [OLM Project](https://github.com/huggingface/olm-datasets).
Each example contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.).
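A short usage sketch; given the roughly 2.9 GB train split reported in the metadata below, streaming avoids downloading the full dump just to inspect a few articles:
```python
from datasets import load_dataset

# Stream instead of downloading the full ~2.9 GB train split
wiki = load_dataset("chrisociepa/wikipedia-pl-20230401", split="train", streaming=True)

# Each example carries "id", "url", "title" and "text"
for article in wiki.take(3):
    print(article["title"], article["url"])
```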
### Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
(CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License)
(GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such
text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes
the text.
### Citation Information
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
```
|
chrisociepa/wikipedia-pl-20230401
|
[
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"size_categories:1M<n<10M",
"language:pl",
"license:cc-by-sa-3.0",
"pretraining",
"language modelling",
"wikipedia",
"web",
"region:us"
] |
2023-04-17T16:14:21+00:00
|
{"language": ["pl"], "license": "cc-by-sa-3.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "Polish Wikipedia 2023-04-01", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2883878741, "num_examples": 1562327}], "download_size": 1761971402, "dataset_size": 2883878741}, "tags": ["pretraining", "language modelling", "wikipedia", "web"]}
|
2023-04-17T19:41:24+00:00
|
9c34753a67f5b05558ae3b5c3f90f387f9ad8b11
|
# Slovene MNLI SNLI
This dataset contains 49,961 premise-hypothesis pairs (50% MNLI, 50% SNLI), which were acquired by translating the original samples.
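A minimal loading sketch; split and column names are taken from the dataset metadata below:
```python
from datasets import load_dataset

# Splits: train (40000), dev (4961), test (5000)
ds = load_dataset("jacinthes/slovene_mnli_snli")

pair = ds["train"][0]
# Slovene pair, its label, the source corpus, and the original English premise
print(pair["premise"], "|", pair["hypothesis"], "->", pair["label"])
print(pair["source"], "|", pair["org_premise"])
```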
|
jacinthes/slovene_mnli_snli
|
[
"license:cc-by-sa-4.0",
"region:us"
] |
2023-04-17T16:33:45+00:00
|
{"license": "cc-by-sa-4.0", "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "org_premise", "dtype": "string"}, {"name": "org_hypothesis", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 12554097, "num_examples": 40000}, {"name": "dev", "num_bytes": 1569723, "num_examples": 4961}, {"name": "test", "num_bytes": 1584740, "num_examples": 5000}], "download_size": 8471333, "dataset_size": 15708560}}
|
2023-04-20T08:03:53+00:00
|
8dddf29f9ea090325b9cdea906988eb6584ceefa
|
# Dataset Card for Snippet-MLSUM-500-V2
### Dataset Summary
This dataset is a sample of ~500 news articles from the [MLSUM](https://huggingface.co/datasets/mlsum) dataset, augmented with machine generated news snippets.
### Supported Tasks
This dataset was created to support the task of generating news snippets such as title, teaser, keywords, serp and tweet for news articles in the German language.
### Languages
de - German
## Dataset Structure
- text: a string feature.
- title: a string feature.
- teaser: a string feature.
- keywords: a string feature.
- summary: a string feature.
- serp: a string feature.
- tweet: a string feature.
- url: a string feature.
- date: a string feature.
- topic: a string feature.
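A small sketch of iterating over these fields (the split name `train` is an assumption of this example, as the card does not list splits):
```python
from datasets import load_dataset

# Split name "train" is assumed here
snippets = load_dataset("snipaid/snippet-mlsum-500-v2", split="train")

row = snippets[0]
# Print the machine-generated snippet fields alongside the original title
for field in ("title", "teaser", "keywords", "summary", "serp", "tweet"):
    print(f"{field}: {row[field]}")
```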
## Dataset Creation
The news articles in this dataset are a random sample of ~500 news articles from MLSUM balanced by topic.
Features text, title, teaser (originally summary in MLSUM), url, date and topic are copied from MLSUM.
Features keywords, serp, summary and tweet are machine generated with GPT-3.5.
Generated features comply with length limits in place for SERPs and Tweets at the time of publishing.
## Considerations for Using the Data
### Known Limitations
Part of the snippet data is machine generated. Be aware that these features (specifically: keywords, serp, summary and tweet) may exhibit signs of model hallucination, stereotypes and toxicity.
## Additional Information
### Licensing Information
This dataset is licensed under MIT license.
|
snipaid/snippet-mlsum-500-v2
|
[
"task_categories:summarization",
"task_categories:text2text-generation",
"size_categories:n<1K",
"language:de",
"license:mit",
"news",
"headline",
"teaser",
"keywords",
"tweet",
"serp",
"summary",
"news snippets",
"region:us"
] |
2023-04-17T16:44:25+00:00
|
{"language": "de", "license": "mit", "size_categories": ["n<1K"], "task_categories": ["summarization", "text2text-generation"], "tags": ["news", "headline", "teaser", "keywords", "tweet", "serp", "summary", "news snippets"]}
|
2023-04-19T17:26:42+00:00
|
a731408903e08c33f1a69bce8682159e7fe2071f
|
# Dataset Card for "newest_biored"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
SHS/newest_biored
|
[
"region:us"
] |
2023-04-17T17:00:14+00:00
|
{"dataset_info": {"features": [{"name": "pmid", "dtype": "string"}, {"name": "passage", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 576610, "num_examples": 97}, {"name": "train", "num_bytes": 2259680, "num_examples": 387}, {"name": "val", "num_bytes": 604670, "num_examples": 98}], "download_size": 1083243, "dataset_size": 3440960}}
|
2023-04-24T20:35:53+00:00
|
3da1559ac33f5a0605dc15dbddfec3bb3b935a45
|
# Dataset Card for Instruct-Snippet-MLSUM-500-V2
### Dataset Summary
This is a multitask instruction-finetuning dataset for the task of news snippet generation. It is built from a sample of ~500 news articles from the [MLSUM](https://huggingface.co/datasets/mlsum) dataset, augmented with machine generated news snippets.
### Supported Tasks
This dataset was created to support the task of generating news snippets such as title, teaser, summary, keywords, serp and tweet for news articles in the German language.
### Languages
de - German
## Dataset Structure
- label: a string feature.
- instruction: a string feature.
- input: a string feature.
- output: a string feature.
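A minimal sketch of turning one record into a training prompt; the template below is an illustrative assumption, not a format prescribed by the dataset:
```python
from datasets import load_dataset

# Split name "train" is assumed here
instruct_ds = load_dataset("snipaid/instruct-snippet-mlsum-v2", split="train")

def to_prompt(example: dict) -> str:
    # Illustrative instruction-tuning template; adapt to the target model
    return f"{example['instruction']}\n\n{example['input']}\n\n### Output:\n{example['output']}"

print(to_prompt(instruct_ds[0]))
```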
## Dataset Creation
This dataset was created from Snippet-MLSUM-500-V2. See [Snippet-MLSUM-500-V2](https://huggingface.co/datasets/snipaid/snippet-mlsum-500-V2) for the dataset without instructions.
Instructions were generated with GPT-3.5 from a human-curated seed-set of instructions.
## Considerations for Using the Data
### Known Limitations
Part of the snippet data is machine generated. Be aware that these features (specifically: output) may exhibit signs of model hallucination, toxicity and stereotypes.
## Additional Information
### Licensing Information
This dataset is licensed under MIT license.
|
snipaid/instruct-snippet-mlsum-v2
|
[
"task_categories:summarization",
"task_categories:text2text-generation",
"size_categories:1K<n<10K",
"language:de",
"license:mit",
"news",
"headline generation",
"teaser generation",
"keyword generation",
"summarization",
"tweet generation",
"serp generation",
"news snippet generation",
"region:us"
] |
2023-04-17T17:27:27+00:00
|
{"language": "de", "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["summarization", "text2text-generation"], "pretty_name": "Instruct-Snippet-MLSUM-500-V2", "tags": ["news", "headline generation", "teaser generation", "keyword generation", "summarization", "tweet generation", "serp generation", "news snippet generation"]}
|
2023-04-19T17:10:24+00:00
|
7b04e00e66860887ae1938dca0f2030eb03c881a
|
# GenderClassify
## Example Images
#### woman

#### man

|
cledoux42/GenderClassify
|
[
"image-classification",
"pytorch",
"huggingpics",
"region:us"
] |
2023-04-17T17:27:30+00:00
|
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"], "model-index": [{"name": "GenderClassify", "results": [{"task": {"name": "Image Classification", "type": "image-classification"}, "metrics": [{"name": "Accuracy", "type": "accuracy"}]}]}]}
|
2023-04-17T19:45:47+00:00
|
a834d3e158d1365cbac84e15101bec5ff86db103
|
# OpenAssistant Conversations Spanish Dataset (OASST1-es) for GPT-j
## Dataset Summary
Subset of the original [OpenAssistant Conversations Dataset (OASST)](https://huggingface.co/datasets/OpenAssistant/oasst1).
* Filtered by `lang=es`.
* Formatted according to the "instruction - output" pattern.
* Selected the best-ranked output (some instructions have multiple outputs ranked by humans).
* Selected only the first level of the conversation tree.
## Dataset Structure
The dataset has 3909 rows of (instruction, output) pairs.
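A short sketch of loading the pairs and formatting them for GPT-J-style fine-tuning (the prompt template below is an assumption of this example):
```python
from datasets import load_dataset

# Single "train" split with 3909 instruction/output pairs (per the metadata)
oasst_es = load_dataset("dariolopez/gpt-j-oasst1-es", split="train")

example = oasst_es[0]
# Illustrative Spanish instruction/response template; adjust to your setup
prompt = f"### Instrucción:\n{example['instruction']}\n\n### Respuesta:\n{example['output']}"
print(prompt)
```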
|
dariolopez/gpt-j-oasst1-es
|
[
"size_categories:1K<n<10K",
"language:es",
"license:apache-2.0",
"region:us"
] |
2023-04-17T17:30:42+00:00
|
{"language": ["es"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4445880, "num_examples": 3909}], "download_size": 2580076, "dataset_size": 4445880}}
|
2023-04-21T18:03:26+00:00
|
4a11cb973b985e3c4cb55ce20b836d4834057fc0
|
This dataset was created by automatically translating "databricks-dolly-15k" into Japanese.
This dataset contains 69K Japanese-English translation task examples and is licensed under CC BY-SA 3.0.
Last Update : 2023-04-18
databricks-dolly-15k-ja
https://github.com/kunishou/databricks-dolly-15k-ja
databricks-dolly-15k
https://github.com/databrickslabs/dolly/tree/master/data
|
kunishou/databricks-dolly-69k-ja-en-translation
|
[
"language:ja",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] |
2023-04-17T17:31:42+00:00
|
{"language": ["ja", "en"], "license": "cc-by-sa-3.0"}
|
2023-10-21T14:09:14+00:00
|
a652efb068e0be19cca294b20f85030f5d0e34ce
|
trec-product-search/product-search-corpus
|
[
"task_categories:text-classification",
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"information retrieval",
"product search",
"dense retrieval",
"region:us"
] |
2023-04-17T17:41:56+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-classification"], "pretty_name": "TREC Product Search Corpus", "tags": ["information retrieval", "product search", "dense retrieval"]}
|
2023-08-09T14:56:33+00:00
|
|
2d8e416c8b686e6a22ef515128f38f877bcdb034
|
trec-product-search/Product-Search-Qrels-v0.1
|
[
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"Information Retrieval",
"TREC",
"Product Search",
"region:us"
] |
2023-04-17T17:42:16+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "pretty_name": "TREC Product Search Relevance Labels", "tags": ["Information Retrieval", "TREC", "Product Search"]}
|
2023-05-17T15:31:18+00:00
|
|
8463c1a4137459f9b0751e8a0ea48d63f2af2e53
|
# Dataset Card for "discursos-pre-clasificados"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Sleoruiz/discursos-pre-clasificados
|
[
"region:us"
] |
2023-04-17T18:31:30+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "comision", "dtype": "string"}, {"name": "gaceta_numero", "dtype": "string"}, {"name": "fecha_gaceta", "dtype": "string"}, {"name": "labels", "sequence": "string"}, {"name": "scores", "sequence": "float64"}, {"name": "idx", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 250597907.43449536, "num_examples": 119399}, {"name": "test", "num_bytes": 31325000.782752313, "num_examples": 14925}, {"name": "valid", "num_bytes": 31325000.782752313, "num_examples": 14925}], "download_size": 144277964, "dataset_size": 313247909.0}}
|
2023-04-17T18:32:24+00:00
|
edad2c0293b40905a759e9a3db3a8b3a1b7dd595
|
# Dataset Card for "ruin_names_preprocessed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
skrishna/ruin_names_preprocessed
|
[
"region:us"
] |
2023-04-17T18:43:01+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "sequence": "string"}, {"name": "multiple_choice_targets", "sequence": "string"}, {"name": "multiple_choice_scores", "sequence": "int32"}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 111297, "num_examples": 359}, {"name": "validation", "num_bytes": 27924, "num_examples": 89}], "download_size": 55059, "dataset_size": 139221}}
|
2023-04-17T19:07:04+00:00
|
2956d312f01edad572469bcfa5965c1bd04ae4c6
|
axprok/SovietFilmTitles
|
[
"size_categories:1K<n<10K",
"region:us"
] |
2023-04-17T19:08:44+00:00
|
{"size_categories": ["1K<n<10K"], "pretty_name": "sovfilmtitles"}
|
2023-04-17T19:10:55+00:00
|
|
474d94030c202d5438abf06dcdca630c434a9a17
|
Zubairjamu/mydata
|
[
"license:cc-by-3.0",
"region:us"
] |
2023-04-17T19:11:41+00:00
|
{"license": "cc-by-3.0"}
|
2023-04-17T19:11:41+00:00
|
|
439c601e24efada60b38c3edc5f670dd98d8f5cf
|
# Dataset Card for "xorder_dish_sdk"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vhug/xorder_dish_sdk
|
[
"region:us"
] |
2023-04-17T19:26:09+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 30385044.0, "num_examples": 13}], "download_size": 2091361, "dataset_size": 30385044.0}}
|
2023-04-17T19:26:11+00:00
|
267ee0efd20b7a94874d80bbb4e22cb3b7ec666e
|
# Dataset Card for "applescript-lines-100k-non-annotated"
## Description
Dataset of 100,000 unique lines of AppleScript code scraped from GitHub and GitHub Gists. The dataset has been de-duplicated, comments have been removed (both single-line and multi-line), and effort has been made to merge multi-line structures such as records into a single line (however, expect some variability in this regard).
The dataset is constructed as an intermediate step to a fully-annotated AppleScript dataset.
Each row has fields for `text` and `source`, with text being the raw text of the line and source being the file name and extension from which the line was obtained. Full source links have been omitted for anonymity.
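A minimal loading sketch; the length filter is only an example of downstream cleanup, not something applied in the dataset itself:
```python
from datasets import load_dataset

# Single "train" split with 100,000 rows of "text" and "source"
lines = load_dataset("HelloImSteven/applescript-lines-100k-non-annotated", split="train")

# Example cleanup step: drop very short lines before further annotation
long_lines = lines.filter(lambda row: len(row["text"]) > 20)
print(long_lines[0]["text"], "(from", long_lines[0]["source"] + ")")
```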
|
HelloImSteven/applescript-lines-100k-non-annotated
|
[
"task_categories:text-classification",
"size_categories:100K<n<1M",
"license:mit",
"code",
"applescript",
"region:us"
] |
2023-04-17T19:28:24+00:00
|
{"license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8452105, "num_examples": 100000}], "download_size": 2718505, "dataset_size": 8452105}, "tags": ["code", "applescript"]}
|
2023-04-17T19:43:09+00:00
|
45fb44dfb0d5a7aa18061ae275c7c30864da4f6c
|
# Dataset Card for "chunk_273"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
one-sec-cv12/chunk_273
|
[
"region:us"
] |
2023-04-17T20:09:47+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 5506047648.25, "num_examples": 57326}], "download_size": 4808654570, "dataset_size": 5506047648.25}}
|
2023-04-17T20:14:32+00:00
|
41d2f41183ae36228b6f448d43684c04c77fd9ea
|
elenanaymova/elenanaymova
|
[
"task_categories:zero-shot-classification",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:translation",
"task_categories:question-answering",
"task_categories:table-question-answering",
"size_categories:1M<n<10M",
"language:af",
"language:ae",
"language:aa",
"language:am",
"language:ba",
"language:bg",
"language:an",
"language:ak",
"license:bigscience-openrail-m",
"finance",
"art",
"code",
"region:us"
] |
2023-04-17T20:42:36+00:00
|
{"language": ["af", "ae", "aa", "am", "ba", "bg", "an", "ak"], "license": "bigscience-openrail-m", "size_categories": ["1M<n<10M"], "task_categories": ["zero-shot-classification", "text-classification", "token-classification", "translation", "question-answering", "table-question-answering"], "pretty_name": "Fiasco is visiting ", "tags": ["finance", "art", "code"]}
|
2023-04-17T20:59:33+00:00
|
|
fcd8d2068356997a24d40d82c1536626ec869026
|
# Dataset Card for "iwslt-2023-en-vi-train-val-split-0.2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
shreevigneshs/iwslt-2023-en-vi-train-val-split-0.2
|
[
"region:us"
] |
2023-04-17T20:58:43+00:00
|
{"dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "vi", "dtype": "string"}, {"name": "vi_annotated", "dtype": "string"}, {"name": "styles", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 293279.0, "num_examples": 640}, {"name": "val", "num_bytes": 69940.0, "num_examples": 160}, {"name": "if_test", "num_bytes": 275045.0, "num_examples": 598}, {"name": "f_test", "num_bytes": 294897.0, "num_examples": 598}, {"name": "f_flores", "num_bytes": 337966, "num_examples": 1012}, {"name": "if_flores", "num_bytes": 337966, "num_examples": 1012}], "download_size": 570518, "dataset_size": 1609093.0}}
|
2023-04-17T22:50:20+00:00
|
808d2b7c111f71f91e5c5cd8f4546a1a618d93f7
|
@InProceedings{Yu&al.19,
title = {SParC: Cross-Domain Semantic Parsing in Context},
author = {Tao Yu and Rui Zhang and Michihiro Yasunaga and Yi Chern Tan and Xi Victoria Lin and Suyi Li and Heyang Er and Irene Li and Bo Pang and Tao Chen and Emily Ji and Shreya Dixit and David Proctor and Sungrok Shim and Jonathan Kraft and Vincent Zhang and Caiming Xiong and Richard Socher and Dragomir Radev},
booktitle = {Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
year = {2019},
address = {Florence, Italy},
publisher = {Association for Computational Linguistics}
}
@inproceedings{Yu&al.18c,
title = {Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task},
author = {Tao Yu and Rui Zhang and Kai Yang and Michihiro Yasunaga and Dongxu Wang and Zifan Li and James Ma and Irene Li and Qingning Yao and Shanelle Roman and Zilin Zhang and Dragomir Radev},
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
year = 2018
}
Reference links
SParC task link: https://yale-lily.github.io/sparc
SParC Github page: https://github.com/taoyds/sparc
Spider task link: https://yale-lily.github.io/spider
Spider Github page: https://github.com/taoyds/spider
|
jellyChiru/SParC
|
[
"license:cc-by-sa-4.0",
"region:us"
] |
2023-04-17T21:07:52+00:00
|
{"license": "cc-by-sa-4.0"}
|
2023-04-17T22:00:38+00:00
|
76ef60c2b6a1591e1754282b4b914f5c3eea6bc3
|
Creation Steps
- Downloaded [5 Million Song Dataset](https://www.kaggle.com/datasets/nikhilnayak123/5-million-song-lyrics-dataset) from Kaggle
- Selected quality artists, as defined by me
- Removed songs featuring any [profanity](https://github.com/surge-ai/profanity)
- Added normalized version of lyrics (used for GloVe embedding only)
  - (lowercase, remove punctuation, remove stopwords, lemmatize)
- Computed four sets of embeddings using all-MiniLM-L12-v2, all-distilroberta-v1, text-embedding-ada-002, and average_word_embeddings_glove.840B.300d
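For reference, the normalization step described above could look roughly like the sketch below using NLTK; the exact rules used to build the dataset are not documented here, so treat this as an approximation:
```python
import re

import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords")
nltk.download("wordnet")

STOPWORDS = set(stopwords.words("english"))
LEMMATIZER = WordNetLemmatizer()

def normalize_lyrics(text: str) -> str:
    """Lowercase, strip punctuation, drop stopwords, lemmatize."""
    text = re.sub(r"[^a-z\s]", " ", text.lower())
    tokens = [LEMMATIZER.lemmatize(tok) for tok in text.split() if tok not in STOPWORDS]
    return " ".join(tokens)

print(normalize_lyrics("She's singing the same old songs again!"))
```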
|
sheacon/song_lyrics
|
[
"region:us"
] |
2023-04-17T21:12:06+00:00
|
{}
|
2023-04-18T02:50:45+00:00
|
8e6af5556ef22aa31e8c386927005915155c09c9
|
# Dataset Card for "iwslt-2023-en-ko-train-val-split-0.2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
shreevigneshs/iwslt-2023-en-ko-train-val-split-0.2
|
[
"region:us"
] |
2023-04-17T21:18:20+00:00
|
{"dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "ko", "dtype": "string"}, {"name": "ko_annotated", "dtype": "string"}, {"name": "styles", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 250870.0, "num_examples": 640}, {"name": "val", "num_bytes": 64582.0, "num_examples": 160}, {"name": "if_test", "num_bytes": 238485.0, "num_examples": 597}, {"name": "f_test", "num_bytes": 249702.0, "num_examples": 597}, {"name": "f_flores", "num_bytes": 312159, "num_examples": 1012}, {"name": "if_flores", "num_bytes": 312159, "num_examples": 1012}], "download_size": 0, "dataset_size": 1427957.0}}
|
2023-04-17T23:08:27+00:00
|
d230f3fa0c12a710b3d2b2e44eff393e776578de
|
# Dataset Card for "fr_crawler"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
factored/fr_crawler_mlm
|
[
"region:us"
] |
2023-04-17T21:40:57+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 122141363, "num_examples": 735972}], "download_size": 68057880, "dataset_size": 122141363}}
|
2023-04-20T20:27:10+00:00
|
d2c24b5e450d8ae1db9dc0c8b14830d1fce1695a
|
# Dataset Card for "biored_tokenized_new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
safiyaalavi/biored_tokenized_new
|
[
"region:us"
] |
2023-04-17T21:51:15+00:00
|
{"dataset_info": {"features": [{"name": "pmid", "dtype": "string"}, {"name": "passage", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 184274, "num_examples": 30}, {"name": "train", "num_bytes": 865185, "num_examples": 148}, {"name": "val", "num_bytes": 197171, "num_examples": 33}], "download_size": 393173, "dataset_size": 1246630}}
|
2023-04-17T21:53:14+00:00
|
1c8b8c2a966d68ff4f19bf3ae5116aefbd0ff8a9
|
gocer/bgg
|
[
"license:other",
"region:us"
] |
2023-04-17T22:27:17+00:00
|
{"license": "other"}
|
2023-04-17T22:27:17+00:00
|
|
9d451dc7629cfe0469f6ae4432b765cd603d5fcb
|
# LLaVA Visual Instruct 150K Dataset Card
## Dataset details
**Dataset type:**
LLaVA Visual Instruct 150K is a set of GPT-generated multimodal instruction-following data.
It is constructed for visual instruction tuning and for building large multimodal models towards GPT-4 vision/language capability.
**Dataset date:**
LLaVA Visual Instruct 150K was collected in April 2023, by prompting GPT-4-0314 API.
**Paper or resources for more information:**
https://llava-vl.github.io/
**License:**
Creative Commons Attribution 4.0 International; and it should abide by the policy of OpenAI: https://openai.com/policies/terms-of-use
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
|
liuhaotian/LLaVA-Instruct-150K
|
[
"task_categories:visual-question-answering",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"region:us"
] |
2023-04-17T22:47:27+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["visual-question-answering", "question-answering"], "pretty_name": "LLaVA Visual Instruct 150K"}
|
2024-01-03T01:59:20+00:00
|
230718276ca6b8131fa71b797310e1e6763315a5
|
# Ruozhiba (弱智吧) Jokes Dataset
Ruozhiba is a very popular forum on Baidu Tieba, famous for its short, punchy deadpan jokes. These jokes typically rely on devices such as puns, unusual sentence segmentation, and absurd logic. Even today's most advanced language models struggle to fully understand Ruozhiba jokes.
[Ruozhiba (弱智吧)](https://tieba.baidu.com/f?ie=utf-8&kw=%E5%BC%B1%E6%99%BA)
I collected 100 Ruozhiba jokes from the internet, of which 45 are statements and 55 are questions. I analyzed them with a combination of manual work and language models, and built this small dataset.
## Statement jokes
Statement jokes usually end with a period and are unlikely to be mistaken by a language model for a genuine question.
For example: "出人头地常年盛产人头。" (a pun reading the idiom 出人头地, "to rise above others", literally as a place that abundantly produces human heads)
## Question jokes
Question jokes are somewhat deceptive and may leave a language model unable to tell whether they are genuine questions or jokes.
For example: "蓝牙耳机坏了,应该找牙科医生还是耳科医生?" ("My Bluetooth earphones are broken; should I see a dentist or an ear doctor?")
## File format
This dataset consists of two parts.
### retarded_bar.jsonl
retarded_bar.jsonl is the statement-joke file, stored in JSONL format. Each line is a JSON dictionary with five fields: index `id`, original text `text`, punchline analysis `analysis`, puns `pun`, and author type `author_type`, where:
- `id` is a number, the index of the joke.
- `text` is the original text of the joke, written by members of the Ruozhiba community and collected by me manually from the internet.
- `analysis` is a text explanation of the joke's punchline; most analyses were written by me, and a small portion were generated with a language model, as recorded in `author_type`.
- `pun` is a list of strings with the puns found in the joke, identified by me. A joke may contain more than one pun, or none at all.
- `author_type` is a string indicating the author of the `analysis` (not of the original `text`); it currently takes the values `human` and `ai`.
### retarded_bar_qa.jsonl
retarded_bar_qa.jsonl is the question-joke file, stored in JSONL format. Each line is a JSON dictionary with four fields: index `id`, original text `text`, reply `answer`, and author type `author_type`, where:
- `id` is a number, the index of the joke.
- `text` is the original text of the joke, written by members of the Ruozhiba community and collected by me manually from the internet.
- `answer` is a text reply to the question joke. I define a reasonable reply as one that lets the asker know the humor has been noticed, stays polite, and still provides accurate factual information. Some replies were written by me and some were generated with a language model, as recorded in `author_type`.
- `author_type` is a string indicating the author of the `answer` (not of the original `text`); it currently takes the values `human` and `ai`.
## Usage
It is recommended to read this dataset with Python's jsonlines library or Hugging Face's datasets library. These libraries make it easy to read JSONL files for downstream processing, such as building training or test sets and training or evaluating language models. For example, the jsonlines library can read a JSONL file line by line, as shown below:
```python
import jsonlines
with jsonlines.open('retarded_bar.jsonl') as reader:
    for obj in reader:
        # process each object
        print(obj)
```
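Alternatively, with the Hugging Face datasets library, each file can be loaded through its config name; the config names `statement` and `question` come from this repository's metadata, and the default split name `train` is an assumption:
```python
from datasets import load_dataset

# "statement" maps to retarded_bar.jsonl, "question" to retarded_bar_qa.jsonl
statements = load_dataset("hugfaceguy0001/retarded_bar", "statement", split="train")
questions = load_dataset("hugfaceguy0001/retarded_bar", "question", split="train")

print(statements[0]["text"], statements[0]["analysis"])
print(questions[0]["text"], questions[0]["answer"])
```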
## Limitations
1. This project was done by me alone, and this kind of data is hard to annotate, difficult to automate, and fairly labor-intensive, so the dataset is small.
2. My writing ability is limited, so I may not express a punchline analysis accurately and vividly, or produce particularly high-quality replies. Some analyses and answers in this dataset may therefore not be optimal.
3. The data was collected from the internet and may involve copyright issues. Please be mindful of copyright and comply with the relevant laws and regulations when using this dataset.
4. Since most Ruozhiba jokes depend on the Chinese language context, this dataset may not be suitable for judging jokes in other languages.
## Contact
My QQ: 583753622
## More high-quality data contributions are welcome!
|
hugfaceguy0001/retarded_bar
|
[
"task_categories:text-generation",
"size_categories:n<1K",
"language:zh",
"license:openrail",
"region:us"
] |
2023-04-17T22:55:28+00:00
|
{"language": ["zh"], "license": "openrail", "size_categories": ["n<1K"], "task_categories": ["text-generation"], "pretty_name": "\u5f31\u667a\u5427\u7b11\u8bdd\u6570\u636e\u96c6", "configs": [{"config_name": "statement", "data_files": "retarded_bar.jsonl"}, {"config_name": "question", "data_files": "retarded_bar_qa.jsonl"}]}
|
2023-08-30T20:41:00+00:00
|
7e9fe77ffc1b44f9d8340cfd2cdc82fa3d8f7571
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
kharebanafsh/987
|
[
"region:us"
] |
2023-04-17T23:01:43+00:00
|
{}
|
2023-04-17T23:24:48+00:00
|
dff8b1dc3277b96c07efe9077e6d2e8330e2827e
|
# Dataset Card for "iwslt-2023-en-pt-train-val-split-0.2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
shreevigneshs/iwslt-2023-en-pt-train-val-split-0.2
|
[
"region:us"
] |
2023-04-17T23:06:01+00:00
|
{"dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "pt", "dtype": "string"}, {"name": "pt_annotated", "dtype": "string"}, {"name": "styles", "dtype": "int64"}], "splits": [{"name": "if_test", "num_bytes": 228385.0, "num_examples": 599}, {"name": "f_test", "num_bytes": 226921.0, "num_examples": 599}, {"name": "f_flores", "num_bytes": 301879, "num_examples": 1012}, {"name": "if_flores", "num_bytes": 301879, "num_examples": 1012}], "download_size": 682919, "dataset_size": 1059064.0}}
|
2023-04-17T23:06:10+00:00
|
c8072105c669313796cee8c3f2e973c089d4fa0c
|
# Dataset Card for "iwslt-2023-en-ru-train-val-split-0.2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
shreevigneshs/iwslt-2023-en-ru-train-val-split-0.2
|
[
"language:ru",
"language:en",
"region:us"
] |
2023-04-17T23:06:39+00:00
|
{"language": ["ru", "en"], "dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "ru", "dtype": "string"}, {"name": "ru_annotated", "dtype": "string"}, {"name": "styles", "dtype": "int64"}], "splits": [{"name": "if_test", "num_bytes": 327410, "num_examples": 600}, {"name": "f_test", "num_bytes": 327839, "num_examples": 600}, {"name": "f_flores", "num_bytes": 414702, "num_examples": 1012}, {"name": "if_flores", "num_bytes": 414702, "num_examples": 1012}], "download_size": 836846, "dataset_size": 1484653}}
|
2023-10-02T18:21:58+00:00
|
9446146d7cc79146d08c773659eadc6d84effb6e
|
# Dataset Card for "squad-kor-augmented"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
HSJuan/squad-kor-augmented
|
[
"region:us"
] |
2023-04-18T00:15:38+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "pos_aug", "dtype": "string"}, {"name": "neg_aug", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 94206230, "num_examples": 60407}], "download_size": 17504014, "dataset_size": 94206230}}
|
2023-04-20T21:36:54+00:00
|
bd58e3a6afba442b942ff7485f9394a87c1205a8
|
# Dataset Card for Common Voice Corpus 10.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:[email protected])
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 20817 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 15234 validated hours in 96 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hindi, Hungarian, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Toki Pona, Turkish, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
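If a model expects a different sampling rate than the native 48 kHz noted for the `audio` field above, the column can be resampled on the fly with `cast_column`; a short sketch:
```python
from datasets import Audio, load_dataset

cv = load_dataset("mozilla-foundation/common_voice_10_0", "en", split="train", use_auth_token=True)

# Decode and resample lazily to 16 kHz on access (e.g. for wav2vec2/Whisper-style models)
cv = cv.cast_column("audio", Audio(sampling_rate=16_000))
print(cv[0]["audio"]["sampling_rate"])  # 16000
```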
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
Many examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_10_0", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
|
gogogogo-1/test
|
[
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] |
2023-04-18T01:51:29+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": {"ab": ["10K<n<100K"], "ar": ["100K<n<1M"], "as": ["1K<n<10K"], "ast": ["n<1K"], "az": ["n<1K"], "ba": ["100K<n<1M"], "bas": ["1K<n<10K"], "be": ["100K<n<1M"], "bg": ["1K<n<10K"], "bn": ["100K<n<1M"], "br": ["10K<n<100K"], "ca": ["1M<n<10M"], "ckb": ["100K<n<1M"], "cnh": ["1K<n<10K"], "cs": ["10K<n<100K"], "cv": ["10K<n<100K"], "cy": ["100K<n<1M"], "da": ["1K<n<10K"], "de": ["100K<n<1M"], "dv": ["10K<n<100K"], "el": ["10K<n<100K"], "en": ["1M<n<10M"], "eo": ["1M<n<10M"], "es": ["100K<n<1M"], "et": ["10K<n<100K"], "eu": ["100K<n<1M"], "fa": ["100K<n<1M"], "fi": ["10K<n<100K"], "fr": ["100K<n<1M"], "fy-NL": ["10K<n<100K"], "ga-IE": ["1K<n<10K"], "gl": ["10K<n<100K"], "gn": ["1K<n<10K"], "ha": ["1K<n<10K"], "hi": ["10K<n<100K"], "hsb": ["1K<n<10K"], "hu": ["10K<n<100K"], "hy-AM": ["1K<n<10K"], "ia": ["10K<n<100K"], "id": ["10K<n<100K"], "ig": ["1K<n<10K"], "it": ["100K<n<1M"], "ja": ["10K<n<100K"], "ka": ["1K<n<10K"], "kab": ["100K<n<1M"], "kk": ["1K<n<10K"], "kmr": ["10K<n<100K"], "ky": ["10K<n<100K"], "lg": ["100K<n<1M"], "lt": ["10K<n<100K"], "lv": ["1K<n<10K"], "mdf": ["n<1K"], "mhr": ["10K<n<100K"], "mk": ["n<1K"], "ml": ["1K<n<10K"], "mn": ["10K<n<100K"], "mr": ["10K<n<100K"], "mt": ["10K<n<100K"], "myv": ["1K<n<10K"], "nan-tw": ["10K<n<100K"], "ne-NP": ["n<1K"], "nl": ["10K<n<100K"], "nn-NO": ["n<1K"], "or": ["1K<n<10K"], "pa-IN": ["1K<n<10K"], "pl": ["100K<n<1M"], "pt": ["100K<n<1M"], "rm-sursilv": ["1K<n<10K"], "rm-vallader": ["1K<n<10K"], "ro": ["10K<n<100K"], "ru": ["100K<n<1M"], "rw": ["1M<n<10M"], "sah": ["1K<n<10K"], "sat": ["n<1K"], "sc": ["n<1K"], "sk": ["10K<n<100K"], "sl": ["10K<n<100K"], "sr": ["1K<n<10K"], "sv-SE": ["10K<n<100K"], "sw": ["100K<n<1M"], "ta": ["100K<n<1M"], "th": ["100K<n<1M"], "tig": ["n<1K"], "tok": ["1K<n<10K"], "tr": ["10K<n<100K"], "tt": ["10K<n<100K"], "ug": ["10K<n<100K"], "uk": ["10K<n<100K"], "ur": ["100K<n<1M"], "uz": ["100K<n<1M"], "vi": ["10K<n<100K"], "vot": ["n<1K"], "yue": ["10K<n<100K"], "zh-CN": ["100K<n<1M"], "zh-HK": ["100K<n<1M"], "zh-TW": ["100K<n<1M"]}, "source_datasets": ["extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "paperswithcode_id": "common-voice", "pretty_name": "Common Voice Corpus 10.0", "language_bcp47": ["ab", "ar", "as", "ast", "az", "ba", "bas", "be", "bg", "bn", "br", "ca", "ckb", "cnh", "cs", "cv", "cy", "da", "de", "dv", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy-NL", "ga-IE", "gl", "gn", "ha", "hi", "hsb", "hu", "hy-AM", "ia", "id", "ig", "it", "ja", "ka", "kab", "kk", "kmr", "ky", "lg", "lt", "lv", "mdf", "mhr", "mk", "ml", "mn", "mr", "mt", "myv", "nan-tw", "ne-NP", "nl", "nn-NO", "or", "pa-IN", "pl", "pt", "rm-sursilv", "rm-vallader", "ro", "ru", "rw", "sah", "sat", "sc", "sk", "sl", "sr", "sv-SE", "sw", "ta", "th", "tig", "tok", "tr", "tt", "ug", "uk", "ur", "uz", "vi", "vot", "yue", "zh-CN", "zh-HK", "zh-TW"], "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."}
|
2023-06-09T06:56:15+00:00
|
343d0ded6f4f4167dac0d86e6e55af909ca4dad3
|
# Dataset Card for "1400-java"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DavidMOBrien/1400-java
|
[
"region:us"
] |
2023-04-18T01:52:41+00:00
|
{"dataset_info": {"features": [{"name": "before", "dtype": "string"}, {"name": "after", "dtype": "string"}, {"name": "repo", "dtype": "string"}, {"name": "type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 164476347.2534379, "num_examples": 106809}, {"name": "test", "num_bytes": 20560890.828749474, "num_examples": 13352}, {"name": "valid", "num_bytes": 20559350.91781263, "num_examples": 13351}], "download_size": 74061787, "dataset_size": 205596589.0}}
|
2023-04-18T01:53:42+00:00
|
451eb8e274a5672c55b0a064fe2c0c64b3a69f5d
|
# Dataset Card for "sandhi-split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chronbmm/sandhi-split
|
[
"region:us"
] |
2023-04-18T01:54:18+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "unsandhied", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 163604621, "num_examples": 702859}, {"name": "validation", "num_bytes": 1134772, "num_examples": 1293}, {"name": "test", "num_bytes": 1177379, "num_examples": 1352}, {"name": "test_long_500", "num_bytes": 428925, "num_examples": 500}, {"name": "validation_long_500", "num_bytes": 440736, "num_examples": 500}], "download_size": 97959957, "dataset_size": 166786433}}
|
2023-07-10T23:43:56+00:00
|
1ecd64072f6bf716b9fb430501f575fa9c033e2e
|
cannlytics/cannabis_sales
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-04-18T02:52:20+00:00
|
{"license": "cc-by-4.0"}
|
2023-04-18T02:52:20+00:00
|
|
8f07cb919b67619cf4c7c9b438e3a8815c3ce618
|
Ashish-shukla/test_dataset
|
[
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:openrail",
"region:us"
] |
2023-04-18T04:36:42+00:00
|
{"language": ["en"], "license": "openrail", "size_categories": ["n<1K"], "task_categories": ["text-generation"], "pretty_name": "tiny_demo"}
|
2023-04-18T04:39:07+00:00
|
|
9ca18d6fd4058a1f825a20e0097319972cb82da9
|
Isotonic/massive_nli_dataset
|
[
"task_categories:zero-shot-classification",
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-04-18T05:15:46+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["zero-shot-classification"], "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "train", "num_bytes": 150300464, "num_examples": 1018574}, {"name": "test", "num_bytes": 32168924, "num_examples": 218266}, {"name": "valid", "num_bytes": 32238483, "num_examples": 218266}], "download_size": 137255997, "dataset_size": 214707871}}
|
2023-07-05T11:34:10+00:00
|
|
c4eedf3f270dc403846c2fe1d7243d9fe9a77e65
|
jaydeepb-21/REBELCUSTOMDATA
|
[
"license:other",
"region:us"
] |
2023-04-18T05:27:36+00:00
|
{"license": "other"}
|
2023-04-18T05:30:25+00:00
|
|
4c08ca0f98954bf2e6fb29429ae851133524cb66
|
Origin dataset can be accessed from [here](https://github.com/GuessWhatGame/guesswhat).
|
jxu124/guesswhat
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-18T05:56:33+00:00
|
{"license": "apache-2.0", "dataset_info": {"features": [{"name": "image_raw", "dtype": "image"}, {"name": "status", "dtype": "string"}, {"name": "picture", "struct": [{"name": "coco_url", "dtype": "string"}, {"name": "file_name", "dtype": "string"}, {"name": "flickr_url", "dtype": "string"}, {"name": "height", "dtype": "int64"}, {"name": "width", "dtype": "int64"}]}, {"name": "picture_id", "dtype": "int64"}, {"name": "qas", "list": [{"name": "q", "dtype": "string"}, {"name": "a", "dtype": "string"}, {"name": "id", "dtype": "int64"}]}, {"name": "questioner_id", "dtype": "int64"}, {"name": "timestamp", "dtype": "string"}, {"name": "object_id", "dtype": "int64"}, {"name": "dialogue_id", "dtype": "int64"}, {"name": "objects", "struct": [{"name": "objects_keys", "sequence": "string"}, {"name": "objects_values", "list": [{"name": "area", "dtype": "float64"}, {"name": "bbox", "sequence": "float64"}, {"name": "category", "dtype": "string"}, {"name": "category_id", "dtype": "int64"}, {"name": "iscrowd", "dtype": "bool"}, {"name": "object_id", "dtype": "int64"}, {"name": "segment", "sequence": {"sequence": "float64"}}]}]}], "splits": [{"name": "train", "num_bytes": 17727639600.26, "num_examples": 108860}, {"name": "test", "num_bytes": 3858218992.82, "num_examples": 23115}, {"name": "validation", "num_bytes": 3885120224.34, "num_examples": 23305}], "download_size": 25497584790, "dataset_size": 25470978817.42}}
|
2023-06-29T09:51:22+00:00
|
33b5f350ce9593d93e70373c637ca13e9e297de6
|
# Dataset Card for "turkishReviews-ds-textGeneration"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kaaniince/turkishReviews-ds-textGeneration
|
[
"region:us"
] |
2023-04-18T06:22:19+00:00
|
{"dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1408268.074460517, "num_examples": 3795}, {"name": "validation", "num_bytes": 156597.92553948305, "num_examples": 422}], "download_size": 1004999, "dataset_size": 1564866.0}}
|
2023-04-18T06:22:24+00:00
|
2f2448a621ace6d39fc1ac854e5c050b30917612
|
# Pums
The [Pums dataset](https://archive-beta.ics.uci.edu/dataset/116/us+census+data+1990) from the [UCI repository](https://archive-beta.ics.uci.edu/).
U.S.A. Census dataset; the task is to classify the income of the individual.
# Configurations and tasks
| **Configuration** | **Task** |
|-----------------------|---------------------------|
| pums | Binary classification.|
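A loading sketch for the configuration above (the split name `train` is an assumption of this example):
```python
from datasets import load_dataset

# "pums" is the only configuration listed in this card
pums = load_dataset("mstz/pums", "pums", split="train")
print(pums[0])
```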
|
mstz/pums
|
[
"task_categories:tabular-classification",
"language:en",
"pums",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] |
2023-04-18T06:32:38+00:00
|
{"language": ["en"], "task_categories": ["tabular-classification"], "pretty_name": "Ipums", "tags": ["pums", "tabular_classification", "binary_classification", "UCI"], "configs": ["pums"]}
|
2023-04-18T06:42:19+00:00
|
01ae842aa2ac6c1620d4ba99c9264991374dc8d9
|
# Dataset Card for "sam_ocr_donut"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
shrey9669/sam_ocr_donut
|
[
"region:us"
] |
2023-04-18T06:34:36+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 46980779.0, "num_examples": 54}, {"name": "test", "num_bytes": 5872591.0, "num_examples": 7}, {"name": "validation", "num_bytes": 5016626.0, "num_examples": 6}], "download_size": 56659460, "dataset_size": 57869996.0}}
|
2023-04-18T06:35:53+00:00
|
c29d9739c5cdd29ca9d254ae14024d7ed87eba7a
|
https://github.com/disrpt/sharedtask2023
scidtb:
```
@inproceedings{yang-li-2018-scidtb,
title = "{S}ci{DTB}: Discourse Dependency {T}ree{B}ank for Scientific Abstracts",
author = "Yang, An and
Li, Sujian",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P18-2071",
doi = "10.18653/v1/P18-2071",
pages = "444--449",
abstract = "Annotation corpus for discourse relations benefits NLP tasks such as machine translation and question answering. In this paper, we present SciDTB, a domain-specific discourse treebank annotated on scientific articles. Different from widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and simplified to some extent but do not sacrifice structural integrity. We discuss the labeling framework, annotation workflow and some statistics about SciDTB. Furthermore, our treebank is made as a benchmark for evaluating discourse dependency parsers, on which we provide several baselines as fundamental work.",
}
```
|
tasksource/disrpt
|
[
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-04-18T06:36:18+00:00
|
{"language": ["en"], "license": "apache-2.0"}
|
2023-11-29T15:01:27+00:00
|
6d0bfcbd654d6e5b6345574963b1e8f5fca12d4b
|
The dataset contains (almost) the entire OpenSubtitles database for Japanese:
- Over 7,000 TV shows and/or movies.
- The subtitles are human generated.
- The dataset has been parsed, cleaned and converted to UTF-8.
File contents:
- OpenSubtitles.parquet: The text and the time data.
- OpenSubtitles_meta.parquet: The existing metadata for each title.
- OpenSubtitles-OA.parquet: The dataset encoded with two columns, SOURCE (the name of the movie/TV show) and TEXT (the subtitles), following the Open Assistant rules.
Both tables can be joined on the ID column. (The value can be NULL in the meta table.)
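As a minimal sketch (assuming pandas with parquet support is installed and that the shared column is literally named `ID`), the two tables can be joined like this:
```python
import pandas as pd

# Subtitle text/timing table and per-title metadata table
subs = pd.read_parquet("OpenSubtitles.parquet")
meta = pd.read_parquet("OpenSubtitles_meta.parquet")

# A left join keeps every subtitle entry even when its metadata is NULL
joined = subs.merge(meta, on="ID", how="left")
print(joined.head())
```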
|
Nan-Do/OpenSubtitlesJapanese
|
[
"task_categories:text-generation",
"size_categories:100M<n<1B",
"language:ja",
"text",
"tv",
"movies",
"region:us"
] |
2023-04-18T06:43:48+00:00
|
{"language": ["ja"], "size_categories": ["100M<n<1B"], "task_categories": ["text-generation"], "pretty_name": "OpenSubtitles dataset in Japanese", "tags": ["text", "tv", "movies"]}
|
2023-04-19T08:34:45+00:00
|
57bbe8fcdd7853284deba8c540e129350c80b9ac
|
# Soybean
The [Soybean dataset](https://archive-beta.ics.uci.edu/dataset/90/soybean+large) from the [UCI repository](https://archive-beta.ics.uci.edu/).
Classify the type of soybean.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-----------------|
| soybean | Multiclass classification. | Classify soybean type. |
| diaporthe_stem_canker | Binary classification | Is this instance of class diaporthe_stem_canker? |
| charcoal_rot | Binary classification | Is this instance of class charcoal_rot? |
| rhizoctonia_root_rot | Binary classification | Is this instance of class rhizoctonia_root_rot? |
| phytophthora_rot | Binary classification | Is this instance of class phytophthora_rot? |
| brown_stem_rot | Binary classification | Is this instance of class brown_stem_rot? |
| powdery_mildew | Binary classification | Is this instance of class powdery_mildew? |
| downy_mildew | Binary classification | Is this instance of class downy_mildew? |
| brown_spot | Binary classification | Is this instance of class brown_spot? |
| bacterial_blight | Binary classification | Is this instance of class bacterial_blight? |
| bacterial_pustule | Binary classification | Is this instance of class bacterial_pustule? |
| purple_seed_stain | Binary classification | Is this instance of class purple_seed_stain? |
| anthracnose | Binary classification | Is this instance of class anthracnose? |
| phyllosticta_leaf_spot | Binary classification | Is this instance of class phyllosticta_leaf_spot? |
| alternarialeaf_spot | Binary classification | Is this instance of class alternarialeaf_spot? |
| frog_eye_leaf_spot | Binary classification | Is this instance of class frog_eye_leaf_spot? |
| diaporthe_pod_&_stem_blight | Binary classification | Is this instance of class diaporthe_pod_&_stem_blight? |
| cyst_nematode | Binary classification | Is this instance of class cyst_nematode? |
| 2_4_d_injury | Binary classification | Is this instance of class 2_4_d_injury? |
| herbicide_injury | Binary classification | Is this instance of class herbicide_injury? |
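A loading sketch with the `datasets` library, assuming the per-class configurations are exposed under the names listed in the table above:
```python
from datasets import load_dataset

# Full multiclass task over all soybean types
soybean = load_dataset("mstz/soybean", "soybean")

# One-vs-rest binary task for a single class
charcoal_rot = load_dataset("mstz/soybean", "charcoal_rot")
```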
|
mstz/soybean
|
[
"task_categories:tabular-classification",
"language:en",
"soybean",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] |
2023-04-18T07:01:59+00:00
|
{"language": ["en"], "task_categories": ["tabular-classification"], "pretty_name": "Isoybean", "tags": ["soybean", "tabular_classification", "binary_classification", "multiclass_classification", "UCI"], "configs": ["soybean"]}
|
2023-04-18T07:09:13+00:00
|
5bc9536d264666378f4fb58a4bbd87b45221c9d5
|
# ru_instruct_gpt4
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Dataset of GPT-4-generated instructions in Russian. It will soon be updated with more examples.
### Languages
Russian
|
lksy/ru_instruct_gpt4
|
[
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:ru",
"license:cc-by-4.0",
"chat",
"region:us"
] |
2023-04-18T07:15:50+00:00
|
{"language": ["ru"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "text2text-generation"], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "full_output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22424451, "num_examples": 15056}], "download_size": 23276814, "dataset_size": 22424451}, "tags": ["chat"]}
|
2023-06-02T15:56:03+00:00
|
6668a7de524301125e0e4956ceb1655a74af2313
|
# Sydt
Synthetic dataset.
|
mstz/sydt
|
[
"task_categories:tabular-classification",
"language:en",
"sydt",
"tabular_classification",
"binary_classification",
"synthetic",
"region:us"
] |
2023-04-18T07:25:12+00:00
|
{"language": ["en"], "task_categories": ["tabular-classification"], "pretty_name": "Sydt", "tags": ["sydt", "tabular_classification", "binary_classification", "synthetic"], "configs": ["sydt"]}
|
2023-04-18T07:27:15+00:00
|
f2c28bf279560e8676d2bb991f447997a75cc6f2
|
# Dataset Card for "PwC"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Elise-hf/PwC
|
[
"region:us"
] |
2023-04-18T07:43:01+00:00
|
{"dataset_info": {"features": [{"name": "uid", "dtype": "int64"}, {"name": "paper_url", "dtype": "string"}, {"name": "arxiv_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "url_abs", "dtype": "string"}, {"name": "url_pdf", "dtype": "string"}, {"name": "proceeding", "dtype": "string"}, {"name": "authors", "sequence": "string"}, {"name": "tasks", "sequence": "string"}, {"name": "date", "dtype": "float64"}, {"name": "methods", "list": [{"name": "code_snippet_url", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "full_name", "dtype": "string"}, {"name": "introduced_year", "dtype": "int64"}, {"name": "main_collection", "struct": [{"name": "area", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "parent", "dtype": "string"}]}, {"name": "name", "dtype": "string"}, {"name": "source_title", "dtype": "string"}, {"name": "source_url", "dtype": "string"}]}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 437349959, "num_examples": 149495}, {"name": "test", "num_bytes": 110099655, "num_examples": 37108}], "download_size": 183963479, "dataset_size": 547449614}}
|
2023-04-18T09:36:55+00:00
|
6ecce58ddbe3d10d9e9c8514b385ae0cb21e604e
|
# The Audio, Speech, and Vision Processing Lab - Emotional Sound Database (ASVP - ESD)
## ABOUT
The Audio, Speech, and Vision Processing Lab - Emotional Sound Database (ASVP - ESD)
was created by the School of Electronic and Information Engineering, South China University of Technology.
## CHOSEN EMOTIONS
13 emotions were chosen:
1. boredom, sigh
2. neutral, calm
3. happy, laugh, gaggle
4. sad, cry
5. angry, grunt, frustration
6. fearful, scream, panic
7. disgust, dislike, contempt
8. surprised, gasp, amazed
9. excited
10. pleasure
11. pain, groan
12. disappointment, disapproval
13. breath
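For reference, the codes above can be collected into a small label map (a sketch; the exact encoding used in the file names should be checked against the original database):
```python
# Mapping from emotion code to label, following the list above
EMOTION_LABELS = {
    1: "boredom, sigh",
    2: "neutral, calm",
    3: "happy, laugh, gaggle",
    4: "sad, cry",
    5: "angry, grunt, frustration",
    6: "fearful, scream, panic",
    7: "disgust, dislike, contempt",
    8: "surprised, gasp, amazed",
    9: "excited",
    10: "pleasure",
    11: "pain, groan",
    12: "disappointment, disapproval",
    13: "breath",
}
```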
## ORGANISING THE DATABASE
### Speech Statistics
| Duration Statistic | Duration |
| -------------------------- |:-------------------------------------------------:|
| Num. of Clips | 2,150 |
| Total Duration | 13347.835 seconds = 222.464 minutes = 3.708 hours |
| Max Dur | 32.235 seconds |
| Min Dur | 0.287 seconds |
| Mean Dur | 6.208 seconds |
| Std. Dur | 3.839 seconds |
| Num. of Clips > 30 seconds | 1 |
| Emotion | Num. of Clips |
| ------------------------------- |:-------------:|
| 01: boredom, sigh | 81 |
| 02: neutral, calm | 657 |
| 03: happy, laugh, gaggle | 154 |
| 04: sad, cry | 268 |
| 05: angry, grunt, frustration | 385 |
| 06: fearful, scream, panic | 63 |
| 07: disgust, dislike, contempt | 90 |
| 08: surprised, gasp, amazed | 144 |
| 09: excited | 136 |
| 10: pleasure | 15 |
| 11: pain, groan | 25 |
| 12: disappointment, disapproval | 132 |
| 13: breath | 0 |
| Emotion Intensity | Num. of Clips |
| ----------------- |:-------------:|
| 01: normal | 1,783 |
| 02: high | 367 |
| Gender | Num. of Clips |
| ----------- |:-------------:|
| 01: male | 1,224 |
| 02: female | 926 |
| Age Range | Num. of Clips |
| ---------- |:-------------:|
| 01: >65 | 65 |
| 02: 20~65 | 1,914 |
| 03: 3<20 | 80 |
| 04: <3 | 91 |
| Language | Num. of Clips |
| ------------- |:-------------:|
| 01: Mandarin | 937 |
| 02: English | 621 |
| 03: French | 175 |
| 04: Others | 417 |
### Non-Speech Statistics
| Duration Statistic | Duration |
| -------------------------- |:-------------------------------------------------:|
| Num. of Clips | 5,484 |
| Total Duration | 14438.117 seconds = 240.635 minutes = 4.011 hours |
| Max Dur | 25.810 seconds |
| Min Dur | 0.141 seconds |
| Mean Dur | 2.633 seconds |
| Std. Dur | 2.720 seconds |
| Num. of Clips > 30 seconds | 0 |
| Emotion | Num. of Clips |
| ------------------------------- |:-------------:|
| 01: boredom, sigh | 392 |
| 02: neutral, calm | 253 |
| 03: happy, laugh, gaggle | 878 |
| 04: sad, cry | 383 |
| 05: angry, grunt, frustration | 339 |
| 06: fearful, scream, panic | 799 |
| 07: disgust, dislike, contempt | 473 |
| 08: surprised, gasp, amazed | 808 |
| 09: excited | 109 |
| 10: pleasure | 273 |
| 11: pain, groan | 706 |
| 12: disappointment, disapproval | 70 |
| 13: breath | 1 |
| Emotion Intensity | Num. of Clips |
| ----------------- |:-------------:|
| 01: normal | 4,693 |
| 02: high | 791 |
| Gender | Num. of Clips |
| ----------- |:-------------:|
| 01: male | 2,919 |
| 02: female | 2,565 |
| Age Range | Num. of Clips |
| ---------- |:-------------:|
| 01: >65 | 73 |
| 02: 20~65 | 5,224 |
| 03: 3<20 | 100 |
| 04: <3 | 87 |
| Language | Num. of Clips |
| ------------- |:-------------:|
| 01: Mandarin | 512 |
| 02: English | 3,258 |
| 03: French | 109 |
| 04: Others | 1,605 |
## References
1. Dejoli Tientcheu Touko Landry, Qianhua He, Haikang Yan and Yanxiong Li. (2020). ASVP-ESD:A dataset and its benchmark for emotion recognition using both speech and non-speech utterances. Global Scientific Journals, 8(6), 1793-1798.
|
EdwardLin2023/ASVP_ESD
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-04-18T07:46:32+00:00
|
{"license": "cc-by-4.0"}
|
2023-04-19T01:51:35+00:00
|
63c2c731fb4d46dc8436c53803ee902ece41b24c
|
# Uscensus
The US census dataset from the [UCI repository](https://archive-beta.ics.uci.edu/).
|
mstz/uscensus
|
[
"task_categories:tabular-classification",
"language:en",
"uscensus",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] |
2023-04-18T07:50:06+00:00
|
{"language": ["en"], "task_categories": ["tabular-classification"], "pretty_name": "Uscensus", "tags": ["uscensus", "tabular_classification", "binary_classification", "UCI"], "configs": ["uscensus"]}
|
2023-04-18T08:01:20+00:00
|
59f7043babbc2c2e74d647c6f5af8c66c7a2bc1c
|
# Dataset Card for "camel_vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vietgpt-archive/camel_vi
|
[
"region:us"
] |
2023-04-18T08:28:33+00:00
|
{"dataset_info": {"features": [{"name": "role_1", "dtype": "string"}, {"name": "role_2", "dtype": "string"}, {"name": "original_task", "dtype": "string"}, {"name": "specified_task", "dtype": "string"}, {"name": "messages", "list": [{"name": "input", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "role", "dtype": "string"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 171026076, "num_examples": 10744}], "download_size": 52918251, "dataset_size": 171026076}}
|
2023-04-26T13:25:19+00:00
|
d043a7bf174f2d96afdcbc475c55d700f1c08b19
|
# AutoTrain Dataset for project: foreign-exchange-idr-usd
## Dataset Description
This dataset has been automatically processed by AutoTrain for project foreign-exchange-idr-usd.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"id": 279,
"feat_Nilai": 1,
"target": 14149.0,
"feat_Kurs Beli": 14009.0,
"feat_Tanggal": "2/22/2019 0:00"
},
{
"id": 356,
"feat_Nilai": 1,
"target": 14245.0,
"feat_Kurs Beli": 14103.0,
"feat_Tanggal": "6/26/2019 0:00"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"id": "Value(dtype='int64', id=None)",
"feat_Nilai": "Value(dtype='int64', id=None)",
"target": "Value(dtype='float32', id=None)",
"feat_Kurs Beli": "Value(dtype='float64', id=None)",
"feat_Tanggal": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1046 |
| valid | 266 |
|
vyver7952/autotrain-data-foreign-exchange-idr-usd
|
[
"region:us"
] |
2023-04-18T08:29:43+00:00
|
{}
|
2023-04-18T08:44:07+00:00
|
f6d89d339eff8861f1622749d54d5f77012efd38
|
kunalPN/rebel_custom_ds
|
[
"license:other",
"region:us"
] |
2023-04-18T08:46:23+00:00
|
{"license": "other"}
|
2023-04-20T09:18:30+00:00
|
|
78202d3e5655a9647bc009d65e47a2752a9ea824
|
Hakureirm/citypersons
|
[
"license:mit",
"region:us"
] |
2023-04-18T08:49:31+00:00
|
{"license": "mit"}
|
2023-04-18T08:49:31+00:00
|
|
6bbb8dfcb08323ed176e9b4094813e454aee4eef
|
# Dataset Card for "hagrid-classification-512p"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
camenduru/hagrid-classification-512p
|
[
"region:us"
] |
2023-04-18T08:50:13+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13432600315.75, "num_examples": 507050}], "download_size": 12546629164, "dataset_size": 13432600315.75}}
|
2023-04-18T09:09:33+00:00
|
db86d7ba8c169b250dd68412ab4c06e34beacdef
|
Usage:
```python
from dataclasses import dataclass

import datasets

# Load the dataset and define how its relative image paths map to local directories
ds_visdial = datasets.load_dataset("jxu124/visdial")
path_map = {
    "coco/train2014": "/datasets/coco/train2014",
    "coco/val2014": "/datasets/coco/val2014",
    "visdial/VisualDialog_test2018": "/datasets/visdial/VisualDialog_test2018",
    "visdial/VisualDialog_val2018": "/datasets/visdial/VisualDialog_val2018",
}


# Rewrite the image path of every example, then decode the column as images
@dataclass
class ReplaceImagePath:
    path_map: dict

    def __call__(self, features):
        for k, v in self.path_map.items():
            features["image"] = features["image"].replace(k, v)
        return features


ds_visdial = ds_visdial.map(ReplaceImagePath(path_map=path_map)).cast_column("image", datasets.Image())
```
|
jxu124/visdial
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-04-18T09:06:36+00:00
|
{"license": "cc-by-4.0", "dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "dialog", "sequence": {"sequence": "string"}}, {"name": "image_path", "dtype": "string"}, {"name": "global_image_id", "dtype": "string"}, {"name": "anns_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 77657548, "num_examples": 123287}, {"name": "test", "num_bytes": 3495490, "num_examples": 8000}, {"name": "validation", "num_bytes": 1408883, "num_examples": 2064}], "download_size": 34814702, "dataset_size": 82561921}}
|
2023-05-20T18:18:49+00:00
|
8e4a481f03f314aeedecf064c5acd503191a0ac0
|
sbmaruf/forai_ml-prompted-nq-open
|
[
"license:cc-by-sa-3.0",
"region:us"
] |
2023-04-18T09:07:02+00:00
|
{"license": "cc-by-sa-3.0"}
|
2023-04-18T19:08:08+00:00
|
|
55190865a6fa0be9a0b6ab691d1f9e780a071e59
|
Jayabalambika/Handgrid
|
[
"license:mit",
"region:us"
] |
2023-04-18T09:08:25+00:00
|
{"license": "mit"}
|
2023-04-18T09:08:25+00:00
|
|
0951320310adeaf396c7f5826cb91db7209dc61f
|
# Dataset Card for "hagrid-classification-512p-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Jayabalambika/hagrid-classification-512p-dataset
|
[
"region:us"
] |
2023-04-18T09:40:53+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "call", "1": "dislike", "2": "fist", "3": "four", "4": "like", "5": "mute", "6": "ok", "7": "one", "8": "palm", "9": "peace", "10": "peace_inverted", "11": "rock", "12": "stop", "13": "stop_inverted", "14": "three", "15": "three2", "16": "two_up", "17": "two_up_inverted"}}}}], "splits": [{"name": "train", "num_bytes": 12879555275.4, "num_examples": 507050}], "download_size": 12546125241, "dataset_size": 12879555275.4}}
|
2023-04-18T10:05:43+00:00
|
38874a8b1aa9ab8f6609b7184ab79d709f2dfdbf
|
# Dataset Card for "audio-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
izzy-lazerson/audio-test
|
[
"region:us"
] |
2023-04-18T09:51:43+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 9172325.0, "num_examples": 40}], "download_size": 8703205, "dataset_size": 9172325.0}}
|
2023-04-18T09:51:54+00:00
|
4cd6c1adf4f536539ef89ddf135589e2f79094ff
|
# Dataset Card for "audio-test-metadata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
izzy-lazerson/audio-test-metadata
|
[
"region:us"
] |
2023-04-18T10:02:19+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "file_info", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9172805.0, "num_examples": 40}], "download_size": 8703874, "dataset_size": 9172805.0}}
|
2023-04-18T10:02:30+00:00
|
188d1ca65fcf9b731d3578dddf9f6d8b90465bdb
|
# Dataset Card for "audio-test-metadata-new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
izzy-lazerson/audio-test-metadata-new
|
[
"region:us"
] |
2023-04-18T10:11:20+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "file_info", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9172805.0, "num_examples": 40}], "download_size": 8703931, "dataset_size": 9172805.0}}
|
2023-04-18T10:11:31+00:00
|
971e0048efc7fc973283ec351d381366e02bca54
|
# Dataset Card for "picto2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
cfrerebeau/picto2
|
[
"region:us"
] |
2023-04-18T11:12:47+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "data", "num_bytes": 11970350.0, "num_examples": 48}], "download_size": 11612085, "dataset_size": 11970350.0}}
|
2023-04-18T13:01:58+00:00
|
6a3c60beb2495790e22c153c5992d3287edf26bd
|
mmt93/zeroshot_portuguese
|
[
"region:us"
] |
2023-04-18T11:42:46+00:00
|
{}
|
2023-04-26T15:20:26+00:00
|
|
cafcf8fa75c1edc53f4cae58e7aacad6462a8f1d
|
# Dataset Card for "amazon-shoe-reviews-scaled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
P3ps/amazon-shoe-reviews-scaled
|
[
"region:us"
] |
2023-04-18T11:43:58+00:00
|
{"dataset_info": {"features": [{"name": "labels", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 831555.0, "num_examples": 4500}, {"name": "test", "num_bytes": 92395.0, "num_examples": 500}], "download_size": 553253, "dataset_size": 923950.0}}
|
2023-04-18T11:44:04+00:00
|
cf5b8e8e3e300eec125173ec2c871657e7c49bb3
|
# Dataset Card for "UN_PDF_RECORD_SET"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ranWang/UN_PDF_RECORD_SET
|
[
"region:us"
] |
2023-04-18T11:59:26+00:00
|
{"dataset_info": {"features": [{"name": "record", "dtype": "int64"}, {"name": "language", "dtype": "string"}, {"name": "year_time", "dtype": "int64"}, {"name": "file_name", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 162579384, "num_examples": 1338864}, {"name": "2000year", "num_bytes": 106669952.46696304, "num_examples": 878442}], "download_size": 44831302, "dataset_size": 269249336.46696305}}
|
2023-04-18T13:08:03+00:00
|
73810687d1afab5e8318f4d7e51bfd7175d1848d
|
This dataset is a redistribution of the following dataset.
https://github.com/suzuki256/dog-dataset
```
The dataset and its contents are made available on an "as is" basis and without warranties of any kind, including without limitation satisfactory quality and conformity, merchantability, fitness for a particular purpose, accuracy or completeness, or absence of errors.
```
|
437aewuh/dog-dataset
|
[
"task_categories:audio-to-audio",
"task_categories:audio-classification",
"size_categories:n<1K",
"license:other",
"biology",
"region:us"
] |
2023-04-18T12:01:04+00:00
|
{"license": "other", "size_categories": ["n<1K"], "task_categories": ["audio-to-audio", "audio-classification"], "tags": ["biology"]}
|
2023-04-18T12:18:25+00:00
|
8fe6c04edcd5073d6ebe9e52961dd2dcf223565e
|
Rafitrians/Ajijoy
|
[
"license:other",
"region:us"
] |
2023-04-18T12:28:43+00:00
|
{"license": "other"}
|
2023-04-18T12:28:43+00:00
|
|
93444eb1dd4a89a19ccce33a77901d921138ad32
|
CNchangan/123
|
[
"license:openrail",
"region:us"
] |
2023-04-18T12:31:59+00:00
|
{"license": "openrail"}
|
2023-04-18T12:31:59+00:00
|
|
17139014aa6059ae07d89597274ebc3978ba2f2b
|
QingyiSi/mmC4-fewer-faces
|
[
"license:odc-by",
"region:us"
] |
2023-04-18T12:51:33+00:00
|
{"license": "odc-by"}
|
2023-04-24T05:33:40+00:00
|
|
d6e8fa984df95a2eeca5539fc15183d292fd893a
|
# AutoTrain Dataset for project: fine-tune
## Dataset Description
This dataset has been automatically processed by AutoTrain for project fine-tune.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<382x256 RGB PIL image>",
"target": 17
},
{
"image": "<341x256 RGB PIL image>",
"target": 7
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['apple', 'banana', 'cake', 'candy', 'carrot', 'cookie', 'doughnut', 'grape', 'hot dog', 'ice cream', 'juice', 'muffin', 'orange', 'pineapple', 'popcorn', 'pretzel', 'salad', 'strawberry', 'waffle', 'watermelon'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 5394 |
| valid | 1351 |
|
jjuarez/autotrain-data-fine-tune
|
[
"task_categories:image-classification",
"region:us"
] |
2023-04-18T12:51:34+00:00
|
{"task_categories": ["image-classification"]}
|
2023-04-18T13:50:32+00:00
|
bcb98cc7c5b72742f38bd844c62d9161af964ee8
|
Videos have been created for documentation and testing purposes.
Some of them may be helpful to someone somewhere.
This message was not generated by AI.
|
houck2040/videos
|
[
"license:mit",
"region:us"
] |
2023-04-18T13:41:30+00:00
|
{"license": "mit"}
|
2023-04-18T14:07:28+00:00
|
f635b6b0260696a5c4ff19f8d2696b51422386fd
|
Data comes from published Texas A&M Engineering News and was used to train an MLM @3epochs 500.
This message was not generated by AI.
|
houck2040/engineering
|
[
"license:mit",
"region:us"
] |
2023-04-18T13:56:31+00:00
|
{"license": "mit"}
|
2023-04-18T14:06:30+00:00
|
2755167c3f1c778d5396bcfbe0d2983e4d73101e
|
# Dataset Card for CIFAR-10-Enriched (Enhanced by Renumics)
## Dataset Description
- **Homepage:** [Renumics Homepage](https://renumics.com/?hf-dataset-card=cifar10-enriched)
- **GitHub** [Spotlight](https://github.com/Renumics/spotlight)
- **Dataset Homepage** [CS Toronto Homepage](https://www.cs.toronto.edu/~kriz/cifar.html)
- **Paper:** [Learning Multiple Layers of Features from Tiny Images](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf)
### Dataset Summary
📊 [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases.
At [Renumics](https://renumics.com/?hf-dataset-card=cifar10-enriched) we believe that classical benchmark datasets and competitions should be extended to reflect this development.
🔍 This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:
1. Enable new researchers to quickly develop a profound understanding of the dataset.
2. Popularize data-centric AI principles and tooling in the ML community.
3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.
📚 This dataset is an enriched version of the [CIFAR-10 Dataset](https://www.cs.toronto.edu/~kriz/cifar.html).
### Explore the Dataset

The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) enables that with just a few lines of code:
Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip):
```python
!pip install renumics-spotlight datasets
```
Load the dataset from huggingface in your notebook:
```python
import datasets
dataset = datasets.load_dataset("renumics/cifar10-enriched", split="train")
```
Start exploring with a simple view:
```python
from renumics import spotlight
df = dataset.to_pandas()
df_show = df.drop(columns=['img'])
spotlight.show(df_show, port=8000, dtype={"img_path": spotlight.Image})
```
You can use the UI to interactively configure the view on the data. Depending on the concrete tasks (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.
### CIFAR-10 Dataset
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.
The classes are completely mutually exclusive. There is no overlap between automobiles and trucks. "Automobile" includes sedans, SUVs, things of that sort. "Truck" includes only big trucks. Neither includes pickup trucks.
Here is the list of classes in the CIFAR-10:
- airplane
- automobile
- bird
- cat
- deer
- dog
- frog
- horse
- ship
- truck
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 10 classes. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-cifar-10).
### Languages
English class labels.
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
'img': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x7FD19FABC1D0>,
'img_path': '/huggingface/datasets/downloads/extracted/7faec2e0fd4aa3236f838ed9b105fef08d1a6f2a6bdeee5c14051b64619286d5/0/0.png',
'label': 0,
'split': 'train'
}
```
### Data Fields
| Feature | Data Type |
|---------------------------------|-----------------------------------------------|
| img | Image(decode=True, id=None) |
| img_path | Value(dtype='string', id=None) |
| label | ClassLabel(names=[...], id=None) |
| split | Value(dtype='string', id=None) |
### Data Splits
| Dataset Split | Number of Images in Split | Samples per Class |
| ------------- |---------------------------| -------------------------|
| Train | 50000 | 5000 |
| Test | 10000 | 1000 |
## Dataset Creation
### Curation Rationale
The CIFAR-10 and CIFAR-100 are labeled subsets of the [80 million tiny images](http://people.csail.mit.edu/torralba/tinyimages/) dataset.
They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use this dataset, please cite the following paper:
```
@article{krizhevsky2009learning,
added-at = {2021-01-21T03:01:11.000+0100},
author = {Krizhevsky, Alex},
biburl = {https://www.bibsonomy.org/bibtex/2fe5248afe57647d9c85c50a98a12145c/s364315},
interhash = {cc2d42f2b7ef6a4e76e47d1a50c8cd86},
intrahash = {fe5248afe57647d9c85c50a98a12145c},
keywords = {},
pages = {32--33},
timestamp = {2021-01-21T03:01:11.000+0100},
title = {Learning Multiple Layers of Features from Tiny Images},
url = {https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf},
year = 2009
}
```
### Contributions
Alex Krizhevsky, Vinod Nair, Geoffrey Hinton, and Renumics GmbH.
|
renumics/cifar10-enriched
|
[
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"size_categories:10K<n<100K",
"source_datasets:extended|cifar10",
"language:en",
"license:apache-2.0",
"image classification",
"cifar-10",
"cifar-10-enriched",
"embeddings",
"enhanced",
"spotlight",
"region:us"
] |
2023-04-18T14:16:41+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "source_datasets": ["extended|cifar10"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "paperswithcode_id": "cifar-10", "pretty_name": "CIFAR-10", "tags": ["image classification", "cifar-10", "cifar-10-enriched", "embeddings", "enhanced", "spotlight"]}
|
2023-06-06T06:42:35+00:00
|
e9ac8c247defb2002cc41b8202b8291607d38601
|
mitsudate/DiffSinger_opencpop_JPN
|
[
"license:mit",
"region:us"
] |
2023-04-18T14:22:20+00:00
|
{"license": "mit"}
|
2023-04-18T16:45:28+00:00
|
|
fd7801e269a1db7855b94c953481c6b96d3980a0
|
# AutoTrain Dataset for project: lex-fin-sve
## Dataset Description
This dataset has been automatically processed by AutoTrain for project lex-fin-sve.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"source": "Veronsaajien oikeudenvalvontayksik\u00f6n valituksesta hallinto-oikeuden p\u00e4\u00e4t\u00f6s kumottiin ja luovutusvoiton verotus saatettiin voimaan.",
"target": "P\u00e5 besv\u00e4r av Enheten f\u00f6r bevakning av skattetagarnas r\u00e4tt upph\u00e4vde h\u00f6gsta f\u00f6rvaltningsdomstolen f\u00f6rvaltningsdomstolens beslut och best\u00e4mde att den verkst\u00e4llda \u00f6verl\u00e5telsebeskattningen skulle s\u00e4ttas i kraft.",
"feat_similarity": 0.9205572605133056,
"feat_decision": "kho201500887.xml"
},
{
"source": "Kunnanel\u00e4inl\u00e4\u00e4k\u00e4rin tuli viivytyksett\u00e4 arvioida, oliko m\u00e4\u00e4r\u00e4yksen voimassa pit\u00e4miselle edellytyksi\u00e4.",
"target": "Kommunalveterin\u00e4ren skulle utan dr\u00f6jsm\u00e5l bed\u00f6ma huruvida det fanns f\u00f6ruts\u00e4ttningar f\u00f6r att f\u00f6rordnandet skulle f\u00f6rbli i kraft.",
"feat_similarity": 0.9545820951461792,
"feat_decision": "kho201303022.xml"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"source": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)",
"feat_similarity": "Value(dtype='float64', id=None)",
"feat_decision": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 7878 |
| valid | 1970 |
|
ossib/autotrain-data-lex-fin-sve
|
[
"task_categories:translation",
"region:us"
] |
2023-04-18T15:06:47+00:00
|
{"task_categories": ["translation"]}
|
2023-04-18T15:07:55+00:00
|
e87b71b13e3c2ee5a76e37fb17277e3dfb37dc36
|
# Dataset Card for "sandhi-split-long"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chronbmm/sandhi-split-long
|
[
"region:us"
] |
2023-04-18T15:09:08+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "unsandhied", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 72443676, "num_examples": 86301}, {"name": "validation", "num_bytes": 4328064, "num_examples": 5130}, {"name": "test", "num_bytes": 3785816, "num_examples": 4556}, {"name": "test_500", "num_bytes": 414520, "num_examples": 500}, {"name": "validation_500", "num_bytes": 427332, "num_examples": 500}], "download_size": 46975751, "dataset_size": 81399408}}
|
2023-04-19T17:08:26+00:00
|
e80804f0cabf65a9943a06ef0ef9fb8cbaf3b226
|
# Dataset Card for "naively_captioned_CUB2002011_test_2shots"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
anjunhu/naively_captioned_CUB2002011_test_2shot
|
[
"region:us"
] |
2023-04-18T15:11:07+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "text_cupl", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 11010976.0, "num_examples": 400}], "download_size": 10976537, "dataset_size": 11010976.0}}
|
2023-04-28T09:17:51+00:00
|
10e9aaea2955dae65802650ffa2d7e21ede4e6f5
|
tyoung560/ai-assist-logs
|
[
"license:unknown",
"region:us"
] |
2023-04-18T15:47:35+00:00
|
{"license": "unknown"}
|
2023-05-02T17:29:45+00:00
|
|
9ae5b382d4d19230b60eb07a8e8e4d04ae0c0759
|
# Dataset Card for "PwC_Tasks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Elise-hf/PwC_Tasks
|
[
"region:us"
] |
2023-04-18T15:50:12+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "all_tasks", "num_bytes": 1373543, "num_examples": 3076}], "download_size": 662734, "dataset_size": 1373543}}
|
2023-04-18T15:55:42+00:00
|
6a84b2f4405b044103621df0b79402e2d5690074
|
# AutoTrain Dataset for project: lte-4g
## Dataset Description
This dataset has been automatically processed by AutoTrain for project lte-4g.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_PLMN": 74002,
"feat_SYSTEM": 4,
"feat_XCI": 5658370,
"feat_xNBID": 22103,
"feat_LOCAL_CID": 2.0,
"feat_LAC/TAC": 31100,
"feat_PCI/PSC/BSIC": 434,
"feat_ARFCN": 2050,
"feat_BAND": "2100 B4",
"feat_RSSI": -77.0,
"target": 8,
"feat_RSRQ/ECIO": -12,
"feat_SNR": 2.0,
"feat_CQI": null,
"feat_TA": null,
"feat_DISTANCE": null,
"feat_DELTA_AZI": null,
"feat_LAT": -3.988365,
"feat_LON": -79.198596,
"feat_SPEED": 1,
"feat_GPS_ACCURACY": 21,
"feat_UL": 0,
"feat_DL": 0,
"feat_BANDWIDTH": 15000,
"feat_BANDWIDTHS": 15,
"feat_CA": 1,
"feat_NR_STATE": "none",
"feat_NARFCN": null,
"feat_NR_BAND": null,
"feat_NR_PCI": null,
"feat_NR_SS_RSRP": null,
"feat_NR_SS_RSRQ": null,
"feat_NR_SS_SINR": null,
"feat_NR_CSI_RSRP": null,
"feat_NR_CSI_RSRQ": null,
"feat_NR_CSI_SINR": null,
"feat_CLF_LABEL": "--",
"feat_CLF_LOC": "--",
"feat_CLF_DESC": "No se encuentra la celda en la base de datos del CLF.",
"feat_DATE": "2023/04/17",
"feat_TIME": "16:00:22",
"feat_ROAMING": "HOME"
},
{
"feat_PLMN": 74002,
"feat_SYSTEM": 4,
"feat_XCI": 5658370,
"feat_xNBID": 22103,
"feat_LOCAL_CID": 2.0,
"feat_LAC/TAC": 31100,
"feat_PCI/PSC/BSIC": 434,
"feat_ARFCN": 2050,
"feat_BAND": "2100 B4",
"feat_RSSI": -77.0,
"target": 7,
"feat_RSRQ/ECIO": -9,
"feat_SNR": 2.0,
"feat_CQI": null,
"feat_TA": null,
"feat_DISTANCE": null,
"feat_DELTA_AZI": null,
"feat_LAT": -3.988253,
"feat_LON": -79.198679,
"feat_SPEED": 17,
"feat_GPS_ACCURACY": 16,
"feat_UL": 0,
"feat_DL": 0,
"feat_BANDWIDTH": 15000,
"feat_BANDWIDTHS": 15,
"feat_CA": 1,
"feat_NR_STATE": "none",
"feat_NARFCN": null,
"feat_NR_BAND": null,
"feat_NR_PCI": null,
"feat_NR_SS_RSRP": null,
"feat_NR_SS_RSRQ": null,
"feat_NR_SS_SINR": null,
"feat_NR_CSI_RSRP": null,
"feat_NR_CSI_RSRQ": null,
"feat_NR_CSI_SINR": null,
"feat_CLF_LABEL": "--",
"feat_CLF_LOC": "--",
"feat_CLF_DESC": "No se encuentra la celda en la base de datos del CLF.",
"feat_DATE": "2023/04/17",
"feat_TIME": "16:00:39",
"feat_ROAMING": "HOME"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_PLMN": "Value(dtype='int64', id=None)",
"feat_SYSTEM": "Value(dtype='int64', id=None)",
"feat_XCI": "Value(dtype='int64', id=None)",
"feat_xNBID": "Value(dtype='int64', id=None)",
"feat_LOCAL_CID": "Value(dtype='float64', id=None)",
"feat_LAC/TAC": "Value(dtype='int64', id=None)",
"feat_PCI/PSC/BSIC": "Value(dtype='int64', id=None)",
"feat_ARFCN": "Value(dtype='int64', id=None)",
"feat_BAND": "Value(dtype='string', id=None)",
"feat_RSSI": "Value(dtype='float64', id=None)",
"target": "ClassLabel(names=['-100', '-101', '-102', '-103', '-104', '-105', '-106', '-107', '-108', '-109', '-110', '-111', '-112', '-113', '-114', '-115', '-116', '-118', '-119', '-120', '-138', '-69', '-71', '-73', '-77', '-79', '-83', '-87', '-96', '-97', '-98', '-99', '74001;3;65825455;27311;;50704;94;4366;850 B4&5;;-69;0;;;;;;-3.987520;-79.198380;3;4;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:17:38;HOME', '74001;3;65825455;27311;;50704;94;4366;850 B4&5;;-71;0;;;;;;-3.987607;-79.198345;3;4;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:17:28;HOME', '74001;3;65825455;27311;;50704;94;4366;850 B4&5;;-71;0;;;;;;-3.987699;-79.198315;3;4;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:17:18;HOME', '74001;3;65825455;27311;;50704;94;4366;850 B4&5;;-73;0;;;;;;-3.987167;-79.198514;4;4;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:18:14;HOME', '74001;3;65825455;27311;;50704;94;4366;850 B4&5;;-77;0;;;;;;-3.987270;-79.198486;4;4;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:18:05;HOME', '74001;3;65825455;27311;;50704;94;4366;850 B4&5;;-77;0;;;;;;-3.987360;-79.198472;4;4;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:17:57;HOME', '74001;3;65825455;27311;;50704;94;4366;850 B4&5;;-79;0;;;;;;-3.987451;-79.198444;3;4;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:17:48;HOME', '74001;3;65825455;27311;;50704;94;4366;850 B4&5;;-79;0;;;;;;-3.988235;-79.198371;0;60;5;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:16:09;HOME', '74001;3;65825455;27311;;50704;94;4366;850 B4&5;;-83;0;;;;;;-3.988344;-79.198366;1;84;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:16:05;HOME', '74001;3;65825462;27318;;50704;94;4387;850 B4&5;;-71;0;;;;;;-3.987790;-79.198323;3;10;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:17:09;HOME', '74001;3;65825462;27318;;50704;94;4387;850 B4&5;;-77;0;;;;;;-3.987882;-79.198338;3;4;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:16:59;HOME', '74001;3;65825462;27318;;50704;94;4387;850 B4&5;;-79;0;;;;;;-3.987980;-79.198352;3;4;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:16:49;HOME', '74001;3;65825462;27318;;50704;94;4387;850 B4&5;;-87;0;;;;;;-3.988059;-79.198402;4;7;5;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:16:34;HOME', '74001;3;65825462;27318;;50704;94;4387;850 B4&5;;-87;0;;;;;;-3.988152;-79.198415;4;5;16;7;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:16:27;HOME', '74002;3;4673935;20879;;61100;16;687;1900 B2;;-103;0;;;;;;-3.988341;-79.198406;1;14;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:53:03;HOME', '74002;3;4673935;20879;;61100;16;687;1900 B2;;-83;0;;;;;;-3.988380;-79.198523;0;21;0;0;5000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:53:14;HOME', '74002;3;4674058;21002;;61100;468;662;1900 
B2;;-103;0;;;;;;-3.986482;-79.198245;8;20;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:22:46;HOME', '74002;3;4674058;21002;;61100;468;662;1900 B2;;-113;0;;;;;;-3.988332;-79.198535;4;30;0;0;5000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:10:06;HOME', '74002;3;4674058;21002;;61100;468;662;1900 B2;;-73;0;;;;;;-3.988032;-79.198565;9;16;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:52:17;HOME', '74002;3;4674058;21002;;61100;468;662;1900 B2;;-73;0;;;;;;-3.988157;-79.198524;22;18;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:52:18;HOME', '74002;3;4674058;21002;;61100;468;662;1900 B2;;-73;0;;;;;;-3.988282;-79.198483;23;21;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:52:20;HOME', '74002;3;4674058;21002;;61100;468;662;1900 B2;;-73;0;;;;;;-3.988324;-79.198393;0;23;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:52:24;HOME', '74002;3;4674067;21011;;61100;54;662;1900 B2;;-105;0;;;;;;-3.986474;-79.198272;9;17;0;0;5000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:24:17;HOME', '74002;3;4674067;21011;;61100;54;662;1900 B2;;-105;0;;;;;;-3.986571;-79.198224;1;18;0;0;5000;5;0;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:24:11;HOME', '74002;4;5654280;22087;8;31100;502;9585;700 B28;-69;-106;-15;-6.0;;;;;-3.988361;-79.198588;8;20;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:26:15;HOME', '74002;4;5654280;22087;8;31100;502;9585;700 B28;-69;-109;-15;-6.0;;;;;-3.988176;-79.198567;6;268;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:25:40;HOME', '74002;4;5654280;22087;8;31100;502;9585;700 B28;-69;-109;-15;-6.0;;;;;-3.988244;-79.198573;6;219;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:26:01;HOME', '74002;4;5654280;22087;8;31100;502;9585;700 B28;-69;-109;-15;-6.0;;;;;-3.988376;-79.198533;4;21;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:25:45;HOME', '74002;4;5654280;22087;8;31100;502;9585;700 B28;-79;-114;-16;-1.0;;;;;-3.988251;-79.198447;7;49;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:13:30;HOME', '74002;4;5654280;22087;8;31100;502;9585;700 B28;-79;-118;-17;-1.0;;;;;-3.988366;-79.198472;1;20;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:13:40;HOME', '74002;4;5654281;22087;9;31100;503;9585;700 B28;-69;-114;-15;6.0;;;;;-3.988317;-79.198455;6;20;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:15:16;HOME', '74002;4;5654281;22087;9;31100;503;9585;700 B28;-71;-110;-17;0.0;;6;;;-3.988016;-79.198569;9;15;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:53:33;HOME', '74002;4;5654281;22087;9;31100;503;9585;700 B28;-71;-110;-17;0.0;;6;;;-3.988127;-79.198564;7;19;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:53:30;HOME', '74002;4;5654281;22087;9;31100;503;9585;700 
B28;-71;-110;-17;0.0;;6;;;-3.988264;-79.198556;4;20;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:53:29;HOME', '74002;4;5654281;22087;9;31100;503;9585;700 B28;-79;-103;-12;6.0;;6;;;-3.987761;-79.198637;4;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:46:44;HOME', '74002;4;5654281;22087;9;31100;503;9585;700 B28;-79;-105;-14;-1.0;;;;;-3.988369;-79.198573;4;19;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:14:56;HOME', '74002;4;5654281;22087;9;31100;503;9585;700 B28;-79;-110;-13;6.0;;6;;;-3.987667;-79.198623;4;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:46:35;HOME', '74002;4;5654281;22087;9;31100;503;9585;700 B28;-79;-110;-13;6.0;;;;;-3.987574;-79.198609;4;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:46:26;HOME', '74002;4;5654281;22087;9;31100;503;9585;700 B28;-79;-113;-17;6.0;;6;;;-3.987829;-79.198567;4;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:46:53;HOME', '74002;4;5656065;22094;1;31100;1;9585;700 B28;-69;-102;-11;-6.0;;;;;-3.988265;-79.198578;10;129;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:27:03;HOME', '74002;4;5656065;22094;1;31100;1;9585;700 B28;-71;-103;-12;0.0;;;;;-3.988072;-79.198564;14;15;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:53:53;HOME', '74002;4;5656065;22094;1;31100;1;9585;700 B28;-71;-103;-12;0.0;;;;;-3.988169;-79.198571;13;13;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:53:55;HOME', '74002;4;5656065;22094;1;31100;1;9585;700 B28;-71;-103;-15;0.0;;;;;-3.988115;-79.198581;1;15;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:53:42;HOME', '74002;4;5656065;22094;1;31100;1;9585;700 B28;-71;-103;-15;0.0;;;;;-3.988217;-79.198583;3;17;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:53:46;HOME', '74002;4;5656067;22094;3;31100;0;9585;700 B28;-67;-96;-11;-2.0;;;;;-3.988216;-79.198568;0;10;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:18:06;HOME', '74002;4;5656067;22094;3;31100;0;9585;700 B28;-67;-97;-11;-2.0;;;;;-3.988292;-79.198519;2;5;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:15:20;HOME', '74002;4;5656067;22094;3;31100;0;9585;700 B28;-69;-103;-14;-6.0;;;;;-3.988317;-79.198560;17;13;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:25:07;HOME', '74002;4;5656067;22094;3;31100;0;9585;700 B28;-79;-104;-10;6.0;;;;;-3.988161;-79.198448;28;16;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:50:47;HOME', '74002;4;5656067;22094;3;31100;0;9585;700 B28;-79;-104;-8;6.0;;;;;-3.987923;-79.198610;31;19;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:51:07;HOME', '74002;4;5656067;22094;3;31100;0;9585;700 B28;-79;-105;-13;6.0;;;;;-3.988238;-79.198399;4;19;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:51:04;HOME', 
'74002;4;5656067;22094;3;31100;0;9585;700 B28;-79;-106;-11;6.0;;;;;-3.987928;-79.198608;22;17;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:50:32;HOME', '74002;4;5656067;22094;3;31100;0;9585;700 B28;-79;-106;-11;6.0;;;;;-3.988185;-79.198438;22;16;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:50:22;HOME', '74002;4;5656067;22094;3;31100;0;9585;700 B28;-79;-106;-9;6.0;;;;;-3.987990;-79.198552;7;16;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:50:14;HOME', '74002;4;5656067;22094;3;31100;0;9585;700 B28;-79;-106;-9;6.0;;;;;-3.988065;-79.198487;7;16;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:50:11;HOME', '74002;4;5656067;22094;3;31100;0;9585;700 B28;-79;-106;-9;6.0;;;;;-3.988151;-79.198453;3;12;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:50:09;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-79;-102;-8;6.0;;;;;-3.986612;-79.198283;3;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:44:42;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-79;-103;-13;6.0;;;;;-3.987114;-79.198526;4;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:45:39;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-79;-107;-11;6.0;;;;;-3.986872;-79.198385;4;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:45:11;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-79;-107;-15;6.0;;;;;-3.987031;-79.198484;5;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:45:29;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-79;-110;-14;6.0;;;;;-3.987196;-79.198564;4;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:45:49;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-79;-110;-8;6.0;;;;;-3.986701;-79.198303;4;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:44:52;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-79;-111;-10;6.0;;;;;-3.986958;-79.198429;4;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:45:21;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-79;-112;-11;0.0;;;;;-3.986578;-79.198237;3;10;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:19:44;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-79;-112;-20;6.0;;;;;-3.988256;-79.198392;4;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:47:40;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-79;-112;-9;0.0;;;;;-3.986670;-79.198276;3;10;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:19:34;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-79;-113;-11;0.0;;;;;-3.986842;-79.198378;3;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:19:15;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-79;-113;-12;0.0;;;;;-3.986995;-79.198488;3;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del 
CLF.;2023/04/17;17:18:51;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-79;-115;-13;0.0;;;;;-3.986756;-79.198326;3;6;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:19:24;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-79;-115;-19;0.0;;;;;-3.987077;-79.198533;3;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:18:36;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-79;-119;-13;0.0;;;;;-3.986912;-79.198450;3;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:19:02;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-79;-98;-9;6.0;;;;;-3.986793;-79.198341;4;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:45:02;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-81;-113;-12;1.0;;;;;-3.986448;-79.198394;2;11;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:28:23;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-81;-113;-9;1.0;;;;;-3.986508;-79.198323;6;11;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:28:16;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-83;-110;-8;2.0;;;;;-3.986509;-79.198320;1;11;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:29:04;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-83;-115;-11;2.0;;;;;-3.986560;-79.198276;4;17;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:33:27;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-83;-115;-14;2.0;;;;;-3.986470;-79.198295;9;14;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:33:21;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-83;-116;-10;0.0;;;;;-3.986415;-79.198292;14;12;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:42:59;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-83;-116;-12;4.0;;;;;-3.986449;-79.198291;11;17;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:39:58;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-83;-116;-12;4.0;;;;;-3.986532;-79.198249;10;17;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:40:02;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-83;-116;-9;2.0;;;;;-3.986547;-79.198234;5;15;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:32:56;HOME', '74002;4;5657857;22101;1;31100;323;9585;700 B28;-83;-118;-9;0.0;;;;;-3.986522;-79.198293;21;14;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:43:07;HOME', '74002;4;5658370;22103;2;31100;434;2050;2100 B4;-77;-106;-10;2.0;;;;;-3.988325;-79.198571;1;15;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:01:17;HOME', '74002;4;5658370;22103;2;31100;434;2050;2100 B4;-77;-107;-11;2.0;;;;;-3.988340;-79.198563;4;18;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:07:47;HOME', '74002;4;5658370;22103;2;31100;434;2050;2100 B4;-77;-107;-9;2.0;;;;;-3.988253;-79.198679;17;16;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la 
celda en la base de datos del CLF.;2023/04/17;16:00:39;HOME', '74002;4;5658370;22103;2;31100;434;2050;2100 B4;-77;-108;-10;2.0;;;;;-3.988234;-79.198589;4;8;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:01:07;HOME', '74002;4;5658370;22103;2;31100;434;2050;2100 B4;-77;-108;-12;2.0;;;;;-3.988365;-79.198596;1;21;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:00:22;HOME', '74002;4;5658370;22103;2;31100;434;2050;2100 B4;-77;-109;-11;2.0;;;;;-3.988315;-79.198466;6;18;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:07:37;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-69;-100;-13;-3.0;;6;;;-3.988376;-79.198526;7;20;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:13:37;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-69;-101;-11;-3.0;;6;;;-3.988363;-79.198585;7;20;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:06:27;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-69;-101;-12;-3.0;;6;;;-3.988242;-79.198575;9;50;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:06:35;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-69;-101;-13;-3.0;;6;;;-3.988254;-79.198559;9;40;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:11:34;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-69;-102;-10;-3.0;;6;;;-3.988366;-79.198573;7;20;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:07:46;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-69;-102;-12;-3.0;;6;;;-3.988271;-79.198579;13;36;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:08:02;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-69;-103;-15;-3.0;;6;;;-3.988375;-79.198518;9;20;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:11:22;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-69;-107;-15;-3.0;;6;;;-3.988270;-79.198572;13;30;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:13:56;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-100;-10;0.0;;;;;-3.988225;-79.198564;18;15;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:55:35;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-100;-11;-3.0;;6;;;-3.988365;-79.198584;7;20;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:55:25;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-100;-11;0.0;;;;;-3.988096;-79.198557;27;18;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:55:19;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-100;-11;2.0;;6;;;-3.988255;-79.198580;12;40;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:51:46;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-100;-12;-3.0;;5;;;-3.988367;-79.198551;8;20;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:56:44;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 
B28;-71;-100;-12;0.0;;;;;-3.988047;-79.198565;14;18;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:55:55;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-100;-12;0.0;;;;;-3.988102;-79.198559;26;18;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:55:49;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-100;-12;0.0;;;;;-3.988201;-79.198533;3;19;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:55:51;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-100;-13;-3.0;;5;;;-3.988367;-79.198594;3;20;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:58:30;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-100;-14;2.0;;6;;;-3.988365;-79.198565;7;20;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:51:32;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-101;-10;-3.0;;5;;;-3.988375;-79.198495;8;20;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:58:23;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-101;-11;-3.0;;5;;;-3.988265;-79.198581;9;35;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:57:01;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-101;-11;0.0;;;;;-3.988059;-79.198566;26;18;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:54:26;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-101;-12;-3.0;;5;;;-3.988252;-79.198574;8;48;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:58:44;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-101;-12;-3.0;;6;;;-3.988364;-79.198587;6;20;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:04:41;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-101;-12;0.0;;;;;-3.988232;-79.198576;26;9;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:54:14;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-101;-13;-3.0;;6;;;-3.988239;-79.198577;11;38;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:55:47;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-101;-13;0.0;;;;;-3.988085;-79.198563;24;17;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:54:53;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-101;-13;0.0;;;;;-3.988182;-79.198575;16;15;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:54:30;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-101;-13;0.0;;;;;-3.988193;-79.198576;10;17;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:54:55;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-102;-11;-3.0;;5;;;-3.988243;-79.198574;12;36;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:00:37;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-102;-12;-3.0;;5;;;-3.988362;-79.198583;7;22;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del 
CLF.;2023/04/17;17:00:15;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-102;-13;0.0;;;;;-3.988077;-79.198556;3;14;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:54:00;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-102;-13;0.0;;;;;-3.988077;-79.198568;0;9;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:54:08;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-102;-13;0.0;;;;;-3.988181;-79.198574;2;10;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:54:07;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-98;-12;0.0;;;;;-3.988085;-79.198565;31;17;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:56:19;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-71;-99;-11;0.0;;;;;-3.988216;-79.198533;19;13;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:56:00;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-73;-102;-12;1.0;;6;;;-3.988251;-79.198575;7;53;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:05:16;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-75;-109;-13;-4.0;;;;;-3.988278;-79.198510;5;17;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:10:35;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-75;-114;-14;0.0;;;;;-3.988274;-79.198514;10;18;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:10:54;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-75;-114;-14;0.0;;;;;-3.988368;-79.198466;8;17;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:11:00;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-75;-115;-13;-4.0;;;;;-3.988352;-79.198451;13;16;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:10:43;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-79;-107;-13;6.0;;;;;-3.987292;-79.198574;4;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:45:58;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-79;-107;-14;6.0;;;;;-3.987384;-79.198585;4;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:46:07;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-79;-110;-12;6.0;;;;;-3.987918;-79.198528;4;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:47:03;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-79;-111;-19;6.0;;;;;-3.988015;-79.198517;4;4;-1041905;-16691092;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:47:12;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-79;-114;-13;-1.0;;;;;-3.988365;-79.198573;10;30;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:13:23;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-79;-114;-16;6.0;;6;;;-3.987483;-79.198589;4;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:46:17;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 
B28;-79;-115;-14;-1.0;;;;;-3.988375;-79.198449;1;29;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:12:53;HOME', '74002;4;5658376;22103;8;31100;434;9585;700 B28;-79;-115;-15;-1.0;;;;;-3.988366;-79.198556;4;31;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:12:43;HOME', '74002;4;5658377;22103;9;31100;432;9585;700 B28;-77;-113;-18;0.0;;9;;;-3.988356;-79.198420;5;20;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:10:20;HOME', '74002;4;5775105;22559;1;31100;291;2050;2100 B4;-73;-116;-15;0.0;;;;;-3.988354;-79.198430;1;21;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;16:09:23;HOME', '74002;4;5775105;22559;1;31100;291;2050;2100 B4;-79;-115;-11;6.0;;;;;-3.988105;-79.198503;4;4;18;27;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:47:20;HOME', '74002;4;5778434;22572;2;31100;18;2050;2100 B4;-89;-120;-12;5.0;;;;;-3.986565;-79.198250;9;17;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:24:21;HOME', '74002;4;5778435;22572;3;31100;19;2050;2100 B4;-79;-119;-16;6.0;;;;;-3.988184;-79.198454;3;4;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:47:30;HOME', '74002;4;5778435;22572;3;31100;19;2050;2100 B4;-87;-138;-20;1.0;;;;;-3.988226;-79.198459;4;129;0;0;15000;15;1;none;;;;;;;;;;--;--;No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:52:48;HOME'], id=None)",
"feat_RSRQ/ECIO": "Value(dtype='int64', id=None)",
"feat_SNR": "Value(dtype='float64', id=None)",
"feat_CQI": "Value(dtype='float64', id=None)",
"feat_TA": "Value(dtype='float64', id=None)",
"feat_DISTANCE": "Value(dtype='float64', id=None)",
"feat_DELTA_AZI": "Value(dtype='float64', id=None)",
"feat_LAT": "Value(dtype='float64', id=None)",
"feat_LON": "Value(dtype='float64', id=None)",
"feat_SPEED": "Value(dtype='int64', id=None)",
"feat_GPS_ACCURACY": "Value(dtype='int64', id=None)",
"feat_UL": "Value(dtype='int64', id=None)",
"feat_DL": "Value(dtype='int64', id=None)",
"feat_BANDWIDTH": "Value(dtype='int64', id=None)",
"feat_BANDWIDTHS": "Value(dtype='int64', id=None)",
"feat_CA": "Value(dtype='int64', id=None)",
"feat_NR_STATE": "Value(dtype='string', id=None)",
"feat_NARFCN": "Value(dtype='float64', id=None)",
"feat_NR_BAND": "Value(dtype='float64', id=None)",
"feat_NR_PCI": "Value(dtype='float64', id=None)",
"feat_NR_SS_RSRP": "Value(dtype='float64', id=None)",
"feat_NR_SS_RSRQ": "Value(dtype='float64', id=None)",
"feat_NR_SS_SINR": "Value(dtype='float64', id=None)",
"feat_NR_CSI_RSRP": "Value(dtype='float64', id=None)",
"feat_NR_CSI_RSRQ": "Value(dtype='float64', id=None)",
"feat_NR_CSI_SINR": "Value(dtype='float64', id=None)",
"feat_CLF_LABEL": "Value(dtype='string', id=None)",
"feat_CLF_LOC": "Value(dtype='string', id=None)",
"feat_CLF_DESC": "Value(dtype='string', id=None)",
"feat_DATE": "Value(dtype='string', id=None)",
"feat_TIME": "Value(dtype='string', id=None)",
"feat_ROAMING": "Value(dtype='string', id=None)"
}
```
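Each raw measurement string in the field dump above is a single semicolon-separated record (cell identifiers, RSRP/RSRQ readings, coordinates, a CLF lookup message, date, time and roaming state). The snippet below is only a minimal parsing sketch: it splits one record copied from the dump and reads a few positions whose meaning is inferred from the `feat_*` listing, so the column mapping should be treated as an assumption rather than a documented schema.

```python
# Minimal sketch (not an official parser): split one raw record from the dump
# above on ";" and pick out a few fields. The positional meanings in the
# comments are assumptions inferred from the feat_* list, not a documented schema.
raw = (
    "74002;4;5656067;22094;3;31100;0;9585;700 B28;-79;-106;-11;6.0;;;;;"
    "-3.987928;-79.198608;22;17;0;0;15000;15;1;none;;;;;;;;;;--;--;"
    "No se encuentra la celda en la base de datos del CLF.;2023/04/17;17:50:32;HOME"
)
fields = raw.split(";")
print(len(fields))                          # number of columns in this record
print(fields[17], fields[18])               # assumed latitude / longitude pair
print(fields[-3], fields[-2], fields[-1])   # assumed date, time and roaming state
```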
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows (a loading sketch is given after the table):
| Split name | Num samples |
| ------------ | ------------------- |
| train | 141 |
| valid | 141 |
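If the splits are needed programmatically, the sketch below loads the repository with 🤗 Datasets. It assumes the repository id from this card and the split names in the table above; depending on the AutoTrain export, the validation split may be exposed under a different key (e.g. `validation`).

```python
from datasets import load_dataset

# Minimal sketch, assuming the repository id from this card; split names may
# differ from the table above (e.g. "validation" instead of "valid").
dataset = load_dataset("karohoden/autotrain-data-lte-4g")
print(dataset)                     # available splits and their sizes
print(dataset["train"].features)   # the feat_* columns described above
```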
|
karohoden/autotrain-data-lte-4g
|
[
"region:us"
] |
2023-04-18T16:14:58+00:00
|
{}
|
2023-04-18T16:21:18+00:00
|
a4ab1ec65476f8dd7ccb4e87214d8f600a0f0234
|
# Dataset Card for "xorder_dish"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vhug/xorder_dish
|
[
"region:us"
] |
2023-04-18T17:05:39+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 495377032.0, "num_examples": 205}], "download_size": 0, "dataset_size": 495377032.0}}
|
2023-04-18T17:16:45+00:00
|
3bfea7b57a7a0697f036b1bc0ae36726a69b6875
|
# Dataset Card for "commits-codegeex"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
bigcode/commits-codegeex
|
[
"region:us"
] |
2023-04-18T17:08:23+00:00
|
{"dataset_info": {"features": [{"name": "commit", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "repos", "dtype": "string"}, {"name": "old_file", "dtype": "string"}, {"name": "new_file", "dtype": "string"}, {"name": "new_contents", "dtype": "string"}, {"name": "old_contents", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 74990689340.85883, "num_examples": 2673685}], "download_size": 3670774449, "dataset_size": 74990689340.85883}}
|
2023-04-18T18:32:28+00:00
|
92394b03eecc06e7e93a2b9aced0d66c17e691df
|
# Dataset Card for "alpaca-bangla"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nihalbaig/alpaca-bangla
|
[
"region:us"
] |
2023-04-18T17:14:52+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "input", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "null"}, {"name": "annotation", "dtype": "null"}, {"name": "annotation_agent", "dtype": "null"}, {"name": "vectors", "dtype": "null"}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 36188108, "num_examples": 18000}], "download_size": 13437852, "dataset_size": 36188108}}
|
2023-04-18T17:14:59+00:00
|
c39c24422288bb579ce92dcbfa8d34bcf8ac37f7
|
miguelsn/micro-organism
|
[
"license:mit",
"region:us"
] |
2023-04-18T17:39:36+00:00
|
{"license": "mit"}
|
2023-04-18T18:30:18+00:00
|
|
5fc02df0c367de21cbeebe963520cbb92a01a248
|
Sanath369/Telugu_movie_reviews
|
[
"task_categories:text-classification",
"size_categories:n<1K",
"language:te",
"region:us"
] |
2023-04-18T17:43:59+00:00
|
{"language": ["te"], "size_categories": ["n<1K"], "task_categories": ["text-classification"]}
|
2023-04-18T20:24:14+00:00
|
|
6da79e82fd18dda69cf9018f30cadeec77be2b02
|
prajwalsahu5/Pub
|
[
"region:us"
] |
2023-04-18T18:10:49+00:00
|
{}
|
2023-04-18T18:12:17+00:00
|
|
39be0867aeabac87b629f986cc18f05e155679e9
|
# Dataset Card for "semeval-2018-grouped"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jacobthebanana/semeval-2018-grouped
|
[
"region:us"
] |
2023-04-18T18:54:59+00:00
|
{"dataset_info": {"features": [{"name": "idx", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 135638, "num_examples": 1181}, {"name": "validation", "num_bytes": 52212, "num_examples": 449}], "download_size": 137805, "dataset_size": 187850}}
|
2023-04-18T18:55:01+00:00
|
c1c9d57fe71b50af817ea164b0f0b1c626abd521
|
# Dataset Card for "VQAv2_minival_google_flan_t5_xxl_mode_A_T_D_PNP_FILTER_C_Q_rices_ns_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_minival_google_flan_t5_xxl_mode_A_T_D_PNP_FILTER_C_Q_rices_ns_100
|
[
"region:us"
] |
2023-04-18T18:56:14+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "sequence": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random_", "num_bytes": 864416, "num_examples": 100}], "download_size": 120142, "dataset_size": 864416}}
|
2023-04-18T18:56:16+00:00
|
ccf192ca98cec05ce210875e0c877c0c90e535e0
|
yosinem/tweets-18-04-23
|
[
"region:us"
] |
2023-04-18T19:02:25+00:00
|
{}
|
2023-04-18T19:03:13+00:00
|
|
b5b6aa03a1fdb47de1b8f4a524f90133acc3d35e
|
asparius/babylm-10m
|
[
"license:mit",
"region:us"
] |
2023-04-18T19:04:04+00:00
|
{"license": "mit"}
|
2023-04-25T06:57:06+00:00
|
|
9a1640003838743c29036ce6b4b9221e2487a5ef
|
DO NOT USE - PLACEHOLDER DATASET
LITERALLY JUST THE SAME FEW ROWS REPEATED DOZENS OF TIMES
|
vicclab/HumanvGPT
|
[
"license:mit",
"region:us"
] |
2023-04-18T19:10:06+00:00
|
{"license": "mit"}
|
2023-04-18T20:42:12+00:00
|
5f559622fa1923d8f5e511d695b64fc242ec436f
|
Sam172/Patents48448
|
[
"license:bigscience-openrail-m",
"region:us"
] |
2023-04-18T19:56:28+00:00
|
{"license": "bigscience-openrail-m"}
|
2023-04-18T19:56:28+00:00
|
|
f7249098240ea269e649539d59a1a09e2d3040c3
|
grizzlybearbee/T
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-18T21:33:57+00:00
|
{"license": "apache-2.0"}
|
2023-04-18T21:33:57+00:00
|