| sha | text | id | tags | created_at | metadata | last_modified |
|---|---|---|---|---|---|---|
8a0bb20a8071ab6acca6cdd2e13c22cb756dfc6b
|
# Dataset Card for "mmlu-security_studies-dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-security_studies-dev
|
[
"region:us"
] |
2023-06-01T03:43:12+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "negate_openai_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "dev", "num_bytes": 6657, "num_examples": 5}], "download_size": 10237, "dataset_size": 6657}}
|
2023-06-01T03:43:16+00:00
|
c3618d6aff58fc7047b8533df1930580c3af457d
|
# Dataset Card for "mmlu-sociology-dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-sociology-dev
|
[
"region:us"
] |
2023-06-01T03:43:19+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "negate_openai_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "dev", "num_bytes": 2830, "num_examples": 5}], "download_size": 6752, "dataset_size": 2830}}
|
2023-06-01T03:43:21+00:00
|
02b515f312e30e66eecef1e570766d177a1c9762
|
# Dataset Card for "mmlu-us_foreign_policy-dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-us_foreign_policy-dev
|
[
"region:us"
] |
2023-06-01T03:43:24+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "negate_openai_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "dev", "num_bytes": 2891, "num_examples": 5}], "download_size": 6864, "dataset_size": 2891}}
|
2023-06-01T03:43:26+00:00
|
7581423f2b2d9fd9eb40439423c0508730803b8b
|
# Dataset Card for "mmlu-virology-dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-virology-dev
|
[
"region:us"
] |
2023-06-01T03:43:29+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "negate_openai_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "dev", "num_bytes": 2327, "num_examples": 5}], "download_size": 6406, "dataset_size": 2327}}
|
2023-06-01T03:43:32+00:00
|
58d2849e92bfd8ab192357ed142317efa930f51a
|
# Dataset Card for "mmlu-world_religions-dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-world_religions-dev
|
[
"region:us"
] |
2023-06-01T03:43:35+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "negate_openai_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "dev", "num_bytes": 1878, "num_examples": 5}], "download_size": 5710, "dataset_size": 1878}}
|
2023-06-01T03:52:24+00:00
|
05bd83f440e3fc414ebd948819456ae30aefc0a3
|
IneedtolearnDL/Codedata
|
[
"license:mit",
"region:us"
] |
2023-06-01T04:16:18+00:00
|
{"license": "mit"}
|
2023-06-01T04:26:06+00:00
|
|
cddcb73c259269928e974e0ce141f123eb068030
|
# Dataset Card for "KULLM-v2"
## Dataset Summary
Korean translation of GPT4ALL, Dolly, and Vicuna data.
repository: [nlpai-lab/KULLM](https://github.com/nlpai-lab/KULLM)
huggingface: [nlpai-lab/kullm-v2](https://huggingface.co/nlpai-lab/kullm-polyglot-12.8b-v2)
#### Translate dataset
Translated 'instruction', 'input', and 'output' in the dataset via the DeepL API
## License
Apache-2.0
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("nlpai-lab/kullm-v2", split="train")
>>> ds
Dataset({
    features: ['id', 'instruction', 'input', 'output'],
    num_rows: 152630
})
```
```python
>>> ds[0]
{'id': 'alpaca_{idx}',
 'instruction': '3원색이란 무엇인가요?',
 'input': '',
 'output': '세 가지 기본 색은 빨강, 파랑, 노랑입니다. 이 색은 다른 색을 혼합하여 만들 수 없고 다른 모든 색을 다양한 비율로 조합하여 만들 수 있기 때문에 원색이라고 부릅니다. 빛에 사용되는 첨가제 색상 시스템에서 원색은 빨강, 녹색, 파랑(RGB)입니다.'}
```
|
nlpai-lab/kullm-v2
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ko",
"license:apache-2.0",
"region:us"
] |
2023-06-01T04:26:22+00:00
|
{"language": ["ko"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "kullm"}
|
2023-06-01T04:45:04+00:00
|
7191ceeb92bbf46359794d7ba23a884043da96e1
|
Ryukijano/Pothole-detection-Yolov8
|
[
"task_categories:object-detection",
"size_categories:n<1K",
"license:openrail",
"code",
"region:us"
] |
2023-06-01T04:26:27+00:00
|
{"license": "openrail", "size_categories": ["n<1K"], "task_categories": ["object-detection"], "pretty_name": "Potholes", "tags": ["code"]}
|
2023-06-01T04:30:22+00:00
|
|
78aa7a051939397e7870f89b864f338c581b36ac
|
xuanmo/xbcm
|
[
"task_categories:text-generation",
"size_categories:100B<n<1T",
"language:zh",
"license:cc0-1.0",
"not-for-all-audiences",
"region:us"
] |
2023-06-01T04:37:28+00:00
|
{"language": ["zh"], "license": "cc0-1.0", "size_categories": ["100B<n<1T"], "task_categories": ["text-generation"], "pretty_name": "pri_xbcm", "tags": ["not-for-all-audiences"]}
|
2023-06-01T05:00:01+00:00
|
|
00686931a7c83e0b6cdd5b572a5d4c384c356bb8
|
# Dataset Card for "red_teaming_reward_modeling_pairwise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
andersonbcdefg/red_teaming_reward_modeling_pairwise
|
[
"region:us"
] |
2023-06-01T05:02:38+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response_a", "dtype": "string"}, {"name": "response_b", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "preferred", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 41305999, "num_examples": 35279}], "download_size": 0, "dataset_size": 41305999}}
|
2023-06-01T06:00:45+00:00
|
138ed0420fba5ed6e03b90e309e805d5605fc1f6
|
# Dataset Card for "red_teaming_reward_modeling_pairwise_no_as_an_ai"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
andersonbcdefg/red_teaming_reward_modeling_pairwise_no_as_an_ai
|
[
"region:us"
] |
2023-06-01T05:03:08+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response_a", "dtype": "string"}, {"name": "response_b", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "preferred", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 40716699, "num_examples": 35279}], "download_size": 20420507, "dataset_size": 40716699}}
|
2023-06-01T06:01:10+00:00
|
d1020399f8b39c78d6a0cdd6584522e2da8095ce
|
# Dataset Card for "digit_mask_ensemble_distilled_from_cv12_balanced_mfcc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mazkooleg/digit_mask_ensemble_distilled_from_cv12_balanced_mfcc
|
[
"region:us"
] |
2023-06-01T05:28:01+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 51543708896, "num_examples": 10122488}], "download_size": 52609315089, "dataset_size": 51543708896}}
|
2023-06-01T06:04:59+00:00
|
268d3167272864740d43806904cb2f5abe432d60
|
# Dataset Card for "diffusion.3.dream_booth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lansinuote/diffusion.3.dream_booth
|
[
"region:us"
] |
2023-06-01T05:36:50+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5590823.0, "num_examples": 5}], "download_size": 5592148, "dataset_size": 5590823.0}}
|
2023-06-01T05:37:02+00:00
|
8b0048ad6ae8c22f46a78c15559dec98feef5539
|
## Dataset Card
This is a reupload of [this dataset](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) that was further cleaned by gozfarb.
|
Aeala/ShareGPT_Vicuna_unfiltered
|
[
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-06-01T05:54:32+00:00
|
{"language": ["en"], "license": "apache-2.0"}
|
2023-06-01T06:03:50+00:00
|
d1c4520a0fb31c086af415fc6a66fa1affa95a77
|
### Dataset Summary
Korean translation of Guanaco via the DeepL API
Note: There are cases where multilingual data has been converted to monolingual data during batch translation to Korean using the API.
Below is Guanaco's README.
----
This dataset is a subset of the Open Assistant dataset, which you can find here: https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main
This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.
This dataset was used to train Guanaco with QLoRA.
For further information, please see the original dataset.
License: Apache 2.0
|
nlpai-lab/openassistant-guanaco-ko
|
[
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:1K<n<10K",
"language:ko",
"license:apache-2.0",
"region:us"
] |
2023-06-01T05:54:34+00:00
|
{"language": ["ko"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation", "question-answering", "summarization"]}
|
2023-06-01T09:44:35+00:00
|
851ee2b6945164942a58efaf7b101c2439d0dff9
|
Cherrycreamco/g4aucfiltered
|
[
"license:apache-2.0",
"region:us"
] |
2023-06-01T06:07:42+00:00
|
{"license": "apache-2.0"}
|
2023-06-01T06:07:43+00:00
|
|
21db4709704b2079638ff69fbfb2523aa7e35139
|
cc2/conearth
|
[
"license:apache-2.0",
"region:us"
] |
2023-06-01T06:19:15+00:00
|
{"license": "apache-2.0"}
|
2023-06-01T06:19:15+00:00
|
|
318e8cbdcbdfd1951abebb6e3152b11806cd5a87
|
zzzzhhh/MPT-7b-c4
|
[
"license:apache-2.0",
"region:us"
] |
2023-06-01T06:23:54+00:00
|
{"license": "apache-2.0"}
|
2023-06-01T06:24:16+00:00
|
|
a90d318e8c5a3333038c9e6ce4e3cbf21857cf38
|
# Dataset Card for SOTAB
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [SOTAB homepage](https://webdatacommons.org/structureddata/sotab/)
- **Repository:** [SOTAB repository](https://github.com/wbsg-uni-mannheim/wdc-sotab)
- **Paper:** [SOTAB: The WDC Schema.org Table Annotation Benchmark](https://ceur-ws.org/Vol-3320/paper1.pdf)
- **Leaderboard:** [SOTAB leaderboard on PaperWithCode](https://paperswithcode.com/paper/sotab-the-wdc-schema-org-table-annotation)
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The SOTAB dataset is a large-scale dataset for the task of column type annotation on semi-structured tables.
### Supported Tasks and Leaderboards
table-annotation, column-type-annotation
### Languages
en
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:**
An example of 'validation' looks as follows:
```
{
"id": 0,
"column_index": 3,
"label": "currency",
"table": {
"name": "Book_7sat.co.uk_September2020_CTA.json.gz",
"header": ["col1", "col2", ...]
"rows": [
["2001", "2", "USL A-League", ...],
["2002", "2", "USL A-League", ...],
...
]
}
}
```
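To make the instance format concrete, here is a minimal sketch that pulls the annotated column's cells out of such an instance. The instance mirrors the example above; cell values beyond what the card shows are made up for illustration:

```python
# Hypothetical instance mirroring the 'validation' example above.
instance = {
    "id": 0,
    "column_index": 3,
    "label": "currency",
    "table": {
        "name": "Book_7sat.co.uk_September2020_CTA.json.gz",
        "header": ["col1", "col2", "col3", "col4"],
        "rows": [
            ["2001", "2", "USL A-League", "USD"],
            ["2002", "2", "USL A-League", "USD"],
        ],
    },
}

# Column type annotation labels a single column of the table:
# collect the cells of the annotated column.
col = instance["column_index"]
column_values = [row[col] for row in instance["table"]["rows"]]
print(instance["label"], column_values)
```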
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: an `int32` feature.
- `column_index`: an `int32` feature.
- `label`: a `string` feature.
- `table`: a dictionary feature containing:
  - `name`: a `string` feature.
  - `header`: a `list` of `string` features.
  - `rows`: a `list` of `list` of `string` features.
### Data Splits
| name |train|validation|test |
|-------|-----:|---------:|----:|
|default|130471| 16840|15040|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Panupong Pasupat and Percy Liang
### Licensing Information
Creative Commons Attribution Share Alike 4.0 International
### Citation Information
```
```
### Contributions
Thanks to [@ShivangiBithel](https://github.com/shivangibithel) for adding this dataset.
|
shivangibithel/SOTAB
|
[
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"table-annotation",
"region:us"
] |
2023-06-01T06:53:42+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_ids": [], "pretty_name": "SOTAB_CTA", "tags": ["table-annotation"], "dataset_info": [{"features": [{"name": "column_index", "dtype": "int32"}, {"name": "label", "dtype": "string"}, {"name": "table", "struct": [{"name": "header", "sequence": "string"}, {"name": "rows", "sequence": {"sequence": "string"}}, {"name": "name", "dtype": "string"}]}], "splits": [{"name": "train", "num_examples": 130471}, {"name": "test", "num_examples": 15040}, {"name": "validation", "num_examples": 16840}], "dataset_size": 162351}]}
|
2023-06-14T10:44:31+00:00
|
8642af73f75892e69a7ed5f68983c07c614b480c
|
## Dataset Summary
MindBigData 2023 MNIST-8B is the largest brain-signals open dataset created for machine learning to date (June 1st, 2023). It is based on EEG signals from a single subject, captured with a custom 128-channel device, replicating the full 70,000 digits of Yann LeCun et al.'s MNIST dataset. The brain signals were captured while the subject watched the pixels of the original digits one by one on a screen and listened at the same time to the spoken number 0 to 9 of the real label.
Supporting dataset for paper https://arxiv.org/abs/2306.00455
The dataset contains 140,000 records from 128 EEG channels, each 2 seconds long, recorded at 250 Hz: in total 17,920,000 brain signals and 8,960,000,000 data points.
It consists of 2 main CSV data files:
- "train.csv": 45 GB, header + 120,000 rows, 64,791 columns
- "test.csv": 7.52 GB, header + 20,000 rows, 64,791 columns
10 audio files in a folder named "audiolabels": "0.wav", "1.wav", ..., "9.wav"
And 1 CSV file with the 3D coordinates of the EEG electrodes: "3Dcoords.csv", 4.27 KB, header + 130 rows, 4 columns
>Update July 18th, 2023: as requested, a reduced 2-billion-datapoint version has been released: https://huggingface.co/datasets/DavidVivancos/MindBigData2023_MNIST-2B
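Given the file sizes above, the CSVs are best streamed row by row rather than loaded at once. A minimal sketch of that pattern (the column names here are hypothetical; a tiny synthetic file stands in for the real 45 GB train.csv):

```python
import csv
import io

# Synthetic stand-in for "train.csv": a label column followed by
# per-channel signal columns (the real file has 64,791 columns).
synthetic_csv = (
    "label,ch0_t0,ch0_t1,ch1_t0,ch1_t1\n"
    "7,0.12,0.15,-0.03,0.01\n"
    "3,0.22,0.18,0.07,-0.05\n"
)

def stream_rows(fileobj):
    """Yield (label, signals) pairs one row at a time to keep memory flat."""
    reader = csv.reader(fileobj)
    next(reader)  # skip the header row
    for row in reader:
        label = int(row[0])
        signals = [float(v) for v in row[1:]]
        yield label, signals

rows = list(stream_rows(io.StringIO(synthetic_csv)))
print(rows[0][0])  # label of the first record
```

For the real files, replace `io.StringIO(...)` with `open("train.csv", newline="")` and consume the generator without materializing the full list.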
## Dataset Structure
review supporting paper https://arxiv.org/abs/2306.00455
## Data Fields
review supporting paper https://arxiv.org/abs/2306.00455
## Citation
```
@article{MindBigData_2023_MNIST-8B,
title={MindBigData 2023 MNIST-8B The 8 billion datapoints Multimodal Dataset of Brain Signals},
author={David Vivancos},
journal={arXiv preprint arXiv:2306.00455},
year={2023}
}
```
|
DavidVivancos/MindBigData2023_MNIST-8B
|
[
"license:odbl",
"arxiv:2306.00455",
"region:us"
] |
2023-06-01T06:54:38+00:00
|
{"license": "odbl"}
|
2024-01-08T15:37:35+00:00
|
8220a688b92b2aab98ac16a30055f8119f3bc935
|
LauraExp/Donut
|
[
"region:us"
] |
2023-06-01T07:23:37+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21034569.0, "num_examples": 16}, {"name": "test", "num_bytes": 5895696.0, "num_examples": 4}], "download_size": 23582848, "dataset_size": 26930265.0}}
|
2023-06-01T10:03:44+00:00
|
|
78613ba106eca0b4b88249cb6f0aa07aefe947df
|
# Dataset Card for LaSOT
## Dataset Description
- **Homepage:** [LaSOT homepage](http://vision.cs.stonybrook.edu/~lasot/)
- **Paper:** [LaSOT: A High-quality Benchmark for Large-scale Single Object Tracking](https://arxiv.org/abs/1809.07845)
- **Point of Contact:** [Heng Fan]([email protected])
### Dataset Summary
**La**rge-scale **S**ingle **O**bject **T**racking (**LaSOT**) aims to provide a dedicated platform for training data-hungry deep trackers as well as assessing long-term tracking performance.
This repository contains the conference version of LaSOT, published at CVPR 2019 ([LaSOT: A High-quality Benchmark for Large-scale Single Object Tracking](https://arxiv.org/abs/1809.07845)).
**LaSOT** is featured in:
- **Large-scale**: 1,400 sequences with more than 3.5 million frames
- **High-quality**: Manual annotation with careful inspection in each frame
- **Category balance**: 70 categories, each containing 20 sequences
- **Long-term tracking**: An average video length of around 2,500 frames (i.e., 83 seconds)
- **Comprehensive labeling**: Providing both visual and lingual annotation for each sequence
For the new subset (15 categories with 150 videos) in [extended journal version](https://arxiv.org/abs/2009.03465) (commonly referred to as LaSOT<sub>ext</sub>), visit this [repo](https://huggingface.co/datasets/l-lt/LaSOT-ext).
## Download
You can download the whole dataset using Git (with Git LFS):
```bash
git clone https://huggingface.co/datasets/l-lt/LaSOT
```
Alternatively, download the videos of a specific category manually from this [page](https://huggingface.co/datasets/l-lt/LaSOT/tree/main).
LaSOT is also distributed through several cloud storage services:
* In one zip file: [OneDrive](https://1drv.ms/u/s!Akt_zO4y_u6DgoQsxl9ixr5Y393qWA?e=7yTwjc)
* One zip file per category: [OneDrive](https://1drv.ms/f/s!Akt_zO4y_u6DgoNSoMJrfnVwveDjhA?e=PBeyuD) or [Baidu Pan](https://pan.baidu.com/s/1xFANiqkBHytE7stMOLUpLQ)
### Unzip
Unzip all zip files; the paths should be organized as follows:
```
├── airplane
│   ├── airplane-1
│   ...
├── basketball
...
├── training_set.txt
└── testing_set.txt
```
## Evaluation Metrics and Toolkit
See the [homepage](http://vision.cs.stonybrook.edu/~lasot/results.html) for more information.
|
l-lt/LaSOT
|
[
"arxiv:1809.07845",
"arxiv:2009.03465",
"region:us"
] |
2023-06-01T07:30:23+00:00
|
{"paperswithcode_id": "lasot", "viewer": false}
|
2023-06-13T11:40:35+00:00
|
820fd1cb8189e1e5ea6cb6273f212f23fb1ce008
|
# Dataset Card for Swiss Citation Extraction
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Swiss Citation Extraction is a multilingual, diachronic dataset of 131K Swiss Federal Supreme Court (FSCS) cases. This dataset is part of a challenging token classification task.
### Supported Tasks and Leaderboards
### Languages
Switzerland has four official languages; three of them (German, French, and Italian) are represented in the dataset. The decisions are written by the judges and clerks in the language of the proceedings.
| Language | Subset | Number of Documents |
|------------|------------|----------------------|
| German | **de** | 85K |
| French | **fr** | 38K |
| Italian | **it** | 8K |
## Dataset Structure
### Data Fields
```
decision_id:
considerations:
NER_labels: CITATION refers to a case citation or a reference to another court decision. LAW indicates a reference to a specific law. O is used for words or tokens that don't fall under the previous two labels. In accordance with the IOB format, each tag, apart from 'O', is accompanied by the 'B-' prefix if it marks the beginning of the span, or the 'I-' prefix if it's inside or at the end of the span.
law_area: (string)
language: (string)
year: (int64)
chamber: (string)
region: (string)
```
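As an illustration of the IOB scheme described above, a minimal sketch with hypothetical tokens and tags (only the tag shapes are taken from the card; the token text is invented):

```python
# Hypothetical considerations snippet tagged in IOB format:
# CITATION spans get B-CITATION/I-CITATION, LAW spans B-LAW/I-LAW, rest O.
tokens = ["vgl.", "BGE", "125", "II", "417", "gestuetzt", "auf", "Art.", "8", "EMRK"]
tags   = ["O", "B-CITATION", "I-CITATION", "I-CITATION", "I-CITATION",
          "O", "O", "B-LAW", "I-LAW", "I-LAW"]

def iob_spans(tokens, tags):
    """Group IOB tags back into (entity_type, text) spans."""
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current:
            current[1].append(tok)
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(etype, " ".join(words)) for etype, words in spans]

print(iob_spans(tokens, tags))
```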
### Data Instances
[More Information Needed]
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237)
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stรผrmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
|
rcds/swiss_citation_extraction
|
[
"task_categories:token-classification",
"size_categories:100K<n<1M",
"language:de",
"language:fr",
"language:it",
"license:cc-by-sa-4.0",
"arxiv:2306.09237",
"region:us"
] |
2023-06-01T07:32:26+00:00
|
{"language": ["de", "fr", "it"], "license": "cc-by-sa-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["token-classification"], "pretty_name": "Swiss Citation Extraction"}
|
2023-08-31T11:22:28+00:00
|
1ef8ff1673caba6b803507d6ef857a82f920ec9c
|
ericdoug/SFT4All
|
[
"license:gpl-3.0",
"region:us"
] |
2023-06-01T07:36:03+00:00
|
{"license": "gpl-3.0"}
|
2023-06-01T07:36:03+00:00
|
|
a022b613c0cae91485adfe901d5597b2db2877ae
|
# Dataset Card for "russian-ner-runne"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
graviada/russian-ner-runne
|
[
"region:us"
] |
2023-06-01T07:44:08+00:00
|
{"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "tags", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 1981197, "num_examples": 2508}, {"name": "test", "num_bytes": 423023, "num_examples": 512}, {"name": "dev", "num_bytes": 396996, "num_examples": 536}], "download_size": 725522, "dataset_size": 2801216}}
|
2023-06-01T13:21:03+00:00
|
a15636543885151d02335f35b430aaa0c2ddd7e1
|
<img style="float:right; padding:1%" src="https://huggingface.co/datasets/mdroth/TinyGuanaco_DE/resolve/main/GuanacoBaby_SD2-1.jpg" alt="Picture of a young guanaco (thanks to Stable Diffusion 2.1)." width="25%">
# Dataset Card for _TinyGuanaco_DE_
**TinyGuanaco_DE**
- is intended for **development purposes**: use _TinyGuanaco_DE_ for prototyping your code
- is comprised of **German texts only** (hence _DE_)
- is really small: the `train` split has 4 instances and the `test` split has 2 instances
- has 3 columns: `index`, `query`, and `reply`
- the `query` column contains concatenations of a context ("Kontext:\n...") and a question ("Frage:\n...") that can be answered by knowing the context
- the `reply` column contains the corresponding reply to that query
- features texts from the [`JosephusCheung/Guanaco`](https://huggingface.co/JosephusCheung/Guanaco) dataset and inherits its license from that dataset
License: [**gpl-3.0**](https://www.gnu.org/licenses/gpl-3.0.en.html)
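Since the `query` column concatenates a context and a question as described above, a small sketch for splitting a query back into its parts (the example text is made up, and the exact "Kontext:\n..." / "Frage:\n..." separators are an assumption based on the card's description):

```python
# Hypothetical query in the "Kontext:\n...\nFrage:\n..." layout described above.
query = "Kontext:\nGuanakos leben in den Anden Suedamerikas.\nFrage:\nWo leben Guanakos?"

def split_query(query: str):
    """Split a TinyGuanaco_DE query into its context and question parts."""
    context_part, question = query.split("Frage:\n", 1)
    context = context_part.removeprefix("Kontext:\n").rstrip("\n")
    return context, question

context, question = split_query(query)
print(question)
```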
|
mdroth/TinyGuanaco_DE
|
[
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:n<1K",
"language:de",
"license:gpl-3.0",
"llama",
"alpaca",
"vicuna",
"guanaco",
"region:us"
] |
2023-06-01T08:12:51+00:00
|
{"language": ["de"], "license": "gpl-3.0", "size_categories": ["n<1K"], "task_categories": ["question-answering", "conversational"], "tags": ["llama", "alpaca", "vicuna", "guanaco"], "dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "query", "dtype": "string"}, {"name": "reply", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1675, "num_examples": 4}, {"name": "test", "num_bytes": 1038, "num_examples": 2}], "download_size": 10414, "dataset_size": 2713}}
|
2023-06-04T23:55:54+00:00
|
852a68357136f2356a9ac104de51692520ba281f
|
# Dataset Card for LaSOT-ext
## Dataset Description
- **Homepage:** [LaSOT homepage](http://vision.cs.stonybrook.edu/~lasot/)
- **Paper:** [LaSOT: A High-quality Large-scale Single Object Tracking Benchmark](https://arxiv.org/abs/2009.03465)
- **Point of Contact:** [Heng Fan]([email protected])
### Dataset Summary
**La**rge-scale **S**ingle **O**bject **T**racking (**LaSOT**) aims to provide a dedicated platform for training data-hungry deep trackers as well as assessing long-term tracking performance.
This repository contains the new subset provided in the journal version of LaSOT (commonly called **LaSOT<sub>ext</sub>**), published in IJCV ([LaSOT: A High-quality Large-scale Single Object Tracking Benchmark](https://arxiv.org/abs/2009.03465)).
For the training/testing splits of LaSOT (conference version), visit this [repo](https://huggingface.co/datasets/l-lt/LaSOT).
## Download
You can download the whole dataset using Git (with Git LFS):
```bash
git clone https://huggingface.co/datasets/l-lt/LaSOT-ext
```
Alternatively, download the videos of a specific category manually from this [page](https://huggingface.co/datasets/l-lt/LaSOT-ext/tree/main).
LaSOT<sub>ext</sub> can also be downloaded from:
* In one zip file: [OneDrive](https://1drv.ms/u/s!Akt_zO4y_u6DgoQrvo5h48AC15l67A?e=Zo6PWx) or [Homepage server](http://vision.cs.stonybrook.edu/~lasot/data/LaSOT_extension_subset.zip)
* One zip file per category: [OneDrive](https://1drv.ms/f/s!Akt_zO4y_u6DgoQZH_aGsNh2f6x6Dg?e=sldyAx)
### Unzip
Unzip all zip files and organize the paths as follows:
```
├── atv
│   ├── atv-1
│   ...
├── badminton
...
```
## Evaluation Metrics and Toolkit
See the [homepage](http://vision.cs.stonybrook.edu/~lasot/results.html) for more information.
|
l-lt/LaSOT-ext
|
[
"arxiv:2009.03465",
"region:us"
] |
2023-06-01T08:13:39+00:00
|
{"viewer": false}
|
2023-06-13T12:02:39+00:00
|
c5068827af88637ec9d79e4e96fb76c4fff5e412
|
robertollweb/imgRober
|
[
"license:unknown",
"region:us"
] |
2023-06-01T08:20:53+00:00
|
{"license": "unknown"}
|
2023-06-01T08:23:31+00:00
|
|
b94fb4fe1fb52e524dba595d00add901671a2251
|
# Dataset Card for IRFL
- [Dataset Description](#dataset-description)
- [Leaderboards](#leaderboards)
- [Colab notebook code for IRFL evaluation](#colab-notebook-code-for-irfl-evaluation)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
The IRFL dataset consists of idioms, similes, metaphors with matching figurative and literal images, and two novel tasks of multimodal figurative detection and retrieval.
Using human annotation and an automatic pipeline we created, we collected figurative and literal images for textual idioms, metaphors, and similes.
We annotated the relations between these images and the figurative phrase they originated from. We created two novel tasks of figurative detection and retrieval using these images.
The figurative detection task evaluates Vision and Language Pre-Trained Models' (VL-PTMs) ability to choose the image that best visualizes the meaning of a figurative expression: the task is to choose, out of X candidates, the image that best visualizes the figurative phrase. The retrieval task examines VL-PTMs' preference for figurative images: given a set of figurative and partially literal images, the task is to rank the images by the model's matching score such that the figurative images are ranked higher, and to calculate the precision at k, where k is the number of figurative images in the input.
We evaluated state-of-the-art VL models and found that the best models achieved 22%, 30%, and 66% accuracy vs. humans 97%, 99.7%, and 100% on our detection task for idioms, metaphors, and similes respectively. The best model achieved an F1 score of 61 on the retrieval task.
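The retrieval metric described above can be sketched in a few lines (this is our illustration of precision at k, not the official evaluation code; the boolean labels stand in for real image annotations):

```python
def precision_at_k(ranked_labels):
    """ranked_labels: list of bools, True = figurative image,
    ordered by the model's matching score (best first).
    k is the number of figurative images in the input."""
    k = sum(ranked_labels)
    if k == 0:
        return 0.0
    return sum(ranked_labels[:k]) / k

# A perfect ranking puts all figurative images first.
print(precision_at_k([True, True, False, False]))  # 1.0
# One figurative image ranked below a partial-literal distractor.
print(precision_at_k([True, False, True, False]))  # 0.5
```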
- **Homepage:**
https://irfl-dataset.github.io/
- **Repository:**
https://github.com/irfl-dataset/IRFL
- **Paper:**
https://arxiv.org/abs/2303.15445
- **Leaderboard:**
https://irfl-dataset.github.io/leaderboard
- **Point of Contact:**
[email protected]; [email protected]
### Leaderboards
https://irfl-dataset.github.io/leaderboard
### Colab notebook for IRFL evaluation
https://colab.research.google.com/drive/1RfcUhBTHvREx5X7TMY5UAgMYX8NMKy7u?usp=sharing
### Languages
English.
## Dataset Structure
### Data Fields
Fields marked "(idiom only)" appear only for idioms; fields marked "(metaphor only)" appear only for metaphors.
Multimodal Figurative Language Detection task
- query (idiom only): the idiom definition the answer image originated from.
- distractors: the distractor images
- answer: the correct image
- figurative_type: idiom | metaphor | simile
- type: the correct image type (Figurative or Figurative+Literal).
- definition (idiom only): list of all the definitions of the idiom
- phrase: the figurative phrase.
Multimodal Figurative Language Retrieval task
- type: the rival categories FvsPL (Figurative images vs. Partial Literal) or FLvsPL (Figurative+Literal images vs. Partial Literal)
- figurative_type: idiom | metaphor | simile
- images_metadata: the metadata of the distractors and answer images.
- first_category: the first category images (Figurative images if FvsPL, Figurative Literal images if FLvsPL)
- second_category: the second category images (Partial Literal)
- definition (idiom only): list of all the definitions of the idiom
- theme (metaphor only): the theme of the partial literal distractor; for example, for the metaphor "heart of gold", an image of a gold bar and an image of a human heart will have different theme values
- phrase: the figurative phrase.
The idioms, metaphor, and similes datasets contain all the figurative phrases, annotated images, and corresponding metadata. <br/>
## Dataset Creation
Using an automatic pipeline we created, we collected figurative and literal images for textual idioms, metaphors, and similes. We annotated the relations between these images and the figurative phrase they originated from.
#### Annotation process
We paid Amazon Mechanical Turk Workers to annotate the relation between each image and phrase (Figurative vs. Literal).
## Considerations for Using the Data
- Idioms: Annotated by five crowdworkers with rigorous qualifications and training.
- Metaphors and Similes: Annotated by three expert team members.
- Detection and Ranking Tasks: Annotated by three crowdworkers not involved in prior IRFL annotations.
### Licensing Information
CC-By 4.0
### Citation Information
@misc{yosef2023irfl,
title={IRFL: Image Recognition of Figurative Language},
author={Ron Yosef and Yonatan Bitton and Dafna Shahaf},
year={2023},
eprint={2303.15445},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
|
lampent/IRFL
|
[
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"figurative-language",
"multimodal-figurative-language",
" commonsense-reasoning",
"visual-reasoning",
"arxiv:2303.15445",
"region:us"
] |
2023-06-01T08:34:13+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "configs": [{"config_name": "idiom-detection-task", "data_files": [{"split": "test", "path": "idiom_detection_task.csv"}]}, {"config_name": "metaphor-detection-task", "data_files": [{"split": "test", "path": "metaphor_detection_task.csv"}]}, {"config_name": "simile-detection-task", "data_files": [{"split": "test", "path": "simile_detection_task.csv"}]}, {"config_name": "open-simile-detection-task", "data_files": [{"split": "test", "path": "open_simile_detection_task.csv"}]}, {"config_name": "idiom-retrieval-task", "data_files": [{"split": "test", "path": "idiom_retrieval_task.csv"}]}, {"config_name": "metaphor-retrieval-task", "data_files": [{"split": "test", "path": "metaphor_retrieval_task.csv"}]}, {"config_name": "simile-retrieval-task", "data_files": [{"split": "test", "path": "simile_retrieval_task.csv"}]}, {"config_name": "open-simile-retrieval-task", "data_files": [{"split": "test", "path": "open_simile_retrieval_task.csv"}]}, {"config_name": "idioms-dataset", "data_files": [{"split": "dataset", "path": "idioms_dataset.csv"}]}, {"config_name": "similes-dataset", "data_files": [{"split": "dataset", "path": "similes_dataset.csv"}]}, {"config_name": "metaphors-dataset", "data_files": [{"split": "dataset", "path": "metaphors_dataset.csv"}]}], "tags": ["figurative-language", "multimodal-figurative-language", " commonsense-reasoning", "visual-reasoning"]}
|
2023-11-30T12:40:28+00:00
|
183b7a25c302efac83478ee5ef9ab14e105af16e
|
Korean translation of databricks-dolly-15k via the DeepL API
Note: during batch translation to Korean via the API, some multilingual records were converted to monolingual data.
Below is databricks-dolly-15k's README.
---
# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several
of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification,
closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: English
Version: 1.0
**Owner: Databricks, Inc.**
# Dataset Overview
`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language
models to exhibit the magical interactivity of ChatGPT.
Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including
the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using
information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly
instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the
types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors.
They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.
For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context`
field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.
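The recommended cleanup of the `context` field can be done with a one-line regex (a sketch of our own; the function name and regex are not part of the dataset):

```python
import re

def strip_wiki_citations(context: str) -> str:
    """Remove bracketed Wikipedia citation numbers such as [42] from a reference text."""
    return re.sub(r"\[\d+\]", "", context)

print(strip_wiki_citations("The Eiffel Tower is 330 m tall.[42] It opened in 1889.[7]"))
# The Eiffel Tower is 330 m tall. It opened in 1889.
```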
# Intended Uses
While immediately valuable for instruction fine tuning large language models, as a corpus of human-generated instruction prompts,
this dataset also presents a valuable opportunity for synthetic data generation in the methods outlined in the Self-Instruct paper.
For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a
corpus of millions of examples of instructions in each of the respective InstructGPT categories.
Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to
restate each prompt or short responses, with the resulting text associated to the respective ground-truth sample. Such an approach might
provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from
these synthetic datasets.
# Dataset
## Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source,
human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT.
Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including
academic or commercial applications.
## Sources
- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization)
contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the
target passages.
## Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical
of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of
rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.
The annotation guidelines for each of the categories are as follows:
- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.
## Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private person's personal identifiers or sensitive information.
## Language
American English
# Known Limitations
- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees
# License/Attribution
**Copyright (2023) Databricks, Inc.**
This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/
Copyright ยฉ Wikipedia editors and contributors.
|
nlpai-lab/databricks-dolly-15k-ko
|
[
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:ko",
"license:cc-by-sa-3.0",
"arxiv:2203.02155",
"region:us"
] |
2023-06-01T09:19:09+00:00
|
{"language": ["ko"], "license": "cc-by-sa-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "summarization"]}
|
2023-06-16T02:01:52+00:00
|
f8390b1597b5e384db96b4844ce81716c3bef535
|
Sources are listed below:
- Chinese general instructions (2,000k), BELLE: https://huggingface.co/datasets/BelleGroup/train_2M_CN
- English general instructions (52k), alpaca-gpt4: https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
- Chinese general dialogue instructions (800k), BELLE: https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M
- English general dialogue instructions (94k), sharegpt_vicuna: https://huggingface.co/datasets/jeffwan/sharegpt_vicuna
- Chinese/English/Japanese general instructions (49k): https://huggingface.co/datasets/JosephusCheung/GuanacoDataset/tree/main
|
BNNT/mozi_general_instructions_3m
|
[
"license:apache-2.0",
"region:us"
] |
2023-06-01T09:21:43+00:00
|
{"license": "apache-2.0"}
|
2023-08-09T05:56:33+00:00
|
28187f05fc1c2b0e1b0271aab9c5190707af9932
|
# Dataset Card for NoReC TSA
This is a dataset for Targeted Sentiment Analysis (TSA) in Norwegian, derived from the fine-grained annotations of [NoReC_fine](https://github.com/ltgoslo/norec_fine). The dataset contains tokenized Norwegian sentences where each token is tagged for the sentiment expressed within the sentence towards that token.
Since a sentiment target may be the target of several sentiment expressions, these are resolved to a final sentiment polarity (and intensity) using the conversion script in [NoReC_tsa](https://github.com/ltgoslo/norec_tsa). There is no "mixed" sentiment category: when a target receives both positive and negative sentiment, the strongest wins; in case of a tie, the last sentiment wins.
- **Curated by:** The [SANT](https://www.mn.uio.no/ifi/english/research/projects/sant/) project (Sentiment Analysis for Norwegian Text) at the [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) (LTG) at the University of Oslo
- **Funded by:** The [SANT](https://www.mn.uio.no/ifi/english/research/projects/sant/) project is funded by the [Research Council of Norway](https://www.forskningsradet.no/en/) (NFR grant number 270908).
- **Shared by:** The [SANT](https://www.mn.uio.no/ifi/english/research/projects/sant/) project (Sentiment Analysis for Norwegian Text) at the [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) (LTG) at the University of Oslo
- **License:** The data is distributed under a [Creative Commons Attribution-NonCommercial licence](https://creativecommons.org/licenses/by-nc/4.0/) (CC BY-NC 4.0). The licence is motivated by the need to block the possibility of third parties redistributing the original reviews for commercial purposes. Note that machine learned models, extracted lexicons, embeddings, and similar resources that are created on the basis of NoReC are not considered to contain the original data and so can be freely used also for commercial purposes despite the non-commercial condition.
- **Language:** Norwegian ("no"): Predominantly Bokmรฅl (nb) written variant.
| variant | split | sents | docs |
|:-----|:--------|--------:|-------:|
| nb | dev | 1531 | 44 |
| nb | test | 1272 | 47 |
| nb | train | 8556 | 323 |
| nn | train | 78 | 4 |
## Dataset Sources
- **Repository:** https://github.com/ltgoslo/norec_tsa
- **Paper:** The underlying NoReC_fine dataset is described in the paper [A Fine-Grained Sentiment Dataset for Norwegian](https://aclanthology.org/2020.lrec-1.618/) by รvrelid et al., published at LREC 2020.
## Uses
The data is intended to be used for training and testing models for TSA token classification; identifying and classifying sentiment targets in Norwegian sentences.
Example models fine-tuned on this dataset can be found at [huggingface.co/collections/ltg/sentiment-analysis](https://huggingface.co/collections/ltg/sentiment-analysis-65c49c7247a0ffffa9897155)
## Dataset Structure
The dataset comes in two flavours:
- `default` configuration yields labels with binary Positive / Negative sentiment description
- `intensity` configuration yields labels with additional sentiment intensity, 1: Slight, 2: Standard, and 3: Strong.
The config name is required to access the version with intensity: `tsa_data = load_dataset("ltg/norec_tsa", "intensity")`
The dataset comes with predefined train, dev (validation) and test splits.
### Data Instances
Config "default" example instance:
```
{'idx': '701363-08-02',
'tokens': ['Vi', 'liker', 'det', '.'],
'tsa_tags': ['O', 'O', 'B-targ-Positive', 'O']}
```
Config "intensity" example instance:
```
{'idx': '701363-08-02',
'tokens': ['Vi', 'liker', 'det', '.'],
'tsa_tags': ['O', 'O', 'B-targ-Positive-2', 'O']}
```
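The tags above can be decoded back into their parts with a small helper (a sketch based on the tag format shown in the two instances; the function name is ours, not part of the dataset):

```python
def parse_tsa_tag(tag: str):
    """Decode a BIO tag like 'B-targ-Positive-2' into its parts.

    Returns (bio, polarity, intensity); intensity is None for the
    "default" config, whose tags look like 'B-targ-Positive'.
    """
    if tag == "O":
        return ("O", None, None)
    parts = tag.split("-")          # e.g. ['B', 'targ', 'Positive', '2']
    bio, polarity = parts[0], parts[2]
    intensity = int(parts[3]) if len(parts) > 3 else None
    return (bio, polarity, intensity)

print(parse_tsa_tag("B-targ-Positive-2"))  # ('B', 'Positive', 2)
print(parse_tsa_tag("O"))                  # ('O', None, None)
```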
### Data Fields
- idx(str): Unique document-and sentence identifier from [NoReC_fine](https://github.com/ltgoslo/norec_fine). The 6-digit document identifier can also be used to look up the text and its metadata in [NoReC](https://github.com/ltgoslo/norec).
- tokens: (List[str]): List of the tokens in the sentence
- tsa_tags: (List[str]): List of the tags for each token in BIO format. There is no integer representation of these in the dataset.
### Data Splits
```
DatasetDict({
test: Dataset({
features: ['idx', 'tokens', 'tsa_tags'],
num_rows: 1272
})
train: Dataset({
features: ['idx', 'tokens', 'tsa_tags'],
num_rows: 8634
})
validation: Dataset({
features: ['idx', 'tokens', 'tsa_tags'],
num_rows: 1531
})
})
```
## Dataset Creation
### Source Data
The sentiment annotations are aggregated from the NoReC_fine dataset, which in turn comprises a subset of the documents in the [Norwegian Review Corpus](https://github.com/ltgoslo/norec) (NoReC), which contains full-text professional reviews collected from major Norwegian news sources and cover a range of different domains, including literature, movies, video games, restaurants, music and theater, in addition to product reviews across a range of categories. The review articles NoReC were originally donated by the media partners in the SANT project; the Norwegian Broadcasting Corporation (NRK), Schibsted Media Group and Aller Media. The data comprises reviews extracted from eight different Norwegian news sources: Dagbladet, VG, Aftenposten, Bergens Tidende, Fรฆdrelandsvennen, Stavanger Aftenblad, DinSide.no and P3.no. In terms of publishing date the reviews of NoReC mainly cover the time span 2003โ2019, although it also includes a handful of reviews dating back as far as 1998.
### Annotators
The original annotations of NoReC_fine that the sentence-level labels here are derived from, were originally created by hired annotators who were all BSc- or MSc-level students in the Language Technology study program at the Department of informatics, University of Oslo.
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
The data does not contain information considered personal or sensitive.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Results obtained on this data might not generalize to texts from other domains or genres. Any biases in the sentiments expressed by the original review authors may carry over to models trained on this data.
## Citation
**BibTeX:**
```
@InProceedings{KutBarVel21,
author = {Andrey Kutuzov and Jeremy Barnes and Erik Velldal and Lilja {\O}vrelid and Stephan Oepen},
title = {Large-Scale Contextualised Language Modelling for Norwegian},
booktitle = {{Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021)}},
year = 2021
}
@InProceedings{OvrMaeBar20,
author = {Lilja {\O}vrelid and Petter M{\ae}hlum and Jeremy Barnes and Erik Velldal},
title = {A Fine-grained Sentiment Dataset for {N}orwegian},
booktitle = {{Proceedings of the 12th Edition of the Language Resources and Evaluation Conference}},
year = 2020,
address = "Marseille, France, 2020"
}
```
## Dataset Card Authors
Egil Rรธnningstad and Erik Velldal
## Dataset Card Contact
[email protected] and [email protected]
|
ltg/norec_tsa
|
[
"task_categories:token-classification",
"size_categories:10K<n<100K",
"language:no",
"language:nb",
"language:nn",
"license:cc-by-nc-4.0",
"region:us"
] |
2023-06-01T09:30:41+00:00
|
{"language": ["no", "nb", "nn"], "license": "cc-by-nc-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["token-classification"], "pretty_name": "NoReC TSA", "dataset_info": [{"config_name": "default", "features": [{"name": "idx", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "tsa_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 2296476, "num_examples": 8634}, {"name": "validation", "num_bytes": 411562, "num_examples": 1531}, {"name": "test", "num_bytes": 346288, "num_examples": 1272}], "download_size": 899078, "dataset_size": 3054326}, {"config_name": "intensity", "features": [{"name": "idx", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "tsa_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 2316306, "num_examples": 8634}, {"name": "validation", "num_bytes": 414972, "num_examples": 1531}, {"name": "test", "num_bytes": 349228, "num_examples": 1272}], "download_size": 902284, "dataset_size": 3080506}], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}, {"config_name": "intensity", "data_files": [{"split": "train", "path": "intensity/train-*"}, {"split": "validation", "path": "intensity/validation-*"}, {"split": "test", "path": "intensity/test-*"}]}]}
|
2024-02-08T09:39:16+00:00
|
89ce6fc605149c7c206e33f86d5ca247258400c0
|
# Dataset Card for "Ner_with_jobs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
P3ps/Ner_with_jobs
|
[
"region:us"
] |
2023-06-01T09:33:35+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 6970390, "num_examples": 20856}, {"name": "test", "num_bytes": 1774323, "num_examples": 5215}], "download_size": 2605898, "dataset_size": 8744713}}
|
2023-06-01T09:33:39+00:00
|
ae4810b3b8972c253861ddd51069eb072bc5a868
|
# Dataset Card for "real-toxicity-prompts-severe0.7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
PanoEvJ/real-toxicity-prompts-severe0.7
|
[
"region:us"
] |
2023-06-01T09:37:18+00:00
|
{"dataset_info": {"features": [{"name": "filename", "dtype": "string"}, {"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "challenging", "dtype": "bool"}, {"name": "prompt", "struct": [{"name": "text", "dtype": "string"}, {"name": "threat", "dtype": "float64"}, {"name": "insult", "dtype": "float64"}, {"name": "severe_toxicity", "dtype": "float64"}, {"name": "toxicity", "dtype": "float64"}, {"name": "profanity", "dtype": "float64"}, {"name": "sexually_explicit", "dtype": "float64"}, {"name": "flirtation", "dtype": "float64"}, {"name": "identity_attack", "dtype": "float64"}]}, {"name": "continuation", "struct": [{"name": "text", "dtype": "string"}, {"name": "severe_toxicity", "dtype": "float64"}, {"name": "toxicity", "dtype": "float64"}, {"name": "profanity", "dtype": "float64"}, {"name": "sexually_explicit", "dtype": "float64"}, {"name": "identity_attack", "dtype": "float64"}, {"name": "flirtation", "dtype": "float64"}, {"name": "threat", "dtype": "float64"}, {"name": "insult", "dtype": "float64"}]}, {"name": "input_ids", "sequence": "int32"}, {"name": "query", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2181853, "num_examples": 3781}], "download_size": 1763414, "dataset_size": 2181853}}
|
2023-06-01T09:37:21+00:00
|
dd9a54b34fa4496ea64071552120fc2020ca0fd4
|
codebender/faq-vector-embeddings
|
[
"language:en",
"license:mit",
"us-medical",
"region:us"
] |
2023-06-01T09:53:40+00:00
|
{"language": ["en"], "license": "mit", "pretty_name": "faq-vector-embeddings", "tags": ["us-medical"]}
|
2023-06-01T09:56:03+00:00
|
|
db42f5fcda9f4a1352876a1da67ca8f489add087
|
kadirnar/diffusers_readme_images
|
[
"license:apache-2.0",
"region:us"
] |
2023-06-01T10:03:40+00:00
|
{"license": "apache-2.0"}
|
2023-09-23T23:07:15+00:00
|
|
00dc0208f05cdea0fabc86d092a663e044a5c101
|
# Dataset Card for "conllpp"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/ZihanWangKi/CrossWeigh)
- **Repository:** [Github](https://github.com/ZihanWangKi/CrossWeigh)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/D19-1519)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
CoNLLpp is a corrected version of the CoNLL2003 NER dataset in which the labels of 5.38% of the sentences in the test set
have been manually corrected. The training and development sets from CoNLL2003 are included for completeness. One
example of a correction in the test set is:
```
{
"tokens": ["SOCCER", "-", "JAPAN", "GET", "LUCKY", "WIN", ",", "CHINA", "IN", "SURPRISE", "DEFEAT", "."],
"original_ner_tags_in_conll2003": ["O", "O", "B-LOC", "O", "O", "O", "O", "B-PER", "O", "O", "O", "O"],
"corrected_ner_tags_in_conllpp": ["O", "O", "B-LOC", "O", "O", "O", "O", "B-LOC", "O", "O", "O", "O"],
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
#### conllpp
- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": "0",
"document_id": 1,
"sentence_id": 3,
    "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."],
"pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
"ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
}
```
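The integer `ner_tags` in the instance above can be decoded using the label list from the dataset's features (a sketch; the label order matches the class indices given in this card's metadata):

```python
# NER label list for conllpp, in class-index order.
NER_LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
              "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def decode_ner(tag_ids):
    return [NER_LABELS[i] for i in tag_ids]

# First ten tags of the example: "European Commission" is an ORG,
# "German" (index 9) is MISC.
print(decode_ner([0, 3, 4, 0, 0, 0, 0, 0, 0, 7]))
# ['O', 'B-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O', 'O', 'B-MISC']
```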
### Data Fields
The data fields are the same among all splits.
#### conllpp
- `id`: a `string` feature.
- `document_id`: an `int32` feature tracking which document the sample is from.
- `sentence_id`: an `int32` feature tracking which sentence in this document the sample is from.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels, with possible values including `"` (0), `''` (1), `#` (2), `$` (3), `(` (4).
- `chunk_tags`: a `list` of classification labels, with possible values including `O` (0), `B-ADJP` (1), `I-ADJP` (2), `B-ADVP` (3), `I-ADVP` (4).
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4).
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|conll2003|14041| 3250|3453|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{wang2019crossweigh,
title={CrossWeigh: Training Named Entity Tagger from Imperfect Annotations},
author={Wang, Zihan and Shang, Jingbo and Liu, Liyuan and Lu, Lihao and Liu, Jiacheng and Han, Jiawei},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={5157--5166},
year={2019}
}
```
### Contributions
Thanks to [@ZihanWangKi](https://github.com/ZihanWangKi) for adding this dataset.
|
tomaarsen/conllpp
|
[
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|conll2003",
"language:en",
"license:unknown",
"region:us"
] |
2023-06-01T10:31:30+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|conll2003"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "conll", "pretty_name": "CoNLL++", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "\"", "1": "''", "2": "#", "3": "$", "4": "(", "5": ")", "6": ",", "7": ".", "8": ":", "9": "``", "10": "CC", "11": "CD", "12": "DT", "13": "EX", "14": "FW", "15": "IN", "16": "JJ", "17": "JJR", "18": "JJS", "19": "LS", "20": "MD", "21": "NN", "22": "NNP", "23": "NNPS", "24": "NNS", "25": "NN|SYM", "26": "PDT", "27": "POS", "28": "PRP", "29": "PRP$", "30": "RB", "31": "RBR", "32": "RBS", "33": "RP", "34": "SYM", "35": "TO", "36": "UH", "37": "VB", "38": "VBD", "39": "VBG", "40": "VBN", "41": "VBP", "42": "VBZ", "43": "WDT", "44": "WP", "45": "WP$", "46": "WRB"}}}}, {"name": "chunk_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-ADJP", "2": "I-ADJP", "3": "B-ADVP", "4": "I-ADVP", "5": "B-CONJP", "6": "I-CONJP", "7": "B-INTJ", "8": "I-INTJ", "9": "B-LST", "10": "I-LST", "11": "B-NP", "12": "I-NP", "13": "B-PP", "14": "I-PP", "15": "B-PRT", "16": "I-PRT", "17": "B-SBAR", "18": "I-SBAR", "19": "B-UCP", "20": "I-UCP", "21": "B-VP", "22": "I-VP"}}}}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC", "7": "B-MISC", "8": "I-MISC"}}}}], "config_name": "conllpp", "splits": [{"name": "train", "num_bytes": 6931393, "num_examples": 14041}, {"name": "validation", "num_bytes": 1739247, "num_examples": 3250}, {"name": "test", "num_bytes": 1582078, "num_examples": 3453}], "download_size": 4859600, "dataset_size": 
10252718}, "train-eval-index": [{"config": "conllpp", "task": "token-classification", "task_id": "entity_extraction", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"tokens": "tokens", "ner_tags": "tags"}, "metrics": [{"type": "seqeval", "name": "seqeval"}]}]}
|
2023-06-01T10:38:05+00:00
|
a21e2412c05d2d6c469d9394ec516267bf251eaf
|
punglee/librispeech_asr
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-06-01T10:47:23+00:00
|
{"license": "cc-by-4.0"}
|
2023-06-02T09:00:40+00:00
|
|
58467e6a69b1ec245c4630ccb8009849f1a52824
|
# Dataset Card for "dataset-sample-bloom-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
oshada-kasun/dataset-sample-bloom-test
|
[
"region:us"
] |
2023-06-01T11:15:01+00:00
|
{"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "price", "dtype": "string"}, {"name": "colors", "sequence": "string"}, {"name": "sizes", "sequence": "string"}, {"name": "ad", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2546, "num_examples": 5}], "download_size": 7494, "dataset_size": 2546}}
|
2023-06-01T11:15:03+00:00
|
61cf4cf857abaf414e74419b870af199d5064e4c
|
# Dataset Card for Janes-Preklop
### Dataset Summary
Janes-Preklop is a corpus of Slovene tweets that is manually annotated for code-switching: the use of words from two
or more languages within one sentence or utterance.
### Languages
Code-switched Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset - each word is annotated with its language, either `"default"`
(Slovenian/unclassifiable), `en` (English), `de` (German), `hbs` (Serbo-Croatian), `sp` (Spanish),
`la` (Latin), `ar` (Arabic), `fr` (French), `it` (Italian), or `pt` (Portuguese).
```
{
'id': 'tid.397447931558895616',
'words': ['Brad', 'Pitt', 'na', 'Planet', 'TV', '.', 'U', 'are', 'welcome', ';)'],
'language': ['default', 'default', 'default', 'default', 'default', 'default', 'B-en', 'I-en', 'I-en', 'I-en']
}
```
### Data Fields
- `id`: unique identifier of the example;
- `words`: words in the sentence;
- `language`: language of each word.
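The `language` tags follow a BIO-like scheme, as the sample instance shows: `default` for Slovenian/unclassifiable words, and `B-xx`/`I-xx` prefixes marking the start and continuation of a foreign-language span. As a minimal sketch (the `language_spans` helper below is illustrative, not part of the dataset), consecutive words can be grouped back into spans like this:

```python
def language_spans(words, tags):
    """Group consecutive words into (text, language) spans."""
    spans = []
    for word, tag in zip(words, tags):
        # Strip the BIO prefix to get the bare language code.
        lang = tag[2:] if tag.startswith(("B-", "I-")) else tag
        continues = (
            spans
            and spans[-1][1] == lang
            and not tag.startswith("B-")  # "B-" always opens a new span
        )
        if continues:
            spans[-1] = (spans[-1][0] + " " + word, lang)
        else:
            spans.append((word, lang))
    return spans

example = {
    "words": ["Brad", "Pitt", "na", "Planet", "TV", ".", "U", "are", "welcome", ";)"],
    "language": ["default"] * 6 + ["B-en", "I-en", "I-en", "I-en"],
}
print(language_spans(example["words"], example["language"]))
# [('Brad Pitt na Planet TV .', 'default'), ('U are welcome ;)', 'en')]
```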
## Additional Information
### Dataset Curators
Špela Reher, Tomaž Erjavec, Darja Fišer.
### Licensing Information
CC BY-SA 4.0.
### Citation Information
```
@misc{janes_preklop,
title = {Tweet code-switching corpus Janes-Preklop 1.0},
author = {Reher, {\v S}pela and Erjavec, Toma{\v z} and Fi{\v s}er, Darja},
url = {http://hdl.handle.net/11356/1154},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
issn = {2820-4042},
year = {2017}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
|
cjvt/janes_preklop
|
[
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:sl",
"license:cc-by-sa-4.0",
"tweets",
"code-mixing",
"code-switching",
"region:us"
] |
2023-06-01T11:21:41+00:00
|
{"language": ["sl"], "license": "cc-by-sa-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "language", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 410822, "num_examples": 1104}], "download_size": 623816, "dataset_size": 412672}, "tags": ["tweets", "code-mixing", "code-switching"]}
|
2023-06-07T20:56:34+00:00
|
410747db5bb05c8858649e8a6522aeb3bd7f4b7f
|
# Dataset Card for "voxelgym_5c_42x42_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Cubpaw/voxelgym_5c_42x42_10
|
[
"region:us"
] |
2023-06-01T12:00:40+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}, {"name": "rgb_label", "dtype": "image"}, {"name": "path_label", "dtype": "image"}, {"name": "path_rgb_label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 6953.0, "num_examples": 8}, {"name": "validation", "num_bytes": 1776.0, "num_examples": 2}], "download_size": 26790, "dataset_size": 8729.0}}
|
2023-06-01T12:00:45+00:00
|
edf12e33d94436fe822c2b92e47376e80ae7ab8b
|
# README
|
atsushi3110/ja-nmt-soda-v0
|
[
"region:us"
] |
2023-06-01T12:10:24+00:00
|
{}
|
2023-06-01T18:03:45+00:00
|
f9399a721bd62748ce924b9855be28b88147c2f8
|
# Dataset Card for "uzbek_news"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/elmurod1202/TextClassification](https://github.com/elmurod1202/TextClassification)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [https://arxiv.org/pdf/2302.14494](https://arxiv.org/pdf/2302.14494)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 593 MB
- **Size of the generated dataset:** 522 MB
- **Total amount of disk used:** 1115 MB
### Dataset Summary
Multi-label text classification dataset for the Uzbek language, along with source code for analysis. This repository contains the code and dataset used for text classification analysis for the Uzbek language. The dataset consists of text data from 9 Uzbek news websites and press portals, including news articles and press releases. These websites were selected to cover various categories such as politics, sports, entertainment, technology, and others. In total, we collected 512,750 articles with over 120 million words across 15 distinct categories, which provides a large and diverse corpus for text classification. It is worth noting that all the text in the corpus is written in the Latin script.
Please refer to [paper](https://arxiv.org/pdf/2302.14494) and [GitHub repository](https://github.com/elmurod1202/TextClassification) for further details.
Disclaimer: The team releasing UzTextClassification did not write this model card. This is the Hugging Face version of the dataset, created mainly for ease of access. The original dataset files can be accessed and downloaded from https://doi.org/10.5281/zenodo.7677431
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 593 MB
- **Size of the generated dataset:** 522 MB
- **Total amount of disk used:** 1115 MB
An example of 'train' looks as follows.
```
{
"label": 14,
"text": "Samsung Galaxy S21 Ultra eng yaxshi kamerofonlar reytingida 17-o‘rinni egalladi DxOMark laboratoriyasi mutaxassislari Samsung Galaxy S21 Ultra’ning asosiy ..."
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `text`: a `string` feature.
- `label`: a classification label, with possible values including 'Avto' (0), 'Ayollar' (1), 'Dunyo' (2), 'Foto' (3), 'Iqtisodiyot' (4), 'Jamiyat' (5), 'Jinoyat' (6), 'Madaniyat' (7), 'O‘zbekiston' (8), 'Pazandachilik' (9), 'Qonunchilik' (10), 'Salomatlik' (11), 'Siyosat' (12), 'Sport' (13), 'Texnologiya' (14).
### Data Splits
| name |train |validation|test|
|-------|-----:|---------:|---:|
|default|410200|51275|51275|
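For convenience, the integer labels can be mapped back to the category names listed under Data Fields. A minimal sketch (the `LABELS` list is copied from this card; the `label_name` helper is illustrative, not part of the dataset's tooling):

```python
# Category names in label-id order, as documented in the Data Fields section.
LABELS = [
    "Avto", "Ayollar", "Dunyo", "Foto", "Iqtisodiyot", "Jamiyat",
    "Jinoyat", "Madaniyat", "O‘zbekiston", "Pazandachilik",
    "Qonunchilik", "Salomatlik", "Siyosat", "Sport", "Texnologiya",
]

def label_name(label_id: int) -> str:
    """Return the category name for an integer label."""
    return LABELS[label_id]

print(label_name(14))  # the sample instance above has label 14 -> "Texnologiya"
```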
### Citation Information
```
@proceedings{kuriyozov_elmurod_2023_7677431,
title = {{Text classification dataset and analysis for Uzbek
language}},
year = 2023,
publisher = {Zenodo},
month = feb,
doi = {10.5281/zenodo.7677431},
url = {https://doi.org/10.5281/zenodo.7677431}
}
```
### Contact
For any questions or issues related to the dataset or code, please contact [[email protected], [email protected]].
|
murodbek/uz-text-classification
|
[
"task_categories:text-classification",
"task_categories:fill-mask",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:uz",
"uz",
"news",
"arxiv:2302.14494",
"region:us"
] |
2023-06-01T12:12:51+00:00
|
{"language": ["uz"], "size_categories": ["100K<n<1M"], "task_categories": ["text-classification", "fill-mask", "text-generation"], "pretty_name": "UzbekTextClassification", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Avto", "1": "Ayollar", "2": "Dunyo", "3": "Foto", "4": "Iqtisodiyot", "5": "Jamiyat", "6": "Jinoyat", "7": "Madaniyat", "8": "O\u2018zbekiston", "9": "Pazandachilik", "10": "Qonunchilik", "11": "Salomatlik", "12": "Siyosat", "13": "Sport", "14": "Texnologiya"}}}}], "splits": [{"name": "train", "num_bytes": 892446788, "num_examples": 410200}, {"name": "validation", "num_bytes": 111174020, "num_examples": 51275}, {"name": "test", "num_bytes": 111663893, "num_examples": 51275}], "download_size": 593012664, "dataset_size": 1115284701}, "tags": ["uz", "news"]}
|
2023-10-31T11:05:52+00:00
|
1c69bbfab3805cba04257fb18de3aded8a40d28b
|
# Automatic Glossary of Clinical Terminology (v2023)
This dataset contains 422,070 short, computer-generated definitions for SnomedCT concepts, covering various domains such as diseases, procedures, drugs, and anatomy. To do so, we prompted the OpenAI Turbo model, a variant of GPT 3.5, using a high-quality verbalization of the SnomedCT relationships of the to-be-defined concept.
<div class="not-prose">
<img align="right" alt="figure-quality-graph-1.png" src="https://s3.amazonaws.com/moonup/production/uploads/5f04e8865d08220171a0ad3f/629gp8GJt_5STt-4fryMg.png" width="256" />
<!--<img align="right" alt="figure-quality-graph-2s.png" src="https://s3.amazonaws.com/moonup/production/uploads/5f04e8865d08220171a0ad3f/Ki4k8jt_YqDGgKA2sqkJy.png" width="160" />-->
</div>
## Quality Control
**IMPORTANT:** Following a quality control review, we report that the majority of the definitions are factual, insightful, and fluent. However, about 30% of the definitions generated by this procedure do not meet the high standards required for presentation to users, or for use by machine learning models in scenarios requiring reasoning, due to their imperfect quality. That said, more than 95% of the definitions appear useful for biomedical model pre-training. We therefore release this dataset for building retrieval-based systems and for evaluating large biomedical language models on the definition-generation task (and eventually for low-rank finetuning of existing language models).
<br clear="all" />
## License
The license for this work is subject to both [SnomedCT](https://www.nlm.nih.gov/healthit/snomedct/snomed_licensing.html) and [OpenAI API](https://openai.com/policies/terms-of-use) agreements. We strongly recommend checking those licenses before making use of this dataset.
## Citation
If you use this dataset, please cite the following work: [AGCT @ BioNLP 2023](https://aclanthology.org/2023.bionlp-1.23/)
```
@inproceedings{remy-etal-2023-automatic,
title = "Automatic Glossary of Clinical Terminology: a Large-Scale Dictionary of Biomedical Definitions Generated from Ontological Knowledge",
author = "Remy, Fran{\c{c}}ois and
Demuynck, Kris and
Demeester, Thomas",
editor = "Demner-fushman, Dina and
Ananiadou, Sophia and
Cohen, Kevin",
booktitle = "The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bionlp-1.23",
doi = "10.18653/v1/2023.bionlp-1.23",
pages = "265--272",
abstract = "Background: More than 400.000 biomedical concepts and some of their relationships are contained in SnomedCT, a comprehensive biomedical ontology. However, their concept names are not always readily interpretable by non-experts, or patients looking at their own electronic health records (EHR). Clear definitions or descriptions in understandable language or often not available. Therefore, generating human-readable definitions for biomedical concepts might help make the information they encode more accessible and understandable to a wider public. Objective: In this article, we introduce the Automatic Glossary of Clinical Terminology (AGCT), a large-scale biomedical dictionary of clinical concepts generated using high-quality information extracted from the biomedical knowledge contained in SnomedCT.Methods: We generate a novel definition for every SnomedCT concept, after prompting the OpenAI Turbo model, a variant of GPT 3.5, using a high-quality verbalization of the SnomedCT relationships of the to-be-defined concept. A significant subset of the generated definitions was subsequently evaluated by NLP researchers with biomedical expertise on 5-point scales along the following three axes: factuality, insight, and fluency. Results: AGCT contains 422,070 computer-generated definitions for SnomedCT concepts, covering various domains such as diseases, procedures, drugs, and anatomy. The average length of the definitions is 49 words. The definitions were assigned average scores of over 4.5 out of 5 on all three axes, indicating a majority of factual, insightful, and fluent definitions. Conclusion: AGCT is a novel and valuable resource for biomedical tasks that require human-readable definitions for SnomedCT concepts. It can also serve as a base for developing robust biomedical retrieval models or other applications that leverage natural language understanding of biomedical knowledge.",
}
```
|
FremyCompany/AGCT-Dataset
|
[
"size_categories:100K<n<1M",
"language:en",
"license:other",
"biology",
"medical",
"region:us"
] |
2023-06-01T12:37:33+00:00
|
{"language": ["en"], "license": "other", "size_categories": ["100K<n<1M"], "pretty_name": "Automatic Glossary of Clinical Terminology (v2023)", "tags": ["biology", "medical"]}
|
2023-11-28T21:32:26+00:00
|
5d5797365920426efb809feb90bb900256a0b60b
|
# Dataset Card for "stack-smol-xxl-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
cakiki/stack-smol-xxl-embeddings
|
[
"region:us"
] |
2023-06-01T12:39:19+00:00
|
{"dataset_info": {"features": [{"name": "token_ids", "sequence": "int64"}, {"name": "lri_160", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 231978165104, "num_examples": 11658586}], "download_size": 34909750705, "dataset_size": 231978165104}}
|
2023-06-01T13:40:12+00:00
|
f7c3f4692817bb3ae255b7642af041008836591f
|
MaheshMc2/petcare_sample
|
[
"license:other",
"region:us"
] |
2023-06-01T12:51:02+00:00
|
{"license": "other"}
|
2023-06-01T12:51:32+00:00
|
|
684ce4cf39bd1644ec7236650ce13d246d619a4b
|
# Dataset Card for "BioDEX-Conv"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
BioDEX/BioDEX-Conv
|
[
"region:us"
] |
2023-06-01T13:07:12+00:00
|
{"dataset_info": {"features": [{"name": "pmid", "dtype": "string"}, {"name": "safetyreportid", "dtype": "int64"}, {"name": "conversation", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "test", "num_bytes": 67996546, "num_examples": 3628}, {"name": "train", "num_bytes": 165008031, "num_examples": 9624}, {"name": "validation", "num_bytes": 42265039, "num_examples": 2407}], "download_size": 138712002, "dataset_size": 275269616}}
|
2023-06-01T13:20:54+00:00
|
f388483185cdcbcd4dc62987ea80c1a4ea39b27d
|
[From OpenAI](https://github.com/openai/prm800k)
# PRM800K: A Process Supervision Dataset
- [Blog Post](https://openai.com/research/improving-mathematical-reasoning-with-process-supervision)
This repository accompanies the paper [Let's Verify Step by Step](https://openai.com/research/improving-mathematical-reasoning-with-process-supervision) and presents the PRM800K dataset introduced there. PRM800K is a process supervision dataset containing 800,000 step-level correctness labels for model-generated solutions to problems from the [MATH](https://github.com/hendrycks/math) dataset. More information on PRM800K and the project can be found in the paper.
We are releasing the raw labels as well as the instructions we gave labelers during phase 1 and phase 2 of the project. Example labels can be seen in the image below.
<p align="center">
<img src="https://github.com/openai/prm800k/blob/main/prm800k/img/interface.png?raw=true" height="300"/>
</p>
## Data
The data contains our labels formatted as newline-delimited lists of `json` data.
Each line represents 1 full solution sample and can contain many step-level labels. Here is one annotated line:
```javascript
{
// UUID representing a particular labeler.
"labeler": "340d89bc-f5b7-45e9-b272-909ba68ee363",
// The timestamp this trajectory was submitted.
"timestamp": "2023-01-22T04:34:27.052924",
// In phase 2, we split our data collection into generations, using our best
// PRM so far to pick which solutions to score in the next generation.
// In phase 1, this value should always be null.
"generation": 9,
// In each generation, we reserve some solutions for quality control. We serve
// these solutions to every labeler, and check that they agree with our
// gold labels.
"is_quality_control_question": false,
// generation -1 was reserved for a set of 30 questions we served every
// labeler in order to screen for base task performance.
"is_initial_screening_question": false,
// Metadata about the question this solution is a response to.
"question": {
// Text of the MATH problem being solved.
"problem": "What is the greatest common factor of $20 !$ and $200,\\!000$? (Reminder: If $n$ is a positive integer, then $n!$ stands for the product $1\\cdot 2\\cdot 3\\cdot \\cdots \\cdot (n-1)\\cdot n$.)",
// Ground truth solution from the MATH dataset.
"ground_truth_solution": "The prime factorization of $200,000$ is $2^6 \\cdot 5^5$. Then count the number of factors of $2$ and $5$ in $20!$. Since there are $10$ even numbers, there are more than $6$ factors of $2$. There are $4$ factors of $5$. So the greatest common factor is $2^6 \\cdot 5^4=\\boxed{40,\\!000}$.",
// Ground truth answer.
"ground_truth_answer": "40,\\!000",
// The full steps of the model-generated solution. This is only set for
// phase 2 where we pre-generated all solutions that we labeled.
"pre_generated_steps": [
"I want to find the largest positive integer that divides both $20 !$ and $200,\\!000$ evenly.",
"One way to do this is to factor both numbers into prime factors and look for the common ones.",
"I know that $200,\\!000 = 2^5\\cdot 10^4 = 2^9\\cdot 5^4$.",
"To find the prime factorization of $20 !$, I can use the fact that it is the product of all the positive integers from $1$ to $20$.",
"For each prime number $p$ between $1$ and $20$, I can count how many multiples of $p$ are in that range.",
"For example, there are $10$ multiples of $2$ between $1$ and $20$, namely $2, 4, 6, \\dots, 20$.",
"But there are also $5$ multiples of $4$, which is $2^2$, and $2$ multiples of $8$, which is $2^3$, and $1$ multiple of $16$, which is $2^4$.",
"So, the total power of $2$ in $20 !$ is $10 + 5 + 2 + 1 = 18$.",
"Similarly, there are $4$ multiples of $5$, namely $5, 10, 15, 20$, so the power of $5$ in $20 !$ is $4$.",
"There are $6$ multiples of $3$, namely $3, 6, 9, \\dots, 18$, but there are also $2$ multiples of $9$, which is $3^2$, so the power of $3$ in $20 !$ is $6 + 2 = 8$.",
"There are $2$ multiples of $7$, namely $7$ and $14$, so the power of $7$ in $20 !$ is $2$.",
"There are $1$ multiple of each of the other prime numbers $11, 13, 17$, and $19$, so the powers of those primes in $20 !$ are $1$ each.",
"Therefore, the prime factorization of $20 !$ is $2^{18}\\cdot 3^8\\cdot 5^4\\cdot 7^2\\cdot 11\\cdot 13\\cdot 17\\cdot 19$.",
"To find the greatest common factor of $20 !$ and $200,\\!000$, I need to take the lowest power of each common prime factor.",
"The only common prime factors are $2$ and $5$, and the lowest powers are $9$ and $4$, respectively.",
"So, the greatest common factor is $2^9\\cdot 5^4 = 512\\cdot 625 = 320,\\!000$.\n\n# Answer\n\n320,000"
],
// The answer given as the end of the pre-generated solution. We can see
// this solution is incorrect.
"pre_generated_answer": "320,000",
// The score given by our PRM to this solution. This one isn't rated very
// highly!
"pre_generated_verifier_score": 0.010779580529581414
},
// The human data we collected for this solution, containing correctness
// labels for each step of the solution.
"label": {
"steps": [
// Each object here represents labels for one step of the solution.
{
// Each step will contain one or more completions. These are candidate
// steps the model output at this step of the trajectory. In phase 1,
// we frequently collect labels on alternative steps, while in phase 2
// we only collect labels on alternative steps after the first mistake,
// so most completions lists are singletons.
"completions": [
{
// Text of the step.
"text": "I want to find the largest positive integer that divides both $20 !$ and $200,\\!000$ evenly.",
// The rating the labeler gave to this step. Can be -1, 0, or +1.
// This is a 0 because it isn't incorrect, but it does not make
// any progress.
"rating": 0,
// The labeler can flag steps that they don't know how to label.
// This is rarely used.
"flagged": null
}
],
// In phase 1, if all completions were rated -1, we allowed labelers to
// write their own +1 step. This is null for all steps in phase 2.
"human_completion": null,
// The index of the completion "chosen" at this step, or null if the
// human_completion was used. You can reconstruct the solution
// trajectory like:
// [
// step["human_completion"] if step["chosen_completion"] is None
// else step["completions"][step["chosen_completion"]]["text"]
// for step in labeled_solution["label"]["steps"]
// ]
"chosen_completion": 0
},
{
"completions": [
{
"text": "One way to do this is to factor both numbers into prime factors and look for the common ones.",
"rating": 0,
"flagged": null
}
],
"human_completion": null,
"chosen_completion": 0
},
{
// Some steps contain multiple alternative completions, and each one
// gets a rating.
"completions": [
{
"text": "I know that $200,\\!000 = 2^5\\cdot 10^4 = 2^9\\cdot 5^4$.",
"rating": -1,
"flagged": null
},
{
"text": "To factor $20 !$, I can use the fact that every factorial is a multiple of every number less than or equal to it.",
"rating": 0,
"flagged": false
},
{
"text": "I can use a factor tree to find the prime factors of $200,\\!000$: $200,\\!000 = 2^5\\cdot 10^4 = 2^5\\cdot 2^4\\cdot 5^4 = 2^9\\cdot 5^4$.",
"rating": -1,
"flagged": false
},
{
"text": "I can use a factor tree to find the prime factors of $200,\\!000$.",
"rating": 0,
"flagged": false
},
{
"text": "To factor $20 !$, I can use the fact that any factorial is divisible by all the primes less than or equal to the input.",
"rating": 0,
"flagged": false
}
],
"human_completion": null,
"chosen_completion": null
}
],
// Total time in milliseconds spent on labeling this solution.
"total_time": 278270,
// Final result of labeling this solution. Will be one of:
// - "found_error": In phase 2 we stop labeling a solution after the
// first error is found.
// - "solution": We reached a step that concluded in the correct answer
// to the problem.
// - "bad_problem": The labeler reported the problem as broken.
// - "give_up": The labeler was stuck (the problem was taking too long,
// or the instructions were unclear) and moved onto the
// next problem.
"finish_reason": "found_error"
}
}
```
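The trajectory-reconstruction rule documented in the `chosen_completion` comment above can be sketched as a small Python helper (an illustrative sketch against the documented schema, not part of the official release):

```python
def reconstruct_trajectory(labeled_solution):
    """Rebuild the solution trajectory from one labeled sample.

    Follows the rule in the schema comments: use the human-written
    completion when no model completion was chosen, otherwise take the
    text of the chosen completion. Steps where both are null (e.g. after
    the first error in phase 2) come back as None.
    """
    return [
        step["human_completion"] if step["chosen_completion"] is None
        else step["completions"][step["chosen_completion"]]["text"]
        for step in labeled_solution["label"]["steps"]
    ]

# Tiny illustrative sample (not real data):
sample = {
    "label": {
        "steps": [
            {"completions": [{"text": "Step one."}],
             "human_completion": None, "chosen_completion": 0},
            {"completions": [{"text": "A wrong step."}],
             "human_completion": "A corrected step.", "chosen_completion": None},
        ]
    }
}
print(reconstruct_trajectory(sample))  # ['Step one.', 'A corrected step.']
```

In the full data, each line of the newline-delimited files parses with `json.loads` into one such `labeled_solution` object.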
## Citation
Please use the below BibTeX entry to cite this dataset:
COMING SOON
|
alpayariyak/prm800k
|
[
"language:en",
"region:us"
] |
2023-06-01T13:12:22+00:00
|
{"language": ["en"]}
|
2023-06-01T13:51:25+00:00
|
f0323c66f23ef9ca4fabad38d05b21a1e4df9bae
|
ananyaayasi/ainu-processed-v2
|
[
"license:cc-by-sa-4.0",
"region:us"
] |
2023-06-01T13:31:00+00:00
|
{"license": "cc-by-sa-4.0"}
|
2023-06-05T20:03:09+00:00
|
|
26cd5079bb0d3cd1521c6894765a0b8edb159d7f
|
datasets-maintainers/audiofolder_no_configs_in_metadata
|
[
"region:us"
] |
2023-06-01T13:33:16+00:00
|
{"duplicated_from": "polinaeterna/audiofolder_no_configs_in_metadata"}
|
2023-06-01T13:33:16+00:00
|
|
1668dfc91efae975e44457cdabef60fb9200820a
|
datasets-maintainers/audiofolder_single_config_in_metadata
|
[
"region:us"
] |
2023-06-01T13:33:49+00:00
|
{"configs": [{"config_name": "custom", "data_dir": "v1", "drop_labels": true}], "duplicated_from": "polinaeterna/audiofolder_one_default_config_in_metadata"}
|
2023-06-02T18:10:17+00:00
|
|
e71bce498e6c2bd2c58b20b097fdd3389793263f
|
datasets-maintainers/audiofolder_two_configs_in_metadata
|
[
"region:us"
] |
2023-06-01T13:35:05+00:00
|
{"configs": [{"config_name": "v1", "data_dir": "v1", "drop_labels": true}, {"config_name": "v2", "data_dir": "v2", "drop_labels": false}], "duplicated_from": "polinaeterna/audiofolder_two_configs_in_metadata"}
|
2023-06-02T18:10:56+00:00
|
|
f4a98f28513eaacbd6f31035bcc0a8365c45d0a6
|
## Guidelines
Please, read the prompt carefully and...
## Fields
* `prompt` is of type <class 'str'>
* `response-1` is of type <class 'str'>
* `response-2` is of type <class 'str'>
## Questions
* `response_ranking` None
* `correct_response` None
## Load with Argilla
To load this dataset with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface('argilla/comparison-dataset-falcon')
```
## Load with Datasets
To load this dataset with Datasets, you'll just need to install Datasets as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset('argilla/comparison-dataset-falcon')
```
|
argilla/comparison-dataset-falcon
|
[
"region:us"
] |
2023-06-01T14:00:05+00:00
|
{}
|
2023-06-01T14:00:07+00:00
|
b49f0b72281dce7577ff9c923f6cbc1dbf7db257
|
kkk1272164178/eck
|
[
"license:openrail",
"region:us"
] |
2023-06-01T14:14:59+00:00
|
{"license": "openrail"}
|
2023-06-01T14:14:59+00:00
|
|
db8fc950ae87f9639f586335180ac2bdce96c991
|
ConnorDJ/ESSInfo
|
[
"license:apache-2.0",
"region:us"
] |
2023-06-01T14:22:13+00:00
|
{"license": "apache-2.0"}
|
2023-06-01T14:23:19+00:00
|
|
0e4c2a38770fdee5535af5642bc7bd0c6cbc7866
|
kenkenZ0157/1570
|
[
"license:unknown",
"region:us"
] |
2023-06-01T14:37:58+00:00
|
{"license": "unknown"}
|
2023-06-01T14:43:51+00:00
|
|
6068e47a399eaf342cfb65862fcda7c030ce7a87
|
# Dataset Card for "prm800k_rlhf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tasksource/prm800k_rlhf
|
[
"region:us"
] |
2023-06-01T14:46:35+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "null"}], "splits": [{"name": "train"}, {"name": "test"}], "download_size": 0, "dataset_size": 0}}
|
2023-06-06T08:48:00+00:00
|
51305f0a841d5240ac369ef237af9256deca9ace
|
# Dataset Card for "test_yaml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
polinaeterna/test_yaml
|
[
"region:us"
] |
2023-06-01T14:49:59+00:00
|
{"builder_configs": {"data_files": [{"split": "test", "pattern": "data/test-*"}, {"split": "train", "pattern": "data/train-*"}]}, "dataset_info": [{"config_name": "default", "features": [{"name": "audio", "dtype": "audio"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "test", "1": "train"}}}}], "splits": [{"name": "train", "num_bytes": 1648848.0, "num_examples": 5}, {"name": "test", "num_bytes": 1284881.0, "num_examples": 4}], "download_size": 0, "dataset_size": 2933729.0}, {"config_name": "second", "features": [{"name": "audio", "dtype": "audio"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "test", "1": "train"}}}}], "splits": [{"name": "train", "num_bytes": 1648848.0, "num_examples": 5}, {"name": "test", "num_bytes": 1284881.0, "num_examples": 4}], "download_size": 1967307, "dataset_size": 2933729.0}, {"config_name": "third", "features": [{"name": "audio", "dtype": "audio"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "test", "1": "train"}}}}], "splits": [{"name": "train", "num_bytes": 1648848.0, "num_examples": 5}, {"name": "test", "num_bytes": 1284881.0, "num_examples": 4}], "download_size": 1967307, "dataset_size": 2933729.0}], "configs": [{"config_name": "default", "data_files": [{"split": "train", "pattern": "data/train-*"}, {"split": "test", "pattern": "data/test-*"}]}, {"config_name": "second", "data_files": [{"split": "train", "pattern": "second/train-*"}, {"split": "test", "pattern": "second/test-*"}]}, {"config_name": "third", "data_files": [{"split": "train", "pattern": "third/train-*"}, {"split": "test", "pattern": "third/test-*"}]}]}
|
2023-06-02T18:34:54+00:00
|
cc3782650eae4b4decb7020fa6ceb7a70609bb93
|
# AutoTrain Dataset for project: image-attribute-prediction
## Dataset Description
This dataset has been automatically processed by AutoTrain for project image-attribute-prediction.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<261x300 RGB PIL image>",
"target": 0
},
{
"image": "<300x300 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Paisley_Floral_A-Line_Dress', 'Paisley_Maxi_Cami_Dress'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 59 |
| valid | 16 |
|
snjv90/autotrain-data-image-attribute-prediction
|
[
"task_categories:image-classification",
"region:us"
] |
2023-06-01T15:28:04+00:00
|
{"task_categories": ["image-classification"]}
|
2023-06-02T07:24:10+00:00
|
8d2d6d34ce02fea1731bc8d148d4730c05e7ebfe
|
snjv90/fashion-image-dataset
|
[
"license:apache-2.0",
"region:us"
] |
2023-06-01T15:30:01+00:00
|
{"license": "apache-2.0"}
|
2023-06-02T07:22:00+00:00
|
|
8053131568f4dec462b1cffe1ebafca7b21110ff
|
# Dataset Card for "filtered_yelp_restaurant_reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vincha77/filtered_yelp_restaurant_reviews
|
[
"region:us"
] |
2023-06-01T15:33:54+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1512130573, "num_examples": 1428375}, {"name": "test", "num_bytes": 378270444, "num_examples": 356537}], "download_size": 1193315327, "dataset_size": 1890401017}}
|
2023-06-01T15:36:16+00:00
|
958553bb1cbdcb3c6943b79dfe3dc0732271fc78
|
Htorne/amazing_repo
|
[
"task_categories:text-classification",
"size_categories:n<1K",
"language:da",
"license:openrail",
"region:us"
] |
2023-06-01T15:40:03+00:00
|
{"language": ["da"], "license": "openrail", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "pretty_name": "Amazingname"}
|
2023-06-01T17:16:10+00:00
|
|
52da1bef2556bb69c6248de65eec61757835efe7
|
This repository contains datasets for [PhaseNet-DAS](https://arxiv.org/abs/2302.08747). The DAS event format is explained here: [Seismic Event Format for DAS](https://ai4eps.github.io/homepage/ml4earth/seismic_event_format_das/)
- **Ridgecrest**
The "ridgecrest_north" dataset is extracted from [**The SCEDC Earthquake Data AWS Public Dataset**](https://scedc.caltech.edu/data/cloud.html).
This dataset is collected by Prof. Zhongwen Zhan ([email protected]). Please inform the authors if you utilize this dataset in your research.
- **Eureka**
The "eureka" dataset is extracted from [**Spring 2022 Arcata to Eureka California, Distributed Acoustic Sensing (DAS) experimentt**](https://www.sciencebase.gov/catalog/item/62fd776fd34e3a444286cd54).
This dataset is collected by Jeffrey J McGuire ([email protected]) and Andrew J Barbour ([email protected]). Please inform the authors if you utilize this dataset in your research.
Note the "eureka" dataset uses a different hdf5 format of OptaSense.
|
AI4EPS/quakeflow_das
|
[
"license:mit",
"arxiv:2302.08747",
"doi:10.57967/hf/0962",
"region:us"
] |
2023-06-01T16:19:31+00:00
|
{"license": "mit"}
|
2023-11-10T21:23:17+00:00
|
856e37a94127e778e6ebdde6c13746e7f438c0f4
|
hazemessam/uniprot_sprot
|
[
"license:apache-2.0",
"region:us"
] |
2023-06-01T17:35:42+00:00
|
{"license": "apache-2.0"}
|
2023-06-01T19:49:21+00:00
|
|
db09fe1cecb6e20f27e8bbd689938473d988769c
|
# Translated dataset
This dataset consists of English-to-Kyrgyz translations of the English part of the XLSum dataset.
|
mashrabburanov/en_to_ky_translated_xlsum
|
[
"region:us"
] |
2023-06-01T18:10:31+00:00
|
{}
|
2023-06-01T18:27:32+00:00
|
b2fc75a240243804775768a571405d7d5bb407c9
|
# Dataset Card for "wmt16_sentence_lang_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
madaanpulkit/wmt16_sentence_lang_en
|
[
"region:us"
] |
2023-06-01T18:15:07+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 713843218.0, "num_examples": 4548885}], "download_size": 451412645, "dataset_size": 713843218.0}}
|
2023-06-01T18:16:42+00:00
|
e510a8460d3325f34eedde8e639a4fcf0b3340a1
|
# Dataset Card for "Hatefulmemes_test_google_flan_t5_xl_mode_C_T_A_OCR_rices_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/Hatefulmemes_test_google_flan_t5_xl_mode_C_T_A_OCR_rices_ns_1000
|
[
"region:us"
] |
2023-06-01T18:59:09+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "sequence": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full__text", "num_bytes": 1150725, "num_examples": 1000}], "download_size": 200208, "dataset_size": 1150725}}
|
2023-06-01T18:59:13+00:00
|
14014f1176efd938fee69a031fc3ff4bc8f5f14c
|
# Dataset Summary
CIRAL is a collection for cross-lingual information retrieval research across four (4) African languages. The collection comprises English queries and query-passage relevance judgements manually annotated by native speakers.
This dataset stores passages which have been culled from news websites for CIRAL.
## Dataset Structure
This dataset is configured by language. An example of a passage data entry is:
```json
{
'docid': 'DOCID#0#0',
'title': 'This is the title of a sample passage',
'text': 'This is the content of a sample passage',
'url': 'https://this-is-a-sample-url.com'
}
```
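Since each passage is a JSON record, a minimal sketch of parsing one entry with the standard library (the values here are placeholders mirroring the sample above, not real corpus data):

```python
import json

# Parse one passage record; field names follow the sample entry above.
record = json.loads(
    '{"docid": "DOCID#0#0", '
    '"title": "This is the title of a sample passage", '
    '"text": "This is the content of a sample passage", '
    '"url": "https://this-is-a-sample-url.com"}'
)
print(record["docid"], record["title"])
```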
## Load Dataset
An example of loading the dataset:
```python
from datasets import load_dataset

language = "hausa"
dataset = load_dataset("ciral/ciral-corpus", language)
```
## Translated Dataset
We also include a translated version of the dataset in English for all the languages. Translation was done using [NLLB 1.3B](https://huggingface.co/facebook/nllb-200-1.3B).
```python
language = "hausa"
dataset = load_dataset("ciral/ciral-corpus", language, translated=True)
```
## Citation
...
|
CIRAL/ciral-corpus
|
[
"language:ha",
"language:so",
"language:sw",
"language:yo",
"license:apache-2.0",
"region:us"
] |
2023-06-01T19:05:01+00:00
|
{"language": ["ha", "so", "sw", "yo"], "license": "apache-2.0", "mutilinguality": ["multilingual"], "task-categories": ["text-retrieval"], "viewer": true}
|
2024-02-08T15:18:02+00:00
|
be2865eaf6de775a767dff87c3daf18b478a19e4
|
# Dataset Card for "SST2_test_google_flan_t5_xxl_mode_C_rices_ns_1821"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/SST2_test_google_flan_t5_xxl_mode_C_rices_ns_1821
|
[
"region:us"
] |
2023-06-01T19:24:55+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "sequence": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0____text", "num_bytes": 1196023, "num_examples": 1821}], "download_size": 138123, "dataset_size": 1196023}}
|
2023-06-01T19:54:46+00:00
|
ffe0317c38e1c98ce155d0039fdbbf1cb99f10c0
|
# Dataset Card for "trainset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AwesomePeoplz257/trainset
|
[
"region:us"
] |
2023-06-01T19:53:39+00:00
|
{"dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2881414016, "num_examples": 3000}], "download_size": 453992987, "dataset_size": 2881414016}}
|
2023-06-01T19:59:00+00:00
|
611f8376fa9a9b25aaeec01925c54768632391da
|
orderofmagnitude/alpaca_dataset.json
|
[
"license:apache-2.0",
"region:us"
] |
2023-06-01T19:55:39+00:00
|
{"license": "apache-2.0"}
|
2023-06-02T14:21:41+00:00
|
|
489ed334e3f57e26bf5c7d90f8f108d4a9025fdc
|
# Dataset Card for "testset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AwesomePeoplz257/testset
|
[
"region:us"
] |
2023-06-01T19:59:00+00:00
|
{"dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 720353056, "num_examples": 750}], "download_size": 114268139, "dataset_size": 720353056}}
|
2023-06-01T20:00:15+00:00
|
1bb9d65727006064a4f91262237d9336705e1d34
|
# Dataset Card for "SST2_test_google_flan_t5_xxl_mode_C_SST_rices_ns_1821"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/SST2_test_google_flan_t5_xxl_mode_C_SST_rices_ns_1821
|
[
"region:us"
] |
2023-06-01T20:20:00+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "sequence": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0____", "num_bytes": 1155961, "num_examples": 1821}], "download_size": 0, "dataset_size": 1155961}}
|
2023-06-04T23:57:34+00:00
|
54345e433c64a79a4fb739d056ca3d8460267055
|
luist18/pt-parliament-interventions
|
[
"size_categories:n<1K",
"language:pt",
"license:mit",
"legal",
"parliament",
"region:us"
] |
2023-06-01T22:00:27+00:00
|
{"language": ["pt"], "license": "mit", "size_categories": ["n<1K"], "pretty_name": "Portuguese Parliament Interventions", "tags": ["legal", "parliament"]}
|
2023-06-01T22:01:04+00:00
|
|
28d3113d90f51f3f124c881356445bf4770dfd0d
|
luist18/ptparl
|
[
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pt",
"license:mit",
"region:us"
] |
2023-06-01T22:38:01+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["pt"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-label-classification"], "pretty_name": "PTPARL", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "group", "dtype": "string"}, {"name": "wing", "dtype": {"class_label": {"names": {"0": "LEFT", "1": "LEAN_LEFT", "2": "CENTER", "3": "LEAN_RIGHT", "4": "RIGHT"}}}}]}}
|
2023-06-01T22:54:37+00:00
|
|
956a1b5392ae70f31fb59b0ff192f3df15b1486d
|
# Dataset Card for "ag_news"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
xwjzds/ag_news
|
[
"region:us"
] |
2023-06-01T22:43:12+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "World", "1": "Sports", "2": "Business", "3": "Sci/Tech"}}}}], "splits": [{"name": "train", "num_bytes": 29817303, "num_examples": 120000}, {"name": "test", "num_bytes": 1879474, "num_examples": 7600}], "download_size": 19820267, "dataset_size": 31696777}}
|
2023-06-02T19:57:30+00:00
|
9f033fb4092d9aa8dfd703ecccff5567c36d5aa9
|
# Dataset Card for "ag_newskeywords"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
xwjzds/ag_newskeywords
|
[
"region:us"
] |
2023-06-01T22:44:11+00:00
|
{"dataset_info": {"features": [{"name": "keyword", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 31466, "num_examples": 1760}], "download_size": 31546, "dataset_size": 31466}}
|
2023-06-03T22:12:16+00:00
|
01959cf8fc182ae7bbd9c659ae815dcd2d5a86a0
|
# Dataset Card for "bbc-newskeywords"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
xwjzds/bbc-newskeywords
|
[
"region:us"
] |
2023-06-01T22:44:39+00:00
|
{"dataset_info": {"features": [{"name": "keyword", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 19513, "num_examples": 1070}], "download_size": 17032, "dataset_size": 19513}}
|
2023-06-01T22:44:41+00:00
|
f4863fdc4f3b3e44b9d7e7ff2934681fa5dc2b3b
|
# Dataset Card for "20_newsgroupskeywords"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
xwjzds/20_newsgroupskeywords
|
[
"region:us"
] |
2023-06-01T22:45:46+00:00
|
{"dataset_info": {"features": [{"name": "keyword", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 8499, "num_examples": 505}], "download_size": 9137, "dataset_size": 8499}}
|
2023-06-02T16:33:42+00:00
|
314ad2c7524eb869c53e0067b9ab0f98951fa68a
|
***<p style="font-size: 20px">Please check out our Blog Post - [How we built a better GenAI with programmatic data development](https://snorkel.ai/how-we-built-a-better-genai-with-programmatic-data-development) for more details!</p>***
## Summary
`snorkel-curated-instruction-tuning` is a curated dataset that consists of high-quality instruction-response pairs.
These pairs were programmatically filtered with weak supervision from open-source datasets [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k),
[Open Assistant](https://huggingface.co/datasets/OpenAssistant/oasst1),
and [Helpful Instructions](https://huggingface.co/datasets/HuggingFaceH4/helpful_instructions).
To enhance the dataset, we also programmatically classified each instruction based on the InstructGPT paper.
For a more comprehensive understanding of our methodology, please visit our [blog](https://snorkel.ai/how-we-built-a-better-genai-with-programmatic-data-development).
## Dataset Overview & Methodology
Instruction tuning is an important step in developing effective [large language models (LLMs)](https://snorkel.ai/large-language-models-llms/) for generative AI tasks.
While proprietary datasets have been used by LLM-backed chatbots, the open-source community has created similar datasets accessible to everyone.
However, the quality of responses collected by volunteers has been inconsistent, affecting the quality of open-source models. Furthermore, there is currently no standard classification of instructions across datasets (many lack classification altogether), which can complicate measurements of instruction diversity when compiling from multiple sources.
Snorkel, with its expertise in converting noisy signals into high-quality supervision, addressed this issue by programmatically scoring, sampling, and filtering open-source datasets.
The curated dataset and methodology are now available for public use.
Please refer to our [blog](https://snorkel.ai/how-we-built-a-better-genai-with-programmatic-data-development) for more details on methods and evaluation.
## File descriptions
- `snorkel_curated_11k.jsonl`: 11k high-quality instruction-response pairs selected from the open-source datasets mentioned above. These pairs are then used to instruction-tune the [snorkelai/RedPajama-7B-Chat-Curated](https://huggingface.co/snorkelai/RedPajama-7B-Chat-Curated/) model.
- `snorkel_hold_out_set.jsonl`: A hold-out set for evaluation, comparing human preferences between models.
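As a sketch, the jsonl files can be streamed line by line with the standard library. The field names below (`instruction`, `response`) are assumptions for illustration only — check the actual keys in the files before relying on them.

```python
import json

def read_pairs(path):
    """Yield one instruction-response record per jsonl line."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# In-memory example in place of the real file; the keys are hypothetical.
sample_line = '{"instruction": "Summarize the text.", "response": "A short summary."}'
pair = json.loads(sample_line)
print(pair["instruction"])
```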
## Intended Uses
- Instruction-tuning LLMs
For more detailed information, please refer to our blog post available at [How we built a better GenAI with programmatic data development](https://snorkel.ai/how-we-built-a-better-genai-with-programmatic-data-development).
## License/Attribution
**Copyright (2023) Snorkel AI, Inc.** This dataset was developed at [Snorkel AI](https://snorkel.ai/) and its use is subject to the Apache 2.0 license.
This work was done in collaboration with Together Computer in releasing the [snorkelai/RedPajama-7B-Chat-Curated](https://huggingface.co/snorkelai/RedPajama-7B-Chat-Curated/) model.
Please refer to the licenses of the data subsets you use.
- [Open Assistant](https://huggingface.co/datasets/OpenAssistant/oasst1) is under Apache 2.0 license.
- [Helpful Instructions](https://huggingface.co/datasets/HuggingFaceH4/helpful_instructions) is under Apache 2.0 license.
- [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) is under CC BY-SA 3.0 license.
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/ Copyright © Wikipedia editors and contributors.
Databricks (https://www.databricks.com) Copyright © Databricks
## Language
English
## Version
Version: 1.0
To cite this dataset, please use:
```
@software{snorkel2023instructiontuning,
author = {Snorkel AI},
title = {Applying programmatic data development to Generative AI with Snorkel},
month = jun,
year = 2023,
url = {https://huggingface.co/datasets/snorkelai/snorkel-curated-instruction-tuning}
}
```
**Owner: Snorkel AI, Inc.**
## Community
Join us on [Snorkel AI Slack](https://snorkel.ai/slack)
|
snorkelai/snorkel-curated-instruction-tuning
|
[
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-06-01T22:52:16+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "text-generation"]}
|
2023-07-24T17:48:48+00:00
|
9225c443fbfbf0792a2e580f134681c9da82be03
|
# Dataset Card for "ah_openai_qt_dialog"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Deojoandco/ah_openai_qt_dialog
|
[
"region:us"
] |
2023-06-01T23:19:39+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "num_comments", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "score", "dtype": "int64"}, {"name": "upvote_ratio", "dtype": "float64"}, {"name": "distinguished", "dtype": "null"}, {"name": "over_18", "dtype": "bool"}, {"name": "created_utc", "dtype": "float64"}, {"name": "comments", "list": [{"name": "body", "dtype": "string"}, {"name": "created_utc", "dtype": "float64"}, {"name": "distinguished", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "permalink", "dtype": "string"}, {"name": "score", "dtype": "int64"}]}, {"name": "best_num_comments", "dtype": "int64"}, {"name": "dialog", "dtype": "string"}, {"name": "query_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 183642, "num_examples": 26}], "download_size": 159847, "dataset_size": 183642}}
|
2023-06-01T23:19:59+00:00
|
9d4fe96213f03fb04e7ed70ac737b07c52bb9979
|
# LAION-Audio-630K Freesound Dataset
[LAION-Audio-630K](https://github.com/LAION-AI/audio-dataset/blob/main/laion-audio-630k/README.md) is the largest publicly available audio-text dataset and an order of magnitude larger than previous audio-text datasets (as of 2022-11-05). Notably, it combines eight distinct datasets, which include the Freesound dataset.
Specifically, this Hugging Face repository contains two versions of the Freesound dataset. Details of each dataset (e.g. how captions are made) can be found in the "Data Card" column of the table below.
- **Freesound (full)**: The complete Freesound dataset, available in the `/freesound` folder.
- **Freesound (no overlap)**: Derived from Freesound (full) by removing samples that overlap with ESC50, FSD50K, UrbanSound8K, and Clotho; available in the `/freesound_no_overlap` folder.
For the structure and format of the `freesound` and `freesound_no_overlap` folders, please refer to [this page](https://github.com/LAION-AI/audio-dataset/blob/main/data_preprocess/README.md).
| Name |Duration |Number of Samples |Data Type | Metadata | Data Card |
|--------------------------------------------------|-------------------------|--------------------|--------- |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------- |
| Freesound (no overlap) |2817.31hrs | 460801 |1-2 captions per audio, audio | [website](https://freesound.org/) <br> [csv]()|[data card](/data_card/freesound.md)|
| Freesound (full) |3033.38hrs | 515581 |1-2 captions per audio, audio | [website](https://freesound.org/) <br> [csv]() |[data card](/data_card/freesound.md)|
## Metadata csv file
For each of the two datasets, we provide a metadata csv file including the following columns:
- **audio_filename**: The filename of the audio file in the `.tar` files, e.g. `2394.flac`.
- **caption_i**: The i-th caption of the audio file.
- **freesound_id**: The Freesound id of the audio file.
- **username**: The Freesound username of the uploader of the audio file.
- **freesound_url**: The URL of the audio file on freesound.org.
- **license**: The license of the audio file, e.g. `http://creativecommons.org/licenses/by/3.0/`.
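A minimal sketch of reading such a metadata csv with the standard library. The header row here is illustrative (`caption_i` expands to `caption_1`, `caption_2`, ...) and the row values are made up, so verify the column names against the actual file.

```python
import csv
import io

# Illustrative csv content following the column list above; not real metadata.
sample_csv = (
    "audio_filename,caption_1,freesound_id,username,freesound_url,license\n"
    "2394.flac,a dog barking,2394,uploader42,"
    "https://freesound.org/s/2394/,http://creativecommons.org/licenses/by/3.0/\n"
)

rows = list(csv.DictReader(io.StringIO(sample_csv)))
for row in rows:
    print(row["audio_filename"], row["license"])
```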
## Credits & Licence
- **!!!TERM OF USE!!!**: **By downloading files in this repository, you agree that you will use them <u> for research purposes only </u>. If you want to use Freesound clips in LAION-Audio-630K for commercial purposes, please contact Frederic Font Corbera at [email protected].**
### Freesound Credit:
All audio clips from Freesound are released under Creative Commons (CC) licenses, while each clip has its own license as defined by the clip uploader in Freesound; some require attribution to their original authors and some forbid further commercial reuse. Specifically, here are the statistics on the licenses of the audio clips involved in LAION-Audio-630K:
| License | Number of Samples |
| :--- | :--- |
| http://creativecommons.org/publicdomain/zero/1.0/ | 260134 |
| https://creativecommons.org/licenses/by/4.0/ | 97090 |
| http://creativecommons.org/licenses/by/3.0/ | 89337 |
| http://creativecommons.org/licenses/by-nc/3.0/ | 31680 |
| https://creativecommons.org/licenses/by-nc/4.0/ | 26736 |
| http://creativecommons.org/licenses/sampling+/1.0/ | 11116 |
## Acknowledgement
The whole collection process, as well as all usage of LAION-Audio-630K, is conducted by the German non-profit pure research organization [LAION](https://laion.ai/). All contributors and collectors of the dataset are considered open-source contributors affiliated with LAION. These community contributors (Discord ids) include, but are not limited to: @marianna13#7139, @Chr0my#0173, @PiEquals4#1909, @Yuchen Hui#8574, @Antoniooooo#4758, @IYWO#9072, krishna#1648, @dicknascarsixtynine#3885, and @turian#1607. We thank all of them for their efforts on the LAION-Audio-630K dataset.
|
Meranti/CLAP_freesound
|
[
"task_categories:audio-classification",
"size_categories:1M<n<10M",
"language:en",
"audio",
"text",
"contrastive learning",
"region:us"
] |
2023-06-01T23:42:03+00:00
|
{"language": ["en"], "size_categories": ["1M<n<10M"], "task_categories": ["audio-classification"], "pretty_name": "freesound", "tags": ["audio", "text", "contrastive learning"]}
|
2023-07-09T16:09:18+00:00
|
596e500c9ab420802f0b360f79c3c9c417a74eda
|
TGSuri/eld
|
[
"license:afl-3.0",
"region:us"
] |
2023-06-02T00:40:25+00:00
|
{"license": "afl-3.0"}
|
2023-06-02T00:40:25+00:00
|
|
94cd17ebfd6702bf62281d3c89f22ff649815b46
|
garythung/trashnet
|
[
"license:mit",
"region:us"
] |
2023-06-02T00:46:39+00:00
|
{"license": "mit"}
|
2023-06-02T02:23:04+00:00
|
|
fafbecbddcbead51611b16a44950b3af34de4623
|
# Bedtime News (睡前消息) dataset
|
namiyao/bedtimenews
|
[
"size_categories:1K<n<10K",
"language:zh",
"region:us"
] |
2023-06-02T01:04:56+00:00
|
{"language": ["zh"], "size_categories": ["1K<n<10K"]}
|
2023-06-02T02:02:04+00:00
|
47f72a378191bf6948036e705d3ba53aebe43b11
|
# Dataset Card for "ah_openai_dialog"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Deojoandco/ah_openai_dialog
|
[
"region:us"
] |
2023-06-02T01:05:20+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "num_comments", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "score", "dtype": "int64"}, {"name": "upvote_ratio", "dtype": "float64"}, {"name": "distinguished", "dtype": "null"}, {"name": "over_18", "dtype": "bool"}, {"name": "created_utc", "dtype": "float64"}, {"name": "comments", "list": [{"name": "body", "dtype": "string"}, {"name": "created_utc", "dtype": "float64"}, {"name": "distinguished", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "permalink", "dtype": "string"}, {"name": "score", "dtype": "int64"}]}, {"name": "best_num_comments", "dtype": "int64"}, {"name": "Error", "dtype": "bool"}, {"name": "query", "dtype": "string"}, {"name": "dialog", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 207637, "num_examples": 26}], "download_size": 152853, "dataset_size": 207637}}
|
2023-06-02T01:05:29+00:00
|
9a05bae363e74b312bbf7bee0f59f7512d1b9a8c
|
# Dataset Card for "ah_openai_st_dialog"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Deojoandco/ah_openai_st_dialog
|
[
"region:us"
] |
2023-06-02T02:17:39+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "num_comments", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "score", "dtype": "int64"}, {"name": "upvote_ratio", "dtype": "float64"}, {"name": "distinguished", "dtype": "null"}, {"name": "over_18", "dtype": "bool"}, {"name": "created_utc", "dtype": "float64"}, {"name": "comments", "list": [{"name": "body", "dtype": "string"}, {"name": "created_utc", "dtype": "float64"}, {"name": "distinguished", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "permalink", "dtype": "string"}, {"name": "score", "dtype": "int64"}]}, {"name": "best_num_comments", "dtype": "int64"}, {"name": "Error", "dtype": "bool"}, {"name": "query", "dtype": "string"}, {"name": "dialog", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 207491, "num_examples": 26}], "download_size": 152824, "dataset_size": 207491}}
|
2023-06-02T02:17:46+00:00
|
58fd6a314b6cc532d8f5a1ff88309023208edd73
|
# Dataset Card for "ah_openai_dialog_annotation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Deojoandco/ah_openai_dialog_annotation
|
[
"region:us"
] |
2023-06-02T02:32:03+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "num_comments", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "score", "dtype": "int64"}, {"name": "upvote_ratio", "dtype": "float64"}, {"name": "distinguished", "dtype": "null"}, {"name": "over_18", "dtype": "bool"}, {"name": "created_utc", "dtype": "float64"}, {"name": "comments", "list": [{"name": "body", "dtype": "string"}, {"name": "created_utc", "dtype": "float64"}, {"name": "distinguished", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "permalink", "dtype": "string"}, {"name": "score", "dtype": "int64"}]}, {"name": "best_num_comments", "dtype": "int64"}, {"name": "Error", "dtype": "bool"}, {"name": "query", "dtype": "string"}, {"name": "dialog", "dtype": "string"}, {"name": "annotation_error", "dtype": "bool"}, {"name": "annotation", "struct": [{"name": "success", "dtype": "bool"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 239418, "num_examples": 26}], "download_size": 180200, "dataset_size": 239418}}
|
2023-06-02T02:32:11+00:00
|
0a584c5249646b8246d8bb62fa20223e1020c935
|
# Dataset Card for "cvasnlp_sample_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Tristan/cvasnlp_sample_test
|
[
"region:us"
] |
2023-06-02T02:36:34+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "abyssinian", "1": "american bulldog", "2": "american pit bull terrier", "3": "basset hound", "4": "beagle", "5": "bengal", "6": "birman", "7": "bombay", "8": "boxer", "9": "british shorthair", "10": "chihuahua", "11": "egyptian mau", "12": "english cocker spaniel", "13": "english setter", "14": "german shorthaired", "15": "great pyrenees", "16": "havanese", "17": "japanese chin", "18": "keeshond", "19": "leonberger", "20": "maine coon", "21": "miniature pinscher", "22": "newfoundland", "23": "persian", "24": "pomeranian", "25": "pug", "26": "ragdoll", "27": "russian blue", "28": "saint bernard", "29": "samoyed", "30": "scottish terrier", "31": "shiba inu", "32": "siamese", "33": "sphynx", "34": "staffordshire bull terrier", "35": "wheaten terrier", "36": "yorkshire terrier"}}}}, {"name": "id", "dtype": "int64"}, {"name": "Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "clip_tags_ViT_L_14_with_openai", "sequence": "string"}, {"name": "blip_caption_False_beams_5_Salesforce_blip_image_captioning_large_max_length_30_hf", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 227704.0, "num_examples": 10}], "download_size": 179409, "dataset_size": 227704.0}}
|
2023-06-20T02:05:52+00:00
|
99cb0958d5fb92ceecf7c47fbab020a8931a8842
|
# Dataset Card for "example"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CHENHJDJSD/example
|
[
"region:us"
] |
2023-06-02T02:46:41+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}], "splits": [{"name": "train", "num_bytes": 75848559.0, "num_examples": 185}], "download_size": 75853693, "dataset_size": 75848559.0}}
|
2023-06-02T02:47:12+00:00
|
ca7731b06a584752177bfb3b75c3863e299dff00
|
# Dataset Card for "example_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CHENHJDJSD/example_1
|
[
"region:us"
] |
2023-06-02T02:50:15+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}], "splits": [{"name": "train", "num_bytes": 75848559.0, "num_examples": 185}, {"name": "test", "num_bytes": 75848559.0, "num_examples": 185}], "download_size": 151707386, "dataset_size": 151697118.0}}
|
2023-06-02T02:50:49+00:00
|
5d597b3d1775d5f6fe24a8457478b4b49bc6b090
|
# Dataset Card for "example_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CHENHJDJSD/example_2
|
[
"region:us"
] |
2023-06-02T02:51:39+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}], "splits": [{"name": "train", "num_bytes": 67486109.0, "num_examples": 165}, {"name": "test", "num_bytes": 75848559.0, "num_examples": 185}], "download_size": 143344539, "dataset_size": 143334668.0}}
|
2023-06-02T02:52:36+00:00
|
90914c7b228cc92565e152e6f20761d3a15a978d
|
# Dataset Card for "infiltration_train_test_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CHENHJDJSD/infiltration_train_test_large
|
[
"region:us"
] |
2023-06-02T03:02:22+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}], "splits": [{"name": "train", "num_bytes": 12841818918.08, "num_examples": 32080}, {"name": "test", "num_bytes": 9014937482.08, "num_examples": 22417}], "download_size": 21857676249, "dataset_size": 21856756400.16}}
|
2023-06-02T05:39:43+00:00
|
56a9520c945bdbf2dbbcf8e574d8e4135f50467b
|
# Dataset Card for "CalliFontXL"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
arbml/CalliFontXL
|
[
"region:us"
] |
2023-06-02T03:31:30+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5638821095.223, "num_examples": 799657}, {"name": "validation", "num_bytes": 703970746.352, "num_examples": 99988}, {"name": "test", "num_bytes": 703868439.82, "num_examples": 99966}], "download_size": 7014821573, "dataset_size": 7046660281.3949995}}
|
2023-06-02T03:51:24+00:00
|
834fe6a49d86c6595d769726d0e63d7b4a146afc
|
# Dataset Card for "0d2e4cf8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/0d2e4cf8
|
[
"region:us"
] |
2023-06-02T03:42:51+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 180, "num_examples": 10}], "download_size": 1339, "dataset_size": 180}}
|
2023-06-02T03:42:52+00:00
|
9302ce95ee7ef8b3310c6cf78f3bfedc9bf8c370
|
# Dataset Card for "Hatefulmemes_validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Hatefulmemes_validation
|
[
"region:us"
] |
2023-06-02T04:05:03+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "id", "dtype": "int64"}, {"name": "clip_tags_ViT_L_14_simple_specific", "dtype": "string"}, {"name": "clip_tags_LAION_ViT_H_14_2B_with_openai", "sequence": "string"}, {"name": "clip_tags_LAION_ViT_H_14_2B_with_openai_wordnet", "sequence": "string"}, {"name": "clip_tags_ViT_L_14_with_openai", "sequence": "string"}, {"name": "clip_tags_ViT_L_14_with_openai_wordnet", "sequence": "string"}, {"name": "Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "Attributes_ViT_L_14_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "blip_caption_False_beams_5_base_capfilt_large_max_length_30_source_a_pitcure_of_", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "clip_tags_LAION_ViT_H_14_2B_simple_specific", "dtype": "string"}, {"name": "clip_tags_LAION_ViT_H_14_2B_laion.pt", "sequence": "string"}, {"name": "blip_caption_False_beams_5_Salesforce_blip_image_captioning_large_max_length_30_hf", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 189687368.0, "num_examples": 500}], "download_size": 189404660, "dataset_size": 189687368.0}}
|
2023-06-17T01:45:07+00:00
|
4b8b170f71e5d923c9085d10ed117652209be6e5
|
cosimoiaia/Loquace-102k
|
[
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:it",
"license:cc-by-nc-2.0",
"llama",
"instruction-tuning",
"region:us"
] |
2023-06-02T04:17:46+00:00
|
{"language": ["it"], "license": "cc-by-nc-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["conversational"], "tags": ["llama", "instruction-tuning"]}
|
2023-06-11T16:42:54+00:00
|
|
c7c82e0a1861074cebe9d381d730306201fda676
|
RD9PTU/test
|
[
"license:unknown",
"region:us"
] |
2023-06-02T04:19:47+00:00
|
{"license": "unknown"}
|
2023-06-02T04:20:27+00:00
|
|
4b1c37947d3a50f4cc2e989260f812af071d0d31
|
Dhika/defect_rail
|
[
"license:unknown",
"region:us"
] |
2023-06-02T04:29:04+00:00
|
{"license": "unknown"}
|
2023-06-02T04:39:14+00:00
|
|
1c7bfb964baba72865f9f0c43d0a77cf5aff5948
|
testing01
|
Jerry061/testing01
|
[
"task_categories:summarization",
"size_categories:n<1K",
"language:en",
"region:us"
] |
2023-06-02T04:39:33+00:00
|
{"language": ["en"], "size_categories": ["n<1K"], "task_categories": ["summarization"]}
|
2023-06-02T05:25:23+00:00
|