sha (string, len 40) | text (string, len 0-13.4M) | id (string, len 2-117) | tags (list) | created_at (string, len 25) | metadata (string, len 2-31.7M) | last_modified (string, len 25)
---|---|---|---|---|---|---
8e8a05ab1ad3005e3a2f0242377d15b0aa4fada0 |
# Slyvanie Style Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt: ```"art by slyvanie_style"```
If it is too strong, just add [] around it.
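For reference, the web UI's prompt attention syntax (assuming the Automatic1111 UI mentioned above) offers finer control than plain brackets; the `1.2` weight below is just an example value:

```
art by slyvanie_style
[art by slyvanie_style]
(art by slyvanie_style:1.2)
```

The first line is default strength, the bracketed form weakens the style, and the weighted form strengthens it.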
This embedding was trained to 14500 steps.
If you'd like to support the amazing artist whose artwork contributed to this embedding's training, I'd highly recommend you check out slyvanie [here](https://www.deviantart.com/slyvanie), [here](https://www.artstation.com/slyvanie) and [here](https://slyvanie.weebly.com/).
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/0PaBO0M.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/XpdAIdo.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/3TuxD9L.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/jsYluEQ.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/H9XScnZ.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them, and you are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | grullborg/slyvanie_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-27T02:13:44+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-27T02:42:32+00:00 |
78114523e12985450506aab2fddc6d4d26889057 | xixixi/test_db_sd | [
"license:openrail",
"region:us"
] | 2022-10-27T05:00:24+00:00 | {"license": "openrail"} | 2022-10-27T05:06:11+00:00 |
|
d52a3cb0779c7f33f85566d48737fa380d206769 |
This dataset contains 5-second clips of birdcalls for audio generation tests.
There are 20 species represented, with ~500 recordings each. Recordings are from xeno-canto.
These clips were taken from longer samples by identifying calls within the recordings using the approach shown here: https://www.kaggle.com/code/johnowhitaker/peak-identification
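The clipping step can be sketched in plain Python (a simplified illustration, not the linked notebook's exact code; the function name and window size here are assumptions): compute a short-window energy envelope, find the loudest window, and cut a 5-second clip around it.

```python
def extract_clip(samples, sr=32000, clip_s=5, win=1024):
    """Return a clip_s-second clip centred on the highest-energy window."""
    # Energy envelope: sum of squares over non-overlapping windows.
    energies = [
        sum(s * s for s in samples[i:i + win])
        for i in range(0, max(len(samples) - win, 1), win)
    ]
    # Index of the loudest window, then its centre sample.
    peak = max(range(len(energies)), key=energies.__getitem__)
    centre = peak * win + win // 2
    half = clip_s * sr // 2
    # Clamp the start so the clip stays inside the recording.
    start = min(max(centre - half, 0), max(len(samples) - 2 * half, 0))
    return samples[start:start + 2 * half]

# Toy signal: silence with a short burst in the middle.
sig = [0.0] * 320000
for i in range(160000, 161024):
    sig[i] = 1.0
clip = extract_clip(sig)  # 5 s at 32 kHz = 160000 samples, burst included
```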
The audio is sampled at 32 kHz (mono) | tglcourse/5s_birdcall_samples_top20 | [
"license:unknown",
"region:us"
] | 2022-10-27T06:26:02+00:00 | {"license": ["unknown"], "pretty_name": "5s Birdcall Samples"} | 2022-10-27T06:34:37+00:00 |
1904eb1374e46b71e86ae1940dbe01678df6c3c6 |
# Dataset Card for GLUE
## Table of Contents
- [Dataset Card for GLUE](#dataset-card-for-glue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [ax](#ax)
- [cola](#cola)
- [mnli](#mnli)
- [mnli_matched](#mnli_matched)
- [mnli_mismatched](#mnli_mismatched)
- [mrpc](#mrpc)
- [qnli](#qnli)
- [qqp](#qqp)
- [rte](#rte)
- [sst2](#sst2)
- [stsb](#stsb)
- [wnli](#wnli)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [ax](#ax-1)
- [cola](#cola-1)
- [mnli](#mnli-1)
- [mnli_matched](#mnli_matched-1)
- [mnli_mismatched](#mnli_mismatched-1)
- [mrpc](#mrpc-1)
- [qnli](#qnli-1)
- [qqp](#qqp-1)
- [rte](#rte-1)
- [sst2](#sst2-1)
- [stsb](#stsb-1)
- [wnli](#wnli-1)
- [Data Fields](#data-fields)
- [ax](#ax-2)
- [cola](#cola-2)
- [mnli](#mnli-2)
- [mnli_matched](#mnli_matched-2)
- [mnli_mismatched](#mnli_mismatched-2)
- [mrpc](#mrpc-2)
- [qnli](#qnli-2)
- [qqp](#qqp-2)
- [rte](#rte-2)
- [sst2](#sst2-2)
- [stsb](#stsb-2)
- [wnli](#wnli-2)
- [Data Splits](#data-splits)
- [ax](#ax-3)
- [cola](#cola-3)
- [mnli](#mnli-3)
- [mnli_matched](#mnli_matched-3)
- [mnli_mismatched](#mnli_mismatched-3)
- [mrpc](#mrpc-3)
- [qnli](#qnli-3)
- [qqp](#qqp-3)
- [rte](#rte-3)
- [sst2](#sst2-3)
- [stsb](#stsb-3)
- [wnli](#wnli-3)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 955.33 MB
- **Size of the generated dataset:** 229.68 MB
- **Total amount of disk used:** 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/), is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
#### qqp
The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
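The pronoun-substitution step described above can be sketched as follows (an illustrative reconstruction, not the benchmark authors' code; the example sentence and candidate phrases are made up):

```python
def to_sentence_pairs(sentence, pronoun_phrase, candidates):
    """Form one (premise, hypothesis) pair per candidate referent by
    replacing the first occurrence of the ambiguous pronoun phrase."""
    return [
        (sentence, sentence.replace(pronoun_phrase, candidate, 1))
        for candidate in candidates
    ]

pairs = to_sentence_pairs(
    "The trophy doesn't fit in the suitcase because it is too big.",
    "it is too big",
    ["the trophy is too big", "the suitcase is too big"],
)
# Each hypothesis is then labelled entailed / not entailed against the premise.
```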
### Languages
The language data in GLUE is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
#### ax
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.44 MB
An example of 'test' looks as follows.
```
{
"premise": "The cat sat on the mat.",
"hypothesis": "The cat did not sit on the mat.",
"label": -1,
"idx: 0
}
```
#### cola
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB
An example of 'train' looks as follows.
```
{
"sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
"label": 1,
"id": 0
}
```
#### mnli
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 78.65 MB
- **Total amount of disk used:** 376.95 MB
An example of 'train' looks as follows.
```
{
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"hypothesis": "Product and geography are what make cream skimming work.",
"label": 1,
"idx": 0
}
```
#### mnli_matched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.52 MB
- **Total amount of disk used:** 301.82 MB
An example of 'test' looks as follows.
```
{
"premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
"hypothesis": "Hierbas is a name worth looking out for.",
"label": -1,
"idx": 0
}
```
#### mnli_mismatched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.73 MB
- **Total amount of disk used:** 302.02 MB
An example of 'test' looks as follows.
```
{
"premise": "What have you decided, what are you going to do?",
"hypothesis": "So what's your decision?,
"label": -1,
"idx": 0
}
```
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: an `int32` feature.
#### mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
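For the NLI configs, the integer labels map to the names listed above, with `-1` marking unlabelled test examples (as seen in the `test` instances earlier). A small decoding helper might look like this (the helper itself is illustrative, not part of the dataset API):

```python
NLI_LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}

def decode_label(idx):
    """Map a GLUE NLI integer label to its name; -1 marks an unlabelled test example."""
    return "unlabelled" if idx == -1 else NLI_LABELS[idx]
```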
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Splits
#### ax
| |test|
|---|---:|
|ax |1104|
#### cola
| |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551| 1043|1063|
#### mnli
| |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702| 9815| 9832| 9796| 9847|
#### mnli_matched
| |validation|test|
|------------|---------:|---:|
|mnli_matched| 9815|9796|
#### mnli_mismatched
| |validation|test|
|---------------|---------:|---:|
|mnli_mismatched| 9832|9847|
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{warstadt2018neural,
title={Neural Network Acceptability Judgments},
author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
journal={arXiv preprint arXiv:1805.12471},
year={2018}
}
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
Note that each GLUE dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
```
### Contributions
Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. | quincyqiang/test | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:sentiment-classification",
"task_ids:text-scoring",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"qa-nli",
"coreference-nli",
"paraphrase-identification",
"doi:10.57967/hf/0065",
"region:us"
] | 2022-10-27T07:07:57+00:00 | {"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["acceptability-classification", "natural-language-inference", "semantic-similarity-scoring", "sentiment-classification", "text-scoring"], "paperswithcode_id": "glue", "pretty_name": "GLUE (General Language Understanding Evaluation benchmark)", "configs": ["ax", "cola", "mnli", "mnli_matched", "mnli_mismatched", "mrpc", "qnli", "qqp", "rte", "sst2", "stsb", "wnli"], "tags": ["qa-nli", "coreference-nli", "paraphrase-identification"], "train-eval-index": [{"config": "cola", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence": "text", "label": "target"}}, {"config": "sst2", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence": "text", "label": "target"}}, {"config": "mrpc", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "qqp", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question1": "text1", "question2": "text2", "label": "target"}}, {"config": "stsb", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "mnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": 
{"train_split": "train", "eval_split": "validation_matched"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "mnli_mismatched", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "mnli_matched", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "qnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question": "text1", "sentence": "text2", "label": "target"}}, {"config": "rte", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "wnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}]} | 2022-10-27T07:17:23+00:00 |
d2dda6275beb2a5b8bd27d17ea0cb2548f3782fe | Madge/test1 | [
"license:openrail",
"region:us"
] | 2022-10-27T07:19:30+00:00 | {"license": "openrail"} | 2022-10-27T07:21:56+00:00 |
|
c975e4aa6efd560a1df5b0462ed88d60a55ec30b | quincyqiang/test2 | [
"license:apache-2.0",
"region:us"
] | 2022-10-27T07:19:47+00:00 | {"license": "apache-2.0"} | 2022-10-27T07:19:47+00:00 |
|
2aeec831e49b923d71b4f98ee2629ef659766959 | merve/tabular_benchmark | [
"license:apache-2.0",
"region:us"
] | 2022-10-27T09:26:45+00:00 | {"license": "apache-2.0"} | 2022-10-27T09:26:45+00:00 |
|
0dbbdb7bc4eda0c61bcbc73049e8aa39ef30913b |
# Dataset Card for V4Design Europeana style dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
> 1614 paintings belonging to the categories Baroque, Rococo, and Other. The images were obtained using the Europeana Search API, selecting open objects from the art thematic collection. 24k images were obtained, from which the current dataset was derived. The labels were added by the V4Design team, using a custom annotation tool. As described in the project documentation, other categories were used besides Baroque and Rococo. But for the sake of training a machine learning model we have retained only the categories with a significant number of annotations [source](https://zenodo.org/record/4896487)
This version of the dataset is generated using the [CSV file](https://zenodo.org/record/4896487) hosted on Zenodo. This CSV file contains the labels with URLs for the relevant images. Some of these URLs no longer resolve to an image. For consistency with the original dataset, and in case these URLs become valid again, these rows of the data are preserved here. If you want only successfully loaded images in your dataset, you can filter out the missing images as follows.
```python
# Drop rows whose image could not be downloaded (URL no longer resolves)
ds = ds.filter(lambda x: x['image'] is not None)
```
### Supported Tasks and Leaderboards
This dataset is primarily intended for `image-classification`.
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@dataset{europeana_2021_4896487,
author = {Europeana and
V4Design},
title = {V4Design/Europeana style dataset},
month = jun,
year = 2021,
publisher = {Zenodo},
doi = {10.5281/zenodo.4896487},
url = {https://doi.org/10.5281/zenodo.4896487}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
| biglam/v4design_europeana_style_dataset | [
"task_categories:image-classification",
"annotations_creators:expert-generated",
"license:other",
"region:us"
] | 2022-10-27T09:55:55+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": [], "language": [], "license": ["other"], "multilinguality": [], "size_categories": [], "source_datasets": [], "task_categories": ["image-classification"], "task_ids": [], "pretty_name": "V4Design Europeana style dataset", "tags": [], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "uri", "dtype": "string"}, {"name": "style", "dtype": {"class_label": {"names": {"0": "Rococo", "1": "Baroque", "2": "Other"}}}}, {"name": "rights", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 536168550.923, "num_examples": 1613}], "download_size": 535393230, "dataset_size": 536168550.923}} | 2022-10-27T10:14:30+00:00 |
c4046158a56bfb31a1d03ab48d2b9b340bc2925f | ---
dataset_info:
- config_name: default
drop_labels: true
--- | polinaeterna/audios | [
"region:us"
] | 2022-10-27T10:28:42+00:00 | {} | 2022-11-03T12:47:07+00:00 |
61a3dba5b8d098de0ff73ac84525336ac03c84ca | ashaDanilova/dataset | [
"region:us"
] | 2022-10-27T10:48:06+00:00 | {} | 2022-10-28T16:06:27+00:00 |
|
5b62ab4c6ef313d063a3c4da33cb14bb2fe94dc9 |
# Dataset Card for Early Printed Books Font Detection Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://doi.org/10.5281/zenodo.3366686
- **Paper:** https://doi.org/10.1145/3352631.3352640
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
> This dataset is composed of photos of various resolution of 35'623 pages of printed books dating from the 15th to the 18th century. Each page has been attributed by experts from one to five labels corresponding to the font groups used in the text, with two extra-classes for non-textual content and fonts not present in the following list: Antiqua, Bastarda, Fraktur, Gotico Antiqua, Greek, Hebrew, Italic, Rotunda, Schwabacher, and Textura.
[More Information Needed]
### Supported Tasks and Leaderboards
The primary use cases for this dataset are:
- `multi-label-image-classification`: This dataset can be used to train a model for multi-label image classification, where each image can have one or more labels.
- `image-classification`: This dataset could also be adapted to predict only a single label for each image.
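Since each page carries a *list* of class indices (see the `labels: [5]` instance below), a common first step for multi-label training is to encode that list as a multi-hot vector. A minimal sketch, assuming the 12-class mapping from this card's metadata (the helper name is ours, not part of the dataset):

```python
# Sketch: encode this dataset's per-page label lists (e.g. `labels: [5]`)
# as fixed-length 0/1 vectors for multi-label classification.
NUM_CLASSES = 12  # greek, antiqua, other_font, not_a_font, italic, rotunda,
                  # textura, fraktur, schwabacher, hebrew, bastarda, gotico_antiqua

def multi_hot(labels, num_classes=NUM_CLASSES):
    """Turn a list of class indices into a multi-hot vector."""
    vec = [0] * num_classes
    for idx in labels:
        vec[idx] = 1
    return vec

# A page set in Rotunda only (class index 5):
print(multi_hot([5]))
```

Such vectors can be fed directly to a sigmoid-output classifier with a binary cross-entropy loss per class.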
### Languages
The dataset includes books from a range of libraries (see below for further details). The paper doesn't provide a detailed overview of language breakdown. However, the books are from the 15th-18th century and appear to be dominated by European languages from that time period. The dataset also includes Hebrew.
[More Information Needed]
## Dataset Structure
This dataset has a single configuration.
### Data Instances
An example instance from this dataset:
```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=3072x3840 at 0x7F6AC192D850>,
'labels': [5]}
```
### Data Fields
This dataset contains two fields:
- `image`: the image of the book page
- `labels`: one or more labels for the font used in the book page depicted in the `image`
### Data Splits
The dataset is broken into a train and test split with the following breakdown of number of examples:
- train: 24,866
- test: 10,757
## Dataset Creation
### Curation Rationale
The dataset was created to help train and evaluate automatic methods for font detection. The paper describing the dataset also states that:
>data was cherry-picked, thus it is not statistically representative of what can be found in libraries. For example, as we had a small amount of Textura at the start, we specifically looked for more pages containing this font group, so we can expect that less than 3.6 % of randomly selected pages from libraries would contain Textura.
### Source Data
#### Initial Data Collection and Normalization
The images in this dataset are from books held by the British Library (London), Bayerische Staatsbibliothek München, Staatsbibliothek zu Berlin, Universitätsbibliothek Erlangen, Universitätsbibliothek Heidelberg, Staats- und Universitätsbibliothek Göttingen, Stadt- und Universitätsbibliothek Köln, Württembergische Landesbibliothek Stuttgart and Herzog August Bibliothek Wolfenbüttel.
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| biglam/early_printed_books_font_detection | [
"task_categories:image-classification",
"task_ids:multi-label-image-classification",
"annotations_creators:expert-generated",
"size_categories:10K<n<100K",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-10-27T11:12:02+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": [], "language": [], "license": ["cc-by-nc-sa-4.0"], "multilinguality": [], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["image-classification"], "task_ids": ["multi-label-image-classification"], "pretty_name": "Early Printed Books Font Detection Dataset", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "greek", "1": "antiqua", "2": "other_font", "3": "not_a_font", "4": "italic", "5": "rotunda", "6": "textura", "7": "fraktur", "8": "schwabacher", "9": "hebrew", "10": "bastarda", "11": "gotico_antiqua"}}}}], "splits": [{"name": "test", "num_bytes": 2345451, "num_examples": 10757}, {"name": "train", "num_bytes": 5430875, "num_examples": 24866}], "download_size": 44212934313, "dataset_size": 7776326}, "tags": []} | 2022-10-28T14:39:50+00:00 |
8d0ff9103525b7e3579b180230fddb3186258301 |
# Tabular Benchmark
## Dataset Description
This dataset is a curation of various datasets from [OpenML](https://www.openml.org/), assembled to benchmark the performance of machine learning algorithms on tabular data.
- **Repository:** https://github.com/LeoGrin/tabular-benchmark/community
- **Paper:** https://hal.archives-ouvertes.fr/hal-03723551v2/document
### Dataset Summary
Benchmark made of curation of various tabular data learning tasks, including:
- Regression from Numerical and Categorical Features
- Regression from Numerical Features
- Classification from Numerical and Categorical Features
- Classification from Numerical Features
### Supported Tasks and Leaderboards
- `tabular-regression`
- `tabular-classification`
## Dataset Structure
### Data Splits
This dataset consists of four splits (folders) based on tasks and datasets included in tasks.
- reg_num: Task identifier for regression on numerical features.
- reg_cat: Task identifier for regression on numerical and categorical features.
- clf_num: Task identifier for classification on numerical features.
- clf_cat: Task identifier for classification on numerical and categorical features.
Depending on the dataset you want to load, you can load the dataset by passing `task_name/dataset_name` to `data_files` argument of `load_dataset` like below:
```python
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files="reg_cat/house_sales.csv")
```
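The `data_files` argument follows a strict `task_name/dataset_name.csv` convention, so it can help to validate the task prefix before loading. A small sketch (the helper name is ours):

```python
# Sketch: build and validate a `data_files` path following the
# `task_name/dataset_name.csv` convention used by this benchmark.
VALID_TASKS = {"reg_num", "reg_cat", "clf_num", "clf_cat"}

def data_files_path(task, dataset):
    if task not in VALID_TASKS:
        raise ValueError(f"unknown task {task!r}; expected one of {sorted(VALID_TASKS)}")
    return f"{task}/{dataset}.csv"

print(data_files_path("reg_cat", "house_sales"))  # reg_cat/house_sales.csv
```

The returned string can be passed directly as `data_files` to `load_dataset` as shown above.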
## Dataset Creation
### Curation Rationale
This dataset is curated to benchmark the performance of tree-based models against neural networks. The criteria used to select datasets are described in the paper as follows:
- **Heterogeneous columns**. Columns should correspond to features of different nature. This excludes
images or signal datasets where each column corresponds to the same signal on different sensors.
- **Not high dimensional**. We only keep datasets with a d/n ratio below 1/10.
- **Undocumented datasets**. We remove datasets where too little information is available. We did keep
datasets with hidden column names if it was clear that the features were heterogeneous.
- **I.I.D. data**. We remove stream-like datasets or time series.
- **Real-world data**. We remove artificial datasets but keep some simulated datasets. The difference is
subtle, but we try to keep simulated datasets if learning these datasets is of practical importance
(like the Higgs dataset), and not just a toy example to test specific model capabilities.
- **Not too small**. We remove datasets with too few features (< 4) and too few samples (< 3 000). For
benchmarks on numerical features only, we remove categorical features before checking if enough
features and samples are remaining.
- **Not too easy**. We remove datasets which are too easy. Specifically, we remove a dataset if a simple model (max of a single tree and a regression, logistic or OLS)
reaches a score whose relative difference with the score of both a default Resnet (from Gorishniy et al. [2021]) and a default HistGradientBoosting model (from scikit learn)
is below 5%. Other benchmarks use different metrics to remove too easy datasets, like removing datasets perfectly separated by a single decision classifier [Bischl et al., 2021],
but this ignores varying Bayes rate across datasets. As tree ensembles are superior to simple trees and logistic regression [Fernández-Delgado et al., 2014],
a close score for the simple and powerful models suggests that we are already close to the best achievable score.
- **Not deterministic**. We remove datasets where the target is a deterministic function of the data. This
mostly means removing datasets on games like poker and chess. Indeed, we believe that these
datasets are very different from most real-world tabular datasets, and should be studied separately.
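The size-related criteria above ("not high dimensional", "not too small") reduce to simple numeric thresholds on the sample count n and feature count d. A minimal sketch of such a filter, using the thresholds stated above (the function name is ours, and this is a simplification of the paper's full pipeline — e.g. the d/n check may be applied after feature removal):

```python
# Sketch: apply the "not too small" and "not high dimensional" curation
# filters to a dataset described by its sample and feature counts.
def passes_size_filters(n_samples, n_features):
    if n_features < 4 or n_samples < 3_000:   # "not too small"
        return False
    if n_features / n_samples > 1 / 10:       # "d/n ratio below 1/10"
        return False
    return True

print(passes_size_filters(20_640, 8))   # houses-sized dataset: True
print(passes_size_filters(2_000, 50))   # too few samples: False
```

The remaining criteria (heterogeneity, i.i.d. structure, difficulty, determinism) require inspecting the data itself and cannot be reduced to thresholds like this.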
### Source Data
**Numerical Classification**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|electricity|38474.0|7.0|https://www.openml.org/d/151|https://www.openml.org/d/44120|
|covertype|566602.0|10.0|https://www.openml.org/d/293|https://www.openml.org/d/44121|
|pol|10082.0|26.0|https://www.openml.org/d/722|https://www.openml.org/d/44122|
|house_16H|13488.0|16.0|https://www.openml.org/d/821|https://www.openml.org/d/44123|
|MagicTelescope|13376.0|10.0|https://www.openml.org/d/1120|https://www.openml.org/d/44125|
|bank-marketing|10578.0|7.0|https://www.openml.org/d/1461|https://www.openml.org/d/44126|
|Bioresponse|3434.0|419.0|https://www.openml.org/d/4134|https://www.openml.org/d/45019|
|MiniBooNE|72998.0|50.0|https://www.openml.org/d/41150|https://www.openml.org/d/44128|
|default-of-credit-card-clients|13272.0|20.0|https://www.openml.org/d/42477|https://www.openml.org/d/45020|
|Higgs|940160.0|24.0|https://www.openml.org/d/42769|https://www.openml.org/d/44129|
|eye_movements|7608.0|20.0|https://www.openml.org/d/1044|https://www.openml.org/d/44130|
|Diabetes130US|71090.0|7.0|https://www.openml.org/d/4541|https://www.openml.org/d/45022|
|jannis|57580.0|54.0|https://www.openml.org/d/41168|https://www.openml.org/d/45021|
|heloc|10000.0|22.0|"https://www.kaggle.com/datasets/averkiyoliabev/home-equity-line-of-creditheloc?select=heloc_dataset_v1+%281%29.csv"|https://www.openml.org/d/45026|
|credit|16714.0|10.0|"https://www.kaggle.com/c/GiveMeSomeCredit/data?select=cs-training.csv"|https://www.openml.org/d/44089|
|california|20634.0|8.0|"https://www.dcc.fc.up.pt/ltorgo/Regression/cal_housing.html"|https://www.openml.org/d/45028|
**Categorical Classification**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|electricity|38474.0|8.0|https://www.openml.org/d/151|https://www.openml.org/d/44156|
|eye_movements|7608.0|23.0|https://www.openml.org/d/1044|https://www.openml.org/d/44157|
|covertype|423680.0|54.0|https://www.openml.org/d/1596|https://www.openml.org/d/44159|
|albert|58252.0|31.0|https://www.openml.org/d/41147|https://www.openml.org/d/45035|
|compas-two-years|4966.0|11.0|https://www.openml.org/d/42192|https://www.openml.org/d/45039|
|default-of-credit-card-clients|13272.0|21.0|https://www.openml.org/d/42477|https://www.openml.org/d/45036|
|road-safety|111762.0|32.0|https://www.openml.org/d/42803|https://www.openml.org/d/45038|
**Numerical Regression**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|cpu_act|8192.0|21.0|https://www.openml.org/d/197|https://www.openml.org/d/44132|
|pol|15000.0|26.0|https://www.openml.org/d/201|https://www.openml.org/d/44133|
|elevators|16599.0|16.0|https://www.openml.org/d/216|https://www.openml.org/d/44134|
|wine_quality|6497.0|11.0|https://www.openml.org/d/287|https://www.openml.org/d/44136|
|Ailerons|13750.0|33.0|https://www.openml.org/d/296|https://www.openml.org/d/44137|
|yprop_4_1|8885.0|42.0|https://www.openml.org/d/416|https://www.openml.org/d/45032|
|houses|20640.0|8.0|https://www.openml.org/d/537|https://www.openml.org/d/44138|
|house_16H|22784.0|16.0|https://www.openml.org/d/574|https://www.openml.org/d/44139|
|delays_zurich_transport|5465575.0|9.0|https://www.openml.org/d/40753|https://www.openml.org/d/45034|
|diamonds|53940.0|6.0|https://www.openml.org/d/42225|https://www.openml.org/d/44140|
|Brazilian_houses|10692.0|8.0|https://www.openml.org/d/42688|https://www.openml.org/d/44141|
|Bike_Sharing_Demand|17379.0|6.0|https://www.openml.org/d/42712|https://www.openml.org/d/44142|
|nyc-taxi-green-dec-2016|581835.0|9.0|https://www.openml.org/d/42729|https://www.openml.org/d/44143|
|house_sales|21613.0|15.0|https://www.openml.org/d/42731|https://www.openml.org/d/44144|
|sulfur|10081.0|6.0|https://www.openml.org/d/23515|https://www.openml.org/d/44145|
|medical_charges|163065.0|5.0|https://www.openml.org/d/42720|https://www.openml.org/d/44146|
|MiamiHousing2016|13932.0|14.0|https://www.openml.org/d/43093|https://www.openml.org/d/44147|
|superconduct|21263.0|79.0|https://www.openml.org/d/43174|https://www.openml.org/d/44148|
**Categorical Regression**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|topo_2_1|8885.0|255.0|https://www.openml.org/d/422|https://www.openml.org/d/45041|
|analcatdata_supreme|4052.0|7.0|https://www.openml.org/d/504|https://www.openml.org/d/44055|
|visualizing_soil|8641.0|4.0|https://www.openml.org/d/688|https://www.openml.org/d/44056|
|delays_zurich_transport|5465575.0|12.0|https://www.openml.org/d/40753|https://www.openml.org/d/45045|
|diamonds|53940.0|9.0|https://www.openml.org/d/42225|https://www.openml.org/d/44059|
|Allstate_Claims_Severity|188318.0|124.0|https://www.openml.org/d/42571|https://www.openml.org/d/45046|
|Mercedes_Benz_Greener_Manufacturing|4209.0|359.0|https://www.openml.org/d/42570|https://www.openml.org/d/44061|
|Brazilian_houses|10692.0|11.0|https://www.openml.org/d/42688|https://www.openml.org/d/44062|
|Bike_Sharing_Demand|17379.0|11.0|https://www.openml.org/d/42712|https://www.openml.org/d/44063|
|Airlines_DepDelay_1M|1000000.0|5.0|https://www.openml.org/d/42721|https://www.openml.org/d/45047|
|nyc-taxi-green-dec-2016|581835.0|16.0|https://www.openml.org/d/42729|https://www.openml.org/d/44065|
|abalone|4177.0|8.0|https://www.openml.org/d/42726|https://www.openml.org/d/45042|
|house_sales|21613.0|17.0|https://www.openml.org/d/42731|https://www.openml.org/d/44066|
|seattlecrime6|52031.0|4.0|https://www.openml.org/d/42496|https://www.openml.org/d/45043|
|medical_charges|163065.0|5.0|https://www.openml.org/d/42720|https://www.openml.org/d/45048|
|particulate-matter-ukair-2017|394299.0|6.0|https://www.openml.org/d/42207|https://www.openml.org/d/44068|
|SGEMM_GPU_kernel_performance|241600.0|9.0|https://www.openml.org/d/43144|https://www.openml.org/d/44069|
### Dataset Curators
Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux.
### Licensing Information
[More Information Needed]
### Citation Information
Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux. Why do tree-based models still outperform deep
learning on typical tabular data?. NeurIPS 2022 Datasets and Benchmarks Track, Nov 2022, New
Orleans, United States. hal-03723551v2
| inria-soda/tabular-benchmark | [
"task_categories:tabular-classification",
"task_categories:tabular-regression",
"region:us"
] | 2022-10-27T11:34:58+00:00 | {"annotations_creators": [], "license": [], "task_categories": ["tabular-classification", "tabular-regression"], "pretty_name": "tabular_benchmark", "tags": [], "configs": [{"config_name": "clf_cat_albert", "data_files": "clf_cat/albert.csv"}, {"config_name": "clf_cat_compas-two-years", "data_files": "clf_cat/compas-two-years.csv"}, {"config_name": "clf_cat_covertype", "data_files": "clf_cat/covertype.csv"}, {"config_name": "clf_cat_default-of-credit-card-clients", "data_files": "clf_cat/default-of-credit-card-clients.csv"}, {"config_name": "clf_cat_electricity", "data_files": "clf_cat/electricity.csv"}, {"config_name": "clf_cat_eye_movements", "data_files": "clf_cat/eye_movements.csv"}, {"config_name": "clf_cat_road-safety", "data_files": "clf_cat/road-safety.csv"}, {"config_name": "clf_num_Bioresponse", "data_files": "clf_num/Bioresponse.csv"}, {"config_name": "clf_num_Diabetes130US", "data_files": "clf_num/Diabetes130US.csv"}, {"config_name": "clf_num_Higgs", "data_files": "clf_num/Higgs.csv"}, {"config_name": "clf_num_MagicTelescope", "data_files": "clf_num/MagicTelescope.csv"}, {"config_name": "clf_num_MiniBooNE", "data_files": "clf_num/MiniBooNE.csv"}, {"config_name": "clf_num_bank-marketing", "data_files": "clf_num/bank-marketing.csv"}, {"config_name": "clf_num_california", "data_files": "clf_num/california.csv"}, {"config_name": "clf_num_covertype", "data_files": "clf_num/covertype.csv"}, {"config_name": "clf_num_credit", "data_files": "clf_num/credit.csv"}, {"config_name": "clf_num_default-of-credit-card-clients", "data_files": "clf_num/default-of-credit-card-clients.csv"}, {"config_name": "clf_num_electricity", "data_files": "clf_num/electricity.csv"}, {"config_name": "clf_num_eye_movements", "data_files": "clf_num/eye_movements.csv"}, {"config_name": "clf_num_heloc", "data_files": "clf_num/heloc.csv"}, {"config_name": "clf_num_house_16H", "data_files": "clf_num/house_16H.csv"}, {"config_name": "clf_num_jannis", 
"data_files": "clf_num/jannis.csv"}, {"config_name": "clf_num_pol", "data_files": "clf_num/pol.csv"}, {"config_name": "reg_cat_Airlines_DepDelay_1M", "data_files": "reg_cat/Airlines_DepDelay_1M.csv"}, {"config_name": "reg_cat_Allstate_Claims_Severity", "data_files": "reg_cat/Allstate_Claims_Severity.csv"}, {"config_name": "reg_cat_Bike_Sharing_Demand", "data_files": "reg_cat/Bike_Sharing_Demand.csv"}, {"config_name": "reg_cat_Brazilian_houses", "data_files": "reg_cat/Brazilian_houses.csv"}, {"config_name": "reg_cat_Mercedes_Benz_Greener_Manufacturing", "data_files": "reg_cat/Mercedes_Benz_Greener_Manufacturing.csv"}, {"config_name": "reg_cat_SGEMM_GPU_kernel_performance", "data_files": "reg_cat/SGEMM_GPU_kernel_performance.csv"}, {"config_name": "reg_cat_abalone", "data_files": "reg_cat/abalone.csv"}, {"config_name": "reg_cat_analcatdata_supreme", "data_files": "reg_cat/analcatdata_supreme.csv"}, {"config_name": "reg_cat_delays_zurich_transport", "data_files": "reg_cat/delays_zurich_transport.csv"}, {"config_name": "reg_cat_diamonds", "data_files": "reg_cat/diamonds.csv"}, {"config_name": "reg_cat_house_sales", "data_files": "reg_cat/house_sales.csv"}, {"config_name": "reg_cat_medical_charges", "data_files": "reg_cat/medical_charges.csv"}, {"config_name": "reg_cat_nyc-taxi-green-dec-2016", "data_files": "reg_cat/nyc-taxi-green-dec-2016.csv"}, {"config_name": "reg_cat_particulate-matter-ukair-2017", "data_files": "reg_cat/particulate-matter-ukair-2017.csv"}, {"config_name": "reg_cat_seattlecrime6", "data_files": "reg_cat/seattlecrime6.csv"}, {"config_name": "reg_cat_topo_2_1", "data_files": "reg_cat/topo_2_1.csv"}, {"config_name": "reg_cat_visualizing_soil", "data_files": "reg_cat/visualizing_soil.csv"}, {"config_name": "reg_num_Ailerons", "data_files": "reg_num/Ailerons.csv"}, {"config_name": "reg_num_Bike_Sharing_Demand", "data_files": "reg_num/Bike_Sharing_Demand.csv"}, {"config_name": "reg_num_Brazilian_houses", "data_files": "reg_num/Brazilian_houses.csv"}, 
{"config_name": "reg_num_MiamiHousing2016", "data_files": "reg_num/MiamiHousing2016.csv"}, {"config_name": "reg_num_abalone", "data_files": "reg_num/abalone.csv"}, {"config_name": "reg_num_cpu_act", "data_files": "reg_num/cpu_act.csv"}, {"config_name": "reg_num_delays_zurich_transport", "data_files": "reg_num/delays_zurich_transport.csv"}, {"config_name": "reg_num_diamonds", "data_files": "reg_num/diamonds.csv"}, {"config_name": "reg_num_elevators", "data_files": "reg_num/elevators.csv"}, {"config_name": "reg_num_house_16H", "data_files": "reg_num/house_16H.csv"}, {"config_name": "reg_num_house_sales", "data_files": "reg_num/house_sales.csv"}, {"config_name": "reg_num_houses", "data_files": "reg_num/houses.csv"}, {"config_name": "reg_num_medical_charges", "data_files": "reg_num/medical_charges.csv"}, {"config_name": "reg_num_nyc-taxi-green-dec-2016", "data_files": "reg_num/nyc-taxi-green-dec-2016.csv"}, {"config_name": "reg_num_pol", "data_files": "reg_num/pol.csv"}, {"config_name": "reg_num_sulfur", "data_files": "reg_num/sulfur.csv"}, {"config_name": "reg_num_superconduct", "data_files": "reg_num/superconduct.csv"}, {"config_name": "reg_num_wine_quality", "data_files": "reg_num/wine_quality.csv"}, {"config_name": "reg_num_yprop_4_1", "data_files": "reg_num/yprop_4_1.csv"}]} | 2023-09-04T15:37:39+00:00 |
187435967cbdfa88395fd379e9f403c8b6ac46f3 | # AutoTrain Dataset for project: lojban-translation
## Dataset Description
This dataset has been automatically processed by AutoTrain for project lojban-translation.
### Languages
The BCP-47 code for the dataset's language is en2jb.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"source": "I read the poem for my child.",
"target": "mi tcidu lo pemci te cu'u le panzi be mi"
},
{
"source": "Jim is learning how to drive a car.",
"target": "la jim cilre fi lo nu klasazri lo karce"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"source": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 8000 |
| valid | 2000 | | woctordho/autotrain-data-lojban-translation | [
"task_categories:translation",
"language:en",
"language:jbo",
"license:mit",
"region:us"
] | 2022-10-27T12:05:43+00:00 | {"language": ["en", "jbo"], "license": "mit", "task_categories": ["translation"]} | 2023-11-17T11:18:19+00:00 |
7954876b4f617796157e6441b69128f228eabecc | ARTemAI/hands | [
"license:openrail",
"region:us"
] | 2022-10-27T12:45:00+00:00 | {"license": "openrail"} | 2022-10-27T12:45:00+00:00 |
|
a95e3d32256c9b0b1048b517554c9cf29adf3f2a | # AutoTrain Dataset for project: company-description-generator
## Dataset Description
This dataset has been automatically processed by AutoTrain for project company-description-generator.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_Id": "0014U00002aSdZIQA0",
"text": "High Heat Rejection Window Film. Blocks 99% UV Rays and Rejects Up To 99% of Infrared Heat Radiation. 360\u00b0 Protection From Bacteria, Germs, & Viruses On Surfaces For Up To 90 days. EPA Registered. Safe For Food Industries, Hospitals, Schools, and More. CONCRETE & SURFACE COATINGS. Protective Coatings That Repel Water, Oils, Dirt, and More. Keeps Surfaces Protected and Easier To Clean. Keep Metal Surfaces Intact With A Strong Nanocoating Protectant The Mitigates The Growth Of Corrosion In Extreme Environments. At Snapguard Solutions we specialize in industrial nanocoatings to meet your needs. Through nanotechnology we are able to prolong and enhance the life of everyday commercial and residential items. Whether its blocking out the sun or repelling water, we have the right solution for you. Untreated surfaces absorb water and other liquids. This damages and deteriorates the integrity over time. Solution Applied on Surface. Our solutions fill and cover any imperfections on a surface, creating an invisible layer of protection designed to increase the longevity of the material. The treated surface is breathable and repels waters and other liquids. It can resists other elements such as snow, salt and mechanical oils. Utilize the same nanotechnology to protect what matters to you the most. Protect existing settings from the elements they encounter on the daily. Nanocoatings that can be applied to protect industrial settings and machinery. Multiple coatings available for all defense teams. SnapGuard Solutions, LLC is the leading innovator of advanced nano-technology solutions for the residential, industrial, commercial, and defense industry. Our solutions are ideal for protecting various porous and nonporous surfaces from water damage, stains, UV Light, corrosion, and dirt. Our product line includes: Glass Protectant, Fabric Protectant, One-Time Sealer, Solar Protectant, and Nano-Ceramic Tint. Fog build up can make it dangerous to see. 
Our nanotechnology based anti-fog films are the solution you need to prevent fog. It's application can be easily done and applied to any glass, mirror, or plastic in just a matter of minutes. AUTOMOTIVE 100% effective and durable. Our anti-fog films can be used in your automobile so you can be safe out. VISOR/GOGGLES Easily apply an anti-fog liner to any goggle or visor shield. See in high definition clarity. INDUSTRIAL Our films will not interfere with any radio, GPS, or cellular connections. Stay connected and protected from the sun. DEFENSE Keep clear visibility at all times and in any weather. Our Anti-Fog protective films are military grade certified. We are here to provide the correct solutions for you. Send us a brief message explaining what services you may require. One of our representatives will get back to you shortly. Thank you. LIFETIME WARRANTY FOR NANO CERAMIC WINDOW TINT. To Activate Your Limited Lifetime Warranty For Nano Ceramic window tint please to fill out the form. What is Covered and How Long Coverage Lasts. Snapguard Solutions warrants professionally sold and installed Snapguard Solutions Nano Ceramic Window Tint against the defects in manufacture or materials set forth below and for the time period set forth below. This warranty is valid only if the Products application was performed by a. Installer in the United States in accordance with manufacturer\u2019s application procedures and applicable law. This limited lifetime warranty coverage is offered only to the owner of the tint film at the time of the Product\u2019s installation, and is not transferable. Authorized dealers are also covered. To extend the life and looks of your. Nano Ceramic Window Tint Film and to maintain your warranty coverage, certain care and maintenance should be followed. Do not roll down Tinted windows for 6 days and until the Tint has properly adhered to the glass. Do not wash the film for 30 days after installation. Do not use abrasive cleaners or coarse cloths. 
Use a mild soap and a clean, soft cloth or synthetic sponge. THE EXPRESS WARRANTIES CONTAINED IN THIS AGREEMENT ARE IN LIEU OF ALL OTHER WARRANTIES, EXPRESS OR IMPLIED. SNAPGUARD SOLUTIONS HEREBY DISCLAIMS ALL OTHER EXPRESS AND IMPLIED WARRANTIES, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL SNAPGUARD SOLUTIONS OR ANY INSTALLER BE LIABLE FOR ANY INDIRECT, SPECIAL, INCIDENTAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES OF ANY KIND ARISING OUT OF OR RELATED TO (1) THE USE OF OR INABILITY TO USE THE PRODUCT, (2) THE BREACH OF ANY WARRANTY OR OF THIS AGREEMENT, (3) ANY ACT OR FAILURE TO ACT RELATING TO THIS AGREEMENT, OR OTHERWISE, INCLUDING WITHOUT LIMITATION DAMAGES FOR LOSS OF USE, LOST PROFITS, INTERRUPTION OF BUSINESS, OR ANY OTHER MONETARY OR OTHER LOSS, REGARDLESS OF THE FORM OF ACTION WHETHER IN CONTRACT, TORT (INCLUDING NEGLIGENCE) STRICT PRODUCT LIABILITY, OR OTHERWISE, EVEN IF SNAPGUARD SOLUTIONS HAS BEEN ADVISED OF OR IS OTHERWISE AWARE OF THE POSSIBILITY OF SUCH DAMAGES. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THE ABOVE LIMITATION OR EXCLUSION MAY NOT APPLY TO YOU. How State/Provincial Law Applies. This warranty gives you specific legal rights, and you may also have other rights that vary from jurisdiction to jurisdiction. EXCLUSIONS AND MISCELLANEOUS TERMS AND CONDITIONS (1) This warranty does not cover or apply to losses, costs, damages or defects arising from or caused by improper Product application, improper Product care, cleaning or abuse, misuse (including use not complying with applicable law? non-automotive applications, natural causes, accident, ordinary wear, damage caused by road debris, the physical impact of rocks, abrasion or scratching or any other acts, occurrence or defects, faults or damages not related to defects in materials or manufacture of the Product. 
Except as otherwise provided by applicable law, illegal application or use of the Product will render all warranties, whether express or implied, null and void and of no effect and. Snapguard Solutions shall have no liability therefor. (2) The. Snapguard Solutions dealer/installer is an independent contractor of. Snapguard Solutions is not responsible for improper installation or representations made by the dealer/installer. No contractor, including the. Snapguard Solutions dealer/installer, has any authority or power to modify or extend this limited warranty. The benefits under this warranty shall be the sole and exclusive remedy against. Snapguard Solutions for any loss arising out of the sale, application, and/or use of the Product. (3) If any provision of this warranty is unenforceable or ineffective, the rest of this warranty shall remain in effect and be construed as if such provision had not been contained in this warranty. (4) This warranty shall be governed by California law, excluding its laws relating to choice of law. Regardless of what venue would otherwise be permissive or required,. Snapguard Solutions and the customer stipulate that all actions arising under or related to this warranty shall be brought in the federal or state courts located in the City of Los Angeles, California,. Snapguard Solutions and the customer agree that such forum is mutually convenient and bears a reasonable relationship to this Agreement, and waive objection to any venue laid therein. HOW TO RECEIVE WARRANTY SERVICE. If you believe your. Nano Ceramic window tint is defective, please contact 1-323-797-7130 to see eligibility. Send along with the UPC code from the original packaging and a legible copy of your original receipt that includes the retailer name and address, date of purchase, and mail postage paid, to: Snapguard Solutions. Attn: Warranty Service Dept. 2150 Chenault Drive Carrollton, TX 75006. Snapguard Solutions product is covered by this limited warranty,. 
Snapguard Solutions will mail you replacement film. If your. Snapguard Solutions product is not covered by this limited warranty,. Snapguard Solutions will notify you of its decision in writing. Manufacturers\u2019 warranties may not apply in all cases, depending on factors such as use of the product, where the product was purchased, or who you purchased the product from. Please review the warranty carefully, and contact. Snapguard Solutions if you have any questions. Showing 34 of 34 products. Anti-Fog Film - 12 in x 18 in. Fabric Concentrate Water & Stain Repellent - 250ml. Fabric Protectant Water & Stain Repellent Spray - 200ml. Metal Protectant - 250ml. Nano Ceramic Window Tint - 2' ft x 100' ft. Nano Ceramic Window Tint - 2' ft x 25' ft. Nano Ceramic Window Tint - 2' ft x 50' ft. Nano Ceramic Window Tint - 2' ft x 6.5' ft. Nano Ceramic Window Tint - 2.5' ft x 12' ft. Nano Ceramic Window Tint - 2.5' ft x 50' ft. Nano Ceramic Window Tint - 2.5' ft x 6.5' ft. THIS ITEM EXCLUDED FROM ALL SALES. ",
"feat_Website": "https://snapguardsolutions.com",
"feat_scraping_date": "2022-10-12 19:05:50.082577+00:00",
"feat_Crunchbase_URL__c": "https://www.crunchbase.com/organization/snapguard",
"feat_Description": "SnapGuard Solutions, LLC is the leading innovator of advanced nano-technology solutions for the residential, industrial, commercial, and defense industry. Our solutions are ideal for protecting various porous and nonporous surfaces from water damage, stains, UV Light, corrosion, and dirt. Our product line includes: Glass Protectant, Fabric Protectant, One-Time Sealer, Solar Protectant, and Nano-Ceramic Tint.",
"feat_Name": "Snapguard",
"target": "Snapguard Solutions is a sealer for all natural stone and concrete material repels water and oil."
},
{
"feat_Id": "0012E00002gb2TiQAI",
"text": "A Game-Changing Mental Health and Wellbeing Solution for Employers, Employees and Insurers to help improve your employees' health and wellbeing at work. 24/7 access to unlimited mental health and wellbeing supports, including a personal Mental Health Coach and open-ended therapy, at the touch of a button. Burnout can cost employers as much as 8.3% of an employee\u2019s annual salary. While we\u2019ve been focused on dealing with the challenges of COVID-19, another crisis has been plaguing workplaces \u2013 burnout. The personal and financial costs of burnout are so great that no employer can afford to ignore it. Give your employees the support they need, when they need it. A complete range of supports to help your employees stay on top of their mental health at all times. Access to unlimited Mental Health Coaching to provide support and set goals wherever and whenever employees need it. Open-ended Mental Health Therapy Sessions with Psychotherapists and Counsellors. Concierge into Mental Health Insurance Benefits and Community Supports. Mental Health Digital Tools. Access to 100s of Digital Tools, Personalised Paths, Exercises and Tips for Mental Fitness delivered via video and podcasts. 24/7 Mental Health Support in Seconds by Phone, WhatsApp or Live Chat. We look after over 1,500 clients and support 1,000,000 employees, students and insurance members. How we Make an Impact. Our market-leading mental health supports can make a real, tangible difference to your employees and your business. increase in mentally healthy employees. decrease in reporting as severely distressed. We take a holistic approach to mental health and provide whatever supports are needed across body, mind and life with a comprehensive range of additional wellbeing services. Mental Health Training & Events. A whole range of seminars, workshops and 1-1 consultations offered digitally and onsite, delivered by experienced professionals. Digital Gym & Wellbeing Series. 
A digital gym, topical wellbeing series and bespoke events delivered by experts and guest presenters from our digital wellbeing studio. Strategic mental health programmes designed in consultation with an organisation from policy setting through to training and promotion. Discover how Total Mental Health can support your employees. Our Digital Studio and Digital Clinic solutions offer convenient access to a range of qualified & vetted clinical, fitness and wellbeing experts. A high quality, engaging experience to support employees at home or on the move. A year round series of weekly, wellbeing seminars focused on topical themes. Delivered from our 4G digital studio by our health and wellbeing presenter who is joined by a variety of expert guests. A weekly schedule of live and on demand fitness classes, delivered by experts who will demonstrate a safe and maintainable way to tackle fitness at home. Compliant with best practices. Cost effective, long term support. Digital Health & Wellbeing Solutions. Get access to fantastic weekly live streamed webinars from our 4K Wellbeing Studio. Each week contains a new topic delivered by an engaging host featuring a range of experts on that topic. With over 20 class types and 80 Live Streams per month, our Digital Gym has been extremely popular with employees who enjoy the variety of classes, expert delivery and convenience of being able to attend a class live when scheduled or access the same class at a time that suits them. Give your employees access to a range of Health Experts right in their Health and Wellbeing Platform. Book sessions with Physios, Nutritionists, Parenting Coaches, Remote Working Experts and Ergonomic Specialists. Peace of mind knowing that your employees\u2019 wellbeing needs are supported if they continue to work remotely. Strengthen workplace wellbeing and improve the overall atmosphere and culture where you work. 
Enable vital 1-1 opportunities to access a variety of wellbeing experts from home or on the move. Show employees that they are valued, and attract top talent with innovative wellbeing calendar of events and fitness. Access to truly engaging conversations about a range of topical wellbeing themes. Opportunities to put health & wellbeing questions to experts across a wide range of topics. Access to expert teams to consult with you wherever you are for the best advice to get you on the right track. A daily fitness schedule to participate in from the comfort of your home. Book a variety of digital and onsite workplace wellness events for your organisation, from Mental Health to Beauty. Access 100s of insured, qualified & vetted workplace wellbeing experts. Health risks will be significantly reduced, resulting in lower absenteeism and presenteeism rates. An improved cultural atmosphere develops as a result of a sense of togetherness, and often fun. An increased feeling of being valued among employees, which results in high levels of loyalty and retention. Improved employer brand. Having regular onsite wellness events is another reason for people to want to join your company. Employees will be equipped with the knowledge needed to focus on improving particular aspects of their wellbeing. Onsite wellbeing events give employees the chance to engage with one another in a different setting. Improved health and an increased sense of personal wellbeing, both physically and mentally. A heightened sense of value and belonging \u2014 it's important that employees feel as though their company cares about their wellbeing. Book your workplace wellbeing onsite events with access to 1000s of qualified wellness experts. Promote your onsite wellbeing event among employees, easily through the platform. Track event attendance and engagement to gain a better understanding of what interests employees most. Ask about onsite wellbeing. 
Spectrum.Life is the largest provider of employer health and wellness services in Ireland, and we're now available across the UK too! We look after the health and wellness needs of 100s of clients and over 500,000 users. Spectrum Life is the only Workplace Wellbeing provider that gives you digital and onsite wellbeing, all through one connected solution. We\u2019re combining Onsite Wellness, Digital Wellbeing, Employee Assistance Programmes and Health Screening managed on one platform and that\u2019s never been done before. With years of experience in managing workplace wellness for many different organisations, we noticed that having to go to various vendors for different elements of wellbeing was a pain point for a lot of people. We developed a platform that enables those tasked with managing wellbeing in the workplace to book and manage all aspects of it in one place. We pride ourselves on advising our clients on the latest approaches, technology, and wellness initiatives to ensure the best return on investment. Over the years, we have invested heavily in our tech team and also in our wellness team so that we can deliver a range of modern and innovative services that will evaluate, engage and energise your employees and their families to make behavioural changes and most importantly to stick to them. Spectrum.Life makes workplace wellbeing more manageable and accessible than ever for companies of all sizes. It\u2019s customisable, it\u2019s easy to use\u2026 it is Where Wellbeing Works. Learn how we have helped our clients achieve success in workplace wellbeing. Increasing engagement in workplace wellbeing. Wellbeing in a dynamic workplace. The New Benchmark in Employee Mental Health. A complete mental health and wellbeing programme for employers, employees and insurers. We provide employees with unlimited 24/7 access to unlimited mental health and wellbeing supports, including a personal Mental Health Coach and open-ended therapy, at the touch of a button. 
What is Total Mental Health. Employees can select a Mental Health Coach for regular live or via text one-to-one coaching on areas from \u2018improving sleep\u2019 to \u2018managing anxiety\u2019. Employees can Access open-ended Therapy via Counselling or Psychotherapy from a network of 1,000+ Counsellors within 48 hours of a referral. 24/7 On Demand Support. Employees can contact our Mental Health Team for support and on Demand in Seconds via Phone, Chat, WhatsApp or SMS. Increase in mentally healthy employees after using Spectrum.Life Mental Health Services. Return on Investment versus employees not receiving mental health support. Decrease in employees reporting as severely distressed after using Spectrum.Life Mental Health Services. Increase in productivity reported by Employees after using Spectrum.Life Mental Health Services. The Total Mental Health Experience. Mental Health Coaching offers employees preventative care and makes mental health support more accessible to everyone. Open Ended Mental Health Therapy. Open-Ended and Unlimited Therapy based on need, not quotas. Access to our network of 1,000+ Counsellors within 48 hours of a referral. Reassurance that your Employees, Leaders and Managers can speak to a Qualified Counsellor anytime, 24/7, 365. Advanced Mental Health Concierge. Care and Support into Inpatient Facilities, and Referral to a Mental Heath Specialist & mental health occupational assessments. Less than 1 in 4 people are getting the Mental Health support they need. Waiting Lists for Mental Health services are routinely over 6 months and Mental Health issues are on the increase. Peace of Mind \u2018I know all my employees will be safe, even if they don\u2019t talk to us about their problems\u2019. Employee are 76% more likely to join an organisation which has a clear commitment to mental health. There is often a stigma about asking for help. 
Research shows that 70%+ of people would choose a Coach over a Therapist, Employee Assistance Program or GP. Ease of discovery and access \u2018I know where to turn if I have a problem\u2019, \u2018I can always find an answer with Spectrum.Life'. Therapy is not always the solution \u2013 Coaching is a preventative measure for employees who need help with a breakthrough goal or who are struggling but don\u2019t need Therapy. Accessible at home and in the workplace, making it the perfect tool for employees and managers in a hybrid working world. Get first-hand data on the effects of workplace wellbeing and learn how this can be applied to your organisation. A Report on Mental Health in the Workplace \u2013 The Value of Having a Mental Health Programme in Your Organisation. The EAP Report- The effectiveness of EAP on workplace mental health. Mental Health in the Workplace. Digital and Onsite workplace mental health events are a great way to disassemble any stigma that may be present among employees. They also enable employees to understand their own mental wellbeing. Book seminars, training workshops and consultation clinics delivered by qualified mental health professionals. Our mental health seminars for the workplace are delivered by accredited mental health professionals. They are an effective way for employees to learn how best to manage and improve their mental wellbeing. These mental health training workshops empower specific groups of employees to build a stronger awareness about mental health in the workplace. Arrange mental health training for employees at your organisation to improve their ability to support colleagues in distress and to help them improve their own lifestyle habits. Why Book a Mental Health Event. Create a mental health-positive work environment. Help employees be proactive with their mental health. Give employees the tools to mind their mental health. Make your organisation a happy place to work. 
Gain tangible insights and learn how to best enhance employee wellbeing with our company guides. A Guide to Organising Workplace Mental Health Workshops. Employee Guide to Better Mental Health. HR Manager\u2019s Guide to Employee Financial Wellbeing. Sleep is a core component of our health and wellbeing and its impact on an organisation should not be overlooked. Poor sleep health can have a negative impact on businesses at an operational level. From absenteeism caused by related mental and physical illnesses to decreased levels of productivity, there is no denying that problems with sleep among employees affects the workplace. In this guide, we will highlight what sleep health is, how it impacts the workplace and how organisations can strive to improve it. What Is Sleep Health. Sleep is an essential part of our health and wellbeing. In fact, it is just as essential as nutrition and exercise. Unfortunately, many of us simply aren\u2019t getting enough sleep to maintain optimum cognitive function. Approximately 1 in 3 people are surviving on 6 hours or less. Most of us accept this as normal, however, consistently sleeping for less than the recommended hours can affect our wellbeing in several different ways. Sleep health also refers to the quality of sleep we get, whether it\u2019s restful enough, if it was interrupted and what our bedtime routine is like. A healthy sleep pattern means. You get an appropriate amount of sleep. You sleep throughout the night. You fall asleep within 20 minutes of going to bed. You feel energised when you awake. There are many factors that can influence our quality of sleep. We have an internal body clock that regulates our energy levels and tells us when our body is ready to sleep, but this can be impacted by our nutritional intake, our stress levels, our physical activity and external factors like screen time, noise pollution and so on. Sleep Health Impact in the Workplace. 
Billions of Euro are lost in companies worldwide as a result of insomnia and other sleep difficulties. It\u2019s been noted in recent studies that employers are becoming increasingly aware of the impact poor sleep health has among workers in their organisations. Lack of sleep or poor sleep quality negatively impacts employee performance. Millions of productive days are lost in organisations due to the impact poor sleep health has on productivity levels, and it\u2019s a direct influence on absenteeism. Poor sleep health also indirectly effects the workplace, as chronic sleep issues can cause mental and physical health difficulties that result in absences and decreased levels of productivity and engagement. In a culture where being \u201cbusy\u201d and overworked is worn as a badge of honour, sleep has become somewhat devalued in western society with disregard for how exactly it can impact our performance at work, and in other aspects of our lives. Lack of sleep impacts efficiency, productivity and more mistakes, according to a Harvard report. There has also been ample research that indicates that REM sleep is beneficial to the creative process, helping us to think outside the box. This stage of sleep is also essential for aiding problem solving. With this in mind, it\u2019s clear to see that under-sleeping employees are not performing to the best of their abilities, which ultimately results in under performance at a business level. Perhaps most concerning is the impact poor sleep health has on workplace safety. The same Harvard report says that between 50,000 and 100,000 deaths occur per year as a result of workers of all professions not getting enough sleep. It is also noted that more than a million workplace injuries occur due to sleep deprivation. The study noted that some of the deadliest accidents in recent times, including the explosion of the space shuttle Challenger, were caused by sleep deprivation in workers. Company Sleep Health In Numbers. 
Organisations & Sleep Health - How to Offer Help. Sleep is as crucial to performance and productivity as it is to physical as well as mental health. However, as a non\u2013 work activity that is heavily influenced by physical, mental and emotional wellbeing, organisations must find innovative ways to improve the sleep health of their employees. Sleep Health & Wellbeing. Including sleep health as part of a workplace wellbeing programme is one such way. As a practical solution for organisations to help employees understand and manage their own sleep health, a wellbeing programme can help with. Personal or work-related problems. Social, emotional and physical stress. Maintaining a work-life balance. A workplace wellbeing programme can also provide a dynamic platform and marketplace to share best\u2013practice expertise on the subject of sleep health. Through seminars, webinars and articles written by experts, employees can access knowledge and information in a variety of different digital and onsite formats to suit their particular working practices. In addition to the latest approaches, technology and wellness initiatives, employees can also seek advice from sleep health experts who can offer evidence \u2013based sleep training, workshops and private consultations. These qualified and experienced professionals can also help HR managers within an organisation to. Identify any company policies or behaviours that may be seen as a threat to sleep health. Implement a stand\u2013alone sleep management programme. Address sleep as part of an overall health and wellbeing strategy. In recognition of the impact that aspects of physical and mental wellbeing have on our sleep health, organisations can also use a workplace wellbeing programme to help employees understand sleep within a wider context. Sleep is important for our physical, social, intellectual and emotional wellbeing. So too is its co\u2013dependent relationship with nutrition and fitness. 
That\u2019s why it\u2019s important that employees have access to a workplace wellbeing programme that offers a whole wellness approach. 32,000,000 People In the UK have anti-social working patterns 25-30% higher risk of injury than working day shifts. Organisations & Sleep Health - More Ways to Help. A company culture that supports the work\u2013life balance can help employees make small but incremental changes to improve sleep. Organisations can support employees in adopting workplace habits such as. Taking regular breaks from screens. Learning to handle stress. In the age of connectivity, organisations can also help their employees protect their downtime, by allowing them to switch off from any social and productive requirements placed on them. Organisations can support employees adopting leisure habits such as. Cutting back on alcohol and caffeine. Switching off the mobile phone and that \u2018always on\u2019 blue light. Going to bed earlier and at the same time every night. The complexities of sleep can\u2019t be understood overnight. However, with a long\u2013term commitment to a workplace wellbeing programme, organisations can take clear and practical steps towards improving the sleep health of their employees. Having a workplace wellbeing programme that is rich in content and highly accessible will not only give employees the education and support they need to actively take responsibility for their own sleep health, but the motivation to make the behavioural changes necessary to reap the long\u2013term rewards of improved sleep. Percentage of workers who say their job allows them to get enough sleep. SHIFT WORK \u2013 63%. NON SHIFT WORK \u2013 89%. Do you want to put sleep health on the agenda of your workplace wellbeing programme? Talk to a wellness advisor about how we can help. ",
"feat_Website": "https://www.spectrum.life",
"feat_scraping_date": "2022-09-08 01:05:34.389293+00:00",
"feat_Crunchbase_URL__c": "https://www.crunchbase.com/organization/spectrum-life",
"feat_Description": "Spectrum.Life's comprehensive solution enables organisations to provide a workplace wellbeing programme that can have a substantially positive impact on their health and wellness, as well as on the culture and performance of the company.",
"feat_Name": "Spectrum Life",
"target": "Spectrum Life is a B2B mental health & wellness platform providing a clinically-backed product suite of tools and training."
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_Id": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)",
"feat_Website": "Value(dtype='string', id=None)",
"feat_scraping_date": "Value(dtype='string', id=None)",
"feat_Crunchbase_URL__c": "Value(dtype='string', id=None)",
"feat_Description": "Value(dtype='string', id=None)",
"feat_Name": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
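As a quick illustration, a single record under this schema is just a flat mapping of string fields — a minimal standard-library sketch (values abbreviated from the sample rows above):

```python
# One record under the documented schema; every field is a plain string.
# Values are truncated/abbreviated from the sample rows shown above.
record = {
    "feat_Id": "0012E00002gb2TiQAI",
    "text": "A Game-Changing Mental Health and Wellbeing Solution ...",
    "feat_Website": "https://www.spectrum.life",
    "feat_scraping_date": "2022-09-08 01:05:34.389293+00:00",
    "feat_Crunchbase_URL__c": "https://www.crunchbase.com/organization/spectrum-life",
    "feat_Description": "Spectrum.Life's comprehensive solution ...",
    "feat_Name": "Spectrum Life",
    "target": "Spectrum Life is a B2B mental health & wellness platform ...",
}

expected_fields = {
    "feat_Id", "text", "feat_Website", "feat_scraping_date",
    "feat_Crunchbase_URL__c", "feat_Description", "feat_Name", "target",
}
assert set(record) == expected_fields
assert all(isinstance(v, str) for v in record.values())
```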
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2397 |
| valid | 600 |
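For reference, the sizes above amount to roughly an 80/20 train/validation split — a quick sketch of the arithmetic:

```python
# Split sizes taken from the table above.
splits = {"train": 2397, "valid": 600}
total = sum(splits.values())  # 2997 examples overall
ratios = {name: n / total for name, n in splits.items()}
# train ≈ 0.80, valid ≈ 0.20
```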
| mindthebridge/autotrain-data-company-description-generator | [
"language:en",
"region:us"
] | 2022-10-27T12:47:29+00:00 | {"language": ["en"], "task_categories": ["conditional-text-generation"]} | 2022-10-27T12:49:03+00:00 |
381bc18db2d393aa18eeab8f92e0c135aa76ee1b | pere/sami_parallel | [
"license:apache-2.0",
"region:us"
] | 2022-10-27T14:06:38+00:00 | {"license": "apache-2.0"} | 2022-11-01T09:02:52+00:00 |
|
cb7f336db3519b9ce33ca2dcd11cf0e306f56dea | # Dataset Card for Product Reviews
Customer reviews of Amazon products, categorised by the number of stars assigned to each product. The dataset consists of several thousand reviews in English, German, and French.
## Licensing information
This dataset is based on the [`amazon_reviews_multi`](https://huggingface.co/datasets/amazon_reviews_multi) dataset. | mgb-dx-meetup/product-reviews | [
"region:us"
] | 2022-10-27T14:11:15+00:00 | {"dataset_info": {"features": [{"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "reviewer_id", "dtype": "string"}, {"name": "stars", "dtype": "int32"}, {"name": "review_body", "dtype": "string"}, {"name": "review_title", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "product_category", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 454952.85, "num_examples": 1500}, {"name": "train", "num_bytes": 6073361.466666667, "num_examples": 20000}], "download_size": 4034850, "dataset_size": 6528314.316666666}} | 2022-10-27T14:25:55+00:00 |
f4f954f99f54f4a8261f1ab7b28469550c4bceeb |
# Ao Artist Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by ao_style"```
If it is too strong, just add [] around it.
Trained to 10,000 steps.
I also added a version trained to 7,500 steps in the files. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-step version in your folder.
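The install step above can be sketched as follows — a minimal Python sketch where all paths and the file name are illustrative, and the `touch()` call merely stands in for the actual download:

```python
# Sketch: drop the downloaded embedding into the webui embeddings folder.
# Paths and file name are illustrative; adjust to your own installation.
import pathlib
import shutil

downloaded = pathlib.Path("ao_style.pt")
downloaded.touch()  # stand-in for the actual downloaded file

embeddings_dir = pathlib.Path("stable-diffusion-webui/embeddings")
embeddings_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(downloaded, embeddings_dir / downloaded.name)
```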
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/ec8MaO4.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/N4IRulK.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/22alJny.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/ZPPIs9L.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/XQZvjGs.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/ao_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-27T14:28:24+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-29T10:16:29+00:00 |
7f557c5d4da73b73ea90c3e0ab9663484f25b992 |
# Mikeou Artist Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by mikeou_art"```
If it is too strong, just add [] around it.
Trained to 10,000 steps.
I also added a version trained to 7,500 steps in the files. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-step version in your folder.
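The install step above can be sketched as follows — a minimal Python sketch where all paths and the file name are illustrative, and the `touch()` call merely stands in for the actual download:

```python
# Sketch: drop the downloaded embedding into the webui embeddings folder.
# Paths and file name are illustrative; adjust to your own installation.
import pathlib
import shutil

downloaded = pathlib.Path("mikeou_art.pt")
downloaded.touch()  # stand-in for the actual downloaded file

embeddings_dir = pathlib.Path("stable-diffusion-webui/embeddings")
embeddings_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(downloaded, embeddings_dir / downloaded.name)
```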
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/Anc83EO.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/NukXbXO.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/LcamHiI.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/sHL81zL.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/vrfu8WV.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/mikeou_art | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-27T14:29:59+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-29T10:18:34+00:00 |
49ebe79789fbdca8a8cef155ce3a78dc2475a69e | chloeliu/reddit_nosleep_posts | [
"license:unknown",
"region:us"
] | 2022-10-27T14:33:38+00:00 | {"license": "unknown"} | 2022-10-27T14:34:53+00:00 |
|
9961aeb4e5e069a1760792883bbb4df34eb03fad | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-ilpost
* Dataset: ARTeLab/ilpost
* Config: ARTeLab--ilpost
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model. | autoevaluate/autoeval-eval-ARTeLab__ilpost-ARTeLab__ilpost-d2ea00-1904764775 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-27T14:40:50+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ARTeLab/ilpost"], "eval_info": {"task": "summarization", "model": "ARTeLab/it5-summarization-ilpost", "metrics": ["bertscore"], "dataset_name": "ARTeLab/ilpost", "dataset_config": "ARTeLab--ilpost", "dataset_split": "test", "col_mapping": {"text": "source", "target": "target"}}} | 2022-10-27T14:44:41+00:00 |
8ab5d278ab48d4d9943fca87fbaf33774faf65e8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-fanpage
* Dataset: ARTeLab/fanpage
* Config: ARTeLab--fanpage
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model. | autoevaluate/autoeval-eval-ARTeLab__fanpage-ARTeLab__fanpage-6c7fce-1904864776 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-27T14:40:56+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ARTeLab/fanpage"], "eval_info": {"task": "summarization", "model": "ARTeLab/it5-summarization-fanpage", "metrics": ["bertscore"], "dataset_name": "ARTeLab/fanpage", "dataset_config": "ARTeLab--fanpage", "dataset_split": "test", "col_mapping": {"text": "source", "target": "target"}}} | 2022-10-27T14:47:53+00:00 |
4da865e1b2019c88a45f920e7c8896be5c86033d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-mlsum
* Dataset: ARTeLab/mlsum-it
* Config: ARTeLab--mlsum-it
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model. | autoevaluate/autoeval-eval-ARTeLab__mlsum-it-ARTeLab__mlsum-it-b0baa7-1904964782 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-27T14:52:25+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ARTeLab/mlsum-it"], "eval_info": {"task": "summarization", "model": "ARTeLab/it5-summarization-mlsum", "metrics": ["bertscore"], "dataset_name": "ARTeLab/mlsum-it", "dataset_config": "ARTeLab--mlsum-it", "dataset_split": "test", "col_mapping": {"text": "source", "target": "target"}}} | 2022-10-27T14:55:45+00:00 |
8e4d20db185e50b3a66dcaa7f87468a48efedd55 | # Dataset Card for "hotel-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Data was obtained from [here](https://www.kaggle.com/datasets/jiashenliu/515k-hotel-reviews-data-in-europe) | ashraq/hotel-reviews | [
"region:us"
] | 2022-10-27T16:22:07+00:00 | {"dataset_info": {"features": [{"name": "review_date", "dtype": "string"}, {"name": "hotel_name", "dtype": "string"}, {"name": "review", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15043294, "num_examples": 93757}], "download_size": 6100544, "dataset_size": 15043294}} | 2022-10-27T16:24:29+00:00 |
1ed13e8ef280bd45e3bbac4cfa8bbd9d64ec9f89 | # Dataset Card for Naruto BLIP captions
_Dataset used to train [TBD](TBD)._
The original images were obtained from [narutopedia.com](https://naruto.fandom.com/wiki/Narutopedia) and captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
## Example stable diffusion outputs

> "Bill Gates with a hoodie", "John Oliver with Naruto style", "Hello Kitty with Naruto style", "Lebron James with a hat", "Mickael Jackson as a ninja", "Banksy Street art of ninja"
## Citation
If you use this dataset, please cite it as:
```
@misc{cervenka2022naruto2,
author = {Cervenka, Eole},
title = {Naruto BLIP captions},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/lambdalabs/naruto-blip-captions/}}
}
``` | lambdalabs/naruto-blip-captions | [
"region:us"
] | 2022-10-27T17:02:46+00:00 | {} | 2022-10-27T20:17:06+00:00 |
29d8c48af080c04fc9e645d72cae49b38866026c | # Dataset Card for "reqs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hasanriaz121/reqs | [
"region:us"
] | 2022-10-27T17:05:57+00:00 | {"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "requirement_txt", "dtype": "string"}, {"name": "EF", "dtype": "int64"}, {"name": "PE", "dtype": "int64"}, {"name": "PO", "dtype": "int64"}, {"name": "RE", "dtype": "int64"}, {"name": "SE", "dtype": "int64"}, {"name": "US", "dtype": "int64"}, {"name": "X", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 53980, "num_examples": 285}, {"name": "train", "num_bytes": 431941, "num_examples": 2308}, {"name": "validation", "num_bytes": 49251, "num_examples": 257}], "download_size": 218916, "dataset_size": 535172}} | 2022-10-27T17:06:50+00:00 |
4788cd2a26eae8a1e6534d87b1bfbad82c3a9dc2 |
# Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/amazon-science/mintaka
- **Repository:** https://github.com/amazon-science/mintaka
- **Paper:** https://aclanthology.org/2022.coling-1.138/
- **Point of Contact:** [GitHub](https://github.com/amazon-science/mintaka)
### Dataset Summary
Mintaka is a complex, natural, and multilingual question answering (QA) dataset composed of 20,000 question-answer pairs elicited from MTurk workers and annotated with Wikidata question and answer entities. Full details on the Mintaka dataset can be found in our paper: https://aclanthology.org/2022.coling-1.138/
To build Mintaka, we explicitly collected questions in 8 complexity types, as well as generic questions:
- Count (e.g., Q: How many astronauts have been elected to Congress? A: 4)
- Comparative (e.g., Q: Is Mont Blanc taller than Mount Rainier? A: Yes)
- Superlative (e.g., Q: Who was the youngest tribute in the Hunger Games? A: Rue)
- Ordinal (e.g., Q: Who was the last Ptolemaic ruler of Egypt? A: Cleopatra)
- Multi-hop (e.g., Q: Who was the quarterback of the team that won Super Bowl 50? A: Peyton Manning)
- Intersection (e.g., Q: Which movie was directed by Denis Villeneuve and stars Timothee Chalamet? A: Dune)
- Difference (e.g., Q: Which Mario Kart game did Yoshi not appear in? A: Mario Kart Live: Home Circuit)
- Yes/No (e.g., Q: Has Lady Gaga ever made a song with Ariana Grande? A: Yes.)
- Generic (e.g., Q: Where was Michael Phelps born? A: Baltimore, Maryland)
We collected questions about 8 categories: Movies, Music, Sports, Books, Geography, Politics, Video Games, and History.
Mintaka is one of the first large-scale complex, natural, and multilingual datasets that can be used for end-to-end question-answering models.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for question answering.
To ensure comparability, please refer to our evaluation script here: https://github.com/amazon-science/mintaka#evaluation
### Languages
All questions were written in English and translated into 8 additional languages: Arabic, French, German, Hindi, Italian, Japanese, Portuguese, and Spanish.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```json
{
"id": "a9011ddf",
"lang": "en",
"question": "What is the seventh tallest mountain in North America?",
"answerText": "Mount Lucania",
"category": "geography",
"complexityType": "ordinal",
"questionEntity":
[
{
"name": "Q49",
"entityType": "entity",
"label": "North America",
"mention": "North America",
"span": [40, 53]
},
{
"name": 7,
"entityType": "ordinal",
"mention": "seventh",
"span": [12, 19]
}
],
"answerEntity":
[
{
"name": "Q1153188",
"label": "Mount Lucania",
}
],
}
```
### Data Fields
The data fields are the same among all splits.
`id`: a unique ID for the given sample.
`lang`: the language of the question.
`question`: the original question elicited in the corresponding language.
`answerText`: the original answer text elicited in English.
`category`: the category of the question. Options are: geography, movies, history, books, politics, music, videogames, or sports
`complexityType`: the complexity type of the question. Options are: ordinal, intersection, count, superlative, yesno, comparative, multihop, difference, or generic
`questionEntity`: a list of annotated question entities identified by crowd workers.
```
{
"name": The Wikidata Q-code or numerical value of the entity
"entityType": The type of the entity. Options are:
entity, cardinal, ordinal, date, time, percent, quantity, or money
"label": The label of the Wikidata Q-code
"mention": The entity as it appears in the English question text. Will be empty for non-English samples.
"span": The start and end characters of the mention in the English question text. Will be empty for non-English samples.
}
```
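The `span` field holds half-open character offsets into the English question text, so slicing the question with those offsets should recover the mention exactly. A quick sanity check in plain Python, using the `train` sample shown earlier:

```python
# Recover entity mentions from `span` offsets, using the sample instance above.
question = "What is the seventh tallest mountain in North America?"
question_entities = [
    {"name": "Q49", "entityType": "entity", "label": "North America",
     "mention": "North America", "span": [40, 53]},
    {"name": 7, "entityType": "ordinal", "mention": "seventh", "span": [12, 19]},
]

for entity in question_entities:
    start, end = entity["span"]
    # The slice question[start:end] should equal the annotated mention.
    assert question[start:end] == entity["mention"]
    print(entity["mention"])  # North America, then: seventh
```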
`answerEntity`: a list of annotated answer entities identified by crowd workers.
```
{
"name": The Wikidata Q-code or numerical value of the entity
"label": The label of the Wikidata Q-code
}
```
### Data Splits
For each language, we split into train (14,000 samples), dev (2,000 samples), and test (4,000 samples) sets.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
Amazon Alexa AI.
### Licensing Information
This project is licensed under the CC-BY-4.0 License.
### Citation Information
Please cite the following papers when using this dataset.
```latex
@inproceedings{sen-etal-2022-mintaka,
title = "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering",
author = "Sen, Priyanka and
Aji, Alham Fikri and
Saffari, Amir",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.138",
pages = "1604--1619"
}
```
### Contributions
Thanks to [@afaji](https://github.com/afaji) for adding this dataset. | AmazonScience/mintaka | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:ar",
"multilinguality:de",
"multilinguality:ja",
"multilinguality:hi",
"multilinguality:pt",
"multilinguality:en",
"multilinguality:es",
"multilinguality:it",
"multilinguality:fr",
"size_categories:100K<n<1M",
"source_datasets:original",
"license:cc-by-4.0",
"region:us"
] | 2022-10-27T17:38:30+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "license": ["cc-by-4.0"], "multilinguality": ["ar", "de", "ja", "hi", "pt", "en", "es", "it", "fr"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa"], "paperswithcode_id": "mintaka", "pretty_name": "Mintaka", "language_bcp47": ["ar-SA", "de-DE", "ja-JP", "hi-HI", "pt-PT", "en-EN", "es-ES", "it-IT", "fr-FR"]} | 2022-10-28T09:55:50+00:00 |
255f251fd722711e93bdb4df90ad4797715331dc | hoodhahmed/dhivehi_corpus | [
"license:openrail",
"region:us"
] | 2022-10-27T18:00:36+00:00 | {"license": "openrail"} | 2022-10-27T18:00:36+00:00 |
|
3d703f89b39dbd62d406e5863b32ea9afb4dc8a5 | memray/keyphrase | [
"region:us"
] | 2022-10-27T18:03:10+00:00 | {"license": "cc-by-nc-4.0"} | 2022-10-29T05:18:55+00:00 |
|
61b99919bdf522fee905ba7f3e3e8b67e58e80e5 | # Dataset Card for "early_printed_books_font_detection_loaded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | biglam/early_printed_books_font_detection_loaded | [
"region:us"
] | 2022-10-27T19:07:55+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "greek", "1": "antiqua", "2": "other_font", "3": "not_a_font", "4": "italic", "5": "rotunda", "6": "textura", "7": "fraktur", "8": "schwabacher", "9": "hebrew", "10": "bastarda", "11": "gotico_antiqua"}}}}], "splits": [{"name": "test", "num_bytes": 11398084794.636, "num_examples": 10757}, {"name": "train", "num_bytes": 21512059165.866, "num_examples": 24866}], "download_size": 44713803337, "dataset_size": 32910143960.502}} | 2022-10-28T07:47:45+00:00 |
d46098f2cd8b030fe0d6c9e5fe32e0e47aaad681 | <h4> Disclosure </h4>
<p> While it's not perfect, I hope that you are able to create some nice pieces with it. I am working on improvements for the next embedding, coming soon. If you have any suggestions or issues, please let me know. </p>
<h4> Usage </h4>
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt, add
<em style="font-weight:600">art by skeleton slime</em>.
Add <b>[ ]</b> around it to reduce its weight.
<h4> Included Files </h4>
<ul>
<li>6500 steps <em>Usage: art by skeleton slime- 6500</em></li>
<li>10,000 steps <em>Usage: art by skeleton slime-10000</em> </li>
<li>15,000 steps <em>Usage: art by skeleton slime</em></li>
</ul>
cheers<br>
Wipeout
<h4> Example Pictures </h4>
<table>
<tbody>
<tr>
<td><img height="100%/" width="100%" src="https://i.imgur.com/ATm5o4H.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/DpdwiyC.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/qwGmnel.png"></td>
</tr>
</tbody>
</table>
<h4> prompt comparison </h4>
<a href="https://i.imgur.com/SF3kfd4.jpg" target="_blank"><img height="100%" width="100%" src="https://i.imgur.com/SF3kfd4.jpg"></a>
<h4> Licence </h4>
<p><span>This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:</span> </p>
<ol>
<li>You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content </li>
<li>The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license</li>
<li>You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
<a rel="noopener nofollow" href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">Please read the full license here</a></li>
</ol> | zZWipeoutZz/skeleton_slime | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-10-27T20:21:30+00:00 | {"license": "creativeml-openrail-m"} | 2022-10-28T08:48:03+00:00 |
099c1b164c1ef9ff0e7986bfb8f1b33d3ff8596a | adamlouly/enron_spam_data | [
"license:apache-2.0",
"region:us"
] | 2022-10-27T20:54:56+00:00 | {"license": "apache-2.0"} | 2022-10-27T22:11:14+00:00 |
|
ff3d266876d88b216558abbb04575e2efe7a252b | # Dataset Card for "tydiqa_secondary_task"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Mostafa3zazi/tydiqa_secondary_task | [
"region:us"
] | 2022-10-27T21:52:22+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 52948607, "num_examples": 49881}, {"name": "validation", "num_bytes": 5006461, "num_examples": 5077}], "download_size": 29688806, "dataset_size": 57955068}} | 2022-10-27T21:52:30+00:00 |
f364ba93d5e59758672fdf2ff59b4a505ab3caba | # Dataset Card for "eurosat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vicm0r/eurosat | [
"region:us"
] | 2022-10-27T23:17:50+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "AnnualCrop", "1": "Forest", "2": "HerbaceousVegetation", "3": "Highway", "4": "Industrial", "5": "Pasture", "6": "PermanentCrop", "7": "Residential", "8": "River", "9": "SeaLake"}}}}], "splits": [{"name": "train", "num_bytes": 57259856.0, "num_examples": 27000}], "download_size": 88186968, "dataset_size": 57259856.0}} | 2022-10-27T23:17:56+00:00 |
99a8f2eb0f5e0d1f279020eb6260ca52b77875c4 | randomwalksky/shoes20 | [
"license:openrail",
"region:us"
] | 2022-10-28T00:23:11+00:00 | {"license": "openrail"} | 2022-10-28T00:32:51+00:00 |
|
901ddea7290a85838c328f14b6508db11d942970 | xixixi/images | [
"license:other",
"region:us"
] | 2022-10-28T00:40:28+00:00 | {"license": "other"} | 2022-10-28T00:41:32+00:00 |
|
1914ab53af43442e03b97a42d1fc6ba76e04bf53 | # Dataset Card for "Human_obj_bg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | TeddyCat/Human_obj_bg | [
"region:us"
] | 2022-10-28T02:25:32+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 350102.0, "num_examples": 20}], "download_size": 337556, "dataset_size": 350102.0}} | 2022-12-18T05:02:54+00:00 |
e6769ca6989c97a283bfd1da72627ce56a003b0d | Mohaddeseh/BioNLI | [
"license:cc",
"region:us"
] | 2022-10-28T02:55:43+00:00 | {"license": "cc"} | 2022-10-28T02:55:43+00:00 |
|
443f28582af7d75148a31c76a300efa4b5b0108a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164906 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-28T03:06:36+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v1"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-6.7b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v1", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v1", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-28T03:21:46+00:00 |
7f7e1e829257c402b1de674dcae98afac66756de | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164909 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-28T03:06:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v1"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-66b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v1", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v1", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-28T05:25:07+00:00 |
77fee1ab3232c91e763d3505780ec8e6b633e065 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164903 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-28T03:06:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v1"], "eval_info": {"task": "text_zero_shot_classification", "model": "ArthurZ/opt-350m", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v1", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v1", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-28T03:08:28+00:00 |
ef0156d81134002a97402df78322bb674e400708 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164908 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-28T03:06:46+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v1"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-30b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v1", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v1", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-28T04:06:39+00:00 |
f130023e49e8c83786974b72fc1852c574028a83 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-125m
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164902 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-28T03:07:57+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v1"], "eval_info": {"task": "text_zero_shot_classification", "model": "ArthurZ/opt-125m", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v1", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v1", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-28T03:08:50+00:00 |
2acaa832b1e781b8a91915bdbc119828f71b5556 |
# Dataset Card for SyNLI
A synthetic NLI dataset built from open-domain sentences, using T5 as the data synthesizer. The data can be used to train sentence embedding models.
## Data Fields
The data have several fields:
- `sent0`: premise as a string
- `sent1`: entailment hypothesis as a string
- `hard_neg`: contradiction hypothesis as a string
| mattymchen/synli | [
"license:odc-by",
"region:us"
] | 2022-10-28T04:23:23+00:00 | {"license": "odc-by", "dataset_info": {"features": [{"name": "sent0", "dtype": "string"}, {"name": "sent1", "dtype": "string"}, {"name": "hard_neg", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11441750654, "num_examples": 60939492}], "download_size": 6904073153, "dataset_size": 11441750654}} | 2022-10-28T07:52:16+00:00 |
a4d47050c1f1a90dc09c8920cd66ebc1e1523ca0 | # Dataset Card for "Romance-cleaned-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MarkGG/Romance-cleaned-2 | [
"region:us"
] | 2022-10-28T06:20:14+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3407789.8839248433, "num_examples": 6466}, {"name": "validation", "num_bytes": 378936.11607515655, "num_examples": 719}], "download_size": 2403265, "dataset_size": 3786726.0}} | 2022-10-28T06:20:20+00:00 |
b600bc01160467f3102f821deadf0e130637f94e | # Dataset Card for "latent_lsun_church_256px"
This is derived from https://huggingface.co/datasets/tglcourse/lsun_church_train
Each image is cropped to 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion
Decoding
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# load the dataset
dataset = load_dataset('tglcourse/latent_lsun_church_256px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 32, 32)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
image = vae.decode(latent).sample[0] # Decode
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
```
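The final post-processing lines above map the decoder output from its (-1, 1) range to 8-bit pixel values. The same mapping in isolation (numpy only, no model needed — `to_uint8` is just an illustrative helper name):

```python
import numpy as np

def to_uint8(image):
    """Map decoder output in (-1, 1) to uint8 pixels, mirroring the snippet above."""
    image = np.clip(image / 2 + 0.5, 0, 1)        # (-1, 1) -> (0, 1)
    return (image * 255).round().astype("uint8")  # (0, 1)  -> {0, ..., 255}

x = np.array([-1.0, 0.0, 1.0])
print(to_uint8(x))  # [  0 128 255]
```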
| tglcourse/latent_lsun_church_256px | [
"region:us"
] | 2022-10-28T06:45:35+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9", "10": "a", "11": "b", "12": "c", "13": "d", "14": "e", "15": "f"}}}}, {"name": "latent", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "test", "num_bytes": 106824288, "num_examples": 6312}, {"name": "train", "num_bytes": 2029441460, "num_examples": 119915}], "download_size": 2082210019, "dataset_size": 2136265748}} | 2022-10-28T06:57:35+00:00 |
30044e415f19965e2435434396f050322bca523f | # Dataset Card for "uniprot_sprot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | wesleywt/uniprot_sprot | [
"region:us"
] | 2022-10-28T08:09:42+00:00 | {"dataset_info": {"features": [{"name": "uniprot_id", "dtype": "string"}, {"name": "sequences", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 21314102.893347207, "num_examples": 56801}, {"name": "train", "num_bytes": 191823924.1066528, "num_examples": 511201}], "download_size": 211969427, "dataset_size": 213138027.0}} | 2022-10-30T12:44:58+00:00 |
1a1563b4008cc29d8011a10272e286eac923d63c | siberspace/femmeaigle | [
"region:us"
] | 2022-10-28T08:16:58+00:00 | {} | 2022-10-28T08:17:35+00:00 |
|
e40d5764be1040bac56f49cea5df9d243e8d904b | # Dataset Card for "latent_afhqv2_256px"
Each image is cropped to 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion
Decoding
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# load the dataset
dataset = load_dataset('tglcourse/latent_afhqv2_256px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 32, 32)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
image = vae.decode(latent).sample[0] # Decode
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
``` | tglcourse/latent_afhqv2_256px | [
"region:us"
] | 2022-10-28T08:19:16+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "cat", "1": "dog", "2": "wild"}}}}, {"name": "latent", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 267449972, "num_examples": 15803}], "download_size": 260672854, "dataset_size": 267449972}} | 2022-10-28T10:51:36+00:00 |
39c63d396a8b291a2387b8499c84e7a3c4f3f451 |
# Dataset Card for [naacl2022]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a named entity recognition dataset annotated for the science entity recognition task, a [project](https://github.com/neubig/nlp-from-scratch-assignment-2022) from the CMU 11-711 course.
### Supported Tasks and Leaderboards
NER task.
### Languages
English
## Dataset Structure
### Data Instances
A sample of the dataset
{'id': '0',
'tokens': ['We', 'sample', '50', 'negative', 'cases', 'from', 'T5LARGE', '+', 'GenMC', 'for', 'each', 'dataset'],
'ner_tags':['O', 'O', 'O', 'O', 'O', 'O', 'B-MethodName', 'O', 'B-MethodName', 'O', 'O', 'O']}
### Data Fields
id,tokens,ner_tags
- `id`: a `string` feature give the sample index.
- `tokens`: a `list` of `string` features give the sequence.
- `ner_tags`: a `list` of classification labels for each token in the sentence, with possible values including
`O` (0), `B-MethodName` (1), `I-MethodName` (2), `B-HyperparameterName` (3),`I-HyperparameterName` (4),`B-HyperparameterValue` (5),`I-HyperparameterValue` (6),`B-MetricName` (7),`I-MetricName` (8),`B-MetricValue` (9),`I-MetricValue` (10),`B-TaskName` (11),`I-TaskName` (12),`B-DatasetName` (13),`I-DatasetName` (14).
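The tags follow the standard BIO scheme, so entity spans can be recovered by grouping each `B-` tag with the contiguous `I-` tags of the same type that follow it. A minimal decoding sketch, run on the sample instance above:

```python
# Decode BIO tags into (entity_type, entity_text) pairs.
def bio_to_entities(tokens, tags):
    entities = []  # each entry: [type, start, end) over token indices
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            entities.append([tag[2:], i, i + 1])  # a new entity starts here
        elif (tag.startswith("I-") and entities
              and entities[-1][0] == tag[2:] and entities[-1][2] == i):
            entities[-1][2] = i + 1               # extend the open entity
    return [(etype, " ".join(tokens[s:e])) for etype, s, e in entities]

tokens = ['We', 'sample', '50', 'negative', 'cases', 'from', 'T5LARGE',
          '+', 'GenMC', 'for', 'each', 'dataset']
tags = ['O', 'O', 'O', 'O', 'O', 'O', 'B-MethodName',
        'O', 'B-MethodName', 'O', 'O', 'O']
print(bio_to_entities(tokens, tags))
# [('MethodName', 'T5LARGE'), ('MethodName', 'GenMC')]
```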
### Data Splits
The data is split into:
- train.txt
- dev.txt
- test.txt
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The data is annotated using Label Studio; the papers are collected from TACL and ACL 2022.
#### Who are the annotators?
Xiaoyue Cui and Haotian Teng annotated the datasets.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@xcui297](https://github.com/xcui297); [@haotianteng](https://github.com/haotianteng) for adding this dataset.
| havens2/naacl2022 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:afl-3.0",
"acl",
"sciBERT",
"sci",
"11711",
"region:us"
] | 2022-10-28T08:38:15+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["afl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "sci_NER_naacl", "tags": ["acl", "sciBERT", "sci", "acl", "11711"]} | 2022-10-28T10:37:16+00:00 |
b1496cd7a3be1e9b1d7f1301c0df7353c17cc48b | puellacurae/x | [
"license:openrail",
"doi:10.57967/hf/0067",
"region:us"
] | 2022-10-28T08:43:22+00:00 | {"license": "openrail"} | 2022-10-28T08:44:17+00:00 |
|
eac45f711beabc481045075e3066be32ed55dc8e | # Dataset Card for "latent_afhqv2_512px"
Each image is cropped to 512px square and encoded to a 4x64x64 latent representation using the same VAE as that employed by Stable Diffusion.
Decoding example:
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# load the dataset
dataset = load_dataset('tglcourse/latent_afhqv2_512px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 64, 64)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
image = vae.decode(latent).sample[0] # Decode
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
``` | tglcourse/latent_afhqv2_512px | [
"region:us"
] | 2022-10-28T09:21:26+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "cat", "1": "dog", "2": "wild"}}}}, {"name": "latent", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1052290164, "num_examples": 15803}], "download_size": 1038619876, "dataset_size": 1052290164}} | 2022-10-28T10:52:19+00:00 |
c5c8ed58a7134ad219a2ac61ed44427db1d26d23 |
# UD_Spanish-AnCora
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://github.com/UniversalDependencies/UD_Spanish-AnCora
- **Point of Contact:** [Daniel Zeman]([email protected])
### Dataset Summary
This dataset is composed of the annotations from the [AnCora corpus](http://clic.ub.edu/corpus/), projected on the [Universal Dependencies treebank](https://universaldependencies.org/). We use the POS annotations of this corpus as part of the EvalEs Spanish language benchmark.
### Supported Tasks and Leaderboards
POS tagging
### Languages
The dataset is in Spanish (`es-ES`)
## Dataset Structure
### Data Instances
Three CoNLL-U files.
Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:
1) Word lines containing the annotation of a word/token in 10 fields separated by single tab characters (see below).
2) Blank lines marking sentence boundaries.
3) Comment lines starting with hash (#).
### Data Fields
Word lines contain the following fields:
1) ID: Word index, integer starting at 1 for each new sentence; may be a range for multiword tokens; may be a decimal number for empty nodes (decimal numbers can be lower than 1 but must be greater than 0).
2) FORM: Word form or punctuation symbol.
3) LEMMA: Lemma or stem of word form.
4) UPOS: Universal part-of-speech tag.
5) XPOS: Language-specific part-of-speech tag; underscore if not available.
6) FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension; underscore if not available.
7) HEAD: Head of the current word, which is either a value of ID or zero (0).
8) DEPREL: Universal dependency relation to the HEAD (root iff HEAD = 0) or a defined language-specific subtype of one.
9) DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs.
10) MISC: Any other annotation.
From: [https://universaldependencies.org](https://universaldependencies.org/guidelines.html)
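As an illustration, a word line can be parsed into the ten fields above by splitting on tabs. This is a minimal sketch; the sample line (including its XPOS and FEATS values) is hypothetical rather than taken from the corpus:

```python
# Hedged sketch: parse one CoNLL-U word line into its 10 tab-separated fields.
CONLLU_FIELDS = [
    "ID", "FORM", "LEMMA", "UPOS", "XPOS",
    "FEATS", "HEAD", "DEPREL", "DEPS", "MISC",
]

def parse_word_line(line):
    """Map each CoNLL-U field name to its value for a single word line."""
    values = line.rstrip("\n").split("\t")
    assert len(values) == 10, "a word line must have exactly 10 fields"
    return dict(zip(CONLLU_FIELDS, values))

# Hypothetical word line for the Spanish article "El":
sample = ("1\tEl\tel\tDET\tda0ms0\t"
          "Definite=Def|Gender=Masc|Number=Sing|PronType=Art\t"
          "2\tdet\t_\t_")
token = parse_word_line(sample)
print(token["FORM"], token["UPOS"], token["HEAD"])  # El DET 2
```

Blank lines and `#` comment lines would be handled by the caller when iterating over a full file.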
### Data Splits
- es_ancora-ud-train.conllu
- es_ancora-ud-dev.conllu
- es_ancora-ud-test.conllu
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
[UD_Spanish-AnCora](https://github.com/UniversalDependencies/UD_Spanish-AnCora)
#### Initial Data Collection and Normalization
The original annotation was done in a constituency framework as a part of the [AnCora project](http://clic.ub.edu/corpus/) at the University of Barcelona. It was converted to dependencies by the [Universal Dependencies team](https://universaldependencies.org/) and used in the CoNLL 2009 shared task. The CoNLL 2009 version was later converted to HamleDT and to Universal Dependencies.
For more information on the AnCora project, visit the [AnCora site](http://clic.ub.edu/corpus/).
To learn about Universal Dependencies, visit the webpage [https://universaldependencies.org](https://universaldependencies.org)
#### Who are the source language producers?
For more information on the AnCora corpus and its sources, visit the [AnCora site](http://clic.ub.edu/corpus/).
### Annotations
#### Annotation process
For more information on the first AnCora annotation, visit the [AnCora site](http://clic.ub.edu/corpus/).
#### Who are the annotators?
For more information on the AnCora annotation team, visit the [AnCora site](http://clic.ub.edu/corpus/).
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
[N/A]
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">CC Attribution 4.0 International License</a>.
### Citation Information
The following paper must be cited when using this corpus:
Taulé, M., M.A. Martí, M. Recasens (2008) 'Ancora: Multilevel Annotated Corpora for Catalan and Spanish', Proceedings of 6th International Conference on Language Resources and Evaluation. Marrakesh (Morocco).
To cite the Universal Dependencies project:
Rueter, J. (Creator), Erina, O. (Contributor), Klementeva, J. (Contributor), Ryabov, I. (Contributor), Tyers, F. M. (Contributor), Zeman, D. (Contributor), Nivre, J. (Creator) (15 Nov 2020). Universal Dependencies version 2.7 Erzya JR. Universal Dependencies Consortium.
### Contributions
[N/A]
| PlanTL-GOB-ES/UD_Spanish-AnCora | [
"task_categories:token-classification",
"task_ids:part-of-speech",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2022-10-28T09:30:03+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["es"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["token-classification"], "task_ids": ["part-of-speech"], "pretty_name": "UD_Spanish-AnCora", "tags": []} | 2022-11-17T12:07:35+00:00 |
51b89189df9e9a8f048f53c0e354767fd6a500f6 |
# CoNLL-NERC-es
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://www.cs.upc.edu/~nlp/tools/nerc/nerc.html
- **Point of Contact:** [Xavier Carreras]([email protected])
### Dataset Summary
CoNLL-NERC is the Spanish dataset of the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf). The dataset is annotated with four types of named entities --persons, locations, organizations, and other miscellaneous entities-- formatted in the standard Beginning-Inside-Outside (BIO) format. The corpus consists of 8,324 train sentences with 19,400 named entities, 1,916 development sentences with 4,568 named entities, and 1,518 test sentences with 3,644 named entities.
We use this corpus as part of the EvalEs Spanish language benchmark.
### Supported Tasks and Leaderboards
Named Entity Recognition and Classification
### Languages
The dataset is in Spanish (`es-ES`)
## Dataset Structure
### Data Instances
<pre>
El DA O
Abogado NC B-PER
General AQ I-PER
del SP I-PER
Estado NC I-PER
, Fc O
Daryl VMI B-PER
Williams NC I-PER
, Fc O
subrayó VMI O
hoy RG O
la DA O
necesidad NC O
de SP O
tomar VMN O
medidas NC O
para SP O
proteger VMN O
al SP O
sistema NC O
judicial AQ O
australiano AQ O
frente RG O
a SP O
una DI O
página NC O
de SP O
internet NC O
que PR O
imposibilita VMI O
el DA O
cumplimiento NC O
de SP O
los DA O
principios NC O
básicos AQ O
de SP O
la DA O
Ley NC B-MISC
. Fp O
</pre>
### Data Fields
Every file has two columns, with the word form or punctuation symbol in the first one and the corresponding IOB tag in the second one. The different files are separated by an empty line.
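For illustration, sentences in this layout can be recovered by splitting on blank lines and keeping the first and last columns of each word line (the sample above also shows a POS column between them). A minimal sketch over a hypothetical toy input:

```python
# Hedged sketch: group "token ... TAG" lines into sentences at blank lines.
def read_bio_sentences(lines):
    """Yield sentences as lists of (token, BIO-tag) pairs."""
    sentence = []
    for line in lines:
        parts = line.split()
        if not parts:            # blank line marks a sentence boundary
            if sentence:
                yield sentence
                sentence = []
        else:                    # first column: token, last column: BIO tag
            sentence.append((parts[0], parts[-1]))
    if sentence:                 # flush a trailing sentence
        yield sentence

sample = ["El DA O", "Abogado NC B-PER", "", "Hoy RG O"]
sentences = list(read_bio_sentences(sample))
print(sentences[0])  # [('El', 'O'), ('Abogado', 'B-PER')]
```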
### Data Splits
- esp.train: 273037 lines
- esp.testa: 54837 lines (used as dev)
- esp.testb: 53049 lines (used as test)
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
The data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.
#### Initial Data Collection and Normalization
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
#### Who are the source language producers?
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
### Annotations
#### Annotation process
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
#### Who are the annotators?
The annotation was carried out by the TALP Research Center of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC) of the University of Barcelona (UB), and funded by the European Commission through the NAMIC project (IST-1999-12392).
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
The following paper must be cited when using this corpus:
Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).
### Contributions
[N/A]
| PlanTL-GOB-ES/CoNLL-NERC-es | [
"task_categories:token-classification",
"task_ids:part-of-speech",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"language:es",
"region:us"
] | 2022-10-28T09:42:01+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["es"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["token-classification"], "task_ids": ["part-of-speech"], "pretty_name": "CoNLL-NERC-es", "tags": []} | 2022-11-18T11:55:41+00:00 |
8ebccbfbb024e9f07a36c44ca2ddea0165d2c261 | # Dataset Card for "latent_lsun_church_128px"
Each image is cropped to 128px square and encoded to a 4x16x16 latent representation using the same VAE as that employed by Stable Diffusion.
Decoding example:
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# load the dataset
dataset = load_dataset('tglcourse/latent_lsun_church_128px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 16, 16)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
image = vae.decode(latent).sample[0] # Decode
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
``` | tglcourse/latent_lsun_church_128px | [
"region:us"
] | 2022-10-28T09:48:21+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9", "10": "a", "11": "b", "12": "c", "13": "d", "14": "e", "15": "f"}}}}, {"name": "latent", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "test", "num_bytes": 27646560, "num_examples": 6312}, {"name": "train", "num_bytes": 525227700, "num_examples": 119915}], "download_size": 527167710, "dataset_size": 552874260}} | 2022-10-28T10:50:20+00:00 |
7d65bbde119c5f1fb64d27bdc8aafbcd65fd37dc | Dremy/test | [
"license:openrail",
"region:us"
] | 2022-10-28T09:50:29+00:00 | {"license": "openrail"} | 2022-10-28T09:50:29+00:00 |
|
7101673114b7f0b3f6dd1d57e9e480ba0cedee5a | web2write/kicowrite | [
"license:cc-by-4.0",
"region:us"
] | 2022-10-28T09:51:57+00:00 | {"license": "cc-by-4.0"} | 2022-10-28T10:00:26+00:00 |
|
307d2b5f10d43d92df35bc38dd08d6b2551e85f2 | # AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages
- [GitHub Repository of the Paper](https://github.com/bonaventuredossou/MLM_AL)
This repository contains the dataset for our paper [`AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages`](https://arxiv.org/pdf/2211.03263.pdf) which will appear at the third Simple and Efficient Natural Language Processing, at EMNLP 2022.
## Our self-active learning framework

## Languages Covered
AfroLM has been pretrained from scratch on 23 African Languages: Amharic, Afan Oromo, Bambara, Ghomalá, Éwé, Fon, Hausa, Ìgbò, Kinyarwanda, Lingala, Luganda, Luo, Mooré, Chewa, Naija, Shona, Swahili, Setswana, Twi, Wolof, Xhosa, Yorùbá, and Zulu.
## Evaluation Results
AfroLM was evaluated on MasakhaNER1.0 (10 African Languages) and MasakhaNER2.0 (21 African Languages) datasets; on text classification and sentiment analysis. AfroLM outperformed AfriBERTa, mBERT, and XLMR-base, and was very competitive with AfroXLMR. AfroLM is also very data efficient because it was pretrained on a dataset 14x+ smaller than its competitors' datasets. Below the average F1-score performances of various models, across various datasets. Please consult our paper for more language-level performance.
Model | MasakhaNER | MasakhaNER2.0* | Text Classification (Yoruba/Hausa) | Sentiment Analysis (YOSM) | OOD Sentiment Analysis (Twitter -> YOSM) |
|:---: |:---: |:---: | :---: |:---: | :---: |
`AfroLM-Large` | **80.13** | **83.26** | **82.90/91.00** | **85.40** | **68.70** |
`AfriBERTa` | 79.10 | 81.31 | 83.22/90.86 | 82.70 | 65.90 |
`mBERT` | 71.55 | 80.68 | --- | --- | --- |
`XLMR-base` | 79.16 | 83.09 | --- | --- | --- |
`AfroXLMR-base` | `81.90` | `84.55` | --- | --- | --- |
- (*) The evaluation was made on the 11 additional languages of the dataset.
- Bold numbers represent the performance of the model with the **smallest pretrained data**.
## Pretrained Models and Dataset
**Models:**: [AfroLM-Large](https://huggingface.co/bonadossou/afrolm_active_learning) and **Dataset**: [AfroLM Dataset](https://huggingface.co/datasets/bonadossou/afrolm_active_learning_dataset)
## HuggingFace usage of AfroLM-large
```python
from transformers import XLMRobertaModel, XLMRobertaTokenizer
model = XLMRobertaModel.from_pretrained("bonadossou/afrolm_active_learning")
tokenizer = XLMRobertaTokenizer.from_pretrained("bonadossou/afrolm_active_learning")
tokenizer.model_max_length = 256
```
The `AutoTokenizer` class does not load our tokenizer successfully, so we recommend using the `XLMRobertaTokenizer` class directly. Depending on your task, load the corresponding mode of the model. Read the [XLMRoberta Documentation](https://huggingface.co/docs/transformers/model_doc/xlm-roberta).
## Reproducing our result: Training and Evaluation
- To train the network, run `python active_learning.py`. You can also wrap it around a `bash` script.
- For the evaluation:
- NER Classification: `bash ner_experiments.sh`
- Text Classification & Sentiment Analysis: `bash text_classification_all.sh`
## Citation
```
@inproceedings{dossou-etal-2022-afrolm,
    title = "{A}fro{LM}: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 {A}frican Languages",
    author = "Dossou, Bonaventure F. P. and
      Tonja, Atnafu Lambebo and
      Yousuf, Oreen and
      Osei, Salomey and
      Oppong, Abigail and
      Shode, Iyanuoluwa and
      Awoyomi, Oluwabusayo Olufunke and
      Emezue, Chris",
    booktitle = "Proceedings of The Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sustainlp-1.11",
    pages = "52--64",
}
```
## Reach out
Do you have a question? Please create an issue and we will reach out as soon as possible. | bonadossou/afrolm_active_learning_dataset | [
"task_categories:fill-mask",
"task_ids:masked-language-modeling",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:amh",
"language:orm",
"language:lin",
"language:hau",
"language:ibo",
"language:kin",
"language:lug",
"language:luo",
"language:pcm",
"language:swa",
"language:wol",
"language:yor",
"language:bam",
"language:bbj",
"language:ewe",
"language:fon",
"language:mos",
"language:nya",
"language:sna",
"language:tsn",
"language:twi",
"language:xho",
"language:zul",
"license:cc-by-4.0",
"afrolm",
"active learning",
"language modeling",
"research papers",
"natural language processing",
"self-active learning",
"arxiv:2211.03263",
"region:us"
] | 2022-10-28T10:07:51+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["amh", "orm", "lin", "hau", "ibo", "kin", "lug", "luo", "pcm", "swa", "wol", "yor", "bam", "bbj", "ewe", "fon", "mos", "nya", "sna", "tsn", "twi", "xho", "zul"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["fill-mask"], "task_ids": ["masked-language-modeling"], "pretty_name": "afrolm-dataset", "tags": ["afrolm", "active learning", "language modeling", "research papers", "natural language processing", "self-active learning"]} | 2023-03-29T17:10:21+00:00 |
e986b088ae469d2ba32caba321dbf911902ec8b7 |
# MLDoc
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://github.com/facebookresearch/MLDoc
### Dataset Summary
For document classification, we use the Multilingual Document Classification Corpus (MLDoc) [(Schwenk and Li, 2018)](http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf), a cross-lingual document classification dataset covering 8 languages. We use the Spanish portion to evaluate our models on monolingual classification as part of the EvalEs Spanish language benchmark. The corpus consists of 14,458 news articles from Reuters classified in four categories: Corporate/Industrial, Economics, Government/Social and Markets.
This dataset can't be downloaded straight from HuggingFace as it requires signing specific agreements. The detailed instructions on how to download it can be found in this [repository](https://github.com/facebookresearch/MLDoc).
### Supported Tasks and Leaderboards
Text Classification
### Languages
The dataset is in English, German, French, Spanish, Italian, Russian, Japanese and Chinese.
## Dataset Structure
### Data Instances
<pre>
MCAT b' FRANCFORT, 17 feb (Reuter) - La Bolsa de Francfort abri\xc3\xb3 la sesi\xc3\xb3n de corros con baja por la ca\xc3\xadda del viernes en Wall Street y una toma de beneficios. El d\xc3\xb3lar ayudaba a apuntalar al mercado, que pronto podr\xc3\xada reanudar su tendencia alcista. Volkswagen bajaba por los da\xc3\xb1os ocasionados por la huelga de camioneros en Espa\xc3\xb1a. Preussag participaba en un joint venture de exploraci\xc3\xb3n petrol\xc3\xadfera en Filipinas con Atlantic Richfield Co. A las 0951 GMT, el Dax 30 bajaba 10,49 puntos, un 0,32 pct, a 3.237,69 tras abrir a un m\xc3\xa1ximo de 3.237,69. (c) Reuters Limited 1997. '
</pre>
### Data Fields
- Label: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets)
- Text
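Since each line pairs a label with the article text via a tab, the files can be read by splitting each line on the first tab only. A minimal sketch with hypothetical sample rows (not taken from the corpus):

```python
# Hedged sketch: read MLDoc-style "label<TAB>text" lines into (label, text) pairs.
def read_mldoc_tsv(lines):
    """Split on the first tab only, since the news text may contain whitespace."""
    pairs = []
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            continue
        label, text = line.split("\t", 1)
        pairs.append((label, text))
    return pairs

# Hypothetical rows, not taken from the corpus:
sample = ["MCAT\tLa Bolsa de Francfort abrio la sesion con baja ...\n",
          "ECAT\tEl PIB crecio un 3 pct en el trimestre ...\n"]
rows = read_mldoc_tsv(sample)
print(rows[0][0], len(rows))  # MCAT 2
```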
### Data Splits
- train.tsv: 9,458 lines
- valid.tsv: 1,000 lines
- test.tsv: 4,000 lines
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
The source data is from the Reuters Corpus. In 2000, Reuters Ltd made available a large collection of Reuters News stories for use in research and development of natural language processing, information retrieval, and machine learning systems. This corpus, known as "Reuters Corpus, Volume 1" or RCV1, is significantly larger than the older, well-known Reuters-21578 collection heavily used in the text classification community.
For more information visit the paper [(Lewis et al., 2004)](https://www.jmlr.org/papers/volume5/lewis04a/lewis04a.pdf).
#### Initial Data Collection and Normalization
For more information visit the paper [(Lewis et al., 2004)](https://www.jmlr.org/papers/volume5/lewis04a/lewis04a.pdf).
#### Who are the source language producers?
For more information visit the paper [(Lewis et al., 2004)](https://www.jmlr.org/papers/volume5/lewis04a/lewis04a.pdf).
### Annotations
#### Annotation process
For more information visit the paper [(Schwenk and Li, 2018; Lewis et al., 2004)](http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf).
#### Who are the annotators?
For more information visit the paper [(Schwenk and Li, 2018; Lewis et al., 2004)](http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf).
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
[N/A]
### Licensing Information
Access to the actual news stories of the Reuters Corpus (both RCV1 and RCV2) requires a NIST agreement. The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
- Organizational agreement: This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
- Individual agreement: This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.
For more information about the agreement see [here](https://trec.nist.gov/data/reuters/reuters.html)
### Citation Information
The following paper must be cited when using this corpus:
```
@InProceedings{SCHWENK18.658,
author = {Holger Schwenk and Xian Li},
title = {A Corpus for Multilingual Document Classification in Eight Languages},
booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year = {2018},
month = {may},
date = {7-12},
location = {Miyazaki, Japan},
editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis and Takenobu Tokunaga},
publisher = {European Language Resources Association (ELRA)},
address = {Paris, France},
isbn = {979-10-95546-00-9},
language = {english}
}
@inproceedings{schwenk-li-2018-corpus,
title = "A Corpus for Multilingual Document Classification in Eight Languages",
author = "Schwenk, Holger and
Li, Xian",
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://aclanthology.org/L18-1560",
}
```
| PlanTL-GOB-ES/MLDoc | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"language:es",
"license:cc-by-nc-4.0",
"region:us"
] | 2022-10-28T10:35:05+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["es"], "license": "cc-by-nc-4.0", "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "MLDoc", "tags": []} | 2022-11-03T09:24:03+00:00 |
d861d01d303d7a171b319d0e8dc01ff87ac3b2e0 | KETI-AIR/aihub_dialog_summarization | [
"license:apache-2.0",
"region:us"
] | 2022-10-28T10:40:25+00:00 | {"license": "apache-2.0"} | 2022-10-31T06:10:39+00:00 |
|
9a1c7f132e7b9066c18722c97c7dbf06b85012de | # Dataset Card for "latent_celebA_256px"
Each image is cropped to 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion.
Decoding example:
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# load the dataset
dataset = load_dataset('tglcourse/latent_celebA_256px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 32, 32)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
image = vae.decode(latent).sample[0] # Decode
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
``` | tglcourse/latent_celebA_256px | [
"region:us"
] | 2022-10-28T10:45:46+00:00 | {"dataset_info": {"features": [{"name": "latent", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 3427164684, "num_examples": 202599}], "download_size": 3338993120, "dataset_size": 3427164684}} | 2022-10-28T10:49:27+00:00 |
9f23ec8ffc93cae32ae3c203ffa6d6610bbbd6c8 |
# Dataset Card for mt_en_it
## Table of Contents
- [Dataset Card for mt_en_it](#dataset-card-for-mt-en-it)
- [Table of Contents](#table-of-contents)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
### Dataset Summary
This dataset comprises traditional Neapolitan songs from [napoligrafia](https://www.napoligrafia.it) translated into Italian.
### Languages
- italian-to-neapolitan
### Data Instances
A sample from the dataset.
```python
{
'url': "url",
'napoletano': "o, quacche ghiuorno, 'a frennesia mme piglia",
'italiano': "o, qualche giorno, la rabbia mi prende"
}
```
The text is provided without further preprocessing or tokenization.
### Data Fields
- `url`: source URL.
- `napoletano`: Neapolitan text.
- `italiano`: Italian text.
### Dataset Creation
The dataset was created by scraping [napoligrafia](https://www.napoligrafia.it) songs. | efederici/mt_nap_it | [
"task_categories:translation",
"size_categories:unknown",
"language:it",
"license:unknown",
"conditional-text-generation",
"region:us"
] | 2022-10-28T10:51:09+00:00 | {"language": ["it"], "license": ["unknown"], "size_categories": ["unknown"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "mt_nap_it", "tags": ["conditional-text-generation"]} | 2022-10-28T13:32:26+00:00 |
30b32ca54b7c38130a1bcbf0b5f534904af9971f | <h4> Disclosure </h4>
<p> While it's not perfect, I hope you are able to create some nice pictures. I am working on improvements for the next embedding, coming soon. If you have any suggestions or issues, please let me know. </p>
<h4> Usage </h4>
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add
<em style="font-weight:600">art by spectral_wind </em>
add <b>[ ]</b> around it to reduce its weight.
<h4> Included Files </h4>
<ul>
<li>6500 steps <em>Usage: art by spectral_wind-6500</em></li>
<li>10,000 steps <em>Usage: art by spectral_wind-10000</em> </li>
<li>15,000 steps <em>Usage: art by spectral_wind</em></li>
</ul>
cheers<br>
Wipeout
<h4> Example Pictures </h4>
<table>
<tbody>
<tr>
<td><img height="100%/" width="100%" src="https://i.imgur.com/BJNFbAf.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/nKig2lQ.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/ElF2xde.png"></td>
</tr>
</tbody>
</table>
<h4> prompt comparison </h4>
<a href="https://i.imgur.com/QSEM4jU.jpg" target="_blank"><img height="100%" width="100%" src="https://i.imgur.com/QSEM4jU.jpg"></a>
<h4> Licence </h4>
<p><span>This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:</span> </p>
<ol>
<li>You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content </li>
<li>The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license</li>
<li>You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
<a rel="noopener nofollow" href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">Please read the full license here</a></li>
</ol> | zZWipeoutZz/spectral_wind | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-10-28T10:52:24+00:00 | {"license": "creativeml-openrail-m"} | 2022-10-28T13:53:12+00:00 |
2c4c21cd368119bf57d5fce72dedf0f1476df226 | KETI-AIR/aihub_book_summarization | [
"license:apache-2.0",
"region:us"
] | 2022-10-28T10:53:01+00:00 | {"license": "apache-2.0"} | 2022-10-31T06:10:02+00:00 |
|
38ad02a2f5fe6817a0a4e820a8fb94ba2c8cfb3d | KETI-AIR/aihub_document_summarization | [
"license:apache-2.0",
"region:us"
] | 2022-10-28T11:41:57+00:00 | {"license": "apache-2.0"} | 2022-10-31T06:09:35+00:00 |
|
a60791cb04316a54dba05589959c132a4cdeae1d | KETI-AIR/aihub_paper_summarization | [
"license:apache-2.0",
"region:us"
] | 2022-10-28T12:23:40+00:00 | {"license": "apache-2.0"} | 2022-10-31T06:09:11+00:00 |
|
1f94d69bdf8e059de2b2163f99816d21d8efa413 | maximedb/massive_generated | [
"license:mit",
"region:us"
] | 2022-10-28T12:42:35+00:00 | {"license": "mit"} | 2022-10-28T12:43:16+00:00 |
|
422fa1b362f44da776232e5c6d79ef0e9d9d665e | # Media Dataset for IRAN Protests
Following the recent protests in Iran over [__Mahsa Amini__](https://en.wikipedia.org/wiki/Death_of_Mahsa_Amini)'s death, her name has been trending on social media such as Twitter ([#MahsaAmini](https://twitter.com/search?q=%23MahsaAmini), [#مهسا_امینی](https://twitter.com/search?q=%23%D9%85%D9%87%D8%B3%D8%A7_%D8%A7%D9%85%DB%8C%D9%86%DB%8C)).
As of October 15, 2022, there have been 300+ million tweets on Twitter, and among them are many posts that include media files such as images and videos.
It will be helpful for media companies, developers, or anyone interested in reviewing and assessing these files. Our data has been collected since September 14, 2022.
More than __3.1M records__ (including 2.5M unique images and 600 thousand videos) are available in the current dataset.
### Dataset:
1. created_at: datetime when the tweet was posted
2. md_url: URL of the media
3. md_type: media type (image or video)
4. tw_id: tweet id
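A minimal sketch of working with these four fields, for example to count how many records are images versus videos; the sample rows below are made up for illustration, not real records:

```python
# Count records per media type using the schema listed above.
# The rows are illustrative placeholders, not actual dataset entries.
from collections import Counter

def media_type_counts(rows):
    """Return a Counter over the md_type field."""
    return Counter(row["md_type"] for row in rows)

rows = [
    {"created_at": "2022-09-20T10:00:00Z", "md_url": "https://example.invalid/a.jpg", "md_type": "image", "tw_id": "1"},
    {"created_at": "2022-09-21T11:30:00Z", "md_url": "https://example.invalid/b.mp4", "md_type": "video", "tw_id": "2"},
    {"created_at": "2022-09-21T12:00:00Z", "md_url": "https://example.invalid/c.jpg", "md_type": "image", "tw_id": "3"},
]

counts = media_type_counts(rows)
print(counts["image"], counts["video"])  # -> 2 1
```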
## Disclaimer:
The dataset includes any type of media, based solely on what users have published on Twitter; the publisher of this dataset bears no responsibility for that content.
For more information about the dataset and how to download and read the media files, please refer to [GitHub](https://github.com/M-Amrollahi/Iran-protests-media). | MahdiA/Iran-protests-media | [
"license:apache-2.0",
"region:us"
] | 2022-10-28T13:08:37+00:00 | {"license": "apache-2.0"} | 2022-10-28T13:59:06+00:00 |
91ca4a0e810217bb1ac2e440805ccb3514bf2637 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-1bbcaf-1917164990 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-28T13:22:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v1"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v1", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v1", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-28T13:26:30+00:00 |
08173c5722c09727379f8ec5f538618236827272 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-1bbcaf-1917164991 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-28T13:22:44+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v1"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v1", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v1", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-28T13:28:19+00:00 |
4a74d8864b2f0617d0e7e1e09d6e294c709b339d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-1bbcaf-1917164992 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-28T13:22:46+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v1"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v1", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v1", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-28T13:50:20+00:00 |
8fd1358ea3c1bda5910793a1568e66e08e96c478 | awacke1/ChatbotMemory.csv | [
"license:mit",
"region:us"
] | 2022-10-28T13:23:05+00:00 | {"license": "mit"} | 2023-01-29T14:02:13+00:00 |
|
7e9dcff44427e84a55e4f4f44223e979ff5eac19 |
# Maccha style embedding
## Samples
<img alt="Samples" src="https://huggingface.co/datasets/DJSoft/maccha_artist_style/resolve/main/samples.jpg" style="max-height: 80vh"/>
<img alt="Comparison" src="https://huggingface.co/datasets/DJSoft/maccha_artist_style/resolve/main/steps.png" style="max-height: 80vh"/>
## About
Use this Stable Diffusion embedding to achieve the style of Matcha_ / maccha_(mochancc) [Pixiv](https://www.pixiv.net/en/users/2583663)
## Usage
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add __art by maccha-*__
Add **( :1.0)** around it to modify its weight
## Included Files
- 8000 steps Usage: **art by maccha-8000**
- 15000 steps Usage: **art by maccha-15000**
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | DJSoft/maccha_artist_style | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-10-28T14:06:19+00:00 | {"license": "creativeml-openrail-m"} | 2022-11-27T16:00:22+00:00 |
955a3de11d4490cdcc998065f1bbf956c6f7b5ad | siberspace/eric | [
"region:us"
] | 2022-10-28T14:13:49+00:00 | {} | 2022-10-28T14:14:27+00:00 |
|
8834a2b0cd1b4c82e9d6fb5c5ba80d9c2c916a13 |
# Yuki Miku 2017 embedding
## Samples
<img alt="Samples" src="https://huggingface.co/datasets/DJSoft/yuki_miku_2017_outfit/resolve/main/samples.jpg" style="max-height: 80vh"/>
<img alt="Comparison" src="https://huggingface.co/datasets/DJSoft/yuki_miku_2017_outfit/resolve/main/steps.png" style="max-height: 80vh"/>
## About
Use this Stable Diffusion embedding to achieve the Hatsune Miku Yuki Style 2017 outfit
## Usage
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add __yuki_miku_2017-*__
Add **( :1.0)** around it to modify its weight
## Included Files
- 8000 steps Usage: **yuki_miku_2017-8000**
- 10000 steps Usage: **yuki_miku_2017-10000**
- 15000 steps Usage: **yuki_miku_2017-15000**
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | DJSoft/yuki_miku_2017_outfit | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-10-28T14:43:14+00:00 | {"license": "creativeml-openrail-m"} | 2022-11-27T15:43:43+00:00 |
e4e0bfacafaf6c10eb3f6c1f862ce10f525a65e3 | nixjoe/mylover1 | [
"license:unknown",
"region:us"
] | 2022-10-28T14:57:33+00:00 | {"license": "unknown"} | 2022-10-28T14:58:32+00:00 |
|
12776b41f447dfaec6bb9fb9ad83e641a994e5ea | kkchi/123123 | [
"region:us"
] | 2022-10-28T16:49:43+00:00 | {} | 2022-10-28T17:48:57+00:00 |
|
3f8acaa1c5617254e9be52b421e7e9eafbc517d2 | Toveline/images | [
"license:unknown",
"region:us"
] | 2022-10-28T17:35:19+00:00 | {"license": "unknown"} | 2022-10-28T18:08:39+00:00 |
|
115a522e89601c99a3ee2b4f9622b8df0a19639f | # Dataset Card for "focus_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kanak8278/focus_test | [
"region:us"
] | 2022-10-28T17:42:49+00:00 | {"dataset_info": {"features": [{"name": "dialogID", "dtype": "string"}, {"name": "utterance", "dtype": "int64"}, {"name": "query", "dtype": "string"}, {"name": "hit_knowledge", "dtype": "string"}, {"name": "ground_knowledge", "dtype": "string"}, {"name": "ground_persona", "dtype": "string"}, {"name": "similarity_score", "dtype": "float64"}, {"name": "persona1", "dtype": "string"}, {"name": "persona2", "dtype": "string"}, {"name": "persona3", "dtype": "string"}, {"name": "persona4", "dtype": "string"}, {"name": "persona5", "dtype": "string"}, {"name": "persona_grounding1", "dtype": "bool"}, {"name": "persona_grounding2", "dtype": "bool"}, {"name": "persona_grounding3", "dtype": "bool"}, {"name": "persona_grounding4", "dtype": "bool"}, {"name": "persona_grounding5", "dtype": "bool"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6713468, "num_examples": 9035}], "download_size": 2783764, "dataset_size": 6713468}} | 2022-10-28T17:42:53+00:00 |
84f973e948620e38b0c7e9fa880c20ab0eeede0a |
# Dataset Card for GLUE
## Table of Contents
- [Dataset Card for GLUE](#dataset-card-for-glue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [ax](#ax)
- [cola](#cola)
- [mnli](#mnli)
- [mnli_matched](#mnli_matched)
- [mnli_mismatched](#mnli_mismatched)
- [mrpc](#mrpc)
- [qnli](#qnli)
- [qqp](#qqp)
- [rte](#rte)
- [sst2](#sst2)
- [stsb](#stsb)
- [wnli](#wnli)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [ax](#ax-1)
- [cola](#cola-1)
- [mnli](#mnli-1)
- [mnli_matched](#mnli_matched-1)
- [mnli_mismatched](#mnli_mismatched-1)
- [mrpc](#mrpc-1)
- [qnli](#qnli-1)
- [qqp](#qqp-1)
- [rte](#rte-1)
- [sst2](#sst2-1)
- [stsb](#stsb-1)
- [wnli](#wnli-1)
- [Data Fields](#data-fields)
- [ax](#ax-2)
- [cola](#cola-2)
- [mnli](#mnli-2)
- [mnli_matched](#mnli_matched-2)
- [mnli_mismatched](#mnli_mismatched-2)
- [mrpc](#mrpc-2)
- [qnli](#qnli-2)
- [qqp](#qqp-2)
- [rte](#rte-2)
- [sst2](#sst2-2)
- [stsb](#stsb-2)
- [wnli](#wnli-2)
- [Data Splits](#data-splits)
- [ax](#ax-3)
- [cola](#cola-3)
- [mnli](#mnli-3)
- [mnli_matched](#mnli_matched-3)
- [mnli_mismatched](#mnli_mismatched-3)
- [mrpc](#mrpc-3)
- [qnli](#qnli-3)
- [qqp](#qqp-3)
- [rte](#rte-3)
- [sst2](#sst2-3)
- [stsb](#stsb-3)
- [wnli](#wnli-3)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 955.33 MB
- **Size of the generated dataset:** 229.68 MB
- **Total amount of disk used:** 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/) is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) section. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
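The pairing step described above can be sketched roughly as follows; the whitespace tokenizer and overlap threshold here are simplifications for illustration, not the benchmark's actual filtering rule:

```python
# Sketch of QNLI-style pair construction: one (question, sentence) pair per
# context sentence, dropping pairs with little lexical overlap. Naive
# tokenization by whitespace; not the benchmark's real implementation.
def make_qnli_pairs(question, context, min_overlap=2):
    q_tokens = set(question.lower().rstrip("?").split())
    pairs = []
    for sentence in context.split(". "):
        s_tokens = set(sentence.lower().rstrip(".").split())
        if len(q_tokens & s_tokens) >= min_overlap:
            pairs.append((question, sentence))
    return pairs

pairs = make_qnli_pairs(
    "Where was the treaty signed?",
    "The treaty was signed in Paris. It ended the war.",
)
print(len(pairs))  # -> 1; only the first sentence shares enough tokens with the question
```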
#### qqp
The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
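The pronoun-substitution construction described above can be sketched as follows; the sentence and candidate referents are illustrative, not drawn from the corpus, and whole phrases are substituted to avoid touching "it" inside other words:

```python
# Sketch of the Winograd -> NLI conversion: substitute each candidate
# referent for the ambiguous pronoun phrase to build hypothesis sentences.
# The example sentence and candidates are made up for illustration.
def make_wnli_pairs(sentence, pronoun_span, referent_spans):
    """Return (premise, hypothesis) pairs, one per candidate referent."""
    return [(sentence, sentence.replace(pronoun_span, span, 1)) for span in referent_spans]

pairs = make_wnli_pairs(
    "The trophy doesn't fit in the suitcase because it is too big.",
    "it is",
    ["the trophy is", "the suitcase is"],
)
for _premise, hypothesis in pairs:
    print(hypothesis)
```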
### Languages
The language data in GLUE is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
#### ax
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.44 MB
An example of 'test' looks as follows.
```
{
"premise": "The cat sat on the mat.",
"hypothesis": "The cat did not sit on the mat.",
"label": -1,
  "idx": 0
}
```
#### cola
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB
An example of 'train' looks as follows.
```
{
"sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
"label": 1,
  "idx": 0
}
```
#### mnli
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 78.65 MB
- **Total amount of disk used:** 376.95 MB
An example of 'train' looks as follows.
```
{
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"hypothesis": "Product and geography are what make cream skimming work.",
"label": 1,
"idx": 0
}
```
#### mnli_matched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.52 MB
- **Total amount of disk used:** 301.82 MB
An example of 'test' looks as follows.
```
{
"premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
"hypothesis": "Hierbas is a name worth looking out for.",
"label": -1,
"idx": 0
}
```
#### mnli_mismatched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.73 MB
- **Total amount of disk used:** 302.02 MB
An example of 'test' looks as follows.
```
{
"premise": "What have you decided, what are you going to do?",
  "hypothesis": "So what's your decision?",
"label": -1,
"idx": 0
}
```
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
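A small helper for decoding these integer labels; the `-1` seen in the ax test sample above marks unlabeled (hidden-label) test examples:

```python
# Map the ax label ids listed above to class names; -1 (unlabeled test
# examples) falls through to None.
LABEL_NAMES = {0: "entailment", 1: "neutral", 2: "contradiction"}

def decode_label(label_id):
    """Return the class name, or None for unlabeled (-1) examples."""
    return LABEL_NAMES.get(label_id)

print(decode_label(2))   # contradiction
print(decode_label(-1))  # None
```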
#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: a `int32` feature.
#### mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Splits
#### ax
| |test|
|---|---:|
|ax |1104|
#### cola
| |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551| 1043|1063|
#### mnli
| |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702| 9815| 9832| 9796| 9847|
#### mnli_matched
| |validation|test|
|------------|---------:|---:|
|mnli_matched| 9815|9796|
#### mnli_mismatched
| |validation|test|
|---------------|---------:|---:|
|mnli_mismatched| 9832|9847|
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{warstadt2018neural,
title={Neural Network Acceptability Judgments},
author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
journal={arXiv preprint arXiv:1805.12471},
year={2018}
}
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
Note that each GLUE dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
```
### Contributions
Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. | severo/glue | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:sentiment-classification",
"task_ids:text-scoring",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"qa-nli",
"coreference-nli",
"paraphrase-identification",
"region:us"
] | 2022-10-28T20:00:14+00:00 | {"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["acceptability-classification", "natural-language-inference", "semantic-similarity-scoring", "sentiment-classification", "text-scoring"], "paperswithcode_id": "glue", "pretty_name": "GLUE (General Language Understanding Evaluation benchmark)", "configs": ["ax", "cola", "mnli", "mnli_matched", "mnli_mismatched", "mrpc", "qnli", "qqp", "rte", "sst2", "stsb", "wnli"], "tags": ["qa-nli", "coreference-nli", "paraphrase-identification"], "train-eval-index": [{"config": "cola", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence": "text", "label": "target"}}, {"config": "sst2", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence": "text", "label": "target"}}, {"config": "mrpc", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "qqp", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question1": "text1", "question2": "text2", "label": "target"}}, {"config": "stsb", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "mnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": 
{"train_split": "train", "eval_split": "validation_matched"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "mnli_mismatched", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "mnli_matched", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "qnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question": "text1", "sentence": "text2", "label": "target"}}, {"config": "rte", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "wnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}]} | 2022-10-28T15:35:04+00:00 |
8452a611917f774ceea2280390804e6d9c80eee5 | diversoailab/standard_humaneval | [
"task_categories:text-generation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:<1K",
"language:code",
"license:mit",
"region:us"
] | 2022-10-29T00:34:30+00:00 | {"annotations_creators": [], "language_creators": ["expert-generated"], "language": ["code"], "license": "mit", "multilinguality": ["monolingual"], "size_categories": ["<1K"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": [], "pretty_name": "The-Stack"} | 2022-12-12T10:30:24+00:00 |
|
d1113c43f763980d981d085de5f414342d0f15b3 | hamza50/testimg | [
"license:wtfpl",
"region:us"
] | 2022-10-29T04:09:29+00:00 | {"license": "wtfpl"} | 2022-10-29T04:09:29+00:00 |
|
8c8403a9c0cb6a7c50d305d661bb06f8f1eac2d5 | # Dataset Card for "Romance-cleaned-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MarkGG/Romance-cleaned-3 | [
"region:us"
] | 2022-10-29T05:03:19+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3369959.5092553934, "num_examples": 6466}, {"name": "validation", "num_bytes": 374729.4907446068, "num_examples": 719}], "download_size": 2300275, "dataset_size": 3744689.0}} | 2022-10-29T05:03:39+00:00 |
8b2557a673e0e0d687c1484a7e197d3f8c43c699 |
# Dataset Card for Pokémon BLIP captions with English and Japanese.
Dataset used to train a Pokémon text-to-image model; it adds a Japanese column to [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions)
BLIP-generated captions for Pokémon images from the Few Shot Pokémon dataset introduced in "Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis" (FastGAN). Original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.
For each row the dataset contains `image`, `en_text` (caption in English) and `ja_text` (caption in Japanese) keys. `image` is a varying-size PIL jpeg, and each text field is the accompanying caption. Only a train split is provided.
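As a quick illustration of the per-row structure described above (the caption strings below are invented placeholders, not actual dataset rows, and the commented `load_dataset` call is an assumption based on the standard `datasets` API):

```python
# Hypothetical sketch of one row; the field names follow the card,
# the values are placeholders. With the `datasets` library this would
# typically be loaded as:
#   from datasets import load_dataset
#   ds = load_dataset("svjack/pokemon-blip-captions-en-ja", split="train")

row = {
    "image": "<varying-size PIL jpeg>",  # placeholder for the PIL image
    "en_text": "a drawing of a green pokemon with red eyes",
    "ja_text": "赤い目をした緑のポケモンの絵",
}

# Every row carries exactly these three keys.
assert set(row) == {"image", "en_text", "ja_text"}
```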
The Japanese captions were translated with [DeepL](https://www.deepl.com/translator) | svjack/pokemon-blip-captions-en-ja | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:huggan/few-shot-pokemon",
"language:en",
"language:ja",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-10-29T06:26:57+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en", "ja"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["multilingual"], "size_categories": ["n<1K"], "source_datasets": ["huggan/few-shot-pokemon"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Pok\u00e9mon BLIP captions", "tags": []} | 2022-10-31T06:22:04+00:00 |
8179514e5c561b29e6d9c28548e10b6ead1856a2 | kejian/codeparrot-train-more-filter-3.3b-cleaned | [
"task_categories:text-classification",
"annotations_creators:machine-generated",
"size_categories:100K<n<1M",
"source_datasets:codeparrot",
"license:mit",
"pretraining-with-human-feedback",
"pep8",
"python",
"codeparrot",
"region:us"
] | 2022-10-29T07:00:31+00:00 | {"annotations_creators": ["machine-generated"], "license": "mit", "size_categories": ["100K<n<1M"], "source_datasets": ["codeparrot"], "task_categories": ["text-classification"], "tags": ["pretraining-with-human-feedback", "pep8", "python", "codeparrot"]} | 2023-02-21T04:40:49+00:00 |
|
473ce373f77f53101b124af68bc5d81ef8f8ef48 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: "Fashion captions"
size_categories:
- n<100K
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | duyngtr16061999/fashion_text_to_image | [
"region:us"
] | 2022-10-29T07:50:41+00:00 | {} | 2022-11-21T05:54:22+00:00 |
12d7062a7184418e6cd6c9f3cfb306683b945e73 | siberspace/eric2 | [
"region:us"
] | 2022-10-29T08:51:11+00:00 | {} | 2022-10-29T08:51:39+00:00 |
|
7f5566dbfedcb5db78e493a0bdf04b410ec769fe |
# Sam Yang Artist Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file, as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by sam_yang"```
If it is too strong just add [] around it.
Trained to 5,000 steps.
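The bracket syntax can be sketched as a tiny helper (the function name and the prompt text are illustrative, not part of the embedding itself):

```python
def deemphasize(token: str) -> str:
    """Wrap a prompt token in square brackets, which the
    AUTOMATIC1111 web UI interprets as reduced attention weight."""
    return f"[{token}]"

# A prompt that keeps the style but tones it down:
prompt = "portrait of a girl, drawn by " + deemphasize("sam_yang")
print(prompt)  # portrait of a girl, drawn by [sam_yang]
```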
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/cbtBjwH.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/r5s8bSO.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/NpGj5KU.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/eWJlaf5.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/DOJvxTJ.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/sam_yang | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-29T10:24:38+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-29T10:26:45+00:00 |
b0d014855f835843f12ca5da42f300baf8c60227 | Ceyase/audio-diffusion-touhou | [
"license:gpl-3.0",
"region:us"
] | 2022-10-29T11:05:15+00:00 | {"license": "gpl-3.0"} | 2022-10-29T11:14:50+00:00 |
|
a69385023798d1f563a7a7e9f4abd607a9df71f8 | Toveline/toveline | [
"license:unknown",
"region:us"
] | 2022-10-29T11:12:03+00:00 | {"license": "unknown"} | 2022-10-30T11:35:14+00:00 |
|
80d17c75a21f9f075690f5e142f76ee1343c7968 | Dialogue-Model-Research-Group/baike | [
"license:cc",
"region:us"
] | 2022-10-29T11:26:50+00:00 | {"license": "cc"} | 2022-11-12T16:00:22+00:00 |
|
34b9001dc31ff1fa092786fef67831c12719e37e | siberspace/katia | [
"region:us"
] | 2022-10-29T12:12:41+00:00 | {} | 2022-10-29T12:13:07+00:00 |
|
9cf098c4cfce7ab970110f983f16773087f13830 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: patrickvonplaten/bert2bert_cnn_daily_mail
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Jomon07](https://huggingface.co/Jomon07) for evaluating this model. | autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-98a820-1924665124 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-29T13:24:53+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "patrickvonplaten/bert2bert_cnn_daily_mail", "metrics": ["accuracy", "bleu"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-10-29T14:11:10+00:00 |
a94866390f154522e1f1ae2c26f3cbfc22259d13 | e3rastel/training_christie | [
"region:us"
] | 2022-10-29T14:18:54+00:00 | {} | 2022-10-29T14:19:53+00:00 |
|
a9a91c5a9021379e94d680e3ec197ed446894ecd | joell/project1 | [
"license:mit",
"region:us"
] | 2022-10-29T14:24:40+00:00 | {"license": "mit"} | 2022-10-29T14:24:40+00:00 |
|
55c6aa8cf6594b07167e47488ae303b84f4daf38 | <h4> Disclosure </h4>
<p> I hope that you are able to create some nice pictures; if you have any embedding suggestions or issues, please let me know </p>
<h4> Usage </h4>
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt, add
<em style="font-weight:600">art by insane_style</em>.
Add <b>[ ]</b> around it to reduce its weight.
<h4> Included Files </h4>
<ul>
<li>6500 steps <em>Usage: art by insane_style-6500</em></li>
<li>10,000 steps <em>Usage: art by insane_style-10000</em> </li>
<li>15,000 steps <em>Usage: art by insane_style</em></li>
</ul>
cheers<br>
Wipeout
<h4> Example Pictures </h4>
<table>
<tbody>
<tr>
<td><img height="100%" width="100%" src="https://i.imgur.com/YGROrC5.png"></td>
<td><img height="100%" width="100%" src="https://i.imgur.com/IFQRJcH.png"></td>
<td><img height="100%" width="100%" src="https://i.imgur.com/FwfXft0.png"></td>
</tr>
</tbody>
</table>
<h4> prompt comparison </h4>
<em> click the image to enlarge</em>
<a href="https://i.imgur.com/SEkzaVr.jpg" target="_blank"><img height="50%" width="50%" src="https://i.imgur.com/SEkzaVr.jpg"></a>
| zZWipeoutZz/insane_style | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-10-29T15:14:08+00:00 | {"license": "creativeml-openrail-m"} | 2022-10-29T15:31:20+00:00 |
0ebc43ff404c90551a7aea88c55a074b8ac0af51 | rajshekar2591/testing | [
"license:afl-3.0",
"region:us"
] | 2022-10-29T16:32:52+00:00 | {"license": "afl-3.0"} | 2022-10-29T16:58:19+00:00 |
|
22ffd55109e12e1b82003a93e40fee0298e985a3 |
# Dataset Card for Sketch Scene Descriptions
_Dataset used to train the [Sketch Scene text-to-image model]()_
We advance sketch research to scenes with FS-COCO, the first dataset of freehand scene sketches. With practical applications in mind, we collect sketches that convey scene content well but can be sketched within a few minutes by a person with any sketching skills. Our dataset comprises around 10,000 freehand scene vector sketches with per-point space-time information, drawn by 100 non-expert individuals, offering both object- and scene-level abstraction. Each sketch is augmented with its text description.
For each row, the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
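To make the row layout concrete, here is a hypothetical sketch (the captions are invented for illustration only; the commented `load_dataset` call assumes the standard `datasets` API):

```python
# Commented out: the usual way to load the train split.
#   from datasets import load_dataset
#   ds = load_dataset("zoheb/sketch-scene", split="train")

# Invented rows mimicking the (image, text) structure described above.
rows = [
    {"image": "<PIL jpeg>", "text": "a house beside a tree on a hill"},
    {"image": "<PIL jpeg>", "text": "two people walking a dog by a lake"},
]

# Each row pairs one freehand scene sketch with its text description.
captions = [r["text"] for r in rows]
assert all(isinstance(c, str) for c in captions)
```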
## Citation
If you use this dataset, please cite it as:
```
@inproceedings{fscoco,
title={FS-COCO: Towards Understanding of Freehand Sketches of Common Objects in Context.},
author={Chowdhury, Pinaki Nath and Sain, Aneeshan and Bhunia, Ayan Kumar and Xiang, Tao and Gryaditskaya, Yulia and Song, Yi-Zhe},
booktitle={ECCV},
year={2022}
}
``` | zoheb/sketch-scene | [
"task_categories:text-to-image",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<10K",
"source_datasets:FS-COCO",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-10-29T17:15:58+00:00 | {"language_creators": ["machine-generated"], "language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["n<10K"], "source_datasets": ["FS-COCO"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Sketch Scene Descriptions", "tags": []} | 2022-10-30T10:07:48+00:00 |
5c0abe70104c7e699d1834afd39232def41b0f77 | # Dataset Card for "turkishReviews-ds-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | eminecg/turkishReviews-ds-mini | [
"region:us"
] | 2022-10-29T17:16:42+00:00 | {"dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1296087.3, "num_examples": 3600}, {"name": "validation", "num_bytes": 144009.7, "num_examples": 400}], "download_size": 915922, "dataset_size": 1440097.0}} | 2022-11-07T10:03:01+00:00 |
13070b62b99dbd27502b95ef9980f2a34d32f691 | Pictures of ME! | stwhiteisme/Stwhiteisme | [
"region:us"
] | 2022-10-29T17:18:42+00:00 | {} | 2022-10-29T17:19:22+00:00 |