Columns: `sha` (string, 40 chars) · `text` (string, 0–13.4M chars) · `id` (string, 2–117 chars) · `tags` (list) · `created_at` (string, 25 chars) · `metadata` (string, 2–31.7M chars) · `last_modified` (string, 25 chars)
f0f1642c872cb3fe346c1805b06c7f72900255f7
## Dataset Description - **Homepage:** https://www.darrow.ai/ - **Repository:** https://github.com/darrow-labs/ClassActionPrediction - **Paper:** https://arxiv.org/abs/2211.00582 - **Leaderboard:** N/A - **Point of Contact:** [Gila Hayat](mailto:[email protected]), [Gil Semo](mailto:[email protected]) #### More Details & Collaborations Feel free to contact us to get a larger dataset. We would be happy to collaborate on future work. ### Dataset Summary USClassActions is an English dataset of 3K complaints from the US Federal Court, each paired with its binarized judgment outcome (Win/Lose). The dataset poses a challenging text classification task. We are happy to share this dataset to promote robustness and fairness studies in the critical area of legal NLP. The data was annotated using Darrow.ai's proprietary tool. ### Data Instances ```python from datasets import load_dataset dataset = load_dataset('darrow-ai/USClassActions') ``` ### Data Fields `id`: (**int**) a unique identifier of the document \ `target_text`: (**str**) the complaint text \ `verdict`: (**str**) the outcome of the case ### Curation Rationale The dataset was curated by Darrow.ai (2022). ### Citation Information *Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, and Joel Niklaus* *ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US* *Proceedings of the 2022 Natural Legal Language Processing Workshop. Abu Dhabi. 2022* ``` @InProceedings{Darrow-Niklaus-2022, author = {Semo, Gil and Bernsohn, Dor and Hagag, Ben and Hayat, Gila and Niklaus, Joel}, title = {ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US}, booktitle = {Proceedings of the 2022 Natural Legal Language Processing Workshop}, year = {2022}, location = {Abu Dhabi, EMNLP2022}, } ```
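Since the card describes a binary Win/Lose classification task, the `verdict` strings can be mapped to integer labels before training. A minimal sketch, assuming the label strings are exactly "Win" and "Lose" as the summary suggests (the `binarize` helper is illustrative, not part of the dataset):

```python
# Sketch: map the Win/Lose verdict strings to integer labels.
# Assumption: verdict values are exactly "Win" and "Lose"; adjust if the
# released data uses different strings.
LABEL_MAP = {"Win": 1, "Lose": 0}

def binarize(example):
    """Return an integer label for a single example's verdict."""
    return {"label": LABEL_MAP[example["verdict"]]}

# With Hugging Face Datasets this could be applied via dataset.map(binarize).
example = {"id": 1, "target_text": "...", "verdict": "Win"}
print(binarize(example))  # {'label': 1}
```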
darrow-ai/USClassActions
[ "task_categories:text-classification", "task_categories:zero-shot-classification", "language:en", "license:gpl-3.0", "legal", "legalnlp", "class action", "darrow", "arxiv:2211.00582", "region:us" ]
2022-10-24T11:00:55+00:00
{"language": ["en"], "license": "gpl-3.0", "task_categories": ["text-classification", "zero-shot-classification"], "tags": ["legal", "legalnlp", "class action", "darrow"]}
2024-01-24T10:00:39+00:00
91c9f5f11a05c71bc9a2a44628ce04d0b39d9cf0
# Dataset Card for Quasimodo ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/commonsense/quasimodo - **Repository:** https://github.com/Aunsiels/CSK - **Paper:** Romero et al., Commonsense Properties from Query Logs and Question Answering Forums, CIKM, 2019 ### Dataset Summary A commonsense knowledge base constructed automatically from question-answering forums and query logs. ### Supported Tasks and Leaderboards Can be useful for tasks requiring external knowledge such as question answering. 
### Languages English ## Dataset Structure ### Data Instances ```python { "subject": "elephant", "predicate": "has_body_part", "object": "trunk", "modality": "TBC[so long trunks] x#x2 // TBC[long trunks] x#x9 // TBC[big trunks] x#x6 // TBC[long trunk] x#x1 // TBC[such big trunks] x#x1 0 0.9999667967035647 elephants have trunks x#x34 x#xGoogle Autocomplete, Bing Autocomplete, Yahoo Questions, Answers.com Questions, Reddit Questions // a elephants have trunks x#x2 x#xGoogle Autocomplete // a elephant have a trunk x#x2 x#xGoogle Autocomplete // elephants have so long trunks x#x2 x#xGoogle Autocomplete // elephants have long trunks x#x8 x#xGoogle Autocomplete, Yahoo Questions, Answers.com Questions // elephants have big trunks x#x6 x#xGoogle Autocomplete, Answers.com Questions, Reddit Questions // elephants have trunk x#x3 x#xGoogle Autocomplete, Yahoo Questions // elephant have long trunks x#x1 x#xGoogle Autocomplete // elephant has a trunk x#x1 x#xGoogle Autocomplete // elephants have a trunk x#x2 x#xAnswers.com Questions // an elephant has a long trunk x#x1 x#xAnswers.com Questions // elephant have trunks x#x1 x#xAnswers.com Questions // elephants have such big trunks x#x1 x#xReddit Questions", "score": 0.9999667967668732, "local_sigma": 1.0 } ``` ### Data Fields - subject: The subject of the triple - predicate: The predicate of the triple - object: The object of the triple - modality: Modalities associated with the triple, with their counts. TBC means the object can be further refined to the listed objects - is_negative: 1 if the statement was negated - score: salience score of the supervised scoring model - local_sigma: strict conditional probability of observing a (predicate, object) with a specific subject, i.e., a measure of how unique a statement is. E.g., local_sigma(lawyers, defend, serial_killers) = 1 while local_sigma(lawyers, make, money) = 0.01, even though both statements have a similar score of 0.99. ## Dataset Creation See original paper.
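The `local_sigma` definition above can be illustrated with a toy computation: the conditional probability of a subject given its (predicate, object) pair. A minimal sketch with made-up counts (not the paper's actual estimator or data):

```python
from collections import Counter

# Toy occurrence counts for (subject, predicate, object) triples.
# These numbers are invented to reproduce the card's example values.
counts = Counter({
    ("lawyers", "defend", "serial_killers"): 3,
    ("lawyers", "make", "money"): 5,
    ("people", "make", "money"): 495,
})

def local_sigma(subject, predicate, obj):
    """P(subject | predicate, object): how unique this (predicate, object) is to the subject."""
    total = sum(c for (s, p, o), c in counts.items() if p == predicate and o == obj)
    return counts[(subject, predicate, obj)] / total

print(local_sigma("lawyers", "defend", "serial_killers"))  # 1.0
print(local_sigma("lawyers", "make", "money"))             # 0.01
```

The first statement is unique to lawyers (local_sigma = 1), while "make money" is shared by many subjects, so its local_sigma is tiny even if its salience score is high.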
## Additional Information ### Licensing Information CC-BY 2.0 ### Citation Information Romero et al., Commonsense Properties from Query Logs and Question Answering Forums, CIKM, 2019
Aunsiels/Quasimodo
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:100M<n<1B", "source_datasets:original", "language:en", "license:cc-by-2.0", "knowledge base", "commonsense", "region:us" ]
2022-10-24T11:01:21+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100M<n<1B"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa"], "pretty_name": "Quasimodo", "tags": ["knowledge base", "commonsense"]}
2022-10-24T11:30:23+00:00
51c3c74292f72507690e40032229fcad03f274f7
polinaeterna/image
[ "region:us" ]
2022-10-24T11:59:59+00:00
{"configs": [{"config_name": "labels", "drop_labels": false}, {"config_name": "no_labels", "drop_labels": true}]}
2023-12-01T16:48:53+00:00
326a090671e5d16285a76878114dc54704a26e4b
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: dslim/bert-large-NER * Dataset: conll2003 * Config: conll2003 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@rdecoupes](https://huggingface.co/rdecoupes) for evaluating this model.
autoevaluate/autoeval-eval-conll2003-conll2003-623e8b-1865063750
[ "autotrain", "evaluation", "region:us" ]
2022-10-24T14:01:11+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "dslim/bert-large-NER", "metrics": [], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-24T14:03:21+00:00
e7f950d67b7cae3e03abd83c243f2933e1823bb5
# Large Labelled Logo Dataset
LHF/l3d
[ "region:us" ]
2022-10-24T14:31:20+00:00
{}
2023-01-02T19:41:27+00:00
48f363dd35ced1e473e9efdf11e55046145d4ba8
This repo contains all the docs published on https://huggingface.co/docs. The docs are generated with https://github.com/huggingface/doc-builder.
hf-doc-build/doc-build
[ "license:mit", "region:us" ]
2022-10-24T14:39:05+00:00
{"license": "mit", "pretty_name": "Generated Docs for HF"}
2024-02-17T00:41:19+00:00
b00dc249a422f746fa6f3fe520e9dc1948b827f1
# Flame Surge Style Embedding / Textual Inversion ## Usage To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder. To use it in a prompt: ```"art by flame_surge_style"``` If the effect is too strong, just add [] around it. The embedding was trained for 15,000 steps. A version trained for 7,500 steps is included in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 15k-step version in your folder. Have fun :) ## Example Pictures <table> <tr> <td><img src=https://i.imgur.com/GwRM6jf.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/vueZJGB.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/GnscYKw.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/VOyrp21.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/KlpeUpB.png width=100% height=100%/></td> </tr> </table> ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/flame_surge_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "region:us" ]
2022-10-24T18:18:40+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false}
2022-10-24T18:39:09+00:00
0fe3d57b821a925081220f954b454f10ace87af8
### Dataset Contents This dataset contains the concatenated scripts from the original (and best) Star Wars trilogy. The scripts are reduced to dialogue only, and are tagged with a line number and speaker. ### Dataset Disclaimer I don't own this data, or Star Wars. But it would be cool if I did. Star Wars is owned by Lucasfilm. I do not own any of the rights to this information. The scripts are derived from a couple of sources: * This [GitHub Repo](https://github.com/gastonstat/StarWars) with raw files * A [Kaggle Dataset](https://www.kaggle.com/datasets/xvivancos/star-wars-movie-scripts) put together by whoever 'Xavier' is ### May the Force be with you
andrewkroening/Star-wars-scripts-dialogue-IV-VI
[ "license:cc", "region:us" ]
2022-10-24T18:31:55+00:00
{"license": "cc"}
2022-10-27T16:53:39+00:00
61f49d80d69c6208a9bfffb1cab4b98c9a9accf8
# Literature Dataset ## Files A dataset containing novels, epics and essays. The files are as follows: - main.txt, a file with all the texts, one text per line, all English - vocab.txt, a file with the trained (BERT) vocab, one word per line - train.csv, a file with length-129 sequences of tokens, csv of ints, containing 48,758 samples (6,289,782 tokens) - test.csv, the test split in the same way, 5,417 samples (698,793 tokens) - DatasetDistribution.png, a plot of the character lengths of the texts ## Texts The texts used are these: - Wuthering Heights - Ulysses - Treasure Island - The War of the Worlds - The Republic - The Prophet - The Prince - The Picture of Dorian Gray - The Odyssey - The Great Gatsby - The Brothers Karamazov - Second Treatise of Government - Pride and Prejudice - Peter Pan - Moby Dick - Metamorphosis - Little Women - Les Misérables - Japanese Girls and Women - Iliad - Heart of Darkness - Grimms' Fairy Tales - Great Expectations - Frankenstein - Emma - Dracula - Don Quixote - Crime and Punishment - Christmas Carol - Beyond Good and Evil - Anna Karenina - Adventures of Sherlock Holmes - Adventures of Huckleberry Finn - Adventures in Wonderland - A Tale of Two Cities - A Room with a View
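Per the file descriptions above, each row of train.csv is a sequence of 129 token IDs. A minimal sketch of reading such rows, assuming a plain comma delimiter (the inline `fake_row` stands in for a real line of train.csv):

```python
import csv
import io

# A made-up CSV fragment standing in for one line of train.csv:
# 129 comma-separated integer token IDs.
fake_row = ",".join(str(i % 30000) for i in range(129))

def read_sequences(f):
    """Parse rows of comma-separated ints into lists of token IDs."""
    return [[int(tok) for tok in row] for row in csv.reader(f)]

sequences = read_sequences(io.StringIO(fake_row))
print(len(sequences[0]))  # 129
```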
ACOSharma/literature
[ "license:cc-by-sa-4.0", "region:us" ]
2022-10-24T20:56:25+00:00
{"license": "cc-by-sa-4.0"}
2022-10-28T14:38:43+00:00
b97a2f9f26e3f520994730d5a3fa4002294dba0b
tramzel/fndds
[ "license:unknown", "region:us" ]
2022-10-24T22:06:49+00:00
{"license": "unknown"}
2022-10-24T22:14:22+00:00
66f4b74f4674267c30df8a5ed334d7e90cb59c1c
SickBoy/layout_documents
[ "license:openrail", "region:us" ]
2022-10-24T22:34:39+00:00
{"license": "openrail"}
2022-10-26T02:12:05+00:00
c404fe3052627c0d9bc1ea0b5aacab33507364d5
iejMac/CLIP-MSR-VTT
[ "license:mit", "region:us" ]
2022-10-25T00:34:03+00:00
{"license": "mit"}
2022-10-31T05:03:18+00:00
fb620fbe49fa4420e0734bd9c0df11f51176b61f
# DiffusionDB <img width="100%" src="https://user-images.githubusercontent.com/15007159/201762588-f24db2b8-dbb2-4a94-947b-7de393fc3d33.gif"> ## Table of Contents - [DiffusionDB](#diffusiondb) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Two Subsets](#two-subsets) - [Key Differences](#key-differences) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Dataset Metadata](#dataset-metadata) - [Metadata Schema](#metadata-schema) - [Data Splits](#data-splits) - [Loading Data Subsets](#loading-data-subsets) - [Method 1: Using Hugging Face Datasets Loader](#method-1-using-hugging-face-datasets-loader) - [Method 2. Use the PoloClub Downloader](#method-2-use-the-poloclub-downloader) - [Usage/Examples](#usageexamples) - [Downloading a single file](#downloading-a-single-file) - [Downloading a range of files](#downloading-a-range-of-files) - [Downloading to a specific directory](#downloading-to-a-specific-directory) - [Setting the files to unzip once they've been downloaded](#setting-the-files-to-unzip-once-theyve-been-downloaded) - [Method 3. 
Use `metadata.parquet` (Text Only)](#method-3-use-metadataparquet-text-only) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [DiffusionDB homepage](https://poloclub.github.io/diffusiondb) - **Repository:** [DiffusionDB repository](https://github.com/poloclub/diffusiondb) - **Distribution:** [DiffusionDB Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb) - **Paper:** [DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models](https://arxiv.org/abs/2210.14896) - **Point of Contact:** [Jay Wang](mailto:[email protected]) ### Dataset Summary DiffusionDB is the first large-scale text-to-image prompt dataset. It contains **14 million** images generated by Stable Diffusion using prompts and hyperparameters specified by real users. DiffusionDB is publicly available at [🤗 Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb). 
### Supported Tasks and Leaderboards The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models. ### Languages The text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian. ### Two Subsets DiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs. |Subset|Num of Images|Num of Unique Prompts|Size|Image Directory|Metadata Table| |:--|--:|--:|--:|--:|--:| |DiffusionDB 2M|2M|1.5M|1.6TB|`images/`|`metadata.parquet`| |DiffusionDB Large|14M|1.8M|6.5TB|`diffusiondb-large-part-1/` `diffusiondb-large-part-2/`|`metadata-large.parquet`| ##### Key Differences 1. The two subsets have a similar number of unique prompts, but DiffusionDB Large has many more images. DiffusionDB Large is a superset of DiffusionDB 2M. 2. Images in DiffusionDB 2M are stored in `png` format; images in DiffusionDB Large use a lossless `webp` format. ## Dataset Structure We use a modularized file structure to distribute DiffusionDB. The 2 million images in DiffusionDB 2M are split into 2,000 folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters. Similarly, the 14 million images in DiffusionDB Large are split into 14,000 folders. ```bash # DiffusionDB 2M ./ ├── images │   ├── part-000001 │   │   ├── 3bfcd9cf-26ea-4303-bbe1-b095853f5360.png │   │   ├── 5f47c66c-51d4-4f2c-a872-a68518f44adb.png │   │   ├── 66b428b9-55dc-4907-b116-55aaa887de30.png │   │   ├── [...] │   │   └── part-000001.json │   ├── part-000002 │   ├── part-000003 │   ├── [...]
│   └── part-002000 └── metadata.parquet ``` ```bash # DiffusionDB Large ./ ├── diffusiondb-large-part-1 │   ├── part-000001 │   │   ├── 0a8dc864-1616-4961-ac18-3fcdf76d3b08.webp │   │   ├── 0a25cacb-5d91-4f27-b18a-bd423762f811.webp │   │   ├── 0a52d584-4211-43a0-99ef-f5640ee2fc8c.webp │   │   ├── [...] │   │   └── part-000001.json │   ├── part-000002 │   ├── part-000003 │   ├── [...] │   └── part-010000 ├── diffusiondb-large-part-2 │   ├── part-010001 │   │   ├── 0a68f671-3776-424c-91b6-c09a0dd6fc2d.webp │   │   ├── 0a0756e9-1249-4fe2-a21a-12c43656c7a3.webp │   │   ├── 0aa48f3d-f2d9-40a8-a800-c2c651ebba06.webp │   │   ├── [...] │   │   └── part-010001.json │   ├── part-010002 │   ├── part-010003 │   ├── [...] │   └── part-014000 └── metadata-large.parquet ``` These sub-folders have names `part-0xxxxx`, and each image has a unique name generated by [UUID Version 4](https://en.wikipedia.org/wiki/Universally_unique_identifier). The JSON file in a sub-folder has the same name as the sub-folder. Each image is a `PNG` file (DiffusionDB 2M) or a lossless `WebP` file (DiffusionDB Large). The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters. ### Data Instances For example, below is the image of `f3501e05-aef7-4225-a9e9-f516527408ac.png` and its key-value pair in `part-000001.json`. <img width="300" src="https://i.imgur.com/gqWcRs2.png"> ```json { "f3501e05-aef7-4225-a9e9-f516527408ac.png": { "p": "geodesic landscape, john chamberlain, christopher balaskas, tadao ando, 4 k, ", "se": 38753269, "c": 12.0, "st": 50, "sa": "k_lms" } } ``` ### Data Fields - key: Unique image name - `p`: Prompt - `se`: Random seed - `c`: CFG Scale (guidance scale) - `st`: Steps - `sa`: Sampler ### Dataset Metadata To help you easily access prompts and other attributes of images without downloading all the Zip files, we include two metadata tables `metadata.parquet` and `metadata-large.parquet` for DiffusionDB 2M and DiffusionDB Large, respectively.
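The per-folder JSON mapping shown in the Data Instances example can be read with a few lines of Python. A minimal sketch that inlines the sample record instead of reading `images/part-000001/part-000001.json` from disk:

```python
import json

# Inline stand-in for a part-xxxxxx.json file (structure from the example above).
# In practice you would use: mapping = json.load(open('images/part-000001/part-000001.json'))
part_json = '''
{
  "f3501e05-aef7-4225-a9e9-f516527408ac.png": {
    "p": "geodesic landscape, john chamberlain, christopher balaskas, tadao ando, 4 k, ",
    "se": 38753269, "c": 12.0, "st": 50, "sa": "k_lms"
  }
}
'''

mapping = json.loads(part_json)
for filename, meta in mapping.items():
    # `p` is the prompt; `se`, `c`, `st`, `sa` are the hyperparameters listed above.
    print(filename, "->", meta["p"])
```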
The shape of `metadata.parquet` is (2000000, 13) and the shape of `metadata-large.parquet` is (14000000, 13). The two tables share the same schema, and each row represents an image. We store these tables in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table. Below are three random rows from `metadata.parquet`. | image_name | prompt | part_id | seed | step | cfg | sampler | width | height | user_name | timestamp | image_nsfw | prompt_nsfw | |:---|:---|---:|---:|---:|---:|---:|---:|---:|:---|:---|---:|---:| | 0c46f719-1679-4c64-9ba9-f181e0eae811.png | a small liquid sculpture, corvette, viscous, reflective, digital art | 1050 | 2026845913 | 50 | 7 | 8 | 512 | 512 | c2f288a2ba9df65c38386ffaaf7749106fed29311835b63d578405db9dbcafdb | 2022-08-11 09:05:00+00:00 | 0.0845108 | 0.00383462 | | a00bdeaa-14eb-4f6c-a303-97732177eae9.png | human sculpture of lanky tall alien on a romantic date at italian restaurant with smiling woman, nice restaurant, photography, bokeh | 905 | 1183522603 | 50 | 10 | 8 | 512 | 768 | df778e253e6d32168eb22279a9776b3cde107cc82da05517dd6d114724918651 | 2022-08-19 17:55:00+00:00 | 0.692934 | 0.109437 | | 6e5024ce-65ed-47f3-b296-edb2813e3c5b.png | portrait of barbaric spanish conquistador, symmetrical, by yoichi hatakenaka, studio ghibli and dan mumford | 286 | 1713292358 | 50 | 7 | 8 | 512 | 640 | 1c2e93cfb1430adbd956be9c690705fe295cbee7d9ac12de1953ce5e76d89906 | 2022-08-12 03:26:00+00:00 | 0.0773138 | 0.0249675 | #### Metadata Schema `metadata.parquet` and `metadata-large.parquet` share the same schema. |Column|Type|Description| |:---|:---|:---| |`image_name`|`string`|Image UUID filename.| |`prompt`|`string`|The text prompt used to generate this image.| |`part_id`|`uint16`|Folder ID of this image.| |`seed`|`uint32`| Random seed used to generate this image.| |`step`|`uint16`| Step count (hyperparameter).| |`cfg`|`float32`| Guidance scale (hyperparameter).| |`sampler`|`uint8`| Sampler method (hyperparameter). Mapping: `{1: "ddim", 2: "plms", 3: "k_euler", 4: "k_euler_ancestral", 5: "k_heun", 6: "k_dpm_2", 7: "k_dpm_2_ancestral", 8: "k_lms", 9: "others"}`.| |`width`|`uint16`|Image width.| |`height`|`uint16`|Image height.| |`user_name`|`string`|The SHA256 hash of the unique Discord ID of the user who generated this image. For example, the hash for `xiaohk#3146` is `e285b7ef63be99e9107cecd79b280bde602f17e0ca8363cb7a0889b67f0b5ed0`. "deleted_account" refers to users who have deleted their accounts. None means the image had been deleted before we scraped it for the second time.| |`timestamp`|`timestamp`|UTC timestamp when this image was generated. None means the image had been deleted before we scraped it for the second time. Note that the timestamp is not accurate for duplicate images that have the same prompt, hyperparameters, width, and height.| |`image_nsfw`|`float32`|Likelihood of an image being NSFW. Scores are predicted by [LAION's state-of-the-art NSFW detector](https://github.com/LAION-AI/LAION-SAFETY) (range from 0 to 1). A score of 2.0 means the image has already been flagged as NSFW and blurred by Stable Diffusion.| |`prompt_nsfw`|`float32`|Likelihood of a prompt being NSFW. Scores are predicted by the library [Detoxify](https://github.com/unitaryai/detoxify).
Each score represents the maximum of `toxicity` and `sexual_explicit` (range from 0 to 1).| > **Warning** > Although the Stable Diffusion model has an NSFW filter that automatically blurs user-generated NSFW images, this NSFW filter is not perfect—DiffusionDB still contains some NSFW images. Therefore, we compute and provide the NSFW scores for images and prompts using the state-of-the-art models. The distribution of these scores is shown below. Please decide an appropriate NSFW score threshold to filter out NSFW images before using DiffusionDB in your projects. <img src="https://i.imgur.com/1RiGAXL.png" width="100%"> ### Data Splits For DiffusionDB 2M, we split 2 million images into 2,000 folders where each folder contains 1,000 images and a JSON file. For DiffusionDB Large, we split 14 million images into 14,000 folders where each folder contains 1,000 images and a JSON file. ### Loading Data Subsets DiffusionDB is large (1.6TB or 6.5 TB)! However, with our modularized file structure, you can easily load a desirable number of images and their prompts and hyperparameters. In the [`example-loading.ipynb`](https://github.com/poloclub/diffusiondb/blob/main/notebooks/example-loading.ipynb) notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary. #### Method 1: Using Hugging Face Datasets Loader You can use the Hugging Face [`Datasets`](https://huggingface.co/docs/datasets/quickstart) library to easily load prompts and images from DiffusionDB. We pre-defined 16 DiffusionDB subsets (configurations) based on the number of instances. You can see all subsets in the [Dataset Preview](https://huggingface.co/datasets/poloclub/diffusiondb/viewer/all/train). ```python import numpy as np from datasets import load_dataset # Load the dataset with the `large_random_1k` subset dataset = load_dataset('poloclub/diffusiondb', 'large_random_1k') ``` #### Method 2. 
Use the PoloClub Downloader This repo includes a Python downloader [`download.py`](https://github.com/poloclub/diffusiondb/blob/main/scripts/download.py) that allows you to download and load DiffusionDB. You can use it from your command line. Below is an example of loading a subset of DiffusionDB. ##### Usage/Examples The script is run using command-line arguments as follows: - `-i` `--index` - File to download or lower bound of a range of files if `-r` is also set. - `-r` `--range` - Upper bound of range of files to download if `-i` is set. - `-o` `--output` - Name of custom output directory. Defaults to the current directory if not set. - `-z` `--unzip` - Unzip the file/files after downloading - `-l` `--large` - Download from Diffusion DB Large. Defaults to Diffusion DB 2M. ###### Downloading a single file The specific file to download is supplied as the number at the end of the filename on Hugging Face. The script will automatically pad the number out and generate the URL. ```bash python download.py -i 23 ``` ###### Downloading a range of files The lower and upper bounds of the range of files to download are set by the `-i` and `-r` flags, respectively. ```bash python download.py -i 1 -r 2000 ``` Note that this range will download the entire dataset. The script will ask you to confirm that you have 1.7 TB free at the download destination. ###### Downloading to a specific directory The script will default to the location of the dataset's `part` .zip files at `images/`. If you wish to move the download location, you should move these files as well or use a symbolic link. ```bash python download.py -i 1 -r 2000 -o /home/$USER/datahoarding/etc ``` Again, the script will automatically add the `/` between the directory and the file when it downloads. ###### Setting the files to unzip once they've been downloaded The script is set to unzip the files _after_ all files have downloaded, as both can be lengthy processes in certain circumstances.
```bash python download.py -i 1 -r 2000 -z ``` #### Method 3. Use `metadata.parquet` (Text Only) If your task does not require images, then you can easily access all 2 million prompts and hyperparameters in the `metadata.parquet` table. ```python from urllib.request import urlretrieve import pandas as pd # Download the parquet table table_url = 'https://huggingface.co/datasets/poloclub/diffusiondb/resolve/main/metadata.parquet' urlretrieve(table_url, 'metadata.parquet') # Read the table using Pandas metadata_df = pd.read_parquet('metadata.parquet') ``` ## Dataset Creation ### Curation Rationale Recent diffusion models have gained immense popularity by enabling high-quality and controllable image generation based on text prompts written in natural language. Since the release of these models, people from different domains have quickly applied them to create award-winning artworks, synthetic radiology images, and even hyper-realistic videos. However, generating images with desired details is difficult, as it requires users to write proper prompts specifying the exact expected results. Developing such prompts requires trial and error, and can often feel random and unprincipled. Simon Willison analogizes writing prompts to wizards learning “magical spells”: users do not understand why some prompts work, but they will add these prompts to their “spell book.” For example, to generate highly-detailed images, it has become a common practice to add special keywords such as “trending on artstation” and “unreal engine” in the prompt. Prompt engineering has become a field of study in the context of text-to-text generation, where researchers systematically investigate how to construct prompts to effectively solve different downstream tasks. As large text-to-image models are relatively new, there is a pressing need to understand how these models react to prompts, how to write effective prompts, and how to design tools to help users generate images.
To help researchers tackle these critical challenges, we create DiffusionDB, the first large-scale prompt dataset with 14 million real prompt-image pairs. ### Source Data #### Initial Data Collection and Normalization We construct DiffusionDB by scraping user-generated images on the official Stable Diffusion Discord server. We choose Stable Diffusion because it is currently the only open-source large text-to-image generative model, and all generated images have a CC0 1.0 Universal Public Domain Dedication license that waives all copyright and allows uses for any purpose. We choose the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion) because it is public, and it has strict rules against generating and sharing illegal, hateful, or NSFW (not suitable for work, such as sexual and violent content) images. The server also disallows users from writing or sharing prompts with personal information. #### Who are the source language producers? The language producers are users of the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion). ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information The authors removed the Discord usernames from the dataset. We decided to anonymize the dataset because some prompts might include sensitive information: explicitly linking them to their creators can cause harm to creators. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help develop a better understanding of large text-to-image generative models. The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
It should be noted that we collect images and their prompts from the Stable Diffusion Discord server. The Discord server has rules against users generating or sharing harmful or NSFW (not suitable for work, such as sexual and violent content) images. The Stable Diffusion model used in the server also has an NSFW filter that blurs the generated images if it detects NSFW content. However, it is still possible that some users generated harmful images that were not detected by the NSFW filter or removed by the server moderators. Therefore, DiffusionDB can potentially contain these images. To mitigate the potential harm, we provide a [Google Form](https://forms.gle/GbYaSpRNYqxCafMZ9) on the [DiffusionDB website](https://poloclub.github.io/diffusiondb/) where users can report harmful or inappropriate images and prompts. We will closely monitor this form and remove reported images and prompts from DiffusionDB.

### Discussion of Biases

The 14 million images in DiffusionDB have diverse styles and categories. However, Discord can be a biased data source. Our images come from channels where early users could use a bot to run Stable Diffusion before its public release. As these users had started using Stable Diffusion before the model was public, we hypothesize that they are AI art enthusiasts and are likely to have experience with other text-to-image generative models. Therefore, the prompting style in DiffusionDB might not represent novice users. Similarly, the prompts in DiffusionDB might not generalize to domains that require specific knowledge, such as medical images.

### Other Known Limitations

**Generalizability.** Previous research has shown that a prompt that works well on one generative model might not give the optimal result when used in other models. Therefore, different models can need users to write different prompts. For example, many Stable Diffusion prompts use commas to separate keywords, while this pattern is less common in prompts for DALL-E 2 or Midjourney.
Thus, we caution researchers that some research findings from DiffusionDB might not be generalizable to other text-to-image generative models.

## Additional Information

### Dataset Curators

DiffusionDB is created by [Jay Wang](https://zijie.wang), [Evan Montoya](https://www.linkedin.com/in/evan-montoya-b252391b4/), [David Munechika](https://www.linkedin.com/in/dmunechika/), [Alex Yang](https://alexanderyang.me), [Ben Hoover](https://www.bhoov.com), and [Polo Chau](https://faculty.cc.gatech.edu/~dchau/).

### Licensing Information

The DiffusionDB dataset is available under the [CC0 1.0 License](https://creativecommons.org/publicdomain/zero/1.0/). The Python code in this repository is available under the [MIT License](https://github.com/poloclub/diffusiondb/blob/main/LICENSE).

### Citation Information

```bibtex
@article{wangDiffusionDBLargescalePrompt2022,
  title = {{{DiffusionDB}}: {{A}} Large-Scale Prompt Gallery Dataset for Text-to-Image Generative Models},
  author = {Wang, Zijie J. and Montoya, Evan and Munechika, David and Yang, Haoyang and Hoover, Benjamin and Chau, Duen Horng},
  year = {2022},
  journal = {arXiv:2210.14896 [cs]},
  url = {https://arxiv.org/abs/2210.14896}
}
```

### Contributions

If you have any questions, feel free to [open an issue](https://github.com/poloclub/diffusiondb/issues/new) or contact [Jay Wang](https://zijie.wang).
poloclub/diffusiondb
[ "task_categories:text-to-image", "task_categories:image-to-text", "task_ids:image-captioning", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:n>1T", "source_datasets:original", "language:en", "license:cc0-1.0", "stable diffusion", "prompt engineering", "prompts", "research paper", "arxiv:2210.14896", "region:us" ]
2022-10-25T01:25:28+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["n>1T"], "source_datasets": ["original"], "task_categories": ["text-to-image", "image-to-text"], "task_ids": ["image-captioning"], "pretty_name": "DiffusionDB", "layout": "default", "title": "Home", "nav_order": 1, "has_children": false, "tags": ["stable diffusion", "prompt engineering", "prompts", "research paper"]}
2024-01-22T22:17:47+00:00
b9bf171f5074372f246208f7c42ff581dfe85e93
workitos/SD_Anime_Characters_Repository
[ "license:unknown", "region:us" ]
2022-10-25T01:26:46+00:00
{"license": "unknown"}
2022-11-11T10:20:30+00:00
316f42386810b2f6ed884e884b05cdc085821a05
erya/1111
[ "license:other", "region:us" ]
2022-10-25T01:28:35+00:00
{"license": "other"}
2022-10-25T01:28:35+00:00
37b04e9237bdfaba2f149f437f104f63a6d4f25a
# Dataset Card for "eraser_cose"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
niurl/eraser_cose
[ "region:us" ]
2022-10-25T02:21:49+00:00
{"dataset_info": {"features": [{"name": "doc_id", "dtype": "string"}, {"name": "question", "sequence": "string"}, {"name": "query", "dtype": "string"}, {"name": "evidence_span", "sequence": {"sequence": "int64"}}, {"name": "classification", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 282071, "num_examples": 1079}, {"name": "train", "num_bytes": 2316094, "num_examples": 8752}, {"name": "val", "num_bytes": 288029, "num_examples": 1086}], "download_size": 1212369, "dataset_size": 2886194}}
2022-10-25T02:22:37+00:00
2192eb5fc49e5dda28d7e3ea9aa4cd35ab00ef5b
# Dataset Card for COPA-SSE

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/a-brassard/copa-sse
- **Paper:** [COPA-SSE: Semi-Structured Explanations for Commonsense Reasoning](https://arxiv.org/abs/2201.06777)
- **Point of Contact:** [Ana Brassard](mailto:[email protected])

### Dataset Summary

![Crowdsourcing protocol](crowdsourcing_protocol.png)

COPA-SSE contains crowdsourced explanations for the [Balanced COPA](https://balanced-copa.github.io/) dataset, a variant of the [Choice of Plausible Alternatives (COPA)](https://people.ict.usc.edu/~gordon/copa.html) benchmark. The explanations are formatted as a set of triple-like commonsense statements with [ConceptNet](https://conceptnet.io/) relations but freely written concepts.

### Supported Tasks and Leaderboards

Can be used to train a model for explain+predict or predict+explain settings. Suited for both text-based and graph-based architectures.
The base task is COPA (causal QA).

### Languages

English

## Dataset Structure

### Data Instances

The validation and test sets each contain Balanced COPA samples with added explanations in `.jsonl` format. The question ids match the original questions of the Balanced COPA validation and test sets, respectively.

### Data Fields

Each entry contains:

- the original question (matching format and ids)
- `human-explanations`: a list of explanations, each containing:
  - `expl-id`: the explanation id
  - `text`: the explanation in plain text (full sentences)
  - `worker-id`: anonymized worker id (the author of the explanation)
  - `worker-avg`: the average score the author got for their explanations
  - `all-ratings`: all collected ratings for the explanation
  - `filtered-ratings`: ratings excluding those that failed the control
  - `triples`: the triple-form explanation (a list of ConceptNet-like triples)

Example entry:

```
id: 1,
asks-for: cause,
most-plausible-alternative: 1,
p: "My body cast a shadow over the grass.",
a1: "The sun was rising.",
a2: "The grass was cut.",
human-explanations: [
  {expl-id: f4d9b407-681b-4340-9be1-ac044f1c2230,
   text: "Sunrise causes casted shadows.",
   worker-id: 3a71407b-9431-49f9-b3ca-1641f7c05f3b,
   worker-avg: 3.5832864694635025,
   all-ratings: [1, 3, 3, 4, 3],
   filtered-ratings: [3, 3, 4, 3],
   filtered-avg-rating: 3.25,
   triples: [["sunrise", "Causes", "casted shadows"]]
  }, ...]
```

### Data Splits

Follows the original Balanced COPA split: 1000 dev and 500 test instances. Each instance has up to nine explanations.

## Dataset Creation

### Curation Rationale

The goal was to collect human-written explanations to supplement an existing commonsense reasoning benchmark. The triple-like format was designed to support graph-based models and increase the overall data quality, the latter being notoriously lacking in freely-written crowdsourced text.
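For graph-based use, the `triples` field of each explanation can be flattened into edge tuples. A minimal sketch, using a toy entry that mirrors the example above (not loaded from the actual dataset):

```python
# Toy entry shaped like the example above; field names follow the card.
entry = {
    "human-explanations": [
        {"triples": [["sunrise", "Causes", "casted shadows"]]},
    ]
}

# Flatten every explanation's triples into (head, relation, tail) edges,
# e.g. for building a graph input to a GNN-style model.
edges = [
    (head, relation, tail)
    for expl in entry["human-explanations"]
    for head, relation, tail in expl["triples"]
]
print(edges)  # [('sunrise', 'Causes', 'casted shadows')]
```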
### Source Data

#### Initial Data Collection and Normalization

The explanations in COPA-SSE are fully crowdsourced via the Amazon Mechanical Turk platform. Workers entered explanations by providing one or more concept-relation-concept triples. The explanations were then rated by different annotators with one- to five-star ratings. The final dataset contains explanations with a range of quality ratings. Additional collection rounds guaranteed that each sample has at least one explanation rated 3.5 stars or higher.

#### Who are the source language producers?

The original COPA questions (500 dev + 500 test) were initially hand-crafted by experts. Similarly, the additional 500 development samples in Balanced COPA were authored by a small team of NLP researchers. Finally, the added explanations and quality ratings in COPA-SSE were collected with the help of Amazon Mechanical Turk workers who passed initial qualification rounds.

### Annotations

#### Annotation process

Workers were shown a Balanced COPA question, its answer, and a short instructional text. Then, they filled in free-form text fields for head and tail concepts and selected the relation from a drop-down menu with a curated selection of ConceptNet relations. Each explanation was rated by five different workers who were shown the same question and answer with five candidate explanations.

#### Who are the annotators?

The workers were restricted to persons located in the U.S. or G.B., with a HIT approval rate of 98% or more, and 500 or more approved HITs. Their identity and further personal information are not available.

### Personal and Sensitive Information

N/A

## Considerations for Using the Data

### Social Impact of Dataset

Models trained to output explanations similar to those in COPA-SSE may not necessarily provide convincing or faithful explanations. Researchers should carefully evaluate the resulting explanations before considering any real-world applications.
### Discussion of Biases

COPA questions ask for causes or effects of everyday actions or interactions, some of them containing gendered language. Some explanations may reinforce harmful stereotypes if their reasoning is based on biased assumptions. These biases were not verified during collection.

### Other Known Limitations

The data was originally intended to consist of explanation *graphs*, i.e., hypothetical "ideal" subgraphs of a commonsense knowledge graph. While the explanations can still function as valid natural language explanations, their wording may at times be unnatural to a human and may be better suited for graph-based implementations.

## Additional Information

### Dataset Curators

This work was authored by Ana Brassard, Benjamin Heinzerling, Pride Kavumba, and Kentaro Inui. All are members of both the RIKEN AIP Natural Language Understanding Team and the Tohoku NLP Lab at Tohoku University.

### Licensing Information

COPA-SSE is released under the [MIT License](https://mit-license.org/).

### Citation Information

```
@InProceedings{copa-sse:LREC2022,
  author    = {Brassard, Ana and Heinzerling, Benjamin and Kavumba, Pride and Inui, Kentaro},
  title     = {COPA-SSE: Semi-structured Explanations for Commonsense Reasoning},
  booktitle = {Proceedings of the Language Resources and Evaluation Conference},
  month     = {June},
  year      = {2022},
  address   = {Marseille, France},
  publisher = {European Language Resources Association},
  pages     = {3994--4000},
  url       = {https://aclanthology.org/2022.lrec-1.425}
}
```

### Contributions

Thanks to [@a-brassard](https://github.com/a-brassard) for adding this dataset.
anab/copa-sse
[ "task_categories:text2text-generation", "task_categories:multiple-choice", "task_ids:explanation-generation", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:mit", "commonsense reasoning", "explanation", "graph-based reasoning", "arxiv:2201.06777", "region:us" ]
2022-10-25T06:11:33+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["text2text-generation", "multiple-choice"], "task_ids": ["explanation-generation"], "pretty_name": "Semi-structured Explanations for Commonsense Reasoning", "tags": ["commonsense reasoning", "explanation", "graph-based reasoning"]}
2022-10-26T00:53:17+00:00
d2ee25d7fb18334d410a678499a94afede8ec4f4
# FindZebra corpus

A collection of 30,658 curated articles about rare diseases gathered from GARD, GeneReviews, Genetics Home Reference, OMIM, Orphanet, and Wikipedia. Each article is referenced with a Concept Unique Identifier ([CUI](https://www.nlm.nih.gov/research/umls/new_users/online_learning/Meta_005.html)).

## Preprocessing

The raw HTML content of each article has been processed using the following code (`text` column):

```python
# Preprocessing code
import math

import html2text

parser = html2text.HTML2Text()
parser.ignore_links = True
parser.ignore_images = True
parser.ignore_tables = True
parser.ignore_emphasis = True
parser.body_width = math.inf  # disable line wrapping

article_text = parser.handle(article_html)
```
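If `html2text` is unavailable, the same basic idea (keep the text, drop the markup) can be approximated with the standard library's `html.parser`. This is only an illustrative sketch, not the preprocessing actually used for the corpus, and the sample HTML below is invented:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only the text nodes of an HTML document, ignoring all tags."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

extractor = TextExtractor()
extractor.feed("<h1>Marfan syndrome</h1><p>A disorder of <b>connective tissue</b>.</p>")
text = " ".join(chunk.strip() for chunk in extractor.chunks if chunk.strip())
print(text)  # Marfan syndrome A disorder of connective tissue .
```

Note this drops formatting entirely, whereas `html2text` produces Markdown-flavored output; it is only meant to show the tag-stripping step.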
findzebra/corpus
[ "region:us" ]
2022-10-25T07:05:58+00:00
{}
2022-10-25T08:58:33+00:00
91b1380fc7ff16a970b8b240e56c427b5638087a
# Lightning Style Embedding / Textual Inversion

## Usage

To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.

To use it in a prompt: ```"art by lightning_style"```

If it is too strong, just add [] around it.

Trained until 10000 steps.

I added a version trained for 7.5k steps in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-steps version in your folder.

Have fun :)

## Example Pictures

<table>
  <tr>
    <td><img src=https://i.imgur.com/HNHRcZg.png width=100% height=100%/></td>
    <td><img src=https://i.imgur.com/8B31Umz.png width=100% height=100%/></td>
    <td><img src=https://i.imgur.com/88sHalA.png width=100% height=100%/></td>
    <td><img src=https://i.imgur.com/WhlLomb.png width=100% height=100%/></td>
    <td><img src=https://i.imgur.com/a1Usv3u.png width=100% height=100%/></td>
  </tr>
</table>

## License

This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/lightning_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "region:us" ]
2022-10-25T08:56:21+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false}
2022-10-25T09:05:17+00:00
8552aab8a6e2bb55739fba702171fd1a4a12d181
# FindZebra Queries

A set of 248 search queries annotated with the correct diagnosis. The diagnosis is referenced with a Concept Unique Identifier ([CUI](https://www.nlm.nih.gov/research/umls/new_users/online_learning/Meta_005.html)). In a retrieval setting, the task consists of retrieving an article from the [FindZebra corpus](https://huggingface.co/datasets/findzebra/corpus) with a CUI that matches the query CUI.
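The evaluation this setup implies can be sketched in a few lines: a query counts as solved at rank k if any of the top-k retrieved articles carries the query's CUI. The CUIs and rankings below are illustrative stand-ins, not values from the actual dataset:

```python
def recall_at_k(query_cui, retrieved_cuis, k):
    """True if the correct CUI appears among the top-k retrieved articles."""
    return query_cui in retrieved_cuis[:k]

# Toy queries: each has a gold CUI and a ranked list of retrieved article CUIs.
queries = [
    {"cui": "C0002736", "retrieved": ["C0002736", "C0011849"]},  # hit at rank 1
    {"cui": "C0018553", "retrieved": ["C0011849", "C0002736"]},  # miss at rank 1
]

score = sum(recall_at_k(q["cui"], q["retrieved"], k=1) for q in queries) / len(queries)
print(score)  # 0.5
```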
findzebra/queries
[ "region:us" ]
2022-10-25T08:58:49+00:00
{}
2022-10-25T09:02:34+00:00
25700c3e831b26e4224a7c14b226e8cccdf3839f
# Dataset Card for "sv_corpora_parliament_processed"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
juanhebert/sv_corpora_parliament_processed
[ "region:us" ]
2022-10-25T09:51:07+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 292359009, "num_examples": 1892723}], "download_size": 158940474, "dataset_size": 292359009}}
2022-11-03T10:21:27+00:00
155b325de98e02bb6286fce64282d2c4c30a1b41
## Dataset Description

- **Homepage:** https://www.darrow.ai/
- **Repository:** https://github.com/darrow-labs/ClassActionPrediction
- **Paper:** https://arxiv.org/abs/2211.00582
- **Leaderboard:** N/A
- **Point of Contact:** [Gila Hayat](mailto:[email protected])

### Dataset Summary

USClassActions is an English dataset of 200 complaints from the US Federal Court with the respective binarized judgment outcome (Win/Lose). The dataset poses a challenging text classification task. We are happy to share this dataset in order to promote robustness and fairness studies on the critical area of legal NLP. The data was annotated using Darrow.ai's proprietary tool.

### Data Instances

```python
from datasets import load_dataset

dataset = load_dataset('darrow-ai/USClassActionOutcomes_ExpertsAnnotations')
```

### Data Fields

`id`: (**int**) a unique identifier of the document \
`origin_label`: (**str**) the outcome of the case \
`target_text`: (**str**) the facts of the case \
`annotator_prediction`: (**str**) the annotator's prediction of the case outcome based on the target_text \
`annotator_confidence`: (**str**) the annotator's level of confidence in their outcome prediction \

### Curation Rationale

The dataset was curated by Darrow.ai (2022).

### Citation Information

*Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, and Joel Niklaus*
*ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US*
*Proceedings of the 2022 Natural Legal Language Processing Workshop. Abu Dhabi. 2022*

```
@InProceedings{darrow-niklaus-2022-uscp,
  author    = {Semo, Gil and Bernsohn, Dor and Hagag, Ben and Hayat, Gila and Niklaus, Joel},
  title     = {ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US},
  booktitle = {Proceedings of the 2022 Natural Legal Language Processing Workshop},
  year      = {2022},
  location  = {Abu Dhabi},
}
```
darrow-ai/USClassActionOutcomes_ExpertsAnnotations
[ "license:gpl-3.0", "arxiv:2211.00582", "region:us" ]
2022-10-25T11:43:36+00:00
{"license": "gpl-3.0"}
2022-11-06T12:35:30+00:00
54b98fe3cefa0d99c15b29708e85dc6fc65bc0e1
KETI-AIR/nikl_summarization
[ "license:apache-2.0", "region:us" ]
2022-10-25T12:02:15+00:00
{"license": "apache-2.0"}
2022-10-31T06:07:43+00:00
6f2bcf9f0a73bd98dcd70443a21c67322cd04db4
vonewman/word-embeddings-dataset
[ "license:mit", "region:us" ]
2022-10-25T12:06:02+00:00
{"license": "mit"}
2022-10-25T12:07:40+00:00
c08a4a4d0a52b1d179a5dabd40ef66dbe680fd57
speedoflight/My-shapez-dataset-thing
[ "task_categories:text-classification", "size_categories:n<1K", "language:en", "license:unlicense", "game", "fun", "region:us" ]
2022-10-25T12:08:17+00:00
{"language": ["en"], "license": "unlicense", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "pretty_name": "shapez dataset", "tags": ["game", "fun"]}
2023-02-09T17:29:35+00:00
48c38c625b1fdfd2f04b8788874509ddc3aa0af1
arias048/myPictures
[ "license:other", "region:us" ]
2022-10-25T13:01:11+00:00
{"license": "other"}
2022-10-28T18:45:30+00:00
ee9af9cb8db048248c9a0665691bfc6903d09113
# Dataset Card for CLARA-MeD

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://clara-nlp.uned.es/home/med/](https://clara-nlp.uned.es/home/med/)
- **Repository:** [https://github.com/lcampillos/CLARA-MeD](https://github.com/lcampillos/CLARA-MeD)
- **Paper:** [http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6439](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6439)
- **DOI:** [https://doi.org/10.20350/digitalCSIC/14644](https://doi.org/10.20350/digitalCSIC/14644)
- **Point of Contact:** [Leonardo Campillos-Llanos]([email protected])

### Dataset Summary

A parallel corpus with a subset of 3800 sentence pairs of professional and laymen variants (149,862 tokens) as a benchmark for medical text simplification. This dataset was collected in the CLARA-MeD project, with the goal of simplifying medical texts in the Spanish language and reducing the language barrier to patients' informed decision-making.

### Supported Tasks and Leaderboards

Medical text simplification

### Languages

Spanish

## Dataset Structure

### Data Instances

For each instance, there is a string for the source text (professional version) and a string for the target text (simplified version).
```
{'SOURCE': 'adenocarcinoma ductal de páncreas',
 'TARGET': 'Cáncer de páncreas'}
```

### Data Fields

- `SOURCE`: a string containing the professional version.
- `TARGET`: a string containing the simplified version.

## Dataset Creation

### Source Data

#### Who are the source language producers?

1. Drug leaflets and summaries of product characteristics from [CIMA](https://cima.aemps.es)
2. Cancer-related information summaries from the [National Cancer Institute](https://www.cancer.gov/)
3. Clinical trial announcements from [EudraCT](https://www.clinicaltrialsregister.eu/)

### Annotations

#### Annotation process

Semi-automatic alignment of technical and patient versions of medical sentences. Inter-annotator agreement was measured with Cohen's Kappa (average Kappa = 0.839 ± 0.076; very high agreement).

#### Who are the annotators?

- Leonardo Campillos-Llanos
- Adrián Capllonch-Carrión
- Ana Rosa Terroba-Reinares
- Ana Valverde-Mateos
- Sofía Zakhir-Puig

### Personal and Sensitive Information

No personal or sensitive information was used.

### Licensing Information

These data are aimed at research and educational purposes, and released under a Creative Commons Attribution-NonCommercial (CC BY-NC) 4.0 International License.

### Citation Information

Campillos Llanos, L., Terroba Reinares, A. R., Zakhir Puig, S., Valverde, A., & Capllonch-Carrión, A. (2022). Building a comparable corpus and a benchmark for Spanish medical text simplification. *Procesamiento del lenguaje natural*, 69, pp. 189--196.

### Contributions

Thanks to [Jónathan Heras from Universidad de La Rioja](http://www.unirioja.es/cu/joheras) ([@joheras](https://github.com/joheras)) for formatting this dataset for Hugging Face.
CLARA-MeD/CLARA-MeD
[ "license:cc-by-nc-4.0", "region:us" ]
2022-10-25T13:26:10+00:00
{"license": "cc-by-nc-4.0"}
2022-10-25T13:54:04+00:00
7f368064f1df591ec2cba22cab730eb8e9a53104
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664175
[ "autotrain", "evaluation", "region:us" ]
2022-10-25T13:29:26+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-30b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev_cot", "dataset_config": "mathemakitten--winobias_antistereotype_dev_cot", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-25T14:21:54+00:00
193f68d798850e2a593c181844a60af8b12267ed
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664174
[ "autotrain", "evaluation", "region:us" ]
2022-10-25T13:29:27+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-13b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev_cot", "dataset_config": "mathemakitten--winobias_antistereotype_dev_cot", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-25T13:57:27+00:00
5f080cd1756fbe0260163aefce18f65dbd0231f4
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-125m
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664170
[ "autotrain", "evaluation", "region:us" ]
2022-10-25T13:29:28+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "ArthurZ/opt-125m", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev_cot", "dataset_config": "mathemakitten--winobias_antistereotype_dev_cot", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-25T13:30:11+00:00
45ec734c3aa4ead5700762bee975f44b17e88c23
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664176
[ "autotrain", "evaluation", "region:us" ]
2022-10-25T13:29:31+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-66b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev_cot", "dataset_config": "mathemakitten--winobias_antistereotype_dev_cot", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-25T15:42:14+00:00
673278884406b493c92a897afdedd8b19d7778a9
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664171
[ "autotrain", "evaluation", "region:us" ]
2022-10-25T13:29:32+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "ArthurZ/opt-350m", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev_cot", "dataset_config": "mathemakitten--winobias_antistereotype_dev_cot", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-25T13:31:02+00:00
ce2428a77872d198647fed39125b81a77dc71b1b
Vecinito87/SD_IMG_POOL
[ "license:unknown", "region:us" ]
2022-10-25T14:07:47+00:00
{"license": "unknown"}
2022-10-25T14:07:47+00:00
1fc8d17a6617ec0ea4d098ff55b497b6a40187ec
# PKLot 50

This dataset comprises 50 fully annotated images. The original images were introduced in [*PKLot – A robust dataset for parking lot classification*](https://www.inf.ufpr.br/lesoliveira/download/ESWA2015.pdf).

## Labeling Method

Labeling was completed manually using CVAT, with the assistance of Voxel51 for inspection.

## Original dataset citation info

Almeida, P., Oliveira, L. S., Silva Jr, E., Britto Jr, A., Koerich, A., PKLot – A robust dataset for parking lot classification, Expert Systems with Applications, 42(11):4937-4949, 2015.
ajankelo/pklot_50
[ "language:en", "license:cc-by-4.0", "PKLot", "object detection", "region:us" ]
2022-10-25T14:21:17+00:00
{"language": "en", "license": "cc-by-4.0", "tags": ["PKLot", "object detection"]}
2022-10-28T13:39:22+00:00
4cb09996580bc8efbc747911f8eb5e96340ef5a4
# Dataset Card for Wine Recognition dataset ## Dataset Description - **Homepage:** https://archive.ics.uci.edu/ml/datasets/wine - **Papers:** 1. S. Aeberhard, D. Coomans and O. de Vel, Comparison of Classifiers in High Dimensional Settings, Tech. Rep. no. 92-02, (1992), Dept. of Computer Science and Dept. of Mathematics and Statistics, James Cook University of North Queensland. 2. S. Aeberhard, D. Coomans and O. de Vel, "THE CLASSIFICATION PERFORMANCE OF RDA" Tech. Rep. no. 92-01, (1992), Dept. of Computer Science and Dept. of Mathematics and Statistics, James Cook University of North Queensland. - **Point of Contact:** stefan'@'coral.cs.jcu.edu.au ### Dataset Summary These data are the results of a chemical analysis of wines grown in the same region in Italy but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the three types of wines. In a classification context, this is a well-posed problem with "well-behaved" class structures: a good dataset for the first testing of a new classifier, but not very challenging. ### Supported Tasks and Leaderboards Classification (cultivar) from continuous variables (all other variables) ## Dataset Structure ### Data Instances 178 wines ### Data Fields 1. Wine category (cultivar) 2. Alcohol 3. Malic acid 4. Ash 5. Alcalinity of ash 6. Magnesium 7. Total phenols 8. Flavanoids 9. Nonflavanoid phenols 10. Proanthocyanins 11. Color intensity 12. Hue 13. OD280/OD315 of diluted wines 14. Proline ### Data Splits None ## Dataset Creation ### Source Data https://archive.ics.uci.edu/ml/datasets/wine #### Initial Data Collection and Normalization Original Owners: Forina, M. et al, PARVUS - An Extendible Package for Data Exploration, Classification and Correlation. Institute of Pharmaceutical and Food Analysis and Technologies, Via Brigata Salerno, 16147 Genoa, Italy.
## Additional Information ### Dataset Curators Stefan Aeberhard ### Licensing Information No information found on the original website
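A minimal sketch of a single record under the field list in this card. The snake_case field names and the values are illustrative placeholders (assumptions for readability), not taken from the actual files:

```python
# Illustrative sketch of one wine record; field names follow the card's
# "Data Fields" list, values are made-up placeholders (not real data).
wine_record = {
    "cultivar": 1,                       # wine category (target, one of 3 cultivars)
    "alcohol": 13.2,
    "malic_acid": 1.8,
    "ash": 2.4,
    "alcalinity_of_ash": 15.6,
    "magnesium": 100,
    "total_phenols": 2.8,
    "flavanoids": 2.9,
    "nonflavanoid_phenols": 0.3,
    "proanthocyanins": 1.9,
    "color_intensity": 5.6,
    "hue": 1.0,
    "od280_od315_of_diluted_wines": 3.2,
    "proline": 1065,
}
assert len(wine_record) == 14  # 1 target + 13 continuous constituents
```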
katossky/wine-recognition
[ "task_categories:tabular-classification", "task_ids:tabular-multi-class-classification", "annotations_creators:no-annotation", "language_creators:expert-generated", "size_categories:n<1K", "source_datasets:original", "license:unknown", "region:us" ]
2022-10-25T15:15:53+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": [], "license": ["unknown"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["tabular-classification"], "task_ids": ["tabular-multi-class-classification"], "pretty_name": "Wine Recognition Dataset"}
2022-10-29T09:22:58+00:00
82e32713ee2a94bb407c50c698b9a0e62cd19e59
eliasnaranjom/entrenamiento
[ "license:other", "region:us" ]
2022-10-25T15:18:38+00:00
{"license": "other"}
2022-10-25T15:25:48+00:00
68de10d8afbe20cad6c000a2553d533209fad025
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: facebook/opt-350m * Dataset: mathemakitten/winobias_antistereotype_test_cot * Config: mathemakitten--winobias_antistereotype_test_cot * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064213
[ "autotrain", "evaluation", "region:us" ]
2022-10-25T16:30:28+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-350m", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-25T16:31:46+00:00
7e69f670cfbb39f3508e80e451ce7b23670decad
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: facebook/opt-66b * Dataset: mathemakitten/winobias_antistereotype_test_cot * Config: mathemakitten--winobias_antistereotype_test_cot * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064214
[ "autotrain", "evaluation", "region:us" ]
2022-10-25T16:30:30+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-66b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-25T18:35:25+00:00
4835a4ee92aee9bac60ad7dc8154c1f53d9ab40a
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: ArthurZ/opt-125m * Dataset: mathemakitten/winobias_antistereotype_test_cot * Config: mathemakitten--winobias_antistereotype_test_cot * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064210
[ "autotrain", "evaluation", "region:us" ]
2022-10-25T16:30:31+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "ArthurZ/opt-125m", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-25T16:31:17+00:00
a5b40e34984ddd95bfeb302b23bcf53b95714bf7
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: facebook/opt-30b * Dataset: mathemakitten/winobias_antistereotype_test_cot * Config: mathemakitten--winobias_antistereotype_test_cot * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064212
[ "autotrain", "evaluation", "region:us" ]
2022-10-25T16:30:31+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-30b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-25T17:28:08+00:00
b562e2007d01f1bafc34a270b018a1269e74ed9f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: facebook/opt-6.7b * Dataset: mathemakitten/winobias_antistereotype_test_cot * Config: mathemakitten--winobias_antistereotype_test_cot * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064215
[ "autotrain", "evaluation", "region:us" ]
2022-10-25T16:30:33+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-6.7b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-25T16:44:32+00:00
399a3b63758d394fbf31111d478a13aaa3a4539d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: ArthurZ/opt-350m * Dataset: mathemakitten/winobias_antistereotype_test_cot * Config: mathemakitten--winobias_antistereotype_test_cot * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064209
[ "autotrain", "evaluation", "region:us" ]
2022-10-25T16:30:39+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "ArthurZ/opt-350m", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-25T16:32:11+00:00
61db59aee71d376d9096eb0f2f575e40ea6ae344
BrainArtLabs/LiminalSourceDiffusionV1
[ "license:cc-by-4.0", "region:us" ]
2022-10-25T16:57:08+00:00
{"license": "cc-by-4.0"}
2022-10-25T17:08:28+00:00
9920e8130b63513c598a6cdde10df3e2728bccef
# Dataset Card for "financial-news-articles" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) The data was obtained from [here](https://www.kaggle.com/datasets/jeet2016/us-financial-news-articles)
ashraq/financial-news-articles
[ "region:us" ]
2022-10-25T16:59:05+00:00
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 848347009, "num_examples": 306242}], "download_size": 492243206, "dataset_size": 848347009}}
2022-10-25T17:01:06+00:00
1697e92453b1870cacf8c0212bb892d1b5a7f5ce
tkuye/resuparse
[ "license:apache-2.0", "region:us" ]
2022-10-25T18:29:03+00:00
{"license": "apache-2.0"}
2022-10-25T21:09:47+00:00
d57e1e36be67089516b1a173bdfe1ddc74d00d12
# Dataset Card for "code_search_data-pep8" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tomekkorbak/code_search_data-pep8
[ "region:us" ]
2022-10-25T18:35:59+00:00
{"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}, {"name": "func_path_in_repository", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "whole_func_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "func_code_string", "dtype": "string"}, {"name": "func_code_tokens", "sequence": "string"}, {"name": "func_documentation_string", "dtype": "string"}, {"name": "func_documentation_tokens", "sequence": "string"}, {"name": "split_name", "dtype": "string"}, {"name": "func_code_url", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "test", "num_bytes": 1373345211.3356366, "num_examples": 362178}, {"name": "train", "num_bytes": 189595338.66436344, "num_examples": 50000}], "download_size": 695684763, "dataset_size": 1562940550.0}}
2022-10-25T18:44:10+00:00
9383a22eb926bd0335a2ad67f642b75b7f2ac33d
# Dataset Card for "codeparrot-pep8-scored" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tomekkorbak/codeparrot-pep8-scored
[ "region:us" ]
2022-10-25T19:12:34+00:00
{"dataset_info": {"features": [{"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "copies", "dtype": "string"}, {"name": "size", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "hash", "dtype": "int64"}, {"name": "line_mean", "dtype": "float64"}, {"name": "line_max", "dtype": "int64"}, {"name": "alpha_frac", "dtype": "float64"}, {"name": "autogenerated", "dtype": "bool"}, {"name": "ratio", "dtype": "float64"}, {"name": "config_test", "dtype": "bool"}, {"name": "has_no_keywords", "dtype": "bool"}, {"name": "few_assignments", "dtype": "bool"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "test", "num_bytes": 1556261021.25, "num_examples": 150000}, {"name": "train", "num_bytes": 518753673.75, "num_examples": 50000}], "download_size": 771399764, "dataset_size": 2075014695.0}}
2022-10-25T19:14:40+00:00
e64c6762a193e9c8b2bf95454422a560b1c5ca87
# Dataset Card for "github-issues" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lipaoMai/github-issues
[ "region:us" ]
2022-10-25T19:17:29+00:00
{"dataset_info": {"features": [{"name": "patient_id", "dtype": "int64"}, {"name": "drugName", "dtype": "string"}, {"name": "condition", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "rating", "dtype": "float64"}, {"name": "date", "dtype": "string"}, {"name": "usefulCount", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 28367208, "num_examples": 53471}, {"name": "train", "num_bytes": 85172055, "num_examples": 160398}], "download_size": 63481104, "dataset_size": 113539263}}
2022-10-25T19:17:38+00:00
0da2571fe18ccc3748f7f202ee300a5824b33e37
# Dataset Card for "drug_one_1dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lipaoMai/drug_one_1dataset
[ "region:us" ]
2022-10-25T19:27:48+00:00
{"dataset_info": {"features": [{"name": "patient_id", "dtype": "int64"}, {"name": "drugName", "dtype": "string"}, {"name": "condition", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "rating", "dtype": "float64"}, {"name": "date", "dtype": "string"}, {"name": "usefulCount", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 28367208, "num_examples": 53471}, {"name": "train", "num_bytes": 85172055, "num_examples": 160398}], "download_size": 63481104, "dataset_size": 113539263}}
2022-10-25T19:27:56+00:00
63f32b8f7bb300c1ac35e9146b38e7e2704c714d
This is a re-preprocessed version of [P3](https://huggingface.co/datasets/bigscience/P3) with any updates that have been made to the P3 datasets since the release of the original P3. It is used for the finetuning of [bloomz-p3](https://huggingface.co/bigscience/bloomz-p3) & [mt0-xxl-p3](https://huggingface.co/bigscience/mt0-xxl-p3). The script is available [here](https://github.com/bigscience-workshop/bigscience/blob/638e66e40395dbfab9fa08a662d43b317fb2eb38/data/p3/prepare_p3.py).
Muennighoff/P3
[ "task_categories:other", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "multilinguality:monolingual", "size_categories:100M<n<1B", "language:en", "license:apache-2.0", "region:us" ]
2022-10-25T19:29:10+00:00
{"annotations_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100M<n<1B"], "task_categories": ["other"], "pretty_name": "P3"}
2022-11-03T15:15:39+00:00
5ec4fd478a40966b89315c2ad181766210c6a9d7
# Dataset Card for OLM May 2017 Common Crawl Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 16% of the May 2017 Common Crawl snapshot. Note: `last_modified_timestamp` was parsed from whatever a website returned in its `Last-Modified` header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with `last_modified_timestamp`.
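The outlier-removal recommendation above can be sketched as a simple percentile trim. This is an illustrative helper, not part of the OLM tooling; the function name and cutoffs are assumptions:

```python
# Illustrative helper (not part of the OLM tooling): drop timestamp
# outliers with a simple nearest-rank percentile trim before computing
# statistics on last_modified_timestamp.
def trim_outliers(values, lower_pct=1.0, upper_pct=99.0):
    """Keep only values between the given percentile cutoffs."""
    ordered = sorted(values)
    n = len(ordered)
    lo = ordered[min(n - 1, int(n * lower_pct / 100))]
    hi = ordered[min(n - 1, int(n * upper_pct / 100))]
    return [v for v in values if lo <= v <= hi]

# Example: one bogus epoch-0 timestamp among plausible May-2017 values.
timestamps = [1495000000, 1495100000, 0, 1495200000, 1495300000]
clean = trim_outliers(timestamps, lower_pct=20, upper_pct=99)
# The epoch-0 outlier is removed; the four 2017 timestamps remain.
```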
olm/olm-CC-MAIN-2017-22-sampling-ratio-0.16178770949
[ "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "language:en", "pretraining", "language modelling", "common crawl", "web", "region:us" ]
2022-10-25T21:33:21+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "OLM May 2017 Common Crawl", "tags": ["pretraining", "language modelling", "common crawl", "web"]}
2022-11-04T17:12:48+00:00
43e6c210364333a854e568c24324db3fd67875d8
# Magic Armor Embedding / Textual Inversion ## Usage To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder. To use it in a prompt: ```"art by magic_armor"``` If it is too strong, just add [ ] around it. Trained for 10,000 steps; a 7.5k-step version is included in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-step version in your folder. Have fun :) ## Example Pictures <table> <tr> <td><img src=https://i.imgur.com/3O5YpWT.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/icDlRiA.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/AcrdSwB.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/hP923FH.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/RzSFggo.png width=100% height=100%/></td> </tr> </table> ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/magic_armor
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "region:us" ]
2022-10-25T22:18:48+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false}
2022-10-25T22:27:11+00:00
d7837f0e3a1e66eaa1884e7a29c7a40ad5c76e0a
<h4> Disclosure </h4> <p> This is my first attempt at an embedding. While it's not perfect, I hope you are able to create some nice pieces with it. I am working on improvements for the next embedding, coming soon. If you have any suggestions or issues, please let me know. </p> <h4> Usage </h4> To use this embedding, download the file and put it into the "\stable-diffusion-webui\embeddings" folder. To use it in a prompt, add <em style="font-weight:600">" art by crusader_knight "</em>; add <b>[ ]</b> around it to reduce its weight. <h4> Included Files </h4> <ul> <li>15,000</li> <li>10,000</li> <li>6,500</li> </ul> Cheers, Wipeout <h4> Example Pictures </h4> <table> <tbody><tr> <td><img height="100%/" width="100%" src="https://i.imgur.com/jx0F3zi.png"></td> <td><img height="100%/" width="100%" src="https://i.imgur.com/HZkt3Nx.png"></td> <td><img height="100%/" width="100%" src="https://i.imgur.com/MLKhJXL.png"></td> </tr> </tbody> </table> <h4> Licence </h4> <p><span>This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:</span> </p> <ol> <li>You can't use the embedding to deliberately produce or share illegal or harmful outputs or content </li> <li>The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license</li> <li>You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) <a rel="noopener nofollow" href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">Please read the full license here</a></li> </ol>
zZWipeoutZz/crusader_knight
[ "license:creativeml-openrail-m", "region:us" ]
2022-10-25T22:55:38+00:00
{"license": "creativeml-openrail-m"}
2022-10-25T23:47:13+00:00
def71b74159a8460ce977fc2ace42e32947fb3fa
# Dataset Card for MoralExceptQA ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [MoralCoT](https://github.com/feradauto/MoralCoT) - **Paper:** [When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment](https://arxiv.org/abs/2210.01478) - **Point of Contact:** [Fernando Gonzalez](mailto:[email protected]), [Zhijing Jin](mailto:[email protected]) ### Dataset Summary Challenge set consisting of moral exception question answering of cases that involve potentially permissible moral exceptions. Our challenge set, MoralExceptQA, is drawn from a series of recent moral psychology studies designed to investigate the flexibility of human moral cognition – specifically, the ability of humans to figure out when it is permissible to break a previously established or well-known rule. ### Languages The language in the dataset is English. ## Dataset Structure ### Data Instances Each instance is a rule-breaking scenario accompanied by an average human response. ### Data Fields - `study`: The moral psychology study. 
Studies were designed to investigate the ability of humans to figure out when it is permissible to break a previously established or well-known rule. - `context`: The context of the scenario. Different contexts within the same study are potentially governed by the same rule. - `condition`: Condition in the scenario. - `scenario`: Text description of the scenario. - `human.response`: Average human response (scale 0 to 1), equivalent to the % of people who considered breaking the rule permissible. ### Data Splits MoralExceptQA contains one split. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data Information about the data collection and annotators can be found in the appendix of [our paper](https://arxiv.org/abs/2210.01478). ### Personal and Sensitive Information The MoralExceptQA dataset does not have privacy concerns. ## Considerations for Using the Data ### Social Impact of Dataset The intended use of this work is to contribute to AI safety research. We do not intend this work to be developed as a tool to automate moral decision-making on behalf of humans, but instead as a way of mitigating risks caused by LLMs’ misunderstanding of human values. The MoralExceptQA dataset does not have privacy concerns or offensive content. ### Discussion of Biases Our subjects are U.S. residents, and therefore our conclusions are limited to this population. ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The MoralExceptQA dataset is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/). 
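As an illustrative analysis sketch (not part of the dataset itself; the function name and threshold are assumptions), the continuous `human.response` field can be binarized into a majority-permissible label:

```python
# Illustrative sketch: binarize human.response, which is the fraction of
# annotators (0 to 1) who judged breaking the rule permissible.
def is_permissible(human_response, threshold=0.5):
    """True when a majority of annotators found rule-breaking permissible."""
    return human_response > threshold

# Example values (hypothetical, not drawn from the dataset):
majority_yes = is_permissible(0.72)  # most annotators said permissible
majority_no = is_permissible(0.31)   # most annotators said not permissible
```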
### Citation Information ``` @misc{https://doi.org/10.48550/arxiv.2210.01478, doi = {10.48550/ARXIV.2210.01478}, url = {https://arxiv.org/abs/2210.01478}, author = {Jin, Zhijing and Levine, Sydney and Gonzalez, Fernando and Kamal, Ojasv and Sap, Maarten and Sachan, Mrinmaya and Mihalcea, Rada and Tenenbaum, Josh and Schölkopf, Bernhard}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Computers and Society (cs.CY), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Share Alike 4.0 International} } ```
feradauto/MoralExceptQA
[ "task_categories:text-classification", "arxiv:2210.01478", "region:us" ]
2022-10-25T23:26:07+00:00
{"task_categories": ["text-classification"], "pretty_name": "MoralExceptQA"}
2022-10-27T14:42:04+00:00
7cb8a114a549fed00b53fa81defc8d5c4203b403
CrisPO/Demo_clase_platzi
[ "license:mit", "region:us" ]
2022-10-25T23:54:24+00:00
{"license": "mit"}
2022-10-25T23:57:52+00:00
441a3cdb297dac28361b08fe2446704dfc528b0a
nishimaki/taiyo
[ "license:openrail", "region:us" ]
2022-10-26T01:36:13+00:00
{"license": "openrail"}
2022-10-26T01:37:00+00:00
417d3b60cce220f759c1fe59502bba60d71aef56
uripper/LichessGames
[ "license:cc", "region:us" ]
2022-10-26T02:23:53+00:00
{"license": "cc"}
2022-10-26T21:11:03+00:00
d017d05d7a9a805bb6cdb2a58abcf1561437011c
# Dataset Card for "Romance-cleaned-1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
MarkGG/Romance-cleaned-1
[ "region:us" ]
2022-10-26T02:33:21+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5388007.848468044, "num_examples": 6491}, {"name": "validation", "num_bytes": 599313.1515319562, "num_examples": 722}], "download_size": 3844960, "dataset_size": 5987321.0}}
2022-10-26T02:33:28+00:00
2f6f064d3cb82533354f710c230caf18bb7c521c
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: facebook/opt-1.3b * Dataset: mathemakitten/winobias_antistereotype_test_cot * Config: mathemakitten--winobias_antistereotype_test_cot * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-acb860-1886064279
[ "autotrain", "evaluation", "region:us" ]
2022-10-26T03:12:25+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-1.3b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-26T03:15:41+00:00
465bad23e3af0249144d4497248a2812d90ccc7d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: facebook/opt-2.7b * Dataset: mathemakitten/winobias_antistereotype_test_cot * Config: mathemakitten--winobias_antistereotype_test_cot * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-acb860-1886064280
[ "autotrain", "evaluation", "region:us" ]
2022-10-26T03:12:25+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-2.7b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-26T03:17:02+00:00
692c8e1dcabbe24e337357e5624f1ccb2bae92cc
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: facebook/opt-13b * Dataset: mathemakitten/winobias_antistereotype_test_cot * Config: mathemakitten--winobias_antistereotype_test_cot * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-acb860-1886064281
[ "autotrain", "evaluation", "region:us" ]
2022-10-26T03:12:26+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-13b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-10-26T03:38:02+00:00
25c4f65bb2c90a1c5ea0f5990287fce9529f3ae2
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br * Dataset: lener_br * Config: lener_br * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-14b0f6-1886164287
[ "autotrain", "evaluation", "region:us" ]
2022-10-26T03:39:00+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-26T03:42:02+00:00
0eaa9942f56bc4171844477deb35cb3fa3f7585d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br * Dataset: lener_br * Config: lener_br * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-14b0f6-1886164288
[ "autotrain", "evaluation", "region:us" ]
2022-10-26T03:39:05+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-26T03:43:01+00:00
a582213b5f1d8c2c0a507ed7fea78a7863351bdc
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br * Dataset: lener_br * Config: lener_br * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-d57983-1886264289
[ "autotrain", "evaluation", "region:us" ]
2022-10-26T03:39:11+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-26T03:40:07+00:00
5c5bc05f38b66ceb8f0ef48249ea8f70eeaf6489
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br * Dataset: lener_br * Config: lener_br * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-d57983-1886264290
[ "autotrain", "evaluation", "region:us" ]
2022-10-26T03:39:17+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-26T03:40:35+00:00
a3b7a1c5b7d2ee5dea4f1016816d4b0a21608ab2
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br * Dataset: lener_br * Config: lener_br * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-bd0c63-1886364291
[ "autotrain", "evaluation", "region:us" ]
2022-10-26T03:39:24+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-26T03:40:21+00:00
40cc1ba923431846d9c2a83a5b70843f3fcfaf7a
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br * Dataset: lener_br * Config: lener_br * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-lener_br-lener_br-bd0c63-1886364292
[ "autotrain", "evaluation", "region:us" ]
2022-10-26T03:39:29+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-10-26T03:40:50+00:00
c93949f7140beef4adc404e7b54841e957f81c54
# Dataset Card for sberdevices_golos_100h_farfield ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Golos ASR corpus](https://www.openslr.org/114) - **Repository:** [Golos dataset](https://github.com/sberdevices/golos) - **Paper:** [Golos: Russian Dataset for Speech Research](https://arxiv.org/pdf/2106.10161.pdf) - **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench) - **Point of Contact:** [Nikolay Karpov](mailto:[email protected]) ### Dataset Summary Sberdevices Golos is a corpus of approximately 1200 hours of 16kHz Russian speech from crowd (reading speech) and farfield (communication with smart devices) domains, prepared by the SberDevices Team (Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov). The data is derived from a crowd-sourcing platform and has been manually annotated. The authors divide the dataset into train and test subsets. The training subset includes approximately 1000 hours.
For experiments with a limited number of records, the authors identified training subsets of shorter length: 100 hours, 10 hours, 1 hour, and 10 minutes. This dataset is a simpler version of the above-mentioned Golos: - it includes the farfield domain only (without any sound from the crowd domain); - validation split is built on the 10-hour training subset; - training split corresponds to the 100-hour training subset without sounds from the 10-hour training subset; - test split is a full original test split. ### Supported Tasks and Leaderboards - `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER. ### Languages The audio is in Russian. ## Dataset Structure ### Data Instances A typical data point comprises the audio data, usually called `audio`, and its transcription, called `transcription`. Any additional information about the speaker and the passage which contains the transcription is not provided. ``` {'audio': {'path': None, 'array': array([ 1.22070312e-04, 1.22070312e-04, 9.15527344e-05, ..., 6.10351562e-05, 6.10351562e-05, 3.05175781e-05], dtype=float64), 'sampling_rate': 16000}, 'transcription': 'джой источники истории турции'} ``` ### Data Fields - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`) the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time.
Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - transcription: the transcription of the audio file. ### Data Splits This dataset is a simpler version of the original Golos: - it includes the farfield domain only (without any sound from the crowd domain); - validation split is built on the 10-hour training subset; - training split corresponds to the 100-hour training subset without sounds from the 10-hour training subset; - test split is a full original test split. | | Train | Validation | Test | | ----- | ------ | ---------- | ----- | | examples | 9570 | 933 | 1916 | | hours | 10.3h | 1.0h | 1.4h | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process All recorded audio files were manually annotated on the crowd-sourcing platform. #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information The dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators The dataset was initially created by Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov. 
### Licensing Information [Public license with attribution and conditions reserved](https://github.com/sberdevices/golos/blob/master/license/en_us.pdf) ### Citation Information ``` @misc{karpov2021golos, author = {Karpov, Nikolay and Denisenko, Alexander and Minkin, Fedor}, title = {Golos: Russian Dataset for Speech Research}, publisher = {arXiv}, year = {2021}, url = {https://arxiv.org/abs/2106.10161} } ``` ### Contributions Thanks to [@bond005](https://github.com/bond005) for adding this dataset.
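The supported-task section above notes that leaderboard models are ranked by word error rate (WER). As a rough illustration of the metric, here is a minimal word-level edit-distance sketch; the Hugging Face leaderboard uses its own tooling, so this is only for intuition:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("джой источники истории турции", "джой источники истории турции"))  # → 0.0
```

A perfect transcription scores 0.0; each substituted, inserted, or deleted word adds 1/len(reference) to the score.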
bond005/sberdevices_golos_100h_farfield
[ "task_categories:automatic-speech-recognition", "task_categories:audio-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100k", "source_datasets:extended", "language:ru", "license:other", "arxiv:2106.10161", "region:us" ]
2022-10-26T04:04:50+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["ru"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100k"], "source_datasets": ["extended"], "task_categories": ["automatic-speech-recognition", "audio-classification"], "paperswithcode_id": "golos", "pretty_name": "Golos"}
2022-10-27T03:23:04+00:00
a8395938b476a1cf89b6db79853110ee22616fcc
## Dataset Description The dataset is a subset of RCV1. This corpus has already been used in author identification experiments. The top 50 authors (with respect to total size of articles) were selected: 50 authors of texts labeled with at least one subtopic of the class CCAT (corporate/industrial). That way, the topic factor in distinguishing among the texts is minimized. The training corpus consists of 2,500 texts (50 per author) and the test corpus includes another 2,500 texts (50 per author), non-overlapping with the training texts. - **Homepage:** https://archive.ics.uci.edu/ml/datasets/Reuter_50_50 - **Repository:** https://archive.ics.uci.edu/ml/datasets/Reuter_50_50 - **Paper:** - **Leaderboard:** - **Point of Contact:**
yeeb/C50
[ "license:openrail", "region:us" ]
2022-10-26T04:49:50+00:00
{"license": "openrail"}
2022-10-26T04:55:06+00:00
9f2b30fed6f314b8774d02e290843ecf086b0031
Relevant paper and code - `https://github.com/Hritikbansal/entigen_emnlp` Language of prompts - English
hbXNov/entigen
[ "region:us" ]
2022-10-26T04:55:43+00:00
{}
2022-10-26T06:20:22+00:00
c5e68a003bc53738988b3b44a2134da6e35ce271
Dialogue-Model-Research-Group/v2ex
[ "license:cc", "region:us" ]
2022-10-26T06:13:27+00:00
{"license": "cc", "dataset_info": [{"config_name": "topic", "features": [{"name": "id", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "content_rendered", "dtype": "string"}, {"name": "syntax", "dtype": "int64"}, {"name": "url", "dtype": "string"}, {"name": "replies", "dtype": "int64"}, {"name": "last_reply_by", "dtype": "string"}, {"name": "created", "dtype": "int64"}, {"name": "last_modified", "dtype": "int64"}, {"name": "last_touched", "dtype": "int64"}, {"name": "member", "struct": [{"name": "id", "dtype": "int64"}, {"name": "username", "dtype": "string"}, {"name": "bio", "dtype": "string"}, {"name": "website", "dtype": "string"}, {"name": "github", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "avatar", "dtype": "string"}, {"name": "created", "dtype": "int64"}]}, {"name": "node", "struct": [{"name": "id", "dtype": "int64"}, {"name": "url", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "header", "dtype": "string"}, {"name": "footer", "dtype": "string"}, {"name": "avatar", "dtype": "string"}, {"name": "topics", "dtype": "int64"}, {"name": "created", "dtype": "int64"}, {"name": "last_modified", "dtype": "int64"}]}, {"name": "supplements", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "content", "dtype": "string"}, {"name": "content_rendered", "dtype": "string"}, {"name": "syntax", "dtype": "int64"}, {"name": "created", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 522790208, "num_examples": 262120}], "download_size": 153558181, "dataset_size": 522790208}, {"config_name": "replies", "features": [{"name": "id", "dtype": "int64"}, {"name": "content", "dtype": "string"}, {"name": "content_rendered", "dtype": "string"}, {"name": "created", "dtype": "int64"}, {"name": "member", "struct": [{"name": "id", "dtype": "int64"}, {"name": "username", "dtype": "string"}, {"name": "bio", "dtype": "string"}, {"name": 
"website", "dtype": "string"}, {"name": "github", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "avatar", "dtype": "string"}, {"name": "created", "dtype": "int64"}]}, {"name": "topic_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1554954801, "num_examples": 3553953}], "download_size": 462827899, "dataset_size": 1554954801}]}
2022-11-15T14:52:02+00:00
f25e9b73b1ff9fa992e8b07dc68a6e5d09fa70fe
# C4 200M # Dataset Summary C4 200M sample dataset, adapted from https://huggingface.co/datasets/liweili/c4_200m C4_200M is a collection of 185 million sentence pairs generated from the cleaned English dataset from C4. This dataset can be used in grammatical error correction (GEC) tasks. The corruption edits and scripts used to synthesize this dataset are referenced from: [C4_200M Synthetic Dataset](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction) # Description As discussed above, the full corpus contains 185 million sentence pairs; this repository provides a 100k-train / 25k-test sample. Each example has two attributes: `input` and `output`. Here is a sample from the dataset: ``` { "input": "Bitcoin is for $7,094 this morning, which CoinDesk says.", "output": "Bitcoin goes for $7,094 this morning, according to CoinDesk." } ```
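For text-generation-style GEC training, each `input`/`output` pair is typically mapped to a (source, target) example. A minimal sketch — note the `grammar:` task prefix is a common T5-style convention, not something this dataset prescribes:

```python
def make_gec_example(pair: dict, prefix: str = "grammar: ") -> dict:
    """Map an {input, output} pair to a seq2seq training example."""
    return {"source": prefix + pair["input"], "target": pair["output"]}

pair = {
    "input": "Bitcoin is for $7,094 this morning, which CoinDesk says.",
    "output": "Bitcoin goes for $7,094 this morning, according to CoinDesk.",
}
example = make_gec_example(pair)
print(example["source"])  # → grammar: Bitcoin is for $7,094 this morning, which CoinDesk says.
```

The corrupted sentence becomes the model input and the clean sentence the target, matching the usual GEC fine-tuning setup.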
leslyarun/c4_200m_gec_train100k_test25k
[ "task_categories:text-generation", "source_datasets:allenai/c4", "language:en", "grammatical-error-correction", "region:us" ]
2022-10-26T06:21:21+00:00
{"language": ["en"], "source_datasets": ["allenai/c4"], "task_categories": ["text-generation"], "pretty_name": "C4 200M Grammatical Error Correction Dataset", "tags": ["grammatical-error-correction"]}
2022-10-26T06:59:31+00:00
e54d38bb908f734558f6e749862d29ccf06d2ce3
robbye123/images
[ "region:us" ]
2022-10-26T06:25:57+00:00
{}
2022-10-26T06:55:38+00:00
41c51d1746fa0bd24992037a8a00d68abd21aa76
# Dataset Card for "food102" This is based on the [food101](https://huggingface.co/datasets/food101) dataset with an extra class generated with a Stable Diffusion model. A detailed walk-through is available on [YouTube](https://youtu.be/sIe0eo3fYQ4).
juliensimon/food102
[ "region:us" ]
2022-10-26T07:44:52+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "apple_pie", "1": "baby_back_ribs", "2": "baklava", "3": "beef_carpaccio", "4": "beef_tartare", "5": "beet_salad", "6": "beignets", "7": "bibimbap", "8": "boeuf_bourguignon", "9": "bread_pudding", "10": "breakfast_burrito", "11": "bruschetta", "12": "caesar_salad", "13": "cannoli", "14": "caprese_salad", "15": "carrot_cake", "16": "ceviche", "17": "cheese_plate", "18": "cheesecake", "19": "chicken_curry", "20": "chicken_quesadilla", "21": "chicken_wings", "22": "chocolate_cake", "23": "chocolate_mousse", "24": "churros", "25": "clam_chowder", "26": "club_sandwich", "27": "crab_cakes", "28": "creme_brulee", "29": "croque_madame", "30": "cup_cakes", "31": "deviled_eggs", "32": "donuts", "33": "dumplings", "34": "edamame", "35": "eggs_benedict", "36": "escargots", "37": "falafel", "38": "filet_mignon", "39": "fish_and_chips", "40": "foie_gras", "41": "french_fries", "42": "french_onion_soup", "43": "french_toast", "44": "fried_calamari", "45": "fried_rice", "46": "frozen_yogurt", "47": "garlic_bread", "48": "gnocchi", "49": "greek_salad", "50": "grilled_cheese_sandwich", "51": "grilled_salmon", "52": "guacamole", "53": "gyoza", "54": "hamburger", "55": "hot_and_sour_soup", "56": "hot_dog", "57": "huevos_rancheros", "58": "hummus", "59": "ice_cream", "60": "lasagna", "61": "lobster_bisque", "62": "lobster_roll_sandwich", "63": "macaroni_and_cheese", "64": "macarons", "65": "miso_soup", "66": "mussels", "67": "nachos", "68": "omelette", "69": "onion_rings", "70": "oysters", "71": "pad_thai", "72": "paella", "73": "pancakes", "74": "panna_cotta", "75": "peking_duck", "76": "pho", "77": "pizza", "78": "pork_chop", "79": "poutine", "80": "prime_rib", "81": "pulled_pork_sandwich", "82": "ramen", "83": "ravioli", "84": "red_velvet_cake", "85": "risotto", "86": "samosa", "87": "sashimi", "88": "scallops", "89": "seaweed_salad", "90": 
"shrimp_and_grits", "91": "spaghetti_bolognese", "92": "spaghetti_carbonara", "93": "spring_rolls", "94": "steak", "95": "strawberry_shortcake", "96": "sushi", "97": "tacos", "98": "takoyaki", "99": "tiramisu", "100": "tuna_tartare", "101": "waffles"}}}}], "splits": [{"name": "test", "num_bytes": 1461368965.25, "num_examples": 25500}, {"name": "train", "num_bytes": 4285789478.25, "num_examples": 76500}], "download_size": 5534173074, "dataset_size": 5747158443.5}}
2022-10-26T18:43:21+00:00
4299936316ce2813f37498d647c3556ed42be2d3
siberspace/julie
[ "region:us" ]
2022-10-26T09:21:43+00:00
{}
2022-10-26T09:22:17+00:00
d3c241cacb6532a7f6d1de771d2ac8827f6bad25
## Dataset Description - **Dataset authors:** [Suno.ai](https://www.suno.ai) - **Point of contact:** [email protected] As part of the ESB benchmark, we provide a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESB validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESB dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions. The diagnostic dataset can be downloaded and prepared in much the same way as the ESB datasets: ```python from datasets import load_dataset esb_diagnostic_ami = load_dataset("esb/diagnostic-dataset", "ami") ``` ### Data Selection #### Audio To provide an adequate representation of all ESB datasets, we chose to use at least 1 hour of audio from the validation sets of each of the 8 constituent ESB datasets. Following the convention of LibriSpeech, we then used a public ASR model to further split each dataset into `clean`/`other` based on WER. (Note that for LibriSpeech we kept the existing `clean`/`other` splits.) The `clean` subset represents the 'easier' 50% of samples, and the `other` subset the more difficult 50%. To obtain the `clean` diagnostic subset of AMI, either "slice" the `clean`/`other` split: ```python ami_diagnostic_clean = esb_diagnostic_ami["clean"] ``` Or download the `clean` subset standalone: ```python ami_diagnostic_clean = load_dataset("esb/diagnostic-dataset", "ami", split="clean") ``` #### Transcriptions First, the transcriptions were generated by a human _without_ the bias of the original transcript.
The transcriptions follow a strict orthographic and verbatim style guide, where every word, disfluency and partial word is transcribed. Punctuation and formatting follow standard English print orthography (e.g. ‘July 10th in 2021.’). Breaks in thought and partial words are indicated via ‘--’. In addition to the **orthographic** transcriptions, a **normalised** format was produced, with all punctuation removed and non-standard words such as dates, currencies and abbreviations verbalised in the exact way they are spoken (e.g. ‘july tenth in twenty twenty one’). Although great care was taken in standardising the orthography, some ambiguity in transcription remains, especially around the use of commas and the choice of introducing sentence breaks for utterances starting with ‘And’. Each sample was then checked by a second human with access to both the original ground truth and the independently produced style-consistent transcript. Both versions were merged to produce new high-quality ground truths in both the normalised and orthographic text formats. ## Dataset Information A data point can be accessed by indexing the dataset object loaded through `load_dataset`: ```python print(ami_diagnostic_clean[0]) ``` A typical data point comprises the path to the audio file and its transcription.
Also included is information of the dataset from which the sample derives and a unique identifier name: ```python { 'audio': {'path': None, 'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ..., -2.74658203e-04, -1.83105469e-04, -3.05175781e-05]), 'sampling_rate': 16000}, 'ortho_transcript': 'So, I guess we have to reflect on our experiences with remote controls to decide what, um, we would like to see in a convenient practical', 'norm_transcript': 'so i guess we have to reflect on our experiences with remote controls to decide what um we would like to see in a convenient practical', 'id': 'AMI_ES2011a_H00_FEE041_0062835_0064005', 'dataset': 'ami', } ``` ### Data Fields - `audio`: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. - `ortho_transcript`: the **orthographic** transcription of the audio file. - `norm_transcript`: the **normalised** transcription of the audio file. - `id`: unique id of the data sample. - `dataset`: string name of a dataset the sample belongs to. We encourage participants to train their ASR system on the [AMI dataset](https://huggingface.co/datasets/esb/datasets#ami), the smallest of the 8 ESB datasets, and then evaluate their system on the `ortho_transcript` for **all** of the datasets in the diagnostic dataset. This gives a representation of how the system is likely to fare on other audio domains. The predictions can then be _normalised_ by removing casing and punctuation, converting numbers to spelled-out form and expanding abbreviations, and then assessed against the `norm_transcript`. This gives a representation of the effect of orthography for system performance. ### Access All eight of the datasets in ESB are accessible and licensing is freely available. Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. 
To do so, fill in the access forms on the specific datasets' pages: * Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0 * GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech * SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech ### Contributions We show our greatest appreciation to Georg Kucsko, Keenan Freyberg and Michael Shulman from [Suno.ai](https://www.suno.ai) for creating and annotating the diagnostic dataset.
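The evaluation recipe above says to normalise predictions before scoring against `norm_transcript`. A toy sketch of such a normaliser — this only handles casing, punctuation and ‘--’ breaks; the dataset's own normalised format additionally verbalises numbers, dates and abbreviations, which this sketch does not attempt:

```python
import re

def simple_normalise(text: str) -> str:
    # Lowercase and replace punctuation (including '--' breaks) with spaces.
    # NOTE: a full normaliser would also verbalise non-standard words,
    # e.g. 'July 10th' -> 'july tenth'; this sketch leaves them as-is.
    text = text.lower()
    text = re.sub(r"[^\w\s']", " ", text)
    return " ".join(text.split())

ortho = "So, I guess we have to reflect -- on our experiences with remote controls."
print(simple_normalise(ortho))
```

Comparing predictions before and after this step shows how much of a system's error rate is orthographic rather than acoustic.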
esb/diagnostic-dataset
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:1M<n<10M", "source_datasets:original", "source_datasets:extended|librispeech_asr", "source_datasets:extended|common_voice", "language:en", "license:cc-by-4.0", "license:apache-2.0", "license:cc0-1.0", "license:cc-by-nc-3.0", "license:other", "asr", "benchmark", "speech", "esc", "region:us" ]
2022-10-26T09:25:33+00:00
{"annotations_creators": ["expert-generated", "crowdsourced", "machine-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["cc-by-4.0", "apache-2.0", "cc0-1.0", "cc-by-nc-3.0", "other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "1M<n<10M"], "source_datasets": ["original", "extended|librispeech_asr", "extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "ESB Diagnostic Dataset", "tags": ["asr", "benchmark", "speech", "esc"], "extra_gated_prompt": "Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. \nTo do so, fill in the access forms on the specific datasets' pages:\n * Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0\n * GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech\n * SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech", "extra_gated_fields": {"I hereby confirm that I have registered on the original Common Voice page and agree to not attempt to determine the identity of speakers in the Common Voice dataset": "checkbox", "I hereby confirm that I have accepted the terms of usages on GigaSpeech page": "checkbox", "I hereby confirm that I have accepted the terms of usages on SPGISpeech page": "checkbox"}}
2022-10-26T15:42:41+00:00
e634b6b810e4d30c81b4c6d8262379fe8b9f708c
# Dataset Card for sberdevices_golos_10h_crowd ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Golos ASR corpus](https://www.openslr.org/114) - **Repository:** [Golos dataset](https://github.com/sberdevices/golos) - **Paper:** [Golos: Russian Dataset for Speech Research](https://arxiv.org/pdf/2106.10161.pdf) - **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench) - **Point of Contact:** [Nikolay Karpov](mailto:[email protected]) ### Dataset Summary Sberdevices Golos is a corpus of approximately 1200 hours of 16kHz Russian speech from crowd (reading speech) and farfield (communication with smart devices) domains, prepared by the SberDevices Team (Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov). The data is derived from a crowd-sourcing platform and has been manually annotated. The authors divide the dataset into train and test subsets. The training subset includes approximately 1000 hours.
For experiments with a limited number of records, the authors identified training subsets of shorter length: 100 hours, 10 hours, 1 hour, and 10 minutes. This dataset is a simpler version of the above-mentioned Golos: - it includes the crowd domain only (without any sound from the farfield domain); - validation split is built on the 1-hour training subset; - training split corresponds to the 10-hour training subset without sounds from the 1-hour training subset; - test split is a full original test split. ### Supported Tasks and Leaderboards - `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER. ### Languages The audio is in Russian. ## Dataset Structure ### Data Instances A typical data point comprises the audio data, usually called `audio`, and its transcription, called `transcription`. Any additional information about the speaker and the passage which contains the transcription is not provided. ``` {'audio': {'path': None, 'array': array([ 3.05175781e-05, 3.05175781e-05, 0.00000000e+00, ..., -1.09863281e-03, -7.93457031e-04, -1.52587891e-04], dtype=float64), 'sampling_rate': 16000}, 'transcription': 'шестнадцатая часть сезона пять сериала лемони сникет тридцать три несчастья'} ``` ### Data Fields - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`) the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`.
Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - transcription: the transcription of the audio file. ### Data Splits This dataset is a simpler version of the original Golos: - it includes the crowd domain only (without any sound from the farfield domain); - validation split is built on the 1-hour training subset; - training split corresponds to the 10-hour training subset without sounds from the 1-hour training subset; - test split is a full original test split. | | Train | Validation | Test | | ----- | ------ | ---------- | ----- | | examples | 7993 | 793 | 9994 | | hours | 8.9h | 0.9h | 11.2h | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process All recorded audio files were manually annotated on the crowd-sourcing platform. #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information The dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators The dataset was initially created by Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov. 
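As a reference for the WER metric mentioned under Supported Tasks: word error rate is the word-level edit distance divided by the number of reference words. A minimal self-contained sketch is shown below (in practice one would typically use the `evaluate` library's `wer` metric):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])  # substitution (free if equal)
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)  # deletion / insertion
    return d[len(r)][len(h)] / len(r)
```

For example, `wer("а б в", "а х в")` yields one substitution over three reference words, i.e. about 0.33.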
### Licensing Information [Public license with attribution and conditions reserved](https://github.com/sberdevices/golos/blob/master/license/en_us.pdf) ### Citation Information ``` @misc{karpov2021golos, author = {Karpov, Nikolay and Denisenko, Alexander and Minkin, Fedor}, title = {Golos: Russian Dataset for Speech Research}, publisher = {arXiv}, year = {2021}, url = {https://arxiv.org/abs/2106.10161} } ``` ### Contributions Thanks to [@bond005](https://github.com/bond005) for adding this dataset.
bond005/sberdevices_golos_10h_crowd
[ "task_categories:automatic-speech-recognition", "task_categories:audio-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100k", "source_datasets:extended", "language:ru", "license:other", "arxiv:2106.10161", "region:us" ]
2022-10-26T10:12:15+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["ru"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100k"], "source_datasets": ["extended"], "task_categories": ["automatic-speech-recognition", "audio-classification"], "paperswithcode_id": "golos", "pretty_name": "Golos"}
2022-10-27T03:42:07+00:00
fd04a127b3d6801afbe4ba38b66c98d0de647e01
# Winter Style Embedding / Textual Inversion ## Usage To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder. To use it in a prompt: ```"art by winter_style"``` If it is too strong, just add [] around it. Trained until 10000 steps. I added a 7.5k-steps trained version in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-steps version in your folder. Have fun :) ## Example Pictures <table> <tr> <td><img src=https://i.imgur.com/oVqfSZ2.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/p0cslGJ.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/LJmGvsc.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/T4I0gFQ.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/hzfmsA8.png width=100% height=100%/></td> </tr> </table> ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/winter_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "region:us" ]
2022-10-26T10:28:44+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false}
2022-10-26T19:45:11+00:00
dc798fd72a60febdd4093cccebf885bb1a76d4f7
tabcoin/test
[ "license:openrail", "region:us" ]
2022-10-26T11:17:10+00:00
{"license": "openrail"}
2022-10-28T13:03:32+00:00
209818b23654f0057dbac7bb86b6bba4c95d82d1
# KPBiomed, A Large-Scale Dataset for Keyphrase Generation ## About This dataset is made of 5.6 million abstracts with author-assigned keyphrases. Details about the dataset can be found in the original paper: Maël Houbre, Florian Boudin and Béatrice Daille. 2022. [A Large-Scale Dataset for Biomedical Keyphrase Generation](https://arxiv.org/abs/2211.12124). In Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI 2022). Reference (author-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in the following paper: - Florian Boudin and Ygor Gallina. 2021. [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/). In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics. Text pre-processing (tokenization) is carried out using spacy (en_core_web_sm model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in nltk) is applied before reference keyphrases are matched against the source text.
## Content The details of the dataset are in the table below: | Split | # documents | # keyphrases by document (average) | % Present | % Reordered | % Mixed | % Unseen | | :----------- | ----------: | ---------------------------------: | --------: | ----------: | ------: | -------: | | Train small | 500k | 5.24 | 66.31 | 7.16 | 12.60 | 13.93 | | Train medium | 2M | 5.24 | 66.30 | 7.18 | 12.57 | 13.95 | | Train large | 5.6M | 5.23 | 66.32 | 7.18 | 12.55 | 13.95 | | Validation | 20k | 5.25 | 66.44 | 7.07 | 12.45 | 14.05 | | Test | 20k | 5.22 | 66.59 | 7.22 | 12.44 | 13.75 | The following data fields are available: - **id**: unique identifier of the document. - **title**: title of the document. - **abstract**: abstract of the document. - **keyphrases**: list of reference keyphrases. - **mesh terms**: list of indexer-assigned MeSH terms if available (around 68% of the articles) - **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases. - **authors**: list of the article's authors - **year**: publication year **NB**: The present keyphrases (represented by the "P" label in the PRMU column) are sorted by their order of appearance in the text (title + text).
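The PRMU scheme described above can be illustrated with a small sketch. This is a simplified illustration, not the reference implementation: in particular, the actual categorization stems words with Porter's stemmer before matching, which is omitted here.

```python
def prmu_category(keyphrase: str, text: str) -> str:
    """Classify a keyphrase against a source text under the PRMU scheme.

    P (Present):   appears as a contiguous word sequence in the text.
    R (Reordered): all of its words occur in the text, but not contiguously in this order.
    M (Mixed):     some, but not all, of its words occur in the text.
    U (Unseen):    none of its words occur in the text.
    """
    kp_words = keyphrase.lower().split()
    text_words = text.lower().split()
    if not kp_words:
        raise ValueError("empty keyphrase")
    n = len(kp_words)
    # Present: look for a contiguous match
    for i in range(len(text_words) - n + 1):
        if text_words[i:i + n] == kp_words:
            return "P"
    seen = sum(w in text_words for w in kp_words)
    if seen == n:
        return "R"
    if seen > 0:
        return "M"
    return "U"
```

For instance, against the (hypothetical) abstract "graph based ranking for keyphrase extraction", the keyphrase "keyphrase extraction" is Present, "extraction keyphrase" is Reordered, "keyphrase generation" is Mixed, and "neural networks" is Unseen.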
taln-ls2n/kpbiomed
[ "task_categories:text-generation", "annotations_creators:unknown", "language_creators:unknown", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:cc-by-nc-4.0", "arxiv:2211.12124", "region:us" ]
2022-10-26T12:41:01+00:00
{"annotations_creators": ["unknown"], "language_creators": ["unknown"], "language": ["en"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "task_categories": ["text-mining", "text-generation"], "task_ids": ["keyphrase-generation", "keyphrase-extraction"], "pretty_name": "KP-Biomed"}
2022-12-01T10:52:09+00:00
219a3339e1995bfbe61f6b1753ebc2a19ac87857
ckmai24/ghibil-style
[ "license:afl-3.0", "region:us" ]
2022-10-26T12:42:00+00:00
{"license": "afl-3.0"}
2022-10-26T12:42:00+00:00
4c4f3977ddd1586764f2bfa883e48d259da7de9a
edbeeching/sample_factory_videos
[ "license:mit", "region:us" ]
2022-10-26T12:55:56+00:00
{"license": "mit"}
2022-11-04T08:00:27+00:00
59298c5de4ab4cc1a2bd3522eeb2db35d5fc67aa
YWjimmy/PeRFception-ScanNet
[ "license:cc-by-sa-4.0", "region:us" ]
2022-10-26T13:23:14+00:00
{"license": "cc-by-sa-4.0"}
2022-10-26T13:56:10+00:00
acc530784fffdad35ed44f22b40f1e6a366318a3
jhworth8/baileycardosi
[ "license:apache-2.0", "region:us" ]
2022-10-26T14:56:43+00:00
{"license": "apache-2.0"}
2022-10-26T15:01:24+00:00
13f26365766f8f61eea21bf45d65936aaaa70db8
# Brush Style Embedding / Textual Inversion ## Usage To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder. To use it in a prompt: ```"art by brush_style"``` If it is too strong, just add [] around it. Trained until 10000 steps. I added a 7.5k-steps trained version in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-steps version in your folder. Have fun :) ## Example Pictures <table> <tr> <td><img src=https://i.imgur.com/Mp2F6GR.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/a2Cmqb4.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/YwSafu4.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/fCFSIs5.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/S8v6sXG.png width=100% height=100%/></td> </tr> </table> ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/brush_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "region:us" ]
2022-10-26T15:36:36+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false}
2022-10-29T09:50:13+00:00
4d83c979660e9d000bcd08a9b91093e8dca3eff5
# Dataset Card for "img-256-shinkai-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
woctordho/img-256-shinkai-2
[ "region:us" ]
2022-10-26T16:40:48+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 11515086349.93, "num_examples": 811410}], "download_size": 11660877157, "dataset_size": 11515086349.93}}
2022-11-19T23:35:01+00:00
f578c212cc348679720516b65fd4317223206bf1
chattermill/fabsa
[ "license:mit", "region:us" ]
2022-10-26T16:53:24+00:00
{"license": "mit"}
2022-11-01T19:51:01+00:00
61f4efc23daf87b98918ca90c359e9bb8f92a900
How to claim damages after a power outage: the compensation the company must pay after cutting off the supply
Aserehe6546545/Ghgfgg
[ "region:us" ]
2022-10-26T18:21:00+00:00
{}
2022-10-26T18:22:13+00:00
1671bffd719c8370d046334203752f9a2459ca54
# Dataset Card for "img-256-danbooru" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
woctordho/img-256-danbooru
[ "region:us" ]
2022-10-26T18:24:12+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 23138302451.076, "num_examples": 990501}], "download_size": 23099440688, "dataset_size": 23138302451.076}}
2022-11-19T20:50:35+00:00
ac2f44906b2ed4f46bf547b7db4c055cb10b601b
# Dataset Card for "shape-scenes" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
f-biondi/shape-scenes
[ "region:us" ]
2022-10-26T19:26:33+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 558709806.875, "num_examples": 97881}], "download_size": 317164682, "dataset_size": 558709806.875}}
2022-10-26T19:27:10+00:00
fd3366545ad353723966836cc25f1ed10b7ef355
# Description This dataset is a subset of [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) and multilingual [CommonVoice](https://commonvoice.mozilla.org/) that has been adversarially modified to fool the [Whisper](https://huggingface.co/openai/whisper-medium) ASR model. Original [source code](https://github.com/RaphaelOlivier/whisper_attack). The raw [tar files](https://data.mendeley.com/datasets/96dh52hz9r). # Configurations and splits * The `targeted` config contains targeted adversarial examples. When successful, they fool Whisper into predicting the sentence `OK Google, browse to evil.com` even if the input is entirely different. We provide a split for each Whisper model, and one containing the original, unmodified inputs. * The `untargeted-35` and `untargeted-40` configs contain untargeted adversarial examples, with average signal-to-noise ratios of 35dB and 40dB respectively. They fool Whisper into predicting erroneous transcriptions. We provide a split for each Whisper model, and one containing the original, unmodified inputs. * The `language-<lang>` configs contain adversarial examples in language `<lang>` that fool Whisper into predicting the wrong language. Split `<lang>.<target_lang>` contains inputs that Whisper perceives as `<target_lang>`, and split `<lang>.original` contains the original inputs in language `<lang>`. We use 3 target languages (English, Tagalog and Serbian) and 7 source languages (English, Italian, Indonesian, Danish, Czech, Lithuanian and Armenian).
# Usage Here is an example of code using this dataset:
```python
from datasets import load_dataset
from evaluate import load
from transformers import WhisperProcessor, WhisperForConditionalGeneration

model_name = "whisper-medium"
config_name = "targeted"
split_name = "whisper.medium"

hub_path = "openai/" + model_name
processor = WhisperProcessor.from_pretrained(hub_path)
model = WhisperForConditionalGeneration.from_pretrained(hub_path).to("cuda")

dataset = load_dataset("RaphaelOlivier/whisper_adversarial_examples", config_name, split=split_name)

def map_to_pred(batch):
    input_features = processor(batch["audio"][0]["array"], return_tensors="pt").input_features
    predicted_ids = model.generate(input_features.to("cuda"))
    transcription = processor.batch_decode(predicted_ids, normalize=True)
    batch["text"][0] = processor.tokenizer._normalize(batch["text"][0])
    batch["transcription"] = transcription
    return batch

result = dataset.map(map_to_pred, batched=True, batch_size=1)

wer = load("wer")
for t in zip(result["text"], result["transcription"]):
    print(t)
print(wer.compute(predictions=result["transcription"], references=result["text"]))
```
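The 35dB and 40dB figures above refer to the signal-to-noise ratio between the clean waveform and the adversarial perturbation. As an illustration (not the attack code itself), SNR in dB can be computed as:

```python
import numpy as np

def snr_db(clean: np.ndarray, noisy: np.ndarray) -> float:
    """SNR in dB between a clean signal and its perturbed version."""
    noise = noisy - clean
    # Ratio of signal power to perturbation power, on a log scale
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))
```

A perturbation whose amplitude is 1% of a unit-amplitude signal gives an SNR of 40dB, i.e. the quieter regime of the two untargeted configs.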
RaphaelOlivier/whisper_adversarial_examples
[ "license:cc-by-4.0", "region:us" ]
2022-10-26T19:29:43+00:00
{"license": "cc-by-4.0"}
2022-11-03T21:48:16+00:00
3446dd8617356de7b1980ebfc0a50b946eb21de3
# Dataset Card for "img-256-photo-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
woctordho/img-256-photo-2
[ "region:us" ]
2022-10-26T20:08:43+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 12133208417.44, "num_examples": 996698}], "download_size": 11930597168, "dataset_size": 12133208417.44}}
2022-11-20T02:56:16+00:00
ce79dcfb8e000cbac80111f73c64d368997230ad
# Dataset Card for "codeparrot-valid-more-filtering-debug" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kejian/codeparrot-valid-more-filtering-debug
[ "region:us" ]
2022-10-26T20:21:58+00:00
{"dataset_info": {"features": [{"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "copies", "dtype": "string"}, {"name": "size", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "hash", "dtype": "int64"}, {"name": "line_mean", "dtype": "float64"}, {"name": "line_max", "dtype": "int64"}, {"name": "alpha_frac", "dtype": "float64"}, {"name": "autogenerated", "dtype": "bool"}, {"name": "ratio", "dtype": "float64"}, {"name": "config_test", "dtype": "bool"}, {"name": "has_no_keywords", "dtype": "bool"}, {"name": "few_assignments", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 957026, "num_examples": 100}], "download_size": 357047, "dataset_size": 957026}}
2022-10-26T20:22:00+00:00
3701b1a2657cea5fa791c4f52f79d463825cc386
omr-saeed/embeddings.csv
[ "license:other", "region:us" ]
2022-10-26T20:25:24+00:00
{"license": "other"}
2022-10-26T20:26:44+00:00
7cdae06c98ca54f8892daf6a80efb4a9d8a2abd0
# MiCRO: Multi-interest Candidate Retrieval Online [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-green.svg?style=flat-square)](http://makeapullrequest.com) [![arXiv](https://img.shields.io/badge/arXiv-2210.16271-b31b1b.svg)](https://arxiv.org/abs/2210.16271) This repo contains the TwitterFaveGraph dataset from our paper [MiCRO: Multi-interest Candidate Retrieval Online](https://arxiv.org/abs/2210.16271). <br /> [[PDF]](https://arxiv.org/pdf/2210.16271.pdf) [[HuggingFace Datasets]](https://huggingface.co/Twitter) <a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. ## TwitterFaveGraph TwitterFaveGraph is a bipartite directed graph of user nodes to Tweet nodes where an edge represents a "fave" engagement. Each edge is binned into predetermined time chunks which are assigned as ordinals. These ordinals are contiguous and respect time ordering. In total TwitterFaveGraph has 6.7M user nodes, 13M Tweet nodes, and 283M edges. The maximum degree for users is 100 and the minimum degree for users is 1. The maximum degree for Tweets is 280k and the minimum degree for Tweets is 5. The data format is displayed below. | user_index | tweet_index | time_chunk | | ------------- | ------------- | ---- | | 1 | 2 | 1 | | 2 | 1 | 1 | | 3 | 3 | 2 | ## Citation If you use TwitterFaveGraph in your work, please cite the following: ```bib @article{portman2022micro, title={MiCRO: Multi-interest Candidate Retrieval Online}, author={Portman, Frank and Ragain, Stephen and El-Kishky, Ahmed}, journal={arXiv preprint arXiv:2210.16271}, year={2022} } ```
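As an illustration of working with this edge-list format, per-node degrees (the basis for the max/min degree figures above) can be computed directly from the three columns. The tab-separated toy data below is hypothetical; only the column names come from the table above:

```python
import csv
import io
from collections import Counter

# Toy edge list in the TwitterFaveGraph format: user_index, tweet_index, time_chunk
edges_tsv = """user_index\ttweet_index\ttime_chunk
1\t2\t1
2\t1\t1
3\t3\t2
1\t3\t2
"""

user_degree, tweet_degree = Counter(), Counter()
for row in csv.DictReader(io.StringIO(edges_tsv), delimiter="\t"):
    user_degree[row["user_index"]] += 1    # out-degree: faves given by this user
    tweet_degree[row["tweet_index"]] += 1  # in-degree: faves received by this Tweet

print(max(user_degree.values()))  # largest user degree in the toy graph
```

On the full graph the same scan would recover the degree bounds reported above (users: 1 to 100, Tweets: 5 to 280k).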
Twitter/TwitterFaveGraph
[ "license:cc-by-4.0", "arxiv:2210.16271", "region:us" ]
2022-10-26T23:44:43+00:00
{"license": "cc-by-4.0"}
2022-10-31T23:58:49+00:00
018b0006db780c8e80c37ec87fe27ed2798ab8a8
# kNN-Embed: Locally Smoothed Embedding Mixtures For Multi-interest Candidate Retrieval [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-green.svg?style=flat-square)](http://makeapullrequest.com) [![arXiv](https://img.shields.io/badge/arXiv-2205.06205-b31b1b.svg)](https://arxiv.org/pdf/2205.06205.pdf) This repo contains the TwitterFollowGraph dataset from our paper [kNN-Embed: Locally Smoothed Embedding Mixtures For Multi-interest Candidate Retrieval](https://arxiv.org/pdf/2205.06205.pdf). <br /> [[PDF]](https://arxiv.org/pdf/2205.06205.pdf) [[HuggingFace Datasets]](https://huggingface.co/Twitter) <a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. ## TwitterFollowGraph TwitterFollowGraph is a bipartite directed graph of user (consumer) nodes to author (producer) nodes where an edge represents a user "following" an author. Each edge is binned into predetermined time chunks which are denoted with ordinals. These ordinals are contiguous and respect the time ordering of engagements. In total TwitterFollowGraph has 261M edges and 15.5M vertices, with a max-degree of 900K and a min-degree of 5. The data format is displayed below. | user_index | author_index | time_chunk | | ------------- | ------------- | ---- | | 1 | 2 | 1 | | 2 | 1 | 2 | | 3 | 3 | 2 | ## Citation If you use TwitterFollowGraph in your work, please cite the following: ```bib @article{el2022knn, title={kNN-Embed: Locally Smoothed Embedding Mixtures For Multi-interest Candidate Retrieval}, author={El-Kishky, Ahmed and Markovich, Thomas and Leung, Kenny and Portman, Frank and Haghighi, Aria and Xiao, Ying}, journal={arXiv preprint arXiv:2205.06205}, year={2022} } ```
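Because the time-chunk ordinals respect time ordering, a chronological train/test split can be sketched by thresholding on the ordinal. This is an illustrative sketch on toy edges, not the paper's exact evaluation protocol:

```python
# Each edge: (user_index, author_index, time_chunk); ordinals respect time order.
edges = [(1, 2, 1), (2, 1, 2), (3, 3, 2), (1, 3, 3)]

split_chunk = 3  # edges strictly before this ordinal are used for training
train = [e for e in edges if e[2] < split_chunk]
test = [e for e in edges if e[2] >= split_chunk]
```

Splitting on the ordinal rather than on random edges avoids leaking future follows into the training set.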
Twitter/TwitterFollowGraph
[ "license:cc-by-4.0", "arxiv:2205.06205", "region:us" ]
2022-10-27T00:01:25+00:00
{"license": "cc-by-4.0"}
2022-10-31T23:55:05+00:00
36c12245c6c6983ca87449763a19a161a62944c9
tramzel/myfooddata_1_4
[ "license:unknown", "region:us" ]
2022-10-27T00:15:08+00:00
{"license": "unknown"}
2022-10-27T00:16:03+00:00
5b1dd4215db57c070673a560981545a3310ed9ee
# Overview This is a dataset I am using for my thesis project, Myaamia Translator. <p style="color: darkred">This is not meant to be used for production yet.</p> <i>I just want to try out a few things.</i>
bishalbaaniya/myaamia_english
[ "region:us" ]
2022-10-27T00:32:57+00:00
{}
2022-10-27T00:54:46+00:00
98c3bf49ac85d8b9fd593a22a414322cbd9ecb36
# League Style Embedding / Textual Inversion ## Usage To use this embedding you have to download the file, as well as drop it into the "\stable-diffusion-webui\embeddings" folder. To use it in a prompt: ```"art by league_style-1000-[number of steps for the version you chose]"``` For example, if you chose the 11.5k-steps version, it would be ```"art by league_style-1000-11500"``` If it is too strong, just add [] around it. The general version I recommend is 11.5k steps; however, I added 4k-steps and 12k-steps trained versions in the files as well. 4k steps tends towards making nice glasses, and 12k steps seems to be better at poses rather than closeups. If you'd like to support the amazing artists whose artwork contributed to this embedding's training, I'd highly recommend you check out [Alex Flores](https://www.artstation.com/alexflores), [Chengwei Pan](https://www.artstation.com/pan), [Horace Hsu](https://www.artstation.com/hozure), [Jem Flores](https://www.artstation.com/jemflores), [SIXMOREVODKA STUDIO](https://www.artstation.com/sixmorevodka), and [West Studio](https://www.artstation.com/weststudio). Have fun :) ## Example Pictures <table> <tr> <td><img src=https://i.imgur.com/CP3dcox.png width=100% height=100%/></td> </tr> <tr> <td><img src=https://i.imgur.com/3uJpYO9.png width=100% height=100%/></td> </tr> <tr> <td><img src=https://i.imgur.com/3mi25aA.png width=100% height=100%/></td> </tr> </table> ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service.
If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
grullborg/league_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "region:us" ]
2022-10-27T00:53:50+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false}
2022-10-27T01:27:20+00:00
52d78f738b103421956771b5ae8a3c5fd506a8c5
![Screenshot from 2022-06-20 23-30-08.png](https://s3.amazonaws.com/moonup/production/uploads/1666837464747-6359ea14d72fc0539e76bebb.png)
filevich/t1k22
[ "region:us" ]
2022-10-27T01:20:22+00:00
{}
2024-02-01T22:08:15+00:00