| column | type | value lengths |
| ------ | ------ | ------ |
| `sha` | string | 40 |
| `text` | string | 0 to 13.4M |
| `id` | string | 2 to 117 |
| `tags` | list | |
| `created_at` | string | 25 |
| `metadata` | string | 2 to 31.7M |
| `last_modified` | string | 25 |
909fe651163c6c47ebde62b98b744ca52306c21f
zm87/dushu
[ "license:openrail", "region:us" ]
2023-05-11T04:29:37+00:00
{"license": "openrail"}
2023-05-11T04:30:20+00:00
a1d8ac23f05ae4b2ada5b45589fe089cfc41918b
Embeddings generated from an English text corpus file. Model used: sentence-transformers/all-MiniLM-L6-v2
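A minimal sketch of how such embeddings could be produced with the named model; the input file name and the `.npy` output are assumptions on my part, not details from the card.

```python
# Minimal sketch: embed the lines of a text file with all-MiniLM-L6-v2.
# The file name "state_of_the_union.txt" and the .npy output are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

with open("state_of_the_union.txt", encoding="utf-8") as f:
    sentences = [line.strip() for line in f if line.strip()]

# encode() returns one 384-dimensional vector per input sentence
embeddings = model.encode(sentences, show_progress_bar=True)
np.save("embeddings.npy", embeddings)
```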
krinal/embeddings_state_of_union
[ "license:apache-2.0", "region:us" ]
2023-05-11T04:55:38+00:00
{"license": "apache-2.0"}
2023-05-11T08:50:04+00:00
b5de58d24479ebcb0abcbc8ba39bc52165f30750
SIRIS-Lab/actytode
[ "license:mit", "region:us" ]
2023-05-11T05:29:41+00:00
{"license": "mit"}
2023-05-11T05:39:25+00:00
83c9cf1c0ac1b77dd82f6c0a388509663fe7fb4b
# Dataset Card for "sidewalk-imagery1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
G12345/sidewalk-imagery1
[ "region:us" ]
2023-05-11T05:44:35+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3139971.0, "num_examples": 10}], "download_size": 3141481, "dataset_size": 3139971.0}}
2023-05-11T05:44:56+00:00
3dbcb4263fd63a96d6659c7ad6b86af622e66828
# Dataset Card for MADBase ## Dataset Description - **Homepage:** https://datacenter.aucegypt.edu/shazeem/ - **Repository:** - **Paper:** A Two-Stage System for Arabic Handwritten Digit Recognition Tested on a New Large Database. EA El-Sherif, S Abdelazeem Artificial intelligence and pattern recognition, 237-242 - **Leaderboard:** - **Point of Contact:** Ezzat [email protected] ### Dataset Summary MADBase is a dataset of 28x28 grayscale images of handwritten Arabic digits (0-9), split into 60,000 training images and 10,000 test images, mirroring the structure of MNIST. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Arabic ## Dataset Structure ### Data Instances { 'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x7F5EE5B427A0>, 'label': 1, } ### Data Fields image: A PIL.Image.Image object containing the 28x28 image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0] label: an integer between 0 and 9 representing the digit. ### Data Splits The data is split into training and test sets. As in the MNIST dataset, the training set contains 60,000 images and the test set 10,000 images. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is publicly available for research. Any work that uses this dataset should cite the work below in Citation Information. ### Citation Information ``` @inproceedings{el2007two, title={A Two-Stage System for Arabic Handwritten Digit Recognition Tested on a New Large Database.}, author={El-Sherif, Ezzat Ali and Abdelazeem, Sherif}, booktitle={Artificial intelligence and pattern recognition}, pages={237--242}, year={2007} } ``` ### Contributions [More Information Needed]
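A minimal loading sketch that follows the access pattern recommended in the card above; the repository id `MagedSaeed/MADBase` is taken from this record, and the use of `load_dataset` here is my assumption rather than instructions from the card.

```python
# Minimal sketch: load MADBase and access one decoded image the recommended way.
from datasets import load_dataset

ds = load_dataset("MagedSaeed/MADBase", split="train")

sample = ds[0]            # index the row first ...
image = sample["image"]   # ... then the "image" column, so only one file is decoded
label = sample["label"]   # integer in 0-9

print(image.size, label)  # e.g. (28, 28) and the digit class
```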
MagedSaeed/MADBase
[ "task_categories:image-classification", "size_categories:10K<n<100K", "language:ar", "region:us" ]
2023-05-11T05:52:41+00:00
{"language": ["ar"], "size_categories": ["10K<n<100K"], "task_categories": ["image-classification"], "pretty_name": "Arabic Handwritten Digits Images Dataset", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 16186819.125, "num_examples": 59999}, {"name": "test", "num_bytes": 2695549.125, "num_examples": 9999}], "download_size": 15361996, "dataset_size": 18882368.25}}
2023-05-17T10:39:28+00:00
d72c9a21492fe5db98cebb029f4fb0ec20e78e30
tasksource/jigsaw
[ "license:apache-2.0", "region:us" ]
2023-05-11T06:08:33+00:00
{"license": "apache-2.0"}
2023-05-11T06:08:51+00:00
88ebeccf8a7c57143bd55125254033927e5a0f3c
From "MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering" (Pal et al.), MedMCQA is a "multiple-choice question answering (MCQA) dataset designed to address real-world medical entrance exam questions." The dataset "...has more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token length of 12.77 and high topical diversity." The following is an example from the dataset: Question: In a patient of heart disease antibiotic prophylaxis for dental extraction is: A. Amoxicillin. B. Imipenem. C. Gentamicin. D. Erythromycin. Answer: A Paper: https://arxiv.org/abs/2203.14371 Code: https://github.com/MedMCQA/MedMCQA ``` @InProceedings{pmlr-v174-pal22a, title = {MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering}, author = {Pal, Ankit and Umapathi, Logesh Kumar and Sankarasubbu, Malaikannan}, booktitle = {Proceedings of the Conference on Health, Inference, and Learning}, pages = {248--260}, year = {2022}, editor = {Flores, Gerardo and Chen, George H and Pollard, Tom and Ho, Joyce C and Naumann, Tristan}, volume = {174}, series = {Proceedings of Machine Learning Research}, month = {07--08 Apr}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v174/pal22a/pal22a.pdf}, url = {https://proceedings.mlr.press/v174/pal22a.html}, abstract = {This paper introduces MedMCQA, a new large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions. More than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token length of 12.77 and high topical diversity. Each sample contains a question, correct answer(s), and other options which requires a deeper language understanding as it tests the 10+ reasoning abilities of a model across a wide range of medical subjects & topics. A detailed explanation of the solution, along with the above information, is provided in this study.} } ```
lighteval/med_mcqa
[ "arxiv:2203.14371", "region:us" ]
2023-05-11T06:18:58+00:00
{}
2023-05-16T08:28:33+00:00
a9c08131653ed770c547fe8971f4e81a759ad685
# Dataset Card for "medmcqa-neg-answer" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
joey234/medmcqa-neg-answer
[ "region:us" ]
2023-05-11T07:18:22+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "opa", "dtype": "string"}, {"name": "opb", "dtype": "string"}, {"name": "opc", "dtype": "string"}, {"name": "opd", "dtype": "string"}, {"name": "cop", "dtype": {"class_label": {"names": {"0": "a", "1": "b", "2": "c", "3": "d"}}}}, {"name": "choice_type", "dtype": "string"}, {"name": "exp", "dtype": "string"}, {"name": "subject_name", "dtype": "string"}, {"name": "topic_name", "dtype": "string"}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 2341249, "num_examples": 4183}], "download_size": 1571269, "dataset_size": 2341249}}
2023-05-15T03:43:37+00:00
a393b8bc697c0fd993629e0f3d12e0ccf73c7ef9
# Dataset Card for "pokemon-512-valid" A cleaned + upsampled-to-512px-square version of https://www.kaggle.com/datasets/djilax/pkmn-image-dataset, suitable for training high-resolution unconditional image generators. source from [madebyollin/pokemon-512](https://huggingface.co/datasets/madebyollin/pokemon-512) 80% train_dataset + 10% test_dataset + 10% valid_dataset I use the following code to split it ```python from datasets import load_dataset, DatasetDict,Dataset images_dataset = load_dataset('madebyollin/pokemon-512', split="train") # 80% train_dataset + 20% train_testvalid train_testvalid = images_dataset.train_test_split(test_size=0.2,shuffle=True,seed=2000) # 10% test_dataset + 10% valid_dataset test_valid = train_testvalid['test'].train_test_split(test_size=0.5,shuffle=True,seed=2000) train_dev_test_dataset = DatasetDict({ 'train': train_testvalid['train'], 'test': test_valid['train'], 'validation': test_valid['test']}) print(train_dev_test_dataset) train_dataset = train_dev_test_dataset["train"] test_dataset = train_dev_test_dataset["test"] valid_dataset = train_dev_test_dataset["validation"] train_dataset.to_parquet("./data/train_dataset.parquet") test_dataset.to_parquet("./data/test_dataset.parquet") valid_dataset.to_parquet("./data/valid_dataset.parquet") ``` I customed a "train_unconditional.py" from diffusers,logging "validation_loss" while training, and added a module to caculate the FID score by using test_dataset.
miluELK/pokemon-512-valid
[ "region:us" ]
2023-05-11T08:12:51+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}]}}
2023-05-12T06:41:54+00:00
c44ea805883b9659a7193b0b8be739177cd6cfd8
DukeG/Kaggle_venv
[ "license:other", "region:us" ]
2023-05-11T08:19:25+00:00
{"license": "other"}
2024-02-01T01:44:23+00:00
7b39cc0f7374c7d5ad06b1bb7b04261936c1dca4
nlp-thedeep/humsetbias
[ "task_categories:text-classification", "task_categories:text-retrieval", "task_categories:token-classification", "task_ids:multi-label-classification", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "license:apache-2.0", "humanitarian", "research", "analytical-framework", "multilabel", "humset", "humbert", "bias", "gender-bias", "country-bias", "region:us" ]
2023-05-11T08:28:01+00:00
{"language_creators": ["expert-generated"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification", "text-retrieval", "token-classification"], "task_ids": ["multi-label-classification"], "pretty_name": "HumSetBias", "tags": ["humanitarian", "research", "analytical-framework", "multilabel", "humset", "humbert", "bias", "gender-bias", "country-bias"]}
2023-05-11T10:12:05+00:00
7100bba65de6af96129ee615b9a79d4cc23bbd84
# Dataset Card for "oasst_hh_shp_hellaswag_webgpt_rm_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
reciprocate/oasst_hh_shp_hellaswag_webgpt_rm_dataset
[ "region:us" ]
2023-05-11T08:44:47+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "replies", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 395107894.0, "num_examples": 264534}, {"name": "test", "num_bytes": 5859289.0, "num_examples": 2874}], "download_size": 232712113, "dataset_size": 400967183.0}}
2023-05-12T06:51:54+00:00
d39216bae675df42525c3814446d27ffe1ba715b
banloada/banloda
[ "license:other", "region:us" ]
2023-05-11T09:01:09+00:00
{"license": "other"}
2023-05-12T01:14:17+00:00
9900213c8af85dc85f3534290da1dcdde81aad04
 
zhvoice dataset # use ```python dataset = load_dataset('SeanSleat/zhvoice', data_dir='/path/to/aishu') ``` # install sox is required for mp3 support; the 3 zhmagicdata.zip files need to be merged
SeanSleat/zhvoice
[ "region:us" ]
2023-05-11T09:10:04+00:00
{}
2023-05-11T09:11:01+00:00
ae8354db69cf249d145234bf2d304a05b093e2fb
# PoC (Patents with One Citation) dataset This dataset is useful for training or evaluating models that predict patent-to-patent similarity, such as those used for patent searching. It was developed and used for the training of an ML model that powers the [PQAI](https://search.projectpq.ai/) search engine. ## Details The dataset contains 90,013 samples. Each sample contains: - a subject patent (`sp`) - its only citation (`cit`) - its CPC code (`cpc`) - a list of 10 patents (`sims`) that are similar to `sp` (in that they share the CPC code) and published before `sp` Every line of the dataset is a JSON parsable string (`.jsonl` format), which upon parsing gives an array of this format: ``` [pn, cit, cpc, [...sims]] ``` ## Task Given the subject patent `sp`, the task is to assign a similarity score to each patent `[cit, ...sims]`. Ideally, the score should be highest for `cit`. ## Metrics It's a ranking task, so the following metrics make the most sense: - DCG/NDCG - Accuracy
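A minimal parsing sketch based on the line format described above; the local file name `poc.jsonl` is an assumption, and the field order follows the card.

```python
# Minimal sketch: read the .jsonl file and unpack one record.
# The file name "poc.jsonl" is an assumption; the field order follows the card.
import json

with open("poc.jsonl", encoding="utf-8") as f:
    for line in f:
        pn, cit, cpc, sims = json.loads(line)   # subject patent number, citation, CPC code, 10 similar patents
        candidates = [cit] + sims               # 11 candidates to score against the subject patent
        # A good model should rank `cit` above the 10 CPC-matched `sims`.
        print(pn, cpc, len(candidates))
        break
```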
pqai/PoC
[ "license:mit", "region:us" ]
2023-05-11T09:34:23+00:00
{"license": "mit"}
2023-05-11T09:48:23+00:00
79b6b7ead2dc33b6319351b468a5fa6b7e8706ea
SilpaCS/Alzheimer
[ "task_categories:image-classification", "size_categories:1K<n<10K", "language:en", "region:us" ]
2023-05-11T10:12:11+00:00
{"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["image-classification"]}
2023-05-11T10:19:09+00:00
0b43bc761fc216b40b36d10f915777848bbbbc81
imhidayat/firstDataSet
[ "task_categories:text-classification", "language:en", "language:de", "language:ar", "language:it", "language:fa", "license:openrail", "region:us" ]
2023-05-11T10:24:46+00:00
{"language": ["en", "de", "ar", "it", "fa"], "license": "openrail", "task_categories": ["text-classification"]}
2023-05-11T11:30:38+00:00
8083dbe64f82e17b6be070a02b9239e9b847b6ce
# Dataset Card for "tedlium-data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sanchit-gandhi/tedlium-data
[ "region:us" ]
2023-05-11T10:35:59+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "gender", "dtype": {"class_label": {"names": {"0": "unknown", "1": "female", "2": "male"}}}}, {"name": "file", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 52384399934.125, "num_examples": 268263}, {"name": "validation", "num_bytes": 197798071.0, "num_examples": 591}, {"name": "test", "num_bytes": 352803076.375, "num_examples": 1469}], "download_size": 52658646425, "dataset_size": 52935001081.5}}
2023-05-11T11:18:03+00:00
7a52dc55fb57889341955ac0f3528c858ec0143d
# Dataset Card for "rm-static-format-oa" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
reciprocate/hh-static-rm-format-oa
[ "region:us" ]
2023-05-11T11:45:47+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "rejected", "dtype": "string"}, {"name": "selected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 92536237, "num_examples": 76256}, {"name": "test", "num_bytes": 6219778, "num_examples": 5103}], "download_size": 57673726, "dataset_size": 98756015}}
2023-05-11T12:42:18+00:00
8b37df93e92b04341eaabc74acdf618ceb5736f2
[See below for English](#bangor-transcription-bank) # Banc Trawsgrifiadau Bangor Dyma fanc o 35 awr 39 munud a 53 eiliad o segmentau o leferydd naturiol dros hanner cant o gyfranwyr ar ffurf ffeiliau mp3, ynghyd â thrawsgrifiadau 'verbatim' cyfatebol o’r lleferydd ar ffurf ffeil .tsv. Mae'r mwyafrif o'r lleferydd yn leferydd digymell, naturiol. Dosbarthwn y deunydd hwn o dan drwydded agored CC0. ## Pwrpas Pwrpas y trawsgrifiadau hyn yw gweithredu fel data hyfforddi ar gyfer modelau adnabod lleferydd, gan gynnwys [ein modelau wav2vec](https://github.com/techiaith/docker-wav2vec2-cy). Ar gyfer y diben hwnnw, mae gofyn am drawsgrifiadau mwy verbatim o'r hyn a ddywedwyd na'r hyn a welir mewn trawsgrifiadau traddodiadol ac mewn isdeitlau, felly datblygwyd confensiwn arbennig ar gyfer y gwaith trawsgrifio ([gweler isod](#confensiynau_trawsgrifio)). Gydag ein modelau wav2vec, caiff cydran ychwnaegol, sef 'model iaith' ei defnyddio ar ôl y model adnabod lleferydd i safoni mwy ar allbwn y model iaith i fod yn debycach i drawsgrifiadau traddodiadol ac isdeitlau. Rydyn ni wedi darparu 3 ffeil .tsv, sef clips.tsv, train.tsv a test.tsv. Mae clips.tsv yn cynnwys ein trawsgrifiadau i gyd. Crëwyd train.tsv a test.tsv er mewn darparu setiau 'safonol' sy'n caniatáu i ddefnyddwyr allu gymharu modelau gan wahanol hyfforddwyr yn deg,hynny yw fe'u crëwyd at bwrpas meincnodi. Mae train.tsv yn cynnwys 80% o'n trawsgrifiadau, a test.tsv yn cynnwys y 20% sy'n weddill. Dyma enghraifft o gynnwys y data: ``` audio_filename audio_filesize transcript duration f86a046fd0964e0386d8c1363907183d.mp3 898272 *post industrial* yym a gyda yy dwi'n ca'l deud 5092 f0c2310fdca34faaa83beca5fa7ed212.mp3 809720 sut i ymdopio felly, wedyn erbyn hyn mae o nôl yn y cartra 4590 3eec3feefe254c9790739c22dd63c089.mp3 1335392 Felly ma' hon hefyd yn ddogfen fydd yn trosglwyddo gyda'r plant bobol ifanc o un cam i'r llall ac hefyd erbyn hyn i'r coleg 'lly. 7570 ``` Ceir pedair colofn yn y ffeiliau .tsv. Y cyntaf yw enw’r ffeil sain. Maint y ffeil sain yw’r ail. Y trawsgrifiad ei hun sydd yn y drydedd golofn. Hyd y clip sain sydd yn yr olaf. Dyma'r wybodaeth am y colofnau. | Maes| Esboniad | | ------ | ------ | | `audio_filename`| Enw'r ffeil sain o fewn y ffolder 'clips'| | `audio_filesize` | Maint y ffeil| | `transcript` | Trawsgrifiad | | `duration` | Hyd amser y clip mewn milliseconds. | ## Cyfieithu Is-set Rydyn ni hefyd wedi cyfieithu 500 o'n trawsgrifiadau i'r Saesneg a chyhoeddi'r cyfieithiadau gyda'u trawsgrifiadau gwreiddiol yn y ffeil translations.tsv. Dyma enghraifft o gynnwys y data: ``` mp3_filename Original Translation 8d6b7347cae6092930aa9b436045e33d.mp3 fel oedden ni odd yym <anadlu> odd pob pennod yn troi mewn i Ben-Hur rywfaint ag yn yy, odd hi'n eitha anodd as we were um <breath> every episode turned into Ben-Hur, somewhat, and was er, it was quite difficult ce526eaf61557b8e3eb53eb1a2f55076.mp3 pan ddechreuon ni'r podlediad yma y bwriad odd i ga'l un pennod bob bythefnos <anadlu> ond yy, wrth i ni fynd ymlaen when we started this podcast the intention was to have one episode every two weeks <breath> but er, as we go on ``` Ceir tair colofn yn y ffeil translation.tsv. Y cyntaf yw enw’r ffeil sain. Y trawsgrifiad Cymraeg sydd yn yr ail golofn. Y cyfieithiad Saesneg sydd yn yr olaf. Dyma'r wybodaeth am y colofnau. 
| Maes| Esboniad | | ------ | ------ | | `mp3_filename`| Enw'r ffeil sain o fewn y ffolder 'clips'| | `Original` | Y trawsgrifiad Cymraeg| | `Translation` | Y cyfieithiad Saesneg| ## Y Broses o Greu’r Adnodd Casglwyd y ffeiliau sain yn bennaf o bodlediadau Cymraeg gyda chaniatâd eu perchnogion yn ogystal â'r cyfranwyr unigol. Rydym yn ddiolchgar tu hwnt i’r bobl yna. Yn ogystal, crewyd rhywfaint o sgriptiau ar batrwm eitemau newyddion ac erthyglau a'u darllen gan ymchwilwyr yr Uned Technolegau Iaith er mwyn sicrhau bod cynnwys o'r math hwnnw yn y banc. Gyrrwyd y ffeiliau sain trwy ein trawsgrifiwr awtomataidd mewnol i segmentu’r sain a chreu trawsgrifiadau amrwd. Defnyddiwyd pecyn trawsgrifio Elan 6.4 (ar gael o https://archive.mpi.nl/tla/elan) gan drawsgrifwyr profiadol i wrando ar a chywiro’r trawsgrifiad amrwd. ## Nodyn Ynghylch Anonymeiddio’r Cynnwys Er tegwch i’r cyfranwyr, rydyn ni wedi anonymeiddio’r trawsgrifiadau. Penderfynwyd anonymeiddio nid yn unig enwau pobl unigol, ond hefyd unrhyw Wybodaeth Bersonol Adnabyddadwy (PII) gan gynnwys, ond nid yn gyfunedig i: * Rhif ffôn * Teitlau swyddi/galwedigaethau * Gweithleoedd * Enwau mannau cyhoeddus * Lleoliad daearyddol * Dyddiadau/amseroedd Wrth drawsgrifio marciwyd pob segment oedd yn cynnwys PII gyda’r tag \<PII>, yna wnaethom hidlo allan pob segment oedd yn cynnwys tag \<PII> er mwyn sicrhau nad oedd unrhyw wybodaeth bersonol yn cael eu cyhoeddi fel rhan o’r adnodd hwn. Rydym hefyd wedi newid trefn trawsgrifiadau i fod ar hap, felly nid ydynt wedi'u cyhoeddi yn y drefn y maent yn eu ymddangos yn y ffeiliau sain gwreiddiol. <a name="confensiynau_trawsgrifio"></a> ## Confensiynau Trawsgrifio Datblygwyd y confensiynau trawsgrifio hyn er mwyn sicrhau fod y trawsgrifiadau nid yn unig yn verbatim ond hefyd yn gyson. Fe’u datblygwyd trwy gyfeirio at gonfensiynau a ddefnyddir gan yr Uned yn y gorffennol, confensiynau eraill megis y rhai a defnyddiwyd yng nghorpora CorCenCC, Siarad, CIG1 a CIG2, a hefyd trwy broses o ddatblygu parhaol wrth i’r tîm ymgymryd â’r dasg o drawsgrifio. **NODWCH** - gan ein bod wedi datblygu’r egwyddorion trawsgrifio yn rhannol wrth ymgymryd â’r dasg o drawsgrifio nid yw’r trawsgrifiadau cynnar o reidrwydd yn dilyn yr egwyddorion cant y cant. Bwriadwn wirio’r trawsgrifiadau wedi i ni fireinio’r confensiynau. ### Collnodau Ni ddefnyddiwyd collnodau i marcio pob un llythyren a hepgorwyd gan siaradwyr. Er enghraifft, _gwitho_ (sef ynganiad o _gweithio_) sy’n gywir, nid _gw’ith’o_ Yn hytrach, defnyddiwyd collnodau i wahaniaethu rhwng gwahanol eiriau oedd yn cael eu sillafu'r union yr un fath fel arall. Er enghraifft rydym yn defnyddio collnod o flaen _’ma_ (sef _yma_) i wahaniaethu rhyngddo â _ma’_ (sef _mae_), _gor’o’_ i wahaniaethu rhwng _gorfod_ a ffurf trydydd person unigol amser dibynnol presennol _gori_, a _pwysa’_ i wahaniaethu rhwng ffurf luosog _pwys_ a nifer o ffurfiau berfol posib _pwyso_. Fodd bynnag, ceir eithriad i’r rheol hon, a hynny pan fo sillafu gair heb gollnod yn newid sŵn y llythyren cyn neu ar ôl y collnod, ac felly _Cymra’g_ sy’n gywir, nid _Cymrag_. 
### Tagiau Wrth drawsgrifio, defnyddiwyd y tagiau hyn i recordio elfennau oedd y tu hwnt i leferydd yr unigolion: * \<anadlu> * \<anadlu i mewn yn sydyn> * \<aneglur> * \<cerddoriaeth> * \<chwerthin> * \<chwibanu> * \<chwythu allan> * \<clapio> * \<clirio gwddf> * \<cusanu> * \<distawrwydd> * \<ochneidio> * \<PII> * \<peswch> * \<sniffian> * \<twtian> Rhagwelwn y bydd y rhestr hon yn chwyddo wrth i ni drawsgrifio mwy o leferydd ac wrth i ni daro ar draws mwy o elfennau sydd y tu hwnt i leferydd unigolion. ### Synau nad ydynt yn eiriol Ymdrechwyd i drawsgrifio synau nad ydynt yn eiriol yn gyson. Er enghraifft, defnyddiwyd _yy_ bob tro (yn hytrach nag _yrr_, _yr_ neu _err_ neu gymysgedd o’r rheiny) i gynrychioli neu adlewyrchu’r sŵn a wnaethpwyd pan oedd siaradwr yn ceisio meddwl neu oedi wrth siarad. Defnyddiwyd y canlynol wrth drawsgrifio: * yy * yym * hmm * m-hm Eto, rhagwelwn y bydd y rhestr hon yn chwyddo wrth i ni drawsgrifio mwy o leferydd ac wrth i ni daro ar draws mwy o synau nad ydynt yn eiriol. ### Geiriau Saesneg Rydym wedi amgylchynu bob gair neu ymadrodd Saesneg gyda sêr, er enghraifft: > Dwi’n deall **\*sort of\***. ### Cymreigio berfenwau Pan fo siaradwyr yn defnyddio geiriau Saesneg fel berfenwau (trwy ychwanegu _io_ ar ddiwedd y gair er enghraifft) rydym wedi ymdrechu i sillafu’r gair gan ddefnyddio confensiynau sillafu Cymreig yn hytrach nag ychwanegu _io_ at sillafiad Saesneg o’r gair. Er enghraifft rydym wedi trawsgrifio _heitio_ yn hytrach na _hateio_, a _lyfio_ yn hytrach na _loveio_. ### Cywiro cam-siarad I sicrhau ein bod ni’n glynu at egwyddorion trawsgrifio verbatim penderfynwyd na ddylem gywiro cam-siarad neu gam-ynganu siaradwyr. Er enghraifft, yn y frawddeg ganlynol: > enfawr fel y diffyg o fwyd yym **efallu** cam-drin mae'n amlwg mai’r gair _efallai_ sydd dan sylw mewn gwirionedd, ond fe’i trawsgrifiwyd fel ei glywir. ### Atalnodi Defnyddiwyd atalnodau llawn, marciau cwestiwn ac ebychnodau wrth drawsgrifio’r lleferydd. Rydym wedi amgylchynu bob gair neu ymadrodd sydd wedi ei dyfynnu gyda _”_, er enghraifft: > Dywedodd hi **”Dwi’n mynd”** ond aeth hi ddim. ### Nodyn ynghylch ein defnydd o gomas Gan mai confensiwn ysgrifenedig yw coma yn y bôn, ni ddefnyddiwyd comas cymaint wrth drawsgrifio. Byddai defnyddio coma lle y disgwylir i’w weld mewn testun ysgrifenedig ddim o reidrwydd wedi adlewyrchu lleferydd yr unigolyn. Dylid cadw hynny mewn cof wrth ddarllen y trawsgrifiadau. ### Sillafu llythrennau Sillafwyd llythrennau unigol yn hytrach na thrawsgrifio’r llythrennau unigol yn unig. Hynny yw, hyn sy’n gywir: > Roedd ganddo **ow si di** **ac nid:** > Roedd ganddo **O C D** **na chwaith:** > Roedd ganddo **OCD** ### Rhifau Trawsgrifiwyd rhifau fel geiriau yn hytrach na digidau, hynny yw hyn sy’n gywir: > Y flwyddyn dwy fil ac ugain **ac nid:** > Y flwyddyn 2020 ### Gorffen gair ar ei hanner Marciwyd gair oedd wedi ei orffen ar ei hanner gyda `-`. Er enghraifft: > Ma’n rhaid i mi **ca-** cael diod. ### Gorffen brawddeg ar ei hanner/ailddechrau brawddeg Marciwyd brawddeg oedd wedi ei gorffen ar ei hanner gyda `...`. Er enghraifft: > Ma’n rhaid i mi ca’l... Ma’ rhaid i mi brynu diod. ### Siaradwr yn torri ar draws siaradwr arall Ceir yn y data llawer o enghreifftiau o siaradwr yn torri ar draws y prif leferydd gan ddefnyddio synau nad ydynt yn eiriol, geiriau neu ymadroddion (megis _m-hm_, _ie_, _ydi_, _yn union_ ac ati). 
Pan oedd y ddau siaradwr i'w clywed yn glir ag ar wahân, rhoddwyd `...` ar ddiwedd rhan gyntaf y lleferydd toredig, a `...` arall ar ddechrau ail ran y lleferydd toredig, fel yn yr enghraifft ganlynol: > Ond y peth yw... M-hm. ...mae’r ddau yn wir Pan nad oedd y ddau siaradwyr i'w clywed yn glir ag ar wahân, fe hepgorwyd y lleferydd o’r data. ### Rhegfeydd Dylid nodi ein bod ni heb hepgor rhegfeydd wrth drawsgrifio. ## Y Dyfodol Wrth ddefnyddio’r banc trawsgrifiadau dylid cadw mewn cof mai fersiwn cychwynnol ydyw. Bwriadwn fireinio a chysoni ein trawsgrifiadau ymhellach, ac ychwanegu mwy fyth o drawsgrifiadau i’r banc yn rheolaidd dros y flwyddyn nesaf ## Cyfyngiadau Er mwyn parchu'r cyfrannwyr, wrth lwytho'r data hwn i lawr rydych yn cytuno i beidio â cheisio adnabod y siaradwyr yn y data. ## Diolchiadau Diolchwn i'r cyfrannwyr am eu caniatâd i ddefnyddio'u lleferydd. Rydym hefyd yn ddiolchgar i Lywodraeth Cymru am ariannu’r gwaith hwn fel rhan o broject Technoleg Testun, Lleferydd a Chyfieithu ar gyfer yr Iaith Gymraeg. --- # Bangor Transcription Bank This resource is a bank of 35 hours 39 minutes and 53 seconds of segments of natural speech from over 50 contributors in mp3 file format, together with corresponding 'verbatim' transcripts of the speech in .tsv file format. The majority of the speech is spontaneous, natural speech. We distribute this material under a CC0 open license. ## Purpose The purpose of these transcripts is to act as training data for speech recognition models, including [our wav2vec models](https://github.com/techiaith/docker-wav2vec2-cy). For that purpose, transcriptions are more verbatim than what is seen in traditional transcriptions and than what is required for subtitling purposes, thus a bespoke set of conventions has been developed for the transcription work ([see below](#transcription_conventions) ). Our wav2vec models use an auxiliary component, namely a 'language model', to further standardize the speech recognition model’s output in order that it be more similar to traditional transcriptions and subtitles. We have provided 3 .tsv files, namely clips.tsv, train.tsv and test.tsv. clips.tsv contains all of our transcripts. train.tsv and test.tsv were created to provide 'standard' sets that allow users to compare models trained by different trainers fairly, i.e. they were created as a 'benchmark'. train.tsv contains 80% of our transcripts, and test.tsv contains the remaining 20%. Here is an example of the data content: ``` audio_filename audio_filesize transcript duration f86a046fd0964e0386d8c1363907183d.mp3 898272 *post industrial* yym a gyda yy dwi'n ca'l deud 5092 f0c2310fdca34faaa83beca5fa7ed212.mp3 809720 sut i ymdopio felly, wedyn erbyn hyn mae o nôl yn y cartra 4590 3eec3feefe254c9790739c22dd63c089.mp3 1335392 Felly ma' hon hefyd yn ddogfen fydd yn trosglwyddo gyda'r plant bobol ifanc o un cam i'r llall ac hefyd erbyn hyn i'r coleg 'lly. 7570 ``` There are four columns in the .tsv files. The first is the name of the audio file. The second is the size of the audio file. The transcript itself appears in the third column. The length of the audio clip appears in the last. Here is the information about the columns. | Field| Explanation | | ------ | ------ | | `audio_filename`| The name of the audio file within the 'clips' folder| | `audio_filesize` | The size of the file | | `transcript` | Transcript | | `duration` | Duration of the clip in milliseconds. 
| ### Translation of a Sub-set We have also translated 500 of our transcripts into English and published the translations together with their original transcripts in the translations.tsv file. Here is an example of the data content: ``` mp3_filename Original Translation 8d6b7347cae6092930aa9b436045e33d.mp3 fel oedden ni odd yym <anadlu> odd pob pennod yn troi mewn i Ben-Hur rywfaint ag yn yy, odd hi'n eitha anodd as we were um <breath> every episode turned into Ben-Hur, somewhat, and was er, it was quite difficult ce526eaf61557b8e3eb53eb1a2f55076.mp3 pan ddechreuon ni'r podlediad yma y bwriad odd i ga'l un pennod bob bythefnos <anadlu> ond yy, wrth i ni fynd ymlaen when we started this podcast the intention was to have one episode every two weeks <breath> but er, as we go on ``` There are three columns in the translation.tsv file. The first is the name of the audio file. The Welsh transcription is in the second column. The English translation is in the last. Here is the information about the columns. | Field| Explanation | | ------ | ------ | | `mp3_filename`| The name of the audio file within the 'clips' folder| | `Original` | The Welsh transcription| | `Translation` | The English translation| ## The Process of Creating the Resource The audio files were mainly collected from Welsh podcasts, after having gained the consent of the podcast owners and individual contributors to do so. We are extremely grateful to those people. In addition, some scripts were created which mimicked the pattern of news items and articles. These scripts were then read by Language Technologies Unit researchers in order to ensure that content of that type was included in the bank. The audio files were run through our in-house automated transcriber to segment the audio and create raw transcripts. Using Elan 6.4 (available from https://archive.mpi.nl/tla/elan), experienced transcribers listened to and corrected the raw transcript. ## A Note About Content Anonymization Out of respect to the contributors, we have anonymised all transcripts. It was decided to anonymize not only the names of individual people, but also any other Personally Identifiable Information (PII) including, but not limited to: * Phone number * Job titles/occupations * Workplaces * Names of public places * Geographical location * Dates/times When transcribing, all segments containing PII were marked with the \<PII> tag, we then filtered out all segments containing a \<PII> tag to ensure no personal information was published as part of this resource. We have also randomized the order of the segments so that they are not published in the order they appeared in the original audio files. <a name="transcription_conventions"></a> ## Transcription Conventions These transcription conventions were developed to ensure that the transcriptions were not only verbatim but also consistent. They were developed by referring to conventions used by the Unit in the past, conventions such as those used in the CorCenCC, Siarad, CIG1 and CIG2 corpora, and also through a process of ongoing development as the team undertook the task of transcription. **NOTE** - as we have partially developed the conventions at the same time as undertaking the task of transcription the early transcriptions may not follow the latest principles faithfully. We intend to check the transcripts after we have refined the conventions. ### Apostrophes Apostrophes were not used to mark every single letter omitted by speakers. 
For example, _gwitho_ (which is a pronunciation of _gweithio_) is correct, not _gw’ith'o_. Rather, apostrophes were used to distinguish between different words that were otherwise spelled identically. For example we use an apostrophe in front of _'ma_ (a pronunciation of _yma_) to distinguish it from _ma'_ (a pronunciation of _mae_), _gor'o'_ to distinguish between _gorfod_ and the third person singular form of the present dependent tense _gori_, and _pwysa'_ to distinguish between the plural form of _pwys_ and a number of possible verb forms of _pwyso_. However, there is an exception to this rule, that being when spelling a word without an apostrophe would change the sound of the letter before or after the apostrophe, thus _Cymra'g_ is correct, not _Cymrag_. ### Tags When transcribing, these tags were used to record elements that were external to the speech of the individuals: * \<anadlu> * \<anadlu i mewn yn sydyn> * \<aneglur> * \<cerddoriaeth> * \<chwerthin> * \<chwibanu> * \<chwythu allan> * \<clapio> * \<clirio gwddf> * \<cusanu> * \<distawrwydd> * \<ochneidio> * \<PII> * \<peswch> * \<sniffian> * \<twtian> We anticipate that this list will grow as we transcribe more speech and as we come across more elements that are external to the speech of individuals. ### Non-verbal sounds Efforts were made to transcribe non-verbal sounds consistently. For example, _yy_ was always used (rather than _yrr_, _yr_ or _err_, or a mixture of those) to represent or reflect the sound made when a speaker was trying to think or paused in speaking. The following were used in transcription: * yy * yym * hmm * m-hm Again, we anticipate that this list will grow as we transcribe more speech and as we encounter more non-verbal sounds. ### English words We have surrounded each English word or phrase with asterisks, for example: > Dwi’n deall **\*sort of\***. ### Adapting English words as Welsh language infinitives When speakers use English words as infinitives (by adding _io_ at the end of the word for example) we have endeavoured to spell the word using Welsh spelling conventions rather than adding _io_ to the English spelling of the word. For example we have transcribed _heitio_ instead of _hateio_, and _lyfio_ instead of _loveio_. ### Correction of mis-pronunciations To ensure that we adhere to the principles of verbatim transcription it was decided that we should not correct speakers' mis-pronunciations. For example, in the following sentence: > enfawr fel y diffyg o fwyd yym **efallu** cam-drin it is clear that _efallai_ is the intended word, but it is transcribed as it is heard. ### Punctuation Full stops, question marks and exclamation marks were used when transcribing the speech. We have surrounded all quoted words or phrases with _”_, for example: > Dywedodd hi **”Dwi’n mynd”** ond aeth hi ddim. ### A note about our use of commas As a comma is essentially a convention used for written text, commas were not used prolifically in transcription. Using a comma where one would expect to see it in a written text during transcription would not necessarily have reflected the individual's speech. This should be borne in mind when reading the transcripts. ### Individual letters Individual letters were spelled out rather than being transcribed as individual letters. 
That is, this is correct: > Roedd ganddo **ow si di** **not:** > Roedd ganddo **O C D** **nor:** > Roedd ganddo **OCD** ### Numbers Numbers were transcribed as words rather than digits, thus this is correct: > Y flwyddyn dwy fil ac ugain **rather than:** > Y flwyddyn 2020 ### Half-finished words Half-finished words are marked with a `-`. For example: > Ma’n rhaid i mi **ca-** cael diod. ### Half-finished/restarted sentences Half-finished sentences are marked with a `...`. For example: > Ma’n rhaid i mi ca’l... Ma’ rhaid i mi brynu diod. ### Speaker interruptions There are many examples of a speaker interrupting another speaker by using non-verbal sounds, words or phrases (such as _m-hm_, _ie_, _ydi_, _yn union_ etc.) in the data. When the two speakers could be heard clearly and distinctly, a `...` was placed at the end of the first part of the broken speech, and another `...` at the beginning of the second part of the broken speech, as in the following example: > Ond y peth yw... M-hm. ...mae’r ddau yn wir When the two speakers could not be heard clearly and distinctly, the speech was omitted from the data. ### Swearwords It should be noted that we have not omitted swearwords when transcribing. ## The future That this is an initial version of the transcript bank should be borne in mind when using this resource. We intend to refine and harmonize our transcripts further, and add yet more transcripts to the bank regularly over the next year. ## Restrictions In order to respect the contributors, by downloading this data you agree not to attempt to identify the speakers in the data. ## Acknowledgements We thank the contributors for their permission to use their speech. We are also grateful to the Welsh Government for funding this work as part of the Text, Speech and Translation Technology project for the Welsh Language.
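The card above describes clips.tsv, train.tsv and test.tsv, each with four columns. A minimal sketch of reading the train split follows; the local path `train.tsv` and the use of pandas are my assumptions, while the column names come from the card.

```python
# Minimal sketch: load the transcription TSVs described in the card.
# The local path "train.tsv" is an assumption; columns follow the card's table.
import pandas as pd

train = pd.read_csv("train.tsv", sep="\t")
# Expected columns: audio_filename, audio_filesize, transcript, duration (milliseconds)
print(train[["audio_filename", "transcript", "duration"]].head())
print("total hours:", train["duration"].sum() / 1000 / 3600)
```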
techiaith/banc-trawsgrifiadau-bangor
[ "size_categories:10K<n<100K", "language:cy", "license:cc0-1.0", "verbatim transcriptions", "speech recognition", "region:us" ]
2023-05-11T12:08:07+00:00
{"language": ["cy"], "license": "cc0-1.0", "size_categories": ["10K<n<100K"], "pretty_name": "Banc Trawsgrifiadau Bangor", "tags": ["verbatim transcriptions", "speech recognition"], "dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "clips", "num_bytes": 678448153.375, "num_examples": 28277}, {"name": "train", "num_bytes": 543955916.375, "num_examples": 22621}, {"name": "test", "num_bytes": 134492237.0, "num_examples": 5656}], "download_size": 1345245508, "dataset_size": 1356896306.75}, "configs": [{"config_name": "default", "data_files": [{"split": "clips", "path": "data/clips-*"}, {"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-15T13:36:45+00:00
cb9c2fb6b2e322e70386cfd21356a631af711ce4
# Dataset Card for FT Speech ## Dataset Description - **Repository:** <https://ftspeech.github.io/> - **Point of Contact:** [Dan Saattrup Nielsen](mailto:[email protected]) - **Size of downloaded dataset files:** 101.78 GB - **Size of the generated dataset:** 214.15 GB - **Total amount of disk used:** 315.93 GB ### Dataset Summary This dataset is an upload of the [FT Speech dataset](https://ftspeech.github.io/). The training, validation and test splits are the original ones. ### Supported Tasks and Leaderboards Training automatic speech recognition is the intended task for this dataset. No leaderboard is active at this point. ### Languages The dataset is available in Danish (`da`). ## Dataset Structure ### Data Instances - **Size of downloaded dataset files:** 101.78 GB - **Size of the generated dataset:** 214.15 GB - **Total amount of disk used:** 315.93 GB An example from the dataset looks as follows. ``` { 'utterance_id': 'S001_20151_M012_P00034-2', 'speaker_gender': 'F', 'sentence': 'alle de fem tekniske justeringer der er en del af lovforslaget', 'speaker_id': 'S001', 'audio': { 'path': 'S001_20151_M012_P00034-2.wav', 'array': array([-3.75366211e-03, -5.27954102e-03, -3.87573242e-03, ..., 9.15527344e-05, -1.52587891e-04, 5.79833984e-04]), 'sampling_rate': 16000 } } ``` ### Data Fields The data fields are the same among all splits. - `utterance_id`: a `string` feature. - `speaker_gender`: a `string` feature. - `sentence`: a `string` feature. - `speaker_id`: a `string` feature. - `audio`: an `Audio` feature. ### Dataset Statistics There are 995,677 samples in the training split, 2,601 in the dev_balanced split, 7,595 in the dev_other split, 5,534 in the test_balanced and 5,837 in the test_other split. #### Speakers There are 374 unique speakers in the training dataset, 20 unique speakers in the validation dataset and 40 unique speakers in the test dataset. None of the dataset splits share any speakers. #### Gender Distribution ![ftspeech-gender-distribution.png](https://cdn-uploads.huggingface.co/production/uploads/60d368a613f774189902f555/0h_L7-riNfQbKFdYWgy01.png) #### Transcription Length Distribution ![ftspeech-length-distribution.png](https://cdn-uploads.huggingface.co/production/uploads/60d368a613f774189902f555/z1MqsvACrY_8XNXAx0UcD.png) ## Dataset Creation ### Curation Rationale There are not many large-scale ASR datasets in Danish. ### Source Data The data constitutes public recordings of sessions from the Danish Parliament, along with manual transcriptions. ## Additional Information ### Dataset Curators Andreas Kirkedal, Marija Stepanović and Barbara Plank curated the dataset as part of their FT Speech paper (see citation below). [Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra Institute](https://alexandra.dk/) reorganised the dataset and uploaded it to the Hugging Face Hub. ### Licensing Information The dataset is licensed under [this custom license](https://www.ft.dk/da/aktuelt/tv-fra-folketinget/deling-og-rettigheder). ### Citation ``` @inproceedings{ftspeech, author = {Kirkedal, Andreas and Stepanović, Marija and Plank, Barbara}, title = {{FT Speech: Danish Parliament Speech Corpus}}, booktitle = {Proc. Interspeech 2020}, year = {2020} } ```
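Given the 101.78 GB download noted above, streaming can be a practical way to inspect the data before committing to a full download. This is a minimal sketch under that assumption, not part of the original card.

```python
# Minimal sketch: stream one example from the training split without
# downloading the full ~102 GB of audio locally.
from datasets import load_dataset

ftspeech = load_dataset("alexandrainst/ftspeech", split="train", streaming=True)

first = next(iter(ftspeech))
print(first["utterance_id"], first["speaker_id"], first["sentence"])
print(first["audio"]["sampling_rate"])  # 16000 per the card's feature schema
```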
alexandrainst/ftspeech
[ "task_categories:automatic-speech-recognition", "size_categories:100K<n<1M", "language:da", "license:other", "region:us" ]
2023-05-11T12:08:57+00:00
{"language": ["da"], "license": "other", "size_categories": ["100K<n<1M"], "task_categories": ["automatic-speech-recognition"], "pretty_name": "FT Speech", "dataset_info": {"features": [{"name": "utterance_id", "dtype": "string"}, {"name": "speaker_gender", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 209434570129.268, "num_examples": 995677}, {"name": "dev_balanced", "num_bytes": 579692770.829, "num_examples": 2601}, {"name": "dev_other", "num_bytes": 1725502342.095, "num_examples": 7595}, {"name": "test_balanced", "num_bytes": 1158740779.222, "num_examples": 5534}, {"name": "test_other", "num_bytes": 1254987645.527, "num_examples": 5837}], "download_size": 101776974871, "dataset_size": 214153493666.941}}
2023-10-01T09:29:09+00:00
5b7e4225e70b82f8ae86ec36d8bb0dfb5a9ed6a9
# Dataset Card for "batch_indexing_machine_multitask" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Circularmachines/batch_indexing_machine_multitask
[ "region:us" ]
2023-05-11T12:11:59+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "0", "dtype": "float64"}, {"name": "1", "dtype": "float64"}, {"name": "2", "dtype": "float64"}, {"name": "3", "dtype": "float64"}, {"name": "4", "dtype": "float64"}, {"name": "5", "dtype": "float64"}, {"name": "6", "dtype": "float64"}, {"name": "7", "dtype": "float64"}, {"name": "8", "dtype": "float64"}, {"name": "9", "dtype": "float64"}, {"name": "10", "dtype": "float64"}, {"name": "11", "dtype": "float64"}, {"name": "12", "dtype": "float64"}, {"name": "13", "dtype": "float64"}, {"name": "14", "dtype": "float64"}, {"name": "15", "dtype": "float64"}, {"name": "16", "dtype": "float64"}, {"name": "17", "dtype": "float64"}, {"name": "18", "dtype": "float64"}, {"name": "19", "dtype": "float64"}, {"name": "20", "dtype": "float64"}, {"name": "21", "dtype": "float64"}, {"name": "22", "dtype": "float64"}, {"name": "23", "dtype": "float64"}, {"name": "24", "dtype": "float64"}, {"name": "25", "dtype": "float64"}, {"name": "26", "dtype": "float64"}, {"name": "27", "dtype": "float64"}, {"name": "28", "dtype": "float64"}, {"name": "29", "dtype": "float64"}, {"name": "30", "dtype": "float64"}, {"name": "31", "dtype": "float64"}, {"name": "32", "dtype": "float64"}, {"name": "33", "dtype": "float64"}, {"name": "34", "dtype": "float64"}, {"name": "35", "dtype": "float64"}, {"name": "36", "dtype": "float64"}, {"name": "37", "dtype": "float64"}, {"name": "38", "dtype": "float64"}, {"name": "39", "dtype": "float64"}, {"name": "40", "dtype": "float64"}, {"name": "41", "dtype": "float64"}, {"name": "42", "dtype": "float64"}, {"name": "43", "dtype": "float64"}, {"name": "44", "dtype": "float64"}, {"name": "45", "dtype": "float64"}, {"name": "46", "dtype": "float64"}, {"name": "47", "dtype": "float64"}, {"name": "48", "dtype": "float64"}, {"name": "49", "dtype": "float64"}, {"name": "50", "dtype": "float64"}, {"name": "51", "dtype": "float64"}, {"name": "52", "dtype": "float64"}, {"name": "53", "dtype": "float64"}, {"name": "54", "dtype": "float64"}, {"name": "55", "dtype": "float64"}, {"name": "56", "dtype": "float64"}, {"name": "57", "dtype": "float64"}, {"name": "58", "dtype": "float64"}, {"name": "59", "dtype": "float64"}, {"name": "60", "dtype": "float64"}, {"name": "61", "dtype": "float64"}, {"name": "62", "dtype": "float64"}, {"name": "63", "dtype": "float64"}, {"name": "64", "dtype": "float64"}, {"name": "65", "dtype": "float64"}, {"name": "66", "dtype": "float64"}, {"name": "67", "dtype": "float64"}, {"name": "68", "dtype": "float64"}, {"name": "69", "dtype": "float64"}, {"name": "70", "dtype": "float64"}, {"name": "71", "dtype": "float64"}, {"name": "72", "dtype": "float64"}, {"name": "73", "dtype": "float64"}, {"name": "74", "dtype": "float64"}, {"name": "75", "dtype": "float64"}, {"name": "76", "dtype": "float64"}, {"name": "77", "dtype": "float64"}, {"name": "78", "dtype": "float64"}, {"name": "79", "dtype": "float64"}, {"name": "80", "dtype": "float64"}, {"name": "81", "dtype": "float64"}, {"name": "82", "dtype": "float64"}, {"name": "83", "dtype": "float64"}, {"name": "84", "dtype": "float64"}, {"name": "85", "dtype": "float64"}, {"name": "86", "dtype": "float64"}, {"name": "87", "dtype": "float64"}, {"name": "88", "dtype": "float64"}, {"name": "89", "dtype": "float64"}, {"name": "90", "dtype": "float64"}, {"name": "91", "dtype": "float64"}, {"name": "92", "dtype": "float64"}, {"name": "93", "dtype": "float64"}, {"name": "94", "dtype": "float64"}, {"name": "95", "dtype": "float64"}, {"name": "96", "dtype": "float64"}, 
{"name": "97", "dtype": "float64"}, {"name": "98", "dtype": "float64"}, {"name": "99", "dtype": "float64"}, {"name": "100", "dtype": "float64"}, {"name": "101", "dtype": "float64"}, {"name": "102", "dtype": "float64"}, {"name": "103", "dtype": "float64"}, {"name": "104", "dtype": "float64"}, {"name": "105", "dtype": "float64"}, {"name": "106", "dtype": "float64"}, {"name": "107", "dtype": "float64"}, {"name": "108", "dtype": "float64"}, {"name": "109", "dtype": "float64"}, {"name": "110", "dtype": "float64"}, {"name": "111", "dtype": "float64"}, {"name": "112", "dtype": "float64"}, {"name": "113", "dtype": "float64"}, {"name": "114", "dtype": "float64"}, {"name": "115", "dtype": "float64"}, {"name": "116", "dtype": "float64"}, {"name": "117", "dtype": "float64"}, {"name": "118", "dtype": "float64"}, {"name": "119", "dtype": "float64"}, {"name": "120", "dtype": "float64"}, {"name": "121", "dtype": "float64"}, {"name": "122", "dtype": "float64"}, {"name": "123", "dtype": "float64"}, {"name": "124", "dtype": "float64"}, {"name": "125", "dtype": "float64"}, {"name": "126", "dtype": "float64"}, {"name": "127", "dtype": "float64"}, {"name": "128", "dtype": "float64"}, {"name": "129", "dtype": "float64"}, {"name": "130", "dtype": "float64"}, {"name": "131", "dtype": "float64"}, {"name": "132", "dtype": "float64"}, {"name": "133", "dtype": "float64"}, {"name": "134", "dtype": "float64"}, {"name": "135", "dtype": "float64"}, {"name": "136", "dtype": "float64"}, {"name": "137", "dtype": "float64"}, {"name": "138", "dtype": "float64"}, {"name": "139", "dtype": "float64"}, {"name": "140", "dtype": "float64"}, {"name": "141", "dtype": "float64"}, {"name": "142", "dtype": "float64"}, {"name": "143", "dtype": "float64"}, {"name": "144", "dtype": "float64"}, {"name": "145", "dtype": "float64"}, {"name": "146", "dtype": "float64"}, {"name": "147", "dtype": "float64"}, {"name": "148", "dtype": "float64"}, {"name": "149", "dtype": "float64"}, {"name": "150", "dtype": "float64"}, {"name": "151", "dtype": "float64"}, {"name": "152", "dtype": "float64"}, {"name": "153", "dtype": "float64"}, {"name": "154", "dtype": "float64"}, {"name": "155", "dtype": "float64"}, {"name": "156", "dtype": "float64"}, {"name": "157", "dtype": "float64"}, {"name": "158", "dtype": "float64"}, {"name": "159", "dtype": "float64"}, {"name": "160", "dtype": "float64"}, {"name": "161", "dtype": "float64"}, {"name": "162", "dtype": "float64"}, {"name": "163", "dtype": "float64"}, {"name": "164", "dtype": "float64"}, {"name": "165", "dtype": "float64"}, {"name": "166", "dtype": "float64"}, {"name": "167", "dtype": "float64"}, {"name": "168", "dtype": "float64"}, {"name": "169", "dtype": "float64"}, {"name": "170", "dtype": "float64"}, {"name": "171", "dtype": "float64"}, {"name": "172", "dtype": "float64"}, {"name": "173", "dtype": "float64"}, {"name": "174", "dtype": "float64"}, {"name": "175", "dtype": "float64"}, {"name": "176", "dtype": "float64"}, {"name": "177", "dtype": "float64"}, {"name": "178", "dtype": "float64"}, {"name": "179", "dtype": "float64"}, {"name": "180", "dtype": "float64"}, {"name": "181", "dtype": "float64"}, {"name": "182", "dtype": "float64"}, {"name": "183", "dtype": "float64"}, {"name": "184", "dtype": "float64"}, {"name": "185", "dtype": "float64"}, {"name": "186", "dtype": "float64"}, {"name": "187", "dtype": "float64"}, {"name": "188", "dtype": "float64"}, {"name": "189", "dtype": "float64"}, {"name": "190", "dtype": "float64"}, {"name": "191", "dtype": "float64"}, {"name": "192", "dtype": "float64"}, 
{"name": "193", "dtype": "float64"}, {"name": "194", "dtype": "float64"}, {"name": "195", "dtype": "float64"}, {"name": "196", "dtype": "float64"}, {"name": "197", "dtype": "float64"}, {"name": "198", "dtype": "float64"}, {"name": "199", "dtype": "float64"}, {"name": "200", "dtype": "float64"}, {"name": "201", "dtype": "float64"}, {"name": "202", "dtype": "float64"}, {"name": "203", "dtype": "float64"}, {"name": "204", "dtype": "float64"}, {"name": "205", "dtype": "float64"}, {"name": "206", "dtype": "float64"}, {"name": "207", "dtype": "float64"}, {"name": "208", "dtype": "float64"}, {"name": "209", "dtype": "float64"}, {"name": "210", "dtype": "float64"}, {"name": "211", "dtype": "float64"}, {"name": "212", "dtype": "float64"}, {"name": "213", "dtype": "float64"}, {"name": "214", "dtype": "float64"}, {"name": "215", "dtype": "float64"}, {"name": "216", "dtype": "float64"}, {"name": "217", "dtype": "float64"}, {"name": "218", "dtype": "float64"}, {"name": "219", "dtype": "float64"}, {"name": "220", "dtype": "float64"}, {"name": "221", "dtype": "float64"}, {"name": "222", "dtype": "float64"}, {"name": "223", "dtype": "float64"}, {"name": "224", "dtype": "float64"}, {"name": "225", "dtype": "float64"}, {"name": "226", "dtype": "float64"}, {"name": "227", "dtype": "float64"}, {"name": "228", "dtype": "float64"}, {"name": "229", "dtype": "float64"}, {"name": "230", "dtype": "float64"}, {"name": "231", "dtype": "float64"}, {"name": "232", "dtype": "float64"}, {"name": "233", "dtype": "float64"}, {"name": "234", "dtype": "float64"}, {"name": "235", "dtype": "float64"}, {"name": "236", "dtype": "float64"}, {"name": "237", "dtype": "float64"}, {"name": "238", "dtype": "float64"}, {"name": "239", "dtype": "float64"}, {"name": "240", "dtype": "float64"}, {"name": "241", "dtype": "float64"}, {"name": "242", "dtype": "float64"}, {"name": "243", "dtype": "float64"}, {"name": "244", "dtype": "float64"}, {"name": "245", "dtype": "float64"}, {"name": "246", "dtype": "float64"}, {"name": "247", "dtype": "float64"}, {"name": "248", "dtype": "float64"}, {"name": "249", "dtype": "float64"}, {"name": "250", "dtype": "float64"}, {"name": "251", "dtype": "float64"}, {"name": "252", "dtype": "float64"}, {"name": "253", "dtype": "float64"}, {"name": "254", "dtype": "float64"}, {"name": "255", "dtype": "float64"}, {"name": "256", "dtype": "float64"}, {"name": "257", "dtype": "float64"}, {"name": "258", "dtype": "float64"}, {"name": "259", "dtype": "float64"}, {"name": "260", "dtype": "float64"}, {"name": "261", "dtype": "float64"}, {"name": "262", "dtype": "float64"}, {"name": "263", "dtype": "float64"}, {"name": "264", "dtype": "float64"}, {"name": "265", "dtype": "float64"}, {"name": "266", "dtype": "float64"}, {"name": "267", "dtype": "float64"}, {"name": "268", "dtype": "float64"}, {"name": "269", "dtype": "float64"}, {"name": "270", "dtype": "float64"}, {"name": "271", "dtype": "float64"}, {"name": "272", "dtype": "float64"}, {"name": "273", "dtype": "float64"}, {"name": "274", "dtype": "float64"}, {"name": "275", "dtype": "float64"}, {"name": "276", "dtype": "float64"}, {"name": "277", "dtype": "float64"}, {"name": "278", "dtype": "float64"}, {"name": "279", "dtype": "float64"}, {"name": "280", "dtype": "float64"}, {"name": "281", "dtype": "float64"}, {"name": "282", "dtype": "float64"}, {"name": "283", "dtype": "float64"}, {"name": "284", "dtype": "float64"}, {"name": "285", "dtype": "float64"}, {"name": "286", "dtype": "float64"}, {"name": "287", "dtype": "float64"}, {"name": "288", "dtype": "float64"}, 
{"name": "289", "dtype": "float64"}, {"name": "290", "dtype": "float64"}, {"name": "291", "dtype": "float64"}, {"name": "292", "dtype": "float64"}, {"name": "293", "dtype": "float64"}, {"name": "294", "dtype": "float64"}, {"name": "295", "dtype": "float64"}, {"name": "296", "dtype": "float64"}, {"name": "297", "dtype": "float64"}, {"name": "298", "dtype": "float64"}, {"name": "299", "dtype": "float64"}, {"name": "300", "dtype": "float64"}, {"name": "301", "dtype": "float64"}, {"name": "302", "dtype": "float64"}, {"name": "303", "dtype": "float64"}, {"name": "304", "dtype": "float64"}, {"name": "305", "dtype": "float64"}, {"name": "306", "dtype": "float64"}, {"name": "307", "dtype": "float64"}, {"name": "308", "dtype": "float64"}, {"name": "309", "dtype": "float64"}, {"name": "310", "dtype": "float64"}, {"name": "311", "dtype": "float64"}, {"name": "312", "dtype": "float64"}, {"name": "313", "dtype": "float64"}, {"name": "314", "dtype": "float64"}, {"name": "315", "dtype": "float64"}, {"name": "316", "dtype": "float64"}, {"name": "317", "dtype": "float64"}, {"name": "318", "dtype": "float64"}, {"name": "319", "dtype": "float64"}, {"name": "320", "dtype": "float64"}, {"name": "321", "dtype": "float64"}, {"name": "322", "dtype": "float64"}, {"name": "323", "dtype": "float64"}, {"name": "324", "dtype": "float64"}, {"name": "325", "dtype": "float64"}, {"name": "326", "dtype": "float64"}, {"name": "327", "dtype": "float64"}, {"name": "328", "dtype": "float64"}, {"name": "329", "dtype": "float64"}, {"name": "330", "dtype": "float64"}, {"name": "331", "dtype": "float64"}, {"name": "332", "dtype": "float64"}, {"name": "333", "dtype": "float64"}, {"name": "334", "dtype": "float64"}, {"name": "335", "dtype": "float64"}, {"name": "336", "dtype": "float64"}, {"name": "337", "dtype": "float64"}, {"name": "338", "dtype": "float64"}, {"name": "339", "dtype": "float64"}, {"name": "340", "dtype": "float64"}, {"name": "341", "dtype": "float64"}, {"name": "342", "dtype": "float64"}, {"name": "343", "dtype": "float64"}, {"name": "344", "dtype": "float64"}, {"name": "345", "dtype": "float64"}, {"name": "346", "dtype": "float64"}, {"name": "347", "dtype": "float64"}, {"name": "348", "dtype": "float64"}, {"name": "349", "dtype": "float64"}, {"name": "350", "dtype": "float64"}, {"name": "351", "dtype": "float64"}, {"name": "352", "dtype": "float64"}, {"name": "353", "dtype": "float64"}, {"name": "354", "dtype": "float64"}, {"name": "355", "dtype": "float64"}, {"name": "356", "dtype": "float64"}, {"name": "357", "dtype": "float64"}, {"name": "358", "dtype": "float64"}, {"name": "359", "dtype": "float64"}, {"name": "360", "dtype": "float64"}, {"name": "361", "dtype": "float64"}, {"name": "362", "dtype": "float64"}, {"name": "363", "dtype": "float64"}, {"name": "364", "dtype": "float64"}, {"name": "365", "dtype": "float64"}, {"name": "366", "dtype": "float64"}, {"name": "367", "dtype": "float64"}, {"name": "368", "dtype": "float64"}, {"name": "369", "dtype": "float64"}, {"name": "370", "dtype": "float64"}, {"name": "371", "dtype": "float64"}, {"name": "372", "dtype": "float64"}, {"name": "373", "dtype": "float64"}, {"name": "374", "dtype": "float64"}, {"name": "375", "dtype": "float64"}, {"name": "376", "dtype": "float64"}, {"name": "377", "dtype": "float64"}, {"name": "378", "dtype": "float64"}, {"name": "379", "dtype": "float64"}, {"name": "380", "dtype": "float64"}, {"name": "381", "dtype": "float64"}, {"name": "382", "dtype": "float64"}, {"name": "383", "dtype": "float64"}, {"name": "384", "dtype": "float64"}, 
{"name": "385", "dtype": "float64"}, {"name": "386", "dtype": "float64"}, {"name": "387", "dtype": "float64"}, {"name": "388", "dtype": "float64"}, {"name": "389", "dtype": "float64"}, {"name": "390", "dtype": "float64"}, {"name": "391", "dtype": "float64"}, {"name": "392", "dtype": "float64"}, {"name": "393", "dtype": "float64"}, {"name": "394", "dtype": "float64"}, {"name": "395", "dtype": "float64"}, {"name": "396", "dtype": "float64"}, {"name": "397", "dtype": "float64"}, {"name": "398", "dtype": "float64"}, {"name": "399", "dtype": "float64"}, {"name": "400", "dtype": "float64"}, {"name": "401", "dtype": "float64"}, {"name": "402", "dtype": "float64"}, {"name": "403", "dtype": "float64"}, {"name": "404", "dtype": "float64"}, {"name": "405", "dtype": "float64"}, {"name": "406", "dtype": "float64"}, {"name": "407", "dtype": "float64"}, {"name": "408", "dtype": "float64"}, {"name": "409", "dtype": "float64"}, {"name": "410", "dtype": "float64"}, {"name": "411", "dtype": "float64"}, {"name": "412", "dtype": "float64"}, {"name": "413", "dtype": "float64"}, {"name": "414", "dtype": "float64"}, {"name": "415", "dtype": "float64"}, {"name": "416", "dtype": "float64"}, {"name": "417", "dtype": "float64"}, {"name": "418", "dtype": "float64"}, {"name": "419", "dtype": "float64"}, {"name": "420", "dtype": "float64"}, {"name": "421", "dtype": "float64"}, {"name": "422", "dtype": "float64"}, {"name": "423", "dtype": "float64"}, {"name": "424", "dtype": "float64"}, {"name": "425", "dtype": "float64"}, {"name": "426", "dtype": "float64"}, {"name": "427", "dtype": "float64"}, {"name": "428", "dtype": "float64"}, {"name": "429", "dtype": "float64"}, {"name": "430", "dtype": "float64"}, {"name": "431", "dtype": "float64"}, {"name": "432", "dtype": "float64"}, {"name": "433", "dtype": "float64"}, {"name": "434", "dtype": "float64"}, {"name": "435", "dtype": "float64"}, {"name": "436", "dtype": "float64"}, {"name": "437", "dtype": "float64"}, {"name": "438", "dtype": "float64"}, {"name": "439", "dtype": "float64"}, {"name": "440", "dtype": "float64"}, {"name": "441", "dtype": "float64"}, {"name": "442", "dtype": "float64"}, {"name": "443", "dtype": "float64"}, {"name": "444", "dtype": "float64"}, {"name": "445", "dtype": "float64"}, {"name": "446", "dtype": "float64"}, {"name": "447", "dtype": "float64"}, {"name": "448", "dtype": "float64"}, {"name": "449", "dtype": "float64"}, {"name": "450", "dtype": "float64"}, {"name": "451", "dtype": "float64"}, {"name": "452", "dtype": "float64"}, {"name": "453", "dtype": "float64"}, {"name": "454", "dtype": "float64"}, {"name": "455", "dtype": "float64"}, {"name": "456", "dtype": "float64"}, {"name": "457", "dtype": "float64"}, {"name": "458", "dtype": "float64"}, {"name": "459", "dtype": "float64"}, {"name": "460", "dtype": "float64"}, {"name": "461", "dtype": "float64"}, {"name": "462", "dtype": "float64"}, {"name": "463", "dtype": "float64"}, {"name": "464", "dtype": "float64"}, {"name": "465", "dtype": "float64"}, {"name": "466", "dtype": "float64"}, {"name": "467", "dtype": "float64"}, {"name": "468", "dtype": "float64"}, {"name": "469", "dtype": "float64"}, {"name": "470", "dtype": "float64"}, {"name": "471", "dtype": "float64"}, {"name": "472", "dtype": "float64"}, {"name": "473", "dtype": "float64"}, {"name": "474", "dtype": "float64"}, {"name": "475", "dtype": "float64"}, {"name": "476", "dtype": "float64"}, {"name": "477", "dtype": "float64"}, {"name": "478", "dtype": "float64"}, {"name": "479", "dtype": "float64"}, {"name": "480", "dtype": "float64"}, 
{"name": "481", "dtype": "float64"}, {"name": "482", "dtype": "float64"}, {"name": "483", "dtype": "float64"}, {"name": "484", "dtype": "float64"}, {"name": "485", "dtype": "float64"}, {"name": "486", "dtype": "float64"}, {"name": "487", "dtype": "float64"}, {"name": "488", "dtype": "float64"}, {"name": "489", "dtype": "float64"}, {"name": "490", "dtype": "float64"}, {"name": "491", "dtype": "float64"}, {"name": "492", "dtype": "float64"}, {"name": "493", "dtype": "float64"}, {"name": "494", "dtype": "float64"}, {"name": "495", "dtype": "float64"}, {"name": "496", "dtype": "float64"}, {"name": "497", "dtype": "float64"}, {"name": "498", "dtype": "float64"}, {"name": "499", "dtype": "float64"}, {"name": "500", "dtype": "float64"}, {"name": "501", "dtype": "float64"}, {"name": "502", "dtype": "float64"}, {"name": "503", "dtype": "float64"}, {"name": "504", "dtype": "float64"}, {"name": "505", "dtype": "float64"}, {"name": "506", "dtype": "float64"}, {"name": "507", "dtype": "float64"}, {"name": "508", "dtype": "float64"}, {"name": "509", "dtype": "float64"}, {"name": "510", "dtype": "float64"}, {"name": "511", "dtype": "float64"}, {"name": "512", "dtype": "float64"}, {"name": "513", "dtype": "float64"}, {"name": "514", "dtype": "float64"}, {"name": "515", "dtype": "float64"}, {"name": "516", "dtype": "float64"}, {"name": "517", "dtype": "float64"}, {"name": "518", "dtype": "float64"}, {"name": "519", "dtype": "float64"}, {"name": "520", "dtype": "float64"}, {"name": "521", "dtype": "float64"}, {"name": "522", "dtype": "float64"}, {"name": "523", "dtype": "float64"}, {"name": "524", "dtype": "float64"}, {"name": "525", "dtype": "float64"}, {"name": "526", "dtype": "float64"}, {"name": "527", "dtype": "float64"}, {"name": "528", "dtype": "float64"}, {"name": "529", "dtype": "float64"}, {"name": "530", "dtype": "float64"}, {"name": "531", "dtype": "float64"}, {"name": "532", "dtype": "float64"}, {"name": "533", "dtype": "float64"}, {"name": "534", "dtype": "float64"}, {"name": "535", "dtype": "float64"}, {"name": "536", "dtype": "float64"}, {"name": "537", "dtype": "float64"}, {"name": "538", "dtype": "float64"}, {"name": "539", "dtype": "float64"}, {"name": "540", "dtype": "float64"}, {"name": "541", "dtype": "float64"}, {"name": "542", "dtype": "float64"}, {"name": "543", "dtype": "float64"}, {"name": "544", "dtype": "float64"}, {"name": "545", "dtype": "float64"}, {"name": "546", "dtype": "float64"}, {"name": "547", "dtype": "float64"}, {"name": "548", "dtype": "float64"}, {"name": "549", "dtype": "float64"}, {"name": "550", "dtype": "float64"}, {"name": "551", "dtype": "float64"}, {"name": "552", "dtype": "float64"}, {"name": "553", "dtype": "float64"}, {"name": "554", "dtype": "float64"}, {"name": "555", "dtype": "float64"}, {"name": "556", "dtype": "float64"}, {"name": "557", "dtype": "float64"}, {"name": "558", "dtype": "float64"}, {"name": "559", "dtype": "float64"}, {"name": "560", "dtype": "float64"}, {"name": "561", "dtype": "float64"}, {"name": "562", "dtype": "float64"}, {"name": "563", "dtype": "float64"}, {"name": "564", "dtype": "float64"}, {"name": "565", "dtype": "float64"}, {"name": "566", "dtype": "float64"}, {"name": "567", "dtype": "float64"}, {"name": "568", "dtype": "float64"}, {"name": "569", "dtype": "float64"}, {"name": "570", "dtype": "float64"}, {"name": "571", "dtype": "float64"}, {"name": "572", "dtype": "float64"}, {"name": "573", "dtype": "float64"}, {"name": "574", "dtype": "float64"}, {"name": "575", "dtype": "float64"}, {"name": "576", "dtype": "float64"}, 
{"name": "577", "dtype": "float64"}, {"name": "578", "dtype": "float64"}, {"name": "579", "dtype": "float64"}, {"name": "580", "dtype": "float64"}, {"name": "581", "dtype": "float64"}, {"name": "582", "dtype": "float64"}, {"name": "583", "dtype": "float64"}, {"name": "584", "dtype": "float64"}, {"name": "585", "dtype": "float64"}, {"name": "586", "dtype": "float64"}, {"name": "587", "dtype": "float64"}, {"name": "588", "dtype": "float64"}, {"name": "589", "dtype": "float64"}, {"name": "590", "dtype": "float64"}, {"name": "591", "dtype": "float64"}, {"name": "592", "dtype": "float64"}, {"name": "593", "dtype": "float64"}, {"name": "594", "dtype": "float64"}, {"name": "595", "dtype": "float64"}, {"name": "596", "dtype": "float64"}, {"name": "597", "dtype": "float64"}, {"name": "598", "dtype": "float64"}, {"name": "599", "dtype": "float64"}, {"name": "600", "dtype": "float64"}, {"name": "601", "dtype": "float64"}, {"name": "602", "dtype": "float64"}, {"name": "603", "dtype": "float64"}, {"name": "604", "dtype": "float64"}, {"name": "605", "dtype": "float64"}, {"name": "606", "dtype": "float64"}, {"name": "607", "dtype": "float64"}, {"name": "608", "dtype": "float64"}, {"name": "609", "dtype": "float64"}, {"name": "610", "dtype": "float64"}, {"name": "611", "dtype": "float64"}, {"name": "612", "dtype": "float64"}, {"name": "613", "dtype": "float64"}, {"name": "614", "dtype": "float64"}, {"name": "615", "dtype": "float64"}, {"name": "616", "dtype": "float64"}, {"name": "617", "dtype": "float64"}, {"name": "618", "dtype": "float64"}, {"name": "619", "dtype": "float64"}, {"name": "620", "dtype": "float64"}, {"name": "621", "dtype": "float64"}, {"name": "622", "dtype": "float64"}, {"name": "623", "dtype": "float64"}, {"name": "624", "dtype": "float64"}, {"name": "625", "dtype": "float64"}, {"name": "626", "dtype": "float64"}, {"name": "627", "dtype": "float64"}, {"name": "628", "dtype": "float64"}, {"name": "629", "dtype": "float64"}, {"name": "630", "dtype": "float64"}, {"name": "631", "dtype": "float64"}, {"name": "632", "dtype": "float64"}, {"name": "633", "dtype": "float64"}, {"name": "634", "dtype": "float64"}, {"name": "635", "dtype": "float64"}, {"name": "636", "dtype": "float64"}, {"name": "637", "dtype": "float64"}, {"name": "638", "dtype": "float64"}, {"name": "639", "dtype": "float64"}, {"name": "640", "dtype": "float64"}, {"name": "641", "dtype": "float64"}, {"name": "642", "dtype": "float64"}, {"name": "643", "dtype": "float64"}, {"name": "644", "dtype": "float64"}, {"name": "645", "dtype": "float64"}, {"name": "646", "dtype": "float64"}, {"name": "647", "dtype": "float64"}, {"name": "648", "dtype": "float64"}, {"name": "649", "dtype": "float64"}, {"name": "650", "dtype": "float64"}, {"name": "651", "dtype": "float64"}, {"name": "652", "dtype": "float64"}, {"name": "653", "dtype": "float64"}, {"name": "654", "dtype": "float64"}, {"name": "655", "dtype": "float64"}, {"name": "656", "dtype": "float64"}, {"name": "657", "dtype": "float64"}, {"name": "658", "dtype": "float64"}, {"name": "659", "dtype": "float64"}, {"name": "660", "dtype": "float64"}, {"name": "661", "dtype": "float64"}, {"name": "662", "dtype": "float64"}, {"name": "663", "dtype": "float64"}, {"name": "664", "dtype": "float64"}, {"name": "665", "dtype": "float64"}, {"name": "666", "dtype": "float64"}, {"name": "667", "dtype": "float64"}, {"name": "668", "dtype": "float64"}, {"name": "669", "dtype": "float64"}, {"name": "670", "dtype": "float64"}, {"name": "671", "dtype": "float64"}, {"name": "672", "dtype": "float64"}, 
{"name": "673", "dtype": "float64"}, {"name": "674", "dtype": "float64"}, {"name": "675", "dtype": "float64"}, {"name": "676", "dtype": "float64"}, {"name": "677", "dtype": "float64"}, {"name": "678", "dtype": "float64"}, {"name": "679", "dtype": "float64"}, {"name": "680", "dtype": "float64"}, {"name": "681", "dtype": "float64"}, {"name": "682", "dtype": "float64"}, {"name": "683", "dtype": "float64"}, {"name": "684", "dtype": "float64"}, {"name": "685", "dtype": "float64"}, {"name": "686", "dtype": "float64"}, {"name": "687", "dtype": "float64"}, {"name": "688", "dtype": "float64"}, {"name": "689", "dtype": "float64"}, {"name": "690", "dtype": "float64"}, {"name": "691", "dtype": "float64"}, {"name": "692", "dtype": "float64"}, {"name": "693", "dtype": "float64"}, {"name": "694", "dtype": "float64"}, {"name": "695", "dtype": "float64"}, {"name": "696", "dtype": "float64"}, {"name": "697", "dtype": "float64"}, {"name": "698", "dtype": "float64"}, {"name": "699", "dtype": "float64"}, {"name": "700", "dtype": "float64"}, {"name": "701", "dtype": "float64"}, {"name": "702", "dtype": "float64"}, {"name": "703", "dtype": "float64"}, {"name": "704", "dtype": "float64"}, {"name": "705", "dtype": "float64"}, {"name": "706", "dtype": "float64"}, {"name": "707", "dtype": "float64"}, {"name": "708", "dtype": "float64"}, {"name": "709", "dtype": "float64"}, {"name": "710", "dtype": "float64"}, {"name": "711", "dtype": "float64"}, {"name": "712", "dtype": "float64"}, {"name": "713", "dtype": "float64"}, {"name": "714", "dtype": "float64"}, {"name": "715", "dtype": "float64"}, {"name": "716", "dtype": "float64"}, {"name": "717", "dtype": "float64"}, {"name": "718", "dtype": "float64"}, {"name": "719", "dtype": "float64"}, {"name": "720", "dtype": "float64"}, {"name": "721", "dtype": "float64"}, {"name": "722", "dtype": "float64"}, {"name": "723", "dtype": "float64"}, {"name": "724", "dtype": "float64"}, {"name": "725", "dtype": "float64"}, {"name": "726", "dtype": "float64"}, {"name": "727", "dtype": "float64"}, {"name": "728", "dtype": "float64"}, {"name": "729", "dtype": "float64"}, {"name": "730", "dtype": "float64"}, {"name": "731", "dtype": "float64"}, {"name": "732", "dtype": "float64"}, {"name": "733", "dtype": "float64"}, {"name": "734", "dtype": "float64"}, {"name": "735", "dtype": "float64"}, {"name": "736", "dtype": "float64"}, {"name": "737", "dtype": "float64"}, {"name": "738", "dtype": "float64"}, {"name": "739", "dtype": "float64"}, {"name": "740", "dtype": "float64"}, {"name": "741", "dtype": "float64"}, {"name": "742", "dtype": "float64"}, {"name": "743", "dtype": "float64"}, {"name": "744", "dtype": "float64"}, {"name": "745", "dtype": "float64"}, {"name": "746", "dtype": "float64"}, {"name": "747", "dtype": "float64"}, {"name": "748", "dtype": "float64"}, {"name": "749", "dtype": "float64"}, {"name": "750", "dtype": "float64"}, {"name": "751", "dtype": "float64"}, {"name": "752", "dtype": "float64"}, {"name": "753", "dtype": "float64"}, {"name": "754", "dtype": "float64"}, {"name": "755", "dtype": "float64"}, {"name": "756", "dtype": "float64"}, {"name": "757", "dtype": "float64"}, {"name": "758", "dtype": "float64"}, {"name": "759", "dtype": "float64"}, {"name": "760", "dtype": "float64"}, {"name": "761", "dtype": "float64"}, {"name": "762", "dtype": "float64"}, {"name": "763", "dtype": "float64"}, {"name": "764", "dtype": "float64"}, {"name": "765", "dtype": "float64"}, {"name": "766", "dtype": "float64"}, {"name": "767", "dtype": "float64"}, {"name": "768", "dtype": "float64"}, 
{"name": "769", "dtype": "float64"}, {"name": "770", "dtype": "float64"}, {"name": "771", "dtype": "float64"}, {"name": "772", "dtype": "float64"}, {"name": "773", "dtype": "float64"}, {"name": "774", "dtype": "float64"}, {"name": "775", "dtype": "float64"}, {"name": "776", "dtype": "float64"}, {"name": "777", "dtype": "float64"}, {"name": "778", "dtype": "float64"}, {"name": "779", "dtype": "float64"}, {"name": "780", "dtype": "float64"}, {"name": "781", "dtype": "float64"}, {"name": "782", "dtype": "float64"}, {"name": "783", "dtype": "float64"}, {"name": "784", "dtype": "float64"}, {"name": "785", "dtype": "float64"}, {"name": "786", "dtype": "float64"}, {"name": "787", "dtype": "float64"}, {"name": "788", "dtype": "float64"}, {"name": "789", "dtype": "float64"}, {"name": "790", "dtype": "float64"}, {"name": "791", "dtype": "float64"}, {"name": "792", "dtype": "float64"}, {"name": "793", "dtype": "float64"}, {"name": "794", "dtype": "float64"}, {"name": "795", "dtype": "float64"}, {"name": "796", "dtype": "float64"}, {"name": "797", "dtype": "float64"}, {"name": "798", "dtype": "float64"}, {"name": "799", "dtype": "float64"}, {"name": "800", "dtype": "float64"}, {"name": "801", "dtype": "float64"}, {"name": "802", "dtype": "float64"}, {"name": "803", "dtype": "float64"}, {"name": "804", "dtype": "float64"}, {"name": "805", "dtype": "float64"}, {"name": "806", "dtype": "float64"}, {"name": "807", "dtype": "float64"}, {"name": "808", "dtype": "float64"}, {"name": "809", "dtype": "float64"}, {"name": "810", "dtype": "float64"}, {"name": "811", "dtype": "float64"}, {"name": "812", "dtype": "float64"}, {"name": "813", "dtype": "float64"}, {"name": "814", "dtype": "float64"}, {"name": "815", "dtype": "float64"}, {"name": "816", "dtype": "float64"}, {"name": "817", "dtype": "float64"}, {"name": "818", "dtype": "float64"}, {"name": "819", "dtype": "float64"}, {"name": "820", "dtype": "float64"}, {"name": "821", "dtype": "float64"}, {"name": "822", "dtype": "float64"}, {"name": "823", "dtype": "float64"}, {"name": "824", "dtype": "float64"}, {"name": "825", "dtype": "float64"}, {"name": "826", "dtype": "float64"}, {"name": "827", "dtype": "float64"}, {"name": "828", "dtype": "float64"}, {"name": "829", "dtype": "float64"}, {"name": "830", "dtype": "float64"}, {"name": "831", "dtype": "float64"}, {"name": "832", "dtype": "float64"}, {"name": "833", "dtype": "float64"}, {"name": "834", "dtype": "float64"}, {"name": "835", "dtype": "float64"}, {"name": "836", "dtype": "float64"}, {"name": "837", "dtype": "float64"}, {"name": "838", "dtype": "float64"}, {"name": "839", "dtype": "float64"}, {"name": "840", "dtype": "float64"}, {"name": "841", "dtype": "float64"}, {"name": "842", "dtype": "float64"}, {"name": "843", "dtype": "float64"}, {"name": "844", "dtype": "float64"}, {"name": "845", "dtype": "float64"}, {"name": "846", "dtype": "float64"}, {"name": "847", "dtype": "float64"}, {"name": "848", "dtype": "float64"}, {"name": "849", "dtype": "float64"}, {"name": "850", "dtype": "float64"}, {"name": "851", "dtype": "float64"}, {"name": "852", "dtype": "float64"}, {"name": "853", "dtype": "float64"}, {"name": "854", "dtype": "float64"}, {"name": "855", "dtype": "float64"}, {"name": "856", "dtype": "float64"}, {"name": "857", "dtype": "float64"}, {"name": "858", "dtype": "float64"}, {"name": "859", "dtype": "float64"}, {"name": "860", "dtype": "float64"}, {"name": "861", "dtype": "float64"}, {"name": "862", "dtype": "float64"}, {"name": "863", "dtype": "float64"}, {"name": "864", "dtype": "float64"}, 
{"name": "865", "dtype": "float64"}, {"name": "866", "dtype": "float64"}, {"name": "867", "dtype": "float64"}, {"name": "868", "dtype": "float64"}, {"name": "869", "dtype": "float64"}, {"name": "870", "dtype": "float64"}, {"name": "871", "dtype": "float64"}, {"name": "872", "dtype": "float64"}, {"name": "873", "dtype": "float64"}, {"name": "874", "dtype": "float64"}, {"name": "875", "dtype": "float64"}, {"name": "876", "dtype": "float64"}, {"name": "877", "dtype": "float64"}, {"name": "878", "dtype": "float64"}, {"name": "879", "dtype": "float64"}, {"name": "880", "dtype": "float64"}, {"name": "881", "dtype": "float64"}, {"name": "882", "dtype": "float64"}, {"name": "883", "dtype": "float64"}, {"name": "884", "dtype": "float64"}, {"name": "885", "dtype": "float64"}, {"name": "886", "dtype": "float64"}, {"name": "887", "dtype": "float64"}, {"name": "888", "dtype": "float64"}, {"name": "889", "dtype": "float64"}, {"name": "890", "dtype": "float64"}, {"name": "891", "dtype": "float64"}, {"name": "892", "dtype": "float64"}, {"name": "893", "dtype": "float64"}, {"name": "894", "dtype": "float64"}, {"name": "895", "dtype": "float64"}, {"name": "896", "dtype": "float64"}, {"name": "897", "dtype": "float64"}, {"name": "898", "dtype": "float64"}, {"name": "899", "dtype": "float64"}, {"name": "900", "dtype": "float64"}, {"name": "901", "dtype": "float64"}, {"name": "902", "dtype": "float64"}, {"name": "903", "dtype": "float64"}, {"name": "904", "dtype": "float64"}, {"name": "905", "dtype": "float64"}, {"name": "906", "dtype": "float64"}, {"name": "907", "dtype": "float64"}, {"name": "908", "dtype": "float64"}, {"name": "909", "dtype": "float64"}, {"name": "910", "dtype": "float64"}, {"name": "911", "dtype": "float64"}, {"name": "912", "dtype": "float64"}, {"name": "913", "dtype": "float64"}, {"name": "914", "dtype": "float64"}, {"name": "915", "dtype": "float64"}, {"name": "916", "dtype": "float64"}, {"name": "917", "dtype": "float64"}, {"name": "918", "dtype": "float64"}, {"name": "919", "dtype": "float64"}, {"name": "920", "dtype": "float64"}, {"name": "921", "dtype": "float64"}, {"name": "922", "dtype": "float64"}, {"name": "923", "dtype": "float64"}, {"name": "924", "dtype": "float64"}, {"name": "925", "dtype": "float64"}, {"name": "926", "dtype": "float64"}, {"name": "927", "dtype": "float64"}, {"name": "928", "dtype": "float64"}, {"name": "929", "dtype": "float64"}, {"name": "930", "dtype": "float64"}, {"name": "931", "dtype": "float64"}, {"name": "932", "dtype": "float64"}, {"name": "933", "dtype": "float64"}, {"name": "934", "dtype": "float64"}, {"name": "935", "dtype": "float64"}, {"name": "936", "dtype": "float64"}, {"name": "937", "dtype": "float64"}, {"name": "938", "dtype": "float64"}, {"name": "939", "dtype": "float64"}, {"name": "940", "dtype": "float64"}, {"name": "941", "dtype": "float64"}, {"name": "942", "dtype": "float64"}, {"name": "943", "dtype": "float64"}, {"name": "944", "dtype": "float64"}, {"name": "945", "dtype": "float64"}, {"name": "946", "dtype": "float64"}, {"name": "947", "dtype": "float64"}, {"name": "948", "dtype": "float64"}, {"name": "949", "dtype": "float64"}, {"name": "950", "dtype": "float64"}, {"name": "951", "dtype": "float64"}, {"name": "952", "dtype": "float64"}, {"name": "953", "dtype": "float64"}, {"name": "954", "dtype": "float64"}, {"name": "955", "dtype": "float64"}, {"name": "956", "dtype": "float64"}, {"name": "957", "dtype": "float64"}, {"name": "958", "dtype": "float64"}, {"name": "959", "dtype": "float64"}, {"name": "960", "dtype": "float64"}, 
{"name": "961", "dtype": "float64"}, {"name": "962", "dtype": "float64"}, {"name": "963", "dtype": "float64"}, {"name": "964", "dtype": "float64"}, {"name": "965", "dtype": "float64"}, {"name": "966", "dtype": "float64"}, {"name": "967", "dtype": "float64"}, {"name": "968", "dtype": "float64"}, {"name": "969", "dtype": "float64"}, {"name": "970", "dtype": "float64"}, {"name": "971", "dtype": "float64"}, {"name": "972", "dtype": "float64"}, {"name": "973", "dtype": "float64"}, {"name": "974", "dtype": "float64"}, {"name": "975", "dtype": "float64"}, {"name": "976", "dtype": "float64"}, {"name": "977", "dtype": "float64"}, {"name": "978", "dtype": "float64"}, {"name": "979", "dtype": "float64"}, {"name": "980", "dtype": "float64"}, {"name": "981", "dtype": "float64"}, {"name": "982", "dtype": "float64"}, {"name": "983", "dtype": "float64"}, {"name": "984", "dtype": "float64"}, {"name": "985", "dtype": "float64"}, {"name": "986", "dtype": "float64"}, {"name": "987", "dtype": "float64"}, {"name": "988", "dtype": "float64"}, {"name": "989", "dtype": "float64"}, {"name": "990", "dtype": "float64"}, {"name": "991", "dtype": "float64"}, {"name": "992", "dtype": "float64"}, {"name": "993", "dtype": "float64"}, {"name": "994", "dtype": "float64"}, {"name": "995", "dtype": "float64"}, {"name": "996", "dtype": "float64"}, {"name": "997", "dtype": "float64"}, {"name": "998", "dtype": "float64"}, {"name": "999", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 1758550332.368, "num_examples": 57948}], "download_size": 1328372140, "dataset_size": 1758550332.368}}
2023-05-11T18:34:02+00:00
478caeb97e8cc8ae9d1370eabb463f81080a38d3
This is the same dataset as [`DeveloperOats/DBPedia_Classes`](https://huggingface.co/datasets/DeveloperOats/DBPedia_Classes). The only differences are

1. the addition of a unique identifier, `uid`
2. the addition of the indices, that is, 3 columns with the embeddings of 3 different sentence-transformers:
   - `all-mpnet-base-v2`
   - `multi-qa-mpnet-base-dot-v1`
   - `all-MiniLM-L12-v2`
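As a rough illustration only (not the exact script used to build this dataset), index columns like these could be produced with `sentence-transformers`; the `text` column name and the output column layout below are assumptions:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# Load the original dataset (assumed to expose a "text" column).
ds = load_dataset("DeveloperOats/DBPedia_Classes", split="train")

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

def embed(batch):
    # One embedding vector per example, stored as a list column.
    return {"all-MiniLM-L12-v2": model.encode(batch["text"]).tolist()}

ds = ds.map(embed, batched=True, batch_size=256)
```

The same loop would be repeated for the other two models listed above to obtain the remaining index columns.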
pietrolesci/DBPedia_Classes_indexed
[ "task_categories:text-classification", "size_categories:100K<n<1M", "language:en", "region:us" ]
2023-05-11T12:26:36+00:00
{"language": ["en"], "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"]}
2023-05-11T12:38:04+00:00
c27f0ecfbc36f21b3f09281378b5f74d377e3e11
CrazyVincent/criteo
[ "region:us" ]
2023-05-11T12:30:01+00:00
{}
2023-05-11T12:30:13+00:00
ba6b3c3c6f6401632b3a9416a42382cbd54e1f57
### As seen on https://huggingface.co/blog/assisted-generation

---
title: "Assisted Generation: a new direction toward low-latency text generation"
thumbnail: /blog/assets/assisted-generation/thumbnail.png
authors:
- user: joaogante
---

# Assisted Generation: a new direction toward low-latency text generation

<!-- {blog_metadata} -->
<!-- {authors} -->

Large language models are all the rage these days, with many companies investing significant resources to scale them up and unlock new capabilities. However, as humans with ever-decreasing attention spans, we also dislike their slow response times. Latency is critical for a good user experience, and smaller models are often used despite their lower quality (e.g. in [code completion](https://ai.googleblog.com/2022/07/ml-enhanced-code-completion-improves.html)).

Why is text generation so slow? What’s preventing you from deploying low-latency large language models without going bankrupt? In this blog post, we will revisit the bottlenecks for autoregressive text generation and introduce a new decoding method to tackle the latency problem. You’ll see that by using our new method, assisted generation, you can reduce latency up to 10x in commodity hardware!

## Understanding text generation latency

The core of modern text generation is straightforward to understand. Let’s look at the central piece, the ML model. Its input contains a text sequence, which includes the text generated so far, and potentially other model-specific components (for instance, Whisper also has an audio input). The model takes the input and runs a forward pass: the input is fed to the model and passed sequentially along its layers until the unnormalized log probabilities for the next token are predicted (also known as logits). A token may consist of entire words, sub-words, or even individual characters, depending on the model. The [illustrated GPT-2](https://jalammar.github.io/illustrated-gpt2/) is a great reference if you’d like to dive deeper into this part of text generation.

<!-- [GIF 1 -- FWD PASS] -->
<figure class="image table text-center m-0 w-full">
  <video style="max-width: 90%; margin: auto;" autoplay loop muted playsinline
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_1_1080p.mov"
  ></video>
</figure>

A model forward pass gets you the logits for the next token, which you can freely manipulate (e.g. set the probability of undesirable words or sequences to 0). The following step in text generation is to select the next token from these logits. Common strategies include picking the most likely token, known as greedy decoding, or sampling from their distribution, also called multinomial sampling. Chaining model forward passes with next token selection iteratively gets you text generation. This explanation is the tip of the iceberg when it comes to decoding methods; please refer to [our blog post on text generation](https://huggingface.co/blog/how-to-generate) for an in-depth exploration.

<!-- [GIF 2 -- TEXT GENERATION] -->
<figure class="image table text-center m-0 w-full">
  <video style="max-width: 90%; margin: auto;" autoplay loop muted playsinline
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_2_1080p.mov"
  ></video>
</figure>

From the description above, the latency bottleneck in text generation is clear: running a model forward pass for large models is slow, and you may need to do hundreds of them in a sequence.
But let’s dive deeper: why are forward passes slow? Forward passes are typically dominated by matrix multiplications and, after a quick visit to the [corresponding wikipedia section](https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm#Communication-avoiding_and_distributed_algorithms), you can tell that memory bandwidth is the limitation in this operation (e.g. from the GPU RAM to the GPU compute cores). In other words, *the bottleneck in the forward pass comes from loading the model layer weights into the computation cores of your device, not from performing the computations themselves*.

At the moment, you have three main avenues you can explore to get the most out of text generation, all tackling the performance of the model forward pass. First, you have the hardware-specific model optimizations. For instance, your device may be compatible with [Flash Attention](https://github.com/HazyResearch/flash-attention), which speeds up the attention layer through a reorder of the operations, or [INT8 quantization](https://huggingface.co/blog/hf-bitsandbytes-integration), which reduces the size of the model weights.

Second, when you know you’ll get concurrent text generation requests, you can batch the inputs and massively increase the throughput with a small latency penalty. The model layer weights loaded into the device are now used on several input rows in parallel, which means that you’ll get more tokens out for approximately the same memory bandwidth burden. The catch with batching is that you need additional device memory (or to offload the memory somewhere) – at the end of this spectrum, you can see projects like [FlexGen](https://github.com/FMInference/FlexGen) which optimize throughput at the expense of latency.

```python
# Example showcasing the impact of batched generation. Measurement device: RTX3090
from transformers import AutoModelForCausalLM, AutoTokenizer
import time

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2").to("cuda")
inputs = tokenizer(["Hello world"], return_tensors="pt").to("cuda")

def print_tokens_per_second(batch_size):
    new_tokens = 100
    cumulative_time = 0

    # warmup
    model.generate(
        **inputs, do_sample=True, max_new_tokens=new_tokens, num_return_sequences=batch_size
    )

    for _ in range(10):
        start = time.time()
        model.generate(
            **inputs, do_sample=True, max_new_tokens=new_tokens, num_return_sequences=batch_size
        )
        cumulative_time += time.time() - start
    print(f"Tokens per second: {new_tokens * batch_size * 10 / cumulative_time:.1f}")

print_tokens_per_second(1)   # Tokens per second: 418.3
print_tokens_per_second(64)  # Tokens per second: 16266.2 (~39x more tokens per second)
```

Finally, if you have multiple devices available to you, you can distribute the workload using [Tensor Parallelism](https://huggingface.co/docs/transformers/main/en/perf_train_gpu_many#tensor-parallelism) and obtain lower latency. With Tensor Parallelism, you split the memory bandwidth burden across multiple devices, but you now have to consider inter-device communication bottlenecks in addition to the monetary cost of running multiple devices. The benefits depend largely on the model size: models that easily fit on a single consumer device see very limited benefits.
Taking the results from this [DeepSpeed blog post](https://www.microsoft.com/en-us/research/blog/deepspeed-accelerating-large-scale-model-inference-and-training-via-system-optimizations-and-compression/), you see that you can spread a 17B parameter model across 4 GPUs to reduce the latency by 1.5x (Figure 7).

These three types of improvements can be used in tandem, resulting in [high throughput solutions](https://github.com/huggingface/text-generation-inference). However, after applying hardware-specific optimizations, there are limited options to reduce latency – and the existing options are expensive. Let’s fix that!

## Language decoder forward pass, revisited

You’ve read above that each model forward pass yields the logits for the next token, but that’s actually an incomplete description. During text generation, the typical iteration consists in the model receiving as input the latest generated token, plus cached internal computations for all other previous inputs, returning the next token logits. Caching is used to avoid redundant computations, resulting in faster forward passes, but it’s not mandatory (and can be used partially). When caching is disabled, the input contains the entire sequence of tokens generated so far and the output contains the logits corresponding to the next token for *all positions* in the sequence! The logits at position N correspond to the distribution for the next token if the input consisted of the first N tokens, ignoring all subsequent tokens in the sequence. In the particular case of greedy decoding, if you pass the generated sequence as input and apply the argmax operator to the resulting logits, you will obtain the generated sequence back.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

inputs = tok(["The"], return_tensors="pt")
generated = model.generate(**inputs, do_sample=False, max_new_tokens=10)
forward_confirmation = model(generated).logits.argmax(-1)

# We exclude the opposing tips from each sequence: the forward pass returns
# the logits for the next token, so it is shifted by one position.
print(generated[0, 1:].tolist() == forward_confirmation[0, :-1].tolist())  # True
```

This means that you can use a model forward pass for a different purpose: in addition to feeding some tokens to predict the next one, you can also pass a sequence to the model and double-check whether the model would generate that same sequence (or part of it).

<!-- [GIF 3 -- FWD CONFIRMATION] -->
<figure class="image table text-center m-0 w-full">
  <video style="max-width: 90%; margin: auto;" autoplay loop muted playsinline
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_3_1080p.mov"
  ></video>
</figure>

Let’s consider for a second that you have access to a magical latency-free oracle model that generates the same sequence as your model, for any given input. For argument’s sake, it can’t be used directly, it’s limited to being an assistant to your generation procedure. Using the property described above, you could use this assistant model to get candidate output tokens followed by a forward pass with your model to confirm that they are indeed correct. In this utopian scenario, the latency of text generation would be reduced from `O(n)` to `O(1)`, with `n` being the number of generated tokens. For long generations, we're talking about several orders of magnitude.
Walking a step towards reality, let's assume the assistant model has lost its oracle properties. Now it’s a latency-free model that gets some of the candidate tokens wrong, according to your model. Due to the autoregressive nature of the task, as soon as the assistant gets a token wrong, all subsequent candidates must be invalidated. However, that does not prevent you from querying the assistant again, after correcting the wrong token with your model, and repeating this process iteratively. Even if the assistant fails a few tokens, text generation would have an order of magnitude less latency than in its original form.

Obviously, there are no latency-free assistant models. Nevertheless, it is relatively easy to find a model that approximates some other model’s text generation outputs – smaller versions of the same architecture trained similarly often fit this property. Moreover, when the difference in model sizes becomes significant, the cost of using the smaller model as an assistant becomes an afterthought after factoring in the benefits of skipping a few forward passes! You now understand the core of _assisted generation_.

## Greedy decoding with assisted generation

Assisted generation is a balancing act. You want the assistant to quickly generate a candidate sequence while being as accurate as possible. If the assistant has poor quality, you get the cost of using the assistant model with little to no benefits. On the other hand, optimizing the quality of the candidate sequences may imply the use of slow assistants, resulting in a net slowdown. While we can't automate the selection of the assistant model for you, we’ve included an additional requirement and a heuristic to ensure the time spent with the assistant stays in check.

First, the requirement – the assistant must have the exact same tokenizer as your model. If this requirement was not in place, expensive token decoding and re-encoding steps would have to be added. Furthermore, these additional steps would have to happen on the CPU, which in turn may need slow inter-device data transfers. Fast usage of the assistant is critical for the benefits of assisted generation to show up.

Finally, the heuristic. By this point, you have probably noticed the similarities between the movie Inception and assisted generation – you are, after all, running text generation inside text generation. There will be one assistant model forward pass per candidate token, and we know that forward passes are expensive. While you can’t know in advance the number of tokens that the assistant model will get right, you can keep track of this information and use it to limit the number of candidate tokens requested to the assistant – some sections of the output are easier to anticipate than others.

Wrapping it all up, here’s our original implementation of the assisted generation loop ([code](https://github.com/huggingface/transformers/blob/849367ccf741d8c58aa88ccfe1d52d8636eaf2b7/src/transformers/generation/utils.py#L4064)):

1. Use greedy decoding to generate a certain number of candidate tokens with the assistant model, producing `candidates`. The number of produced candidate tokens is initialized to `5` the first time assisted generation is called.
2. Using our model, do a forward pass with `candidates`, obtaining `logits`.
3. Use the token selection method (`.argmax()` for greedy search or `.multinomial()` for sampling) to get the `next_tokens` from `logits`.
4. Compare `next_tokens` to `candidates` and get the number of matching tokens. Remember that this comparison has to be done with left-to-right causality: after the first mismatch, all candidates are invalidated.
5. Use the number of matches to slice things up and discard variables related to unconfirmed candidate tokens. In essence, in `next_tokens`, keep the matching tokens plus the first divergent token (which our model generates from a valid candidate subsequence).
6. Adjust the number of candidate tokens to be produced in the next iteration — our original heuristic increases it by `2` if ALL tokens match and decreases it by `1` otherwise.
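To make step 6 more concrete, here is a minimal illustrative sketch of that grow/shrink rule. It is just a restatement of the heuristic described above in code, not the actual 🤗 Transformers implementation, and the lower bound of one candidate token is an assumption of this sketch:

```python
def update_num_candidates(num_candidates: int, num_matches: int) -> int:
    """Step 6 heuristic: grow when the assistant got every candidate right, shrink otherwise."""
    if num_matches == num_candidates:
        return num_candidates + 2
    return max(1, num_candidates - 1)  # assumed floor: always request at least one candidate
```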
<!-- [GIF 4 -- ASSISTED GENERATION] -->
<figure class="image table text-center m-0 w-full">
  <video style="max-width: 90%; margin: auto;" autoplay loop muted playsinline
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_4_1080p.mov"
  ></video>
</figure>

We’ve designed the API in 🤗 Transformers such that this process is hassle-free for you. All you need to do is to pass the assistant model under the new `assistant_model` keyword argument and reap the latency gains! At the time of the release of this blog post, assisted generation is limited to a batch size of `1`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

prompt = "Alice and Bob"
checkpoint = "EleutherAI/pythia-1.4b-deduped"
assistant_checkpoint = "EleutherAI/pythia-160m-deduped"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
inputs = tokenizer(prompt, return_tensors="pt").to(device)

model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint).to(device)
outputs = model.generate(**inputs, assistant_model=assistant_model)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# ['Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a']
```

Is the additional internal complexity worth it? Let’s have a look at the latency numbers for the greedy decoding case (results for sampling are in the next section), considering a batch size of `1`. These results were pulled directly out of 🤗 Transformers without any additional optimizations, so you should be able to reproduce them in your setup.

<!-- [SPACE WITH GREEDY DECODING PERFORMANCE NUMBERS] -->
<script
  type="module"
  src="https://gradio.s3-us-west-2.amazonaws.com/3.28.2/gradio.js"
></script>

<gradio-app space="joaogante/assisted_generation_benchmarks"></gradio-app>

Glancing at the collected numbers, we see that assisted generation can deliver significant latency reductions in diverse settings, but it is not a silver bullet – you should benchmark it before applying it to your use case. We can conclude that assisted generation:

1. 🤏 Requires access to an assistant model that is at least an order of magnitude smaller than your model (the bigger the difference, the better);
2. 🚀 Gets up to 3x speedups in the presence of INT8 and up to 2x otherwise, when the model fits in the GPU memory;
3. 🤯 If you’re playing with models that do not fit in your GPU and are relying on memory offloading, you can see up to 10x speedups;
4. 📄 Shines in input-grounded tasks, like automatic speech recognition or summarization.

## Sample with assisted generation

Greedy decoding is suited for input-grounded tasks (automatic speech recognition, translation, summarization, ...) or factual knowledge-seeking.
Open-ended tasks requiring large levels of creativity, such as most uses of a language model as a chatbot, should use sampling instead. Assisted generation is naturally designed for greedy decoding, but that doesn’t mean that you can’t use assisted generation with multinomial sampling!

Drawing samples from a probability distribution for the next token will cause our greedy assistant to fail more often, reducing its latency benefits. However, we can control how sharp the probability distribution for the next tokens is, using the temperature coefficient that’s present in most sampling-based applications. At one extreme, with temperatures close to 0, sampling will approximate greedy decoding, favoring the most likely token. At the other extreme, with the temperature set to values much larger than 1, sampling will be chaotic, drawing from a uniform distribution. Low temperatures are, therefore, more favorable to your assistant model, retaining most of the latency benefits from assisted generation, as we can see below.

<!-- [TEMPERATURE RESULTS, SHOW THAT LATENCY INCREASES STEADILY WITH TEMP] -->
<div align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/temperature.png"/>
</div>

Why don’t you see it for yourself, and get a feeling for assisted generation?

<!-- [DEMO] -->
<gradio-app space="joaogante/assisted_generation_demo"></gradio-app>

## Future directions

Assisted generation shows that modern text generation strategies are ripe for optimization. Understanding that it is currently a memory-bound problem, not a compute-bound problem, allows us to apply simple heuristics to get the most out of the available memory bandwidth, alleviating the bottleneck. We believe that further refinement of the use of assistant models will get us even bigger latency reductions - for instance, we may be able to skip a few more forward passes if we request the assistant to generate several candidate continuations. Naturally, releasing high-quality small models to be used as assistants will be critical to realizing and amplifying the benefits.

Initially released under our 🤗 Transformers library, to be used with the `.generate()` function, we expect to offer it throughout the Hugging Face universe. Its implementation is also completely open-source so, if you’re working on text generation and not using our tools, feel free to use it as a reference.

Finally, assisted generation resurfaces a crucial question in text generation. The field has been evolving with the constraint where all new tokens are the result of a fixed amount of compute, for a given model. One token per homogeneous forward pass, in pure autoregressive fashion. This blog post reinforces the idea that it shouldn’t be the case: large subsections of the generated output can also be equally generated by models that are a fraction of the size. For that, we’ll need new model architectures and decoding methods – we’re excited to see what the future holds!

## Related Work

After the original release of this blog post, it came to my attention that other works have explored the same core principle (use a forward pass to validate longer continuations).
In particular, have a look at the following works:

- [Blockwise Parallel Decoding](https://proceedings.neurips.cc/paper/2018/file/c4127b9194fe8562c64dc0f5bf2c93bc-Paper.pdf), by Google Brain
- [Speculative Sampling](https://arxiv.org/abs/2302.01318), by DeepMind

## Citation

```bibtex
@misc {gante2023assisted,
    author       = { {Joao Gante} },
    title        = { Assisted Generation: a new direction toward low-latency text generation },
    year         = 2023,
    url          = { https://huggingface.co/blog/assisted-generation },
    doi          = { 10.57967/hf/0638 },
    publisher    = { Hugging Face Blog }
}
```

## Acknowledgements

I'd like to thank Sylvain Gugger, Nicolas Patry, and Lewis Tunstall for sharing many valuable suggestions to improve this blog post. Finally, kudos to Chunte Lee for designing the gorgeous cover you can see on our web page.
joaogante/assisted_generation
[ "arxiv:2302.01318", "doi:10.57967/hf/0638", "region:us" ]
2023-05-11T12:33:30+00:00
{}
2023-05-16T08:45:33+00:00
938e2af60aa48a7c974841a968ade2fc9c58b151
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
iamjinchen/FightingICE
[ "region:us" ]
2023-05-11T12:35:04+00:00
{}
2023-05-11T12:40:13+00:00
36755a3ca40b8218d204b6f6662cb54a8152878b
tolysim/test
[ "license:bigcode-openrail-m", "region:us" ]
2023-05-11T12:41:53+00:00
{"license": "bigcode-openrail-m"}
2023-05-11T12:41:53+00:00
5e9bb6fe140531a0c47216530f0cefba7cd5c561
# Dataset Card for "oasst1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ybelkada/oasst1
[ "region:us" ]
2023-05-11T12:50:58+00:00
{"dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 59236805.7, "num_examples": 39663}, {"name": "test", "num_bytes": 6581867.3, "num_examples": 4407}], "download_size": 38472497, "dataset_size": 65818673.0}}
2023-05-11T12:51:08+00:00
376523e151ad3a736b1ecdec55730c935a0f65c8
# Train (80%), val (10%), test (10%) splits for IWildCam 2022

Each text file is a list of sequence IDs. The splits were sampled from the original train-set samples that have `count` annotations, which is 1,780 samples in total.
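As a hedged usage sketch (the file names below are assumptions; adjust them to the actual files in this repository), the split files can be read and used to filter the iWildCam 2022 annotations by sequence ID:

```python
from pathlib import Path

# Hypothetical file names for the three splits.
splits = {name: Path(f"{name}.txt").read_text().split() for name in ("train", "val", "test")}
train_ids = set(splits["train"])

# Example: keep only annotations whose sequence ID belongs to the train split.
# train_annotations = [a for a in annotations if a["seq_id"] in train_ids]
```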
kiyoonkim/iwildcam-2022-splits
[ "region:us" ]
2023-05-11T12:57:03+00:00
{}
2023-05-11T12:59:18+00:00
1f486923d43b8dee4f1603597ee0e777c22b5585
# Dataset Card for "oasst1-tiny-subset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ybelkada/oasst1-tiny-subset
[ "region:us" ]
2023-05-11T13:06:58+00:00
{"dataset_info": {"features": [{"name": "messages", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 59104494.0, "num_examples": 39663}, {"name": "test", "num_bytes": 6567166.0, "num_examples": 4407}], "download_size": 38767143, "dataset_size": 65671660.0}}
2023-05-11T13:07:03+00:00
5dfefeaae18a02989dd938fe539887fa4551db8f
# Fine tuning progress validation - RedPajama 3B, StableLM Alpha 7B, Open-LLaMA This repository contains the progress of fine-tuning models: RedPajama 3B, StableLM Alpha 7B, Open-LLaMA. These models have been fine-tuned on a specific text dataset and the results of the fine-tuning process are provided in the text file included in this repository. ## Fine-Tuning Details - **Model: RedPajama 3B, size: 3 billion parameters, method: adapter** - **Model: StableLM Alpha 7B, size: 7 billion parameters, method: adapter** - **Model: Open-LLaMA 7B 300B, size: 7 billion parameters (300B tokens), method: LoRA** - **Model: Open-LLaMA 7B 300B, size: 7 billion parameters (300B tokens), method: adapter** ## Dataset The text source used for fine-tuning these models has a size of 25MB, which has been split into 174,000 data inputs. ## Fine-Tuning Process The fine-tuning process was conducted with the following details: - **Epochs:** 1 - **Validation Frequency:** Every 1% of the training data - **Training Data:** 174,000 data inputs ## Acknowledgments #1 I would like to acknowledge @stabilityai, @togethercompute and OpenLM Research for providing the base models. Their groundbreaking work in the field of natural language processing has made projects like this possible. ## Acknowledgments #2 I would like to acknowledge @LightningAI for providing the lit-parrot fine-tuning framework. ## Disclaimer There might be NSFW results in the results. ## License This repository and the fine-tuned models are licensed under the [MIT License](LICENSE). Feel free to modify and use them according to the terms of the license.
kstevica/llm-comparison
[ "task_categories:text-generation", "size_categories:n<1K", "language:en", "license:mit", "stories", "region:us" ]
2023-05-11T13:41:15+00:00
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-generation"], "pretty_name": "LLM Comparison", "tags": ["stories"]}
2023-05-14T12:03:41+00:00
0c9308136e4b30872d4d66b09cea8ab37899adde
# Dataset Card for "randomized_raw_miniwob_episodes" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
LucasThil/randomized_raw_miniwob_episodes
[ "region:us" ]
2023-05-11T13:57:10+00:00
{"dataset_info": {"features": [{"name": "task_name", "dtype": "string"}, {"name": "utterance", "dtype": "string"}, {"name": "reward", "dtype": "float64"}, {"name": "raw_reward", "dtype": "float64"}, {"name": "processed_states", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1611819051, "num_examples": 13412}], "download_size": 163171601, "dataset_size": 1611819051}}
2023-05-13T12:57:33+00:00
28b9359e8764b6a686e74c4bd599b609fadefe64
Repton/testing_embeddings
[ "license:mit", "region:us" ]
2023-05-11T14:17:54+00:00
{"license": "mit"}
2023-05-11T14:21:53+00:00
769a15e44e7d691148dd05e54ae2b058ceaed1f0
# PIE Dataset Card for "brat" This is a [PyTorch-IE](https://github.com/ChristophAlt/pytorch-ie) wrapper for the [BRAT Huggingface dataset loading script](https://huggingface.co/datasets/DFKI-SLT/brat). ## Dataset Variants The dataset provides the following variants: - `default`: The original dataset. Documents are of type `BratDocument` (with `LabeledMultiSpan` annotations, see below). - `merge_fragmented_spans`: Documents are of type `BratDocumentWithMergedSpans` (this variant merges spans that are fragmented into simple `LabeledSpans`, see below). ## Data Schema The document type for this dataset is `BratDocument` or `BratDocumentWithMergedSpans`, depending on if the data was loaded with `merge_fragmented_spans=True` (default: `False`). They define the following data fields: - `text` (str) - `id` (str, optional) - `metadata` (dictionary, optional) and the following annotation layers: - `spans` (annotation type: `LabeledMultiSpan` in the case of `BratDocument` and `LabeledSpan` and in the case of `BratDocumentWithMergedSpans`, target: `text`) - `relations` (annotation type: `BinaryRelation`, target: `spans`) - `span_attributes` (annotation type: `Attribute`, target: `spans`) - `relation_attributes` (annotation type: `Attribute`, target: `relations`) The `LabeledMultiSpan` annotation type is defined as follows: - `slices` (type: `Tuple[Tuple[int, int], ...]`): the slices consisting if start (including) and end (excluding) indices of the spans - `label` (type: `str`) - `score` (type: `float`, optional, not included in comparison) The `Attribute` annotation type is defined as follows: - `annotation` (type: `Annotation`): the annotation to which the attribute is attached - `label` (type: `str`) - `value` (type: `str`, optional) - `score` (type: `float`, optional, not included in comparison) See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/annotations.py) for the remaining annotation type definitions. ## Document Converters The dataset provides no predefined document converters because the BRAT format is very flexible and can be used for many different tasks.
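To make the difference between the two variants more tangible, here is a toy, purely illustrative sketch in plain Python (not the wrapper's actual logic; the text, offsets, and label are invented) of what merging a fragmented span amounts to:

```python
text = "Protein A binds, under certain conditions, protein B."

# A fragmented BRAT annotation: two slices that together form one entity mention.
slices = ((0, 15), (43, 52))   # "Protein A binds" + "protein B"
label = "binding"              # hypothetical label for this toy example

# `default` variant: keep the slices as-is (LabeledMultiSpan-style annotation).
fragments = [text[start:end] for start, end in slices]

# `merge_fragmented_spans` variant: represent the mention as one simple span,
# here approximated by the range from the first start to the last end.
merged_start, merged_end = slices[0][0], slices[-1][1]
print(fragments)                      # ['Protein A binds', 'protein B']
print(text[merged_start:merged_end])  # 'Protein A binds, under certain conditions, protein B'
```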
pie/brat
[ "region:us" ]
2023-05-11T14:25:51+00:00
{}
2024-01-03T13:25:22+00:00
c826af381866b8c961ae33bb540d9a926ca618e5
ronellcross22/Welcome_to_LangChain
[ "license:mit", "region:us" ]
2023-05-11T14:26:54+00:00
{"license": "mit"}
2023-05-11T14:28:20+00:00
7cf859bf3eecc4b37deee2e0f7f1cea2976d23de
# Dataset Card for "drug_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
P3ps/drug_dataset
[ "region:us" ]
2023-05-11T14:33:11+00:00
{"dataset_info": {"features": [{"name": "condition", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 38930814, "num_examples": 69082}, {"name": "test", "num_bytes": 12987201, "num_examples": 23051}], "download_size": 29791121, "dataset_size": 51918015}}
2023-05-11T14:33:15+00:00
c13d071429abf65615d6b17cce6ed8eddbd902c9
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
ylzz1997/sovits-4.0-pretrained-model
[ "region:us" ]
2023-05-11T14:39:00+00:00
{}
2024-01-30T16:33:01+00:00
a9c14a58b5f723668ab69c1f5484b966e27ba69b
# pixel giffusion Dataset of pixel-style art generated from stable-diffusion model
sunilSabnis/pixelart
[ "license:mit", "region:us" ]
2023-05-11T14:55:47+00:00
{"license": "mit", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 831007653.0, "num_examples": 2000}], "download_size": 831037182, "dataset_size": 831007653.0}}
2023-05-12T18:06:23+00:00
a8ae39963cef8309a23c63334cd5c6c6604942f2
# Universal Text Classification Dataset (UTCD)

## Load dataset

```python
from datasets import load_dataset

dataset = load_dataset('claritylab/utcd', name='in-domain')
```

## Description

UTCD is a curated compilation of 18 datasets revised for Zero-shot Text Classification spanning 3 aspect categories of Sentiment, Intent/Dialogue, and Topic classification. UTCD focuses on the task of zero-shot text classification where the candidate labels are descriptive of the text being classified. UTCD consists of ~6M/800K train/test examples.

UTCD was introduced in the Findings of ACL'23 Paper **Label Agnostic Pre-training for Zero-shot Text Classification** by ***Christopher Clarke, Yuzhao Heng, Yiping Kang, Krisztian Flautner, Lingjia Tang and Jason Mars***. [Project Homepage](https://github.com/ChrisIsKing/zero-shot-text-classification/tree/master).

UTCD Datasets & Principles: In order to make NLP models more broadly useful, zero-shot techniques need to be capable of label, domain \& aspect transfer. As such, in the construction of UTCD we enforce the following principles:

- **Textual labels**: In UTCD, we mandate the use of textual labels. While numerical label values are often used in classification tasks, descriptive textual labels such as those present in the datasets across UTCD enable the development of techniques that can leverage the class name, which is instrumental in providing zero-shot support. As such, for each of the compiled datasets, labels are standardized such that the labels are descriptive of the text in natural language.
- **Diverse domains and Sequence lengths**: In addition to broad coverage of aspects, UTCD compiles diverse data across several domains such as Banking, Finance, Legal, etc., each comprising varied length sequences (long and short). The datasets are listed below.
- Sentiment
  - GoEmotions introduced in [GoEmotions: A Dataset of Fine-Grained Emotions](https://arxiv.org/pdf/2005.00547v2.pdf)
  - TweetEval introduced in [TWEETEVAL: Unified Benchmark and Comparative Evaluation for Tweet Classification](https://arxiv.org/pdf/2010.12421v2.pdf) (Sentiment subset)
  - Emotion introduced in [CARER: Contextualized Affect Representations for Emotion Recognition](https://aclanthology.org/D18-1404.pdf)
  - Amazon Polarity introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
  - Finance Phrasebank introduced in [Good debt or bad debt: Detecting semantic orientations in economic texts](https://arxiv.org/pdf/1307.5336.pdf)
  - Yelp introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- Intent/Dialogue
  - Schema-Guided Dialogue introduced in [Towards Scalable Multi-Domain Conversational Agents: The Schema-Guided Dialogue Dataset](https://arxiv.org/pdf/1909.05855v2.pdf)
  - Clinc-150 introduced in [An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction](https://arxiv.org/pdf/1909.02027v1.pdf)
  - SLURP SLU introduced in [SLURP: A Spoken Language Understanding Resource Package](https://arxiv.org/pdf/2011.13205.pdf)
  - Banking77 introduced in [Efficient Intent Detection with Dual Sentence Encoders](https://arxiv.org/pdf/2003.04807.pdf)
  - Snips introduced in [Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces](https://arxiv.org/pdf/1805.10190.pdf)
  - NLU Evaluation introduced in [Benchmarking Natural Language Understanding Services for building Conversational Agents](https://arxiv.org/pdf/1903.05566.pdf)
- Topic
  - AG News introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
  - DBpedia 14 introduced in [DBpedia: A Nucleus for a Web of Open Data](https://link.springer.com/chapter/10.1007/978-3-540-76298-0_52)
  - Yahoo Answer Topics introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
  - MultiEurlex introduced in [MultiEURLEX -- A multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer](https://aclanthology.org/2021.emnlp-main.559v2.pdf)
  - BigPatent introduced in [BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization](https://aclanthology.org/P19-1212.pdf)
  - Consumer Finance introduced in [Consumer Complaint Database](https://www.consumerfinance.gov/data-research/consumer-complaints/)

## Structure

### Data Samples

Each dataset sample contains the text, the label encoded as an integer, and the dataset name encoded as an integer.

```python
{
    'text': "My favourite food is anything I didn't have to cook myself.",
    'labels': [215],
    'dataset_name': 0
}
```

### Datasets Contained

The UTCD dataset contains 18 datasets, 9 `in-domain` and 9 `out-of-domain`, spanning 3 aspects: `sentiment`, `intent` and `topic`. Below are statistics on the datasets.
**In-Domain Datasets**

| Dataset    | Aspect    | #Samples in Train/Test | #labels | average #token in text in Train/Test |
| ---------- | --------- | ---------------------- | ------- | ------------------------------------ |
| GoEmotions | sentiment | 43K/5.4K               | 28      | 12/12                                |
| TweetEval  | sentiment | 45K/12K                | 3       | 19/14                                |
| Emotion    | sentiment | 16K/2K                 | 6       | 17/17                                |
| SGD        | intent    | 16K/4.2K               | 26      | 8/9                                  |
| Clinc-150  | intent    | 15K/4.5K               | 150     | 8/8                                  |
| SLURP      | intent    | 12K/2.6K               | 75      | 7/7                                  |
| AG News    | topic     | 120K/7.6K              | 4       | 38/37                                |
| DBpedia    | topic     | 560K/70K               | 14      | 45/45                                |
| Yahoo      | topic     | 1.4M/60K               | 10      | 10/10                                |

**Out-of-Domain Datasets**

| Dataset               | Aspect    | #Samples in Train/Test | #labels | average #token in text |
| --------------------- | --------- | ---------------------- | ------- | ---------------------- |
| Amazon Polarity       | sentiment | 3.6M/400K              | 2       | 71/71                  |
| Financial Phrase Bank | sentiment | 1.8K/453               | 3       | 19/19                  |
| Yelp                  | sentiment | 650K/50K               | 3       | 128/128                |
| Banking77             | intent    | 10K/3.1K               | 77      | 11/10                  |
| SNIPS                 | intent    | 14K/697                | 7       | 8/8                    |
| NLU Eval              | intent    | 21K/5.2K               | 68      | 7/7                    |
| MultiEURLEX           | topic     | 55K/5K                 | 21      | 1198/1853              |
| Big Patent            | topic     | 25K/5K                 | 9       | 2872/2892              |
| Consumer Finance      | topic     | 630K/160K              | 18      | 190/189                |

### Configurations

The `in-domain` and `out-of-domain` configurations have 2 splits: `train` and `test`. The aspect-normalized configurations (`aspect-normalized-in-domain`, `aspect-normalized-out-of-domain`) have 3 splits: `train`, `validation` and `test`. Below are statistics on the configuration splits.

**In-Domain Configuration**

| Split | #samples  |
| ----- | --------- |
| Train | 2,192,703 |
| Test  | 168,365   |

**Out-of-Domain Configuration**

| Split | #samples  |
| ----- | --------- |
| Train | 4,996,673 |
| Test  | 625,911   |

**Aspect-Normalized In-Domain Configuration**

| Split      | #samples |
| ---------- | -------- |
| Train      | 115,127  |
| Validation | 12,806   |
| Test       | 168,365  |

**Aspect-Normalized Out-of-Domain Configuration**

| Split      | #samples |
| ---------- | -------- |
| Train      | 119,167  |
| Validation | 13,263   |
| Test       | 625,911  |
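Since `labels` and `dataset_name` are stored as integer class IDs, the textual label names can be recovered through the standard `datasets` `ClassLabel` API. Below is a minimal sketch of loading one configuration and decoding a sample; the printed values are illustrative.

```python
from datasets import load_dataset

# Load the aspect-normalized in-domain configuration (train/validation/test splits).
dataset = load_dataset('claritylab/utcd', name='aspect-normalized-in-domain')

train = dataset['train']
sample = train[0]

# `labels` is a Sequence(ClassLabel) feature, so the inner feature exposes int2str;
# `dataset_name` is a plain ClassLabel.
label_feature = train.features['labels'].feature
dataset_name_feature = train.features['dataset_name']

print(sample['text'])
print([label_feature.int2str(i) for i in sample['labels']])   # e.g. ['neutral']
print(dataset_name_feature.int2str(sample['dataset_name']))   # e.g. 'go_emotion'
```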
claritylab/utcd
[ "task_categories:text-classification", "annotations_creators:no-annotation", "multilinguality:monolingual", "size_categories:1M<n<10M", "language:en", "license:mit", "arxiv:2005.00547", "arxiv:2010.12421", "arxiv:1509.01626", "arxiv:1307.5336", "arxiv:1909.05855", "arxiv:1909.02027", "arxiv:2011.13205", "arxiv:2003.04807", "arxiv:1805.10190", "arxiv:1903.05566", "region:us" ]
2023-05-11T15:17:23+00:00
{"annotations_creators": ["no-annotation"], "language": ["en"], "license": "mit", "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "task_categories": ["text-classification"], "pretty_name": "UTCD", "dataset_info": [{"config_name": "in-domain", "features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "Add Alarm", "1": "Album", "2": "Animal", "3": "Artist", "4": "Athlete", "5": "Book Appointment", "6": "Book House", "7": "Building", "8": "Business", "9": "Business & Finance", "10": "Buy Bus Ticket", "11": "Buy Event Tickets", "12": "Buy Movie Tickets", "13": "Check Balance", "14": "Company", "15": "Computers & Internet", "16": "Education & Reference", "17": "Educational Institution", "18": "Entertainment & Music", "19": "Family & Relationships", "20": "Film", "21": "Find Apartment", "22": "Find Attractions", "23": "Find Bus", "24": "Find Events", "25": "Find Home By Area", "26": "Find Movies", "27": "Find Provider", "28": "Find Restaurants", "29": "Find Trains", "30": "Get Alarms", "31": "Get Available Time", "32": "Get Cars Available", "33": "Get Event Dates", "34": "Get Events", "35": "Get Ride", "36": "Get Times For Movie", "37": "Get Weather", "38": "Health", "39": "Lookup Music", "40": "Lookup Song", "41": "Make Payment", "42": "Mean Of Transportation", "43": "Natural Place", "44": "Office Holder", "45": "Plant", "46": "Play Media", "47": "Play Movie", "48": "Play Song", "49": "Politics & Government", "50": "Request Payment", "51": "Reserve Car", "52": "Reserve Hotel", "53": "Reserve One way Flight", "54": "Reserve Restaurant", "55": "Reserve Round trip Flights", "56": "Schedule Visit", "57": "Science & Mathematics", "58": "Science & Technology", "59": "Search Hotel", "60": "Search House", "61": "Search One way Flight", "62": "Search Round trip Flights", "63": "Society & Culture", "64": "Sports", "65": "Transfer Money", "66": "Village", "67": "World News", "68": "Written Work", "69": "accept reservations", "70": "account blocked", "71": "add contact", "72": "admiration", "73": "alarm", "74": "alarm query", "75": "alarm remove", "76": "alarm set", "77": "amusement", "78": "anger", "79": "annoyance", "80": "application status", "81": "approval", "82": "apr", "83": "are you a bot", "84": "audio volume down", "85": "audio volume mute", "86": "audio volume other", "87": "audio volume up", "88": "balance", "89": "bill balance", "90": "bill due", "91": "book flight", "92": "book hotel", "93": "calculator", "94": "calendar", "95": "calendar query", "96": "calendar remove", "97": "calendar set", "98": "calendar update", "99": "calories", "100": "cancel", "101": "cancel reservation", "102": "car rental", "103": "card declined", "104": "caring", "105": "carry on", "106": "change accent", "107": "change ai name", "108": "change language", "109": "change speed", "110": "change user name", "111": "change volume", "112": "cleaning", "113": "coffee", "114": "confirm reservation", "115": "confusion", "116": "convert", "117": "cook time", "118": "cooking query", "119": "cooking recipe", "120": "create or add", "121": "credit limit", "122": "credit limit change", "123": "credit score", "124": "curiosity", "125": "currency", "126": "current location", "127": "damaged card", "128": "date", "129": "date time convert", "130": "date time query", "131": "definition", "132": "desire", "133": "direct deposit", "134": "directions", "135": "disappointment", "136": "disapproval", "137": "disgust", "138": "distance", "139": "do you have 
pets", "140": "email add contact", "141": "email query", "142": "email query contact", "143": "email send email", "144": "embarrassment", "145": "events", "146": "exchange rate", "147": "excitement", "148": "expiration date", "149": "factoid", "150": "fear", "151": "find phone", "152": "flight status", "153": "flip coin", "154": "food last", "155": "freeze account", "156": "fun fact", "157": "game", "158": "gas", "159": "gas type", "160": "general greet", "161": "general joke", "162": "general quirky", "163": "goodbye", "164": "gratitude", "165": "greet", "166": "greeting", "167": "grief", "168": "how busy", "169": "how old are you", "170": "hue light dim", "171": "hue light off", "172": "hue light up", "173": "improve credit score", "174": "income", "175": "ingredient substitution", "176": "ingredients list", "177": "insurance", "178": "insurance change", "179": "interest rate", "180": "international fees", "181": "international visa", "182": "iot cleaning", "183": "iot coffee", "184": "iot hue light change", "185": "iot hue light dim", "186": "iot hue light off", "187": "iot hue light on", "188": "iot hue light up", "189": "iot wemo on", "190": "iot wemo plug off", "191": "joke", "192": "joy", "193": "jump start", "194": "last maintenance", "195": "lists create or add", "196": "lists query", "197": "lists remove", "198": "lost luggage", "199": "love", "200": "make call", "201": "maybe", "202": "meal suggestion", "203": "meaning of life", "204": "measurement conversion", "205": "meeting schedule", "206": "min payment", "207": "mpg", "208": "music", "209": "music dislike ness", "210": "music likeness", "211": "music query", "212": "music settings", "213": "negative", "214": "nervousness", "215": "neutral", "216": "new card", "217": "news query", "218": "next holiday", "219": "next song", "220": "no", "221": "nutrition info", "222": "oil change how", "223": "oil change when", "224": "optimism", "225": "order", "226": "order checks", "227": "order status", "228": "paid time off request status", "229": "paid time off used", "230": "pay bill", "231": "payday", "232": "pin change", "233": "play audiobook", "234": "play game", "235": "play music", "236": "play podcasts", "237": "play radio", "238": "plug type", "239": "podcasts", "240": "positive", "241": "post", "242": "pride", "243": "pto balance", "244": "pto request", "245": "qa currency", "246": "qa definition", "247": "qa factoid", "248": "qa maths", "249": "qa stock", "250": "query", "251": "query contact", "252": "quirky", "253": "radio", "254": "realization", "255": "recipe", "256": "recommendation events", "257": "recommendation locations", "258": "recommendation movies", "259": "redeem rewards", "260": "relief", "261": "reminder", "262": "reminder update", "263": "remorse", "264": "remove", "265": "repeat", "266": "replacement card duration", "267": "report fraud", "268": "report lost card", "269": "reset settings", "270": "restaurant reservation", "271": "restaurant reviews", "272": "restaurant suggestion", "273": "rewards balance", "274": "roll dice", "275": "rollover 401k", "276": "routing", "277": "sadness", "278": "schedule maintenance", "279": "schedule meeting", "280": "send email", "281": "set", "282": "settings", "283": "share location", "284": "shopping list", "285": "shopping list update", "286": "smart home", "287": "social post", "288": "social query", "289": "spelling", "290": "spending history", "291": "surprise", "292": "sync device", "293": "take away order", "294": "take away query", "295": "taxes", "296": "tell 
joke", "297": "text", "298": "thank you", "299": "ticket", "300": "time", "301": "timer", "302": "timezone", "303": "tire change", "304": "tire pressure", "305": "todo list", "306": "todo list update", "307": "traffic", "308": "transactions", "309": "transfer", "310": "translate", "311": "transport query", "312": "transport taxi", "313": "transport ticket", "314": "transport traffic", "315": "travel alert", "316": "travel notification", "317": "travel suggestion", "318": "uber", "319": "update playlist", "320": "user name", "321": "vaccines", "322": "volume other", "323": "w2 wage and tax statement", "324": "weather", "325": "weather query", "326": "wemo off", "327": "wemo plug on", "328": "what are your hobbies", "329": "what can i ask you", "330": "what is your name", "331": "what song", "332": "where are you from", "333": "whisper mode", "334": "who do you work for", "335": "who made you", "336": "yes"}}}}, {"name": "dataset_name", "dtype": {"class_label": {"names": {"0": "go_emotion", "1": "sentiment_tweets_2020", "2": "emotion", "3": "sgd", "4": "clinc_150", "5": "slurp", "6": "ag_news", "7": "dbpedia", "8": "yahoo"}}}}], "splits": [{"name": "train", "num_bytes": 347382307, "num_examples": 2192703}, {"name": "test", "num_bytes": 36063588, "num_examples": 168365}], "download_size": 1744258165, "dataset_size": 383445895}, {"config_name": "aspect-normalized-in-domain", "features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "Add Alarm", "1": "Album", "2": "Animal", "3": "Artist", "4": "Athlete", "5": "Book Appointment", "6": "Book House", "7": "Building", "8": "Business", "9": "Business & Finance", "10": "Buy Bus Ticket", "11": "Buy Event Tickets", "12": "Buy Movie Tickets", "13": "Check Balance", "14": "Company", "15": "Computers & Internet", "16": "Education & Reference", "17": "Educational Institution", "18": "Entertainment & Music", "19": "Family & Relationships", "20": "Film", "21": "Find Apartment", "22": "Find Attractions", "23": "Find Bus", "24": "Find Events", "25": "Find Home By Area", "26": "Find Movies", "27": "Find Provider", "28": "Find Restaurants", "29": "Find Trains", "30": "Get Alarms", "31": "Get Available Time", "32": "Get Cars Available", "33": "Get Event Dates", "34": "Get Events", "35": "Get Ride", "36": "Get Times For Movie", "37": "Get Weather", "38": "Health", "39": "Lookup Music", "40": "Lookup Song", "41": "Make Payment", "42": "Mean Of Transportation", "43": "Natural Place", "44": "Office Holder", "45": "Plant", "46": "Play Media", "47": "Play Movie", "48": "Play Song", "49": "Politics & Government", "50": "Request Payment", "51": "Reserve Car", "52": "Reserve Hotel", "53": "Reserve One way Flight", "54": "Reserve Restaurant", "55": "Reserve Round trip Flights", "56": "Schedule Visit", "57": "Science & Mathematics", "58": "Science & Technology", "59": "Search Hotel", "60": "Search House", "61": "Search One way Flight", "62": "Search Round trip Flights", "63": "Society & Culture", "64": "Sports", "65": "Transfer Money", "66": "Village", "67": "World News", "68": "Written Work", "69": "accept reservations", "70": "account blocked", "71": "add contact", "72": "admiration", "73": "alarm", "74": "alarm query", "75": "alarm remove", "76": "alarm set", "77": "amusement", "78": "anger", "79": "annoyance", "80": "application status", "81": "approval", "82": "apr", "83": "are you a bot", "84": "audio volume down", "85": "audio volume mute", "86": "audio volume other", "87": "audio volume up", "88": "balance", 
"89": "bill balance", "90": "bill due", "91": "book flight", "92": "book hotel", "93": "calculator", "94": "calendar", "95": "calendar query", "96": "calendar remove", "97": "calendar set", "98": "calendar update", "99": "calories", "100": "cancel", "101": "cancel reservation", "102": "car rental", "103": "card declined", "104": "caring", "105": "carry on", "106": "change accent", "107": "change ai name", "108": "change language", "109": "change speed", "110": "change user name", "111": "change volume", "112": "cleaning", "113": "coffee", "114": "confirm reservation", "115": "confusion", "116": "convert", "117": "cook time", "118": "cooking query", "119": "cooking recipe", "120": "create or add", "121": "credit limit", "122": "credit limit change", "123": "credit score", "124": "curiosity", "125": "currency", "126": "current location", "127": "damaged card", "128": "date", "129": "date time convert", "130": "date time query", "131": "definition", "132": "desire", "133": "direct deposit", "134": "directions", "135": "disappointment", "136": "disapproval", "137": "disgust", "138": "distance", "139": "do you have pets", "140": "email add contact", "141": "email query", "142": "email query contact", "143": "email send email", "144": "embarrassment", "145": "events", "146": "exchange rate", "147": "excitement", "148": "expiration date", "149": "factoid", "150": "fear", "151": "find phone", "152": "flight status", "153": "flip coin", "154": "food last", "155": "freeze account", "156": "fun fact", "157": "game", "158": "gas", "159": "gas type", "160": "general greet", "161": "general joke", "162": "general quirky", "163": "goodbye", "164": "gratitude", "165": "greet", "166": "greeting", "167": "grief", "168": "how busy", "169": "how old are you", "170": "hue light dim", "171": "hue light off", "172": "hue light up", "173": "improve credit score", "174": "income", "175": "ingredient substitution", "176": "ingredients list", "177": "insurance", "178": "insurance change", "179": "interest rate", "180": "international fees", "181": "international visa", "182": "iot cleaning", "183": "iot coffee", "184": "iot hue light change", "185": "iot hue light dim", "186": "iot hue light off", "187": "iot hue light on", "188": "iot hue light up", "189": "iot wemo on", "190": "iot wemo plug off", "191": "joke", "192": "joy", "193": "jump start", "194": "last maintenance", "195": "lists create or add", "196": "lists query", "197": "lists remove", "198": "lost luggage", "199": "love", "200": "make call", "201": "maybe", "202": "meal suggestion", "203": "meaning of life", "204": "measurement conversion", "205": "meeting schedule", "206": "min payment", "207": "mpg", "208": "music", "209": "music dislike ness", "210": "music likeness", "211": "music query", "212": "music settings", "213": "negative", "214": "nervousness", "215": "neutral", "216": "new card", "217": "news query", "218": "next holiday", "219": "next song", "220": "no", "221": "nutrition info", "222": "oil change how", "223": "oil change when", "224": "optimism", "225": "order", "226": "order checks", "227": "order status", "228": "paid time off request status", "229": "paid time off used", "230": "pay bill", "231": "payday", "232": "pin change", "233": "play audiobook", "234": "play game", "235": "play music", "236": "play podcasts", "237": "play radio", "238": "plug type", "239": "podcasts", "240": "positive", "241": "post", "242": "pride", "243": "pto balance", "244": "pto request", "245": "qa currency", "246": "qa definition", "247": "qa factoid", 
"248": "qa maths", "249": "qa stock", "250": "query", "251": "query contact", "252": "quirky", "253": "radio", "254": "realization", "255": "recipe", "256": "recommendation events", "257": "recommendation locations", "258": "recommendation movies", "259": "redeem rewards", "260": "relief", "261": "reminder", "262": "reminder update", "263": "remorse", "264": "remove", "265": "repeat", "266": "replacement card duration", "267": "report fraud", "268": "report lost card", "269": "reset settings", "270": "restaurant reservation", "271": "restaurant reviews", "272": "restaurant suggestion", "273": "rewards balance", "274": "roll dice", "275": "rollover 401k", "276": "routing", "277": "sadness", "278": "schedule maintenance", "279": "schedule meeting", "280": "send email", "281": "set", "282": "settings", "283": "share location", "284": "shopping list", "285": "shopping list update", "286": "smart home", "287": "social post", "288": "social query", "289": "spelling", "290": "spending history", "291": "surprise", "292": "sync device", "293": "take away order", "294": "take away query", "295": "taxes", "296": "tell joke", "297": "text", "298": "thank you", "299": "ticket", "300": "time", "301": "timer", "302": "timezone", "303": "tire change", "304": "tire pressure", "305": "todo list", "306": "todo list update", "307": "traffic", "308": "transactions", "309": "transfer", "310": "translate", "311": "transport query", "312": "transport taxi", "313": "transport ticket", "314": "transport traffic", "315": "travel alert", "316": "travel notification", "317": "travel suggestion", "318": "uber", "319": "update playlist", "320": "user name", "321": "vaccines", "322": "volume other", "323": "w2 wage and tax statement", "324": "weather", "325": "weather query", "326": "wemo off", "327": "wemo plug on", "328": "what are your hobbies", "329": "what can i ask you", "330": "what is your name", "331": "what song", "332": "where are you from", "333": "whisper mode", "334": "who do you work for", "335": "who made you", "336": "yes"}}}}, {"name": "dataset_name", "dtype": {"class_label": {"names": {"0": "go_emotion", "1": "sentiment_tweets_2020", "2": "emotion", "3": "sgd", "4": "clinc_150", "5": "slurp", "6": "ag_news", "7": "dbpedia", "8": "yahoo"}}}}], "splits": [{"name": "train", "num_bytes": 28974188, "num_examples": 115127}, {"name": "validation", "num_bytes": 3213586, "num_examples": 12806}, {"name": "test", "num_bytes": 36063590, "num_examples": 168365}], "download_size": 1744258165, "dataset_size": 68251364}, {"config_name": "out-of-domain", "features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "Add To Playlist", "1": "Bank account or service", "2": "Book Restaurant", "3": "Checking or savings account", "4": "Chemistry; Metallurgy", "5": "Consumer Loan", "6": "Credit card", "7": "Credit card or prepaid card", "8": "Credit reporting", "9": "Credit reporting, credit repair services, or other personal consumer reports", "10": "Debt collection", "11": "EUROPEAN UNION", "12": "Electricity", "13": "Fixed Constructions", "14": "General tagging of new or cross-sectional technology", "15": "Get Weather", "16": "Human Necessities", "17": "Mechanical Engineering; Lightning; Heating; Weapons; Blasting", "18": "Money transfer, virtual currency, or money service", "19": "Money transfers", "20": "Mortgage", "21": "Other financial service", "22": "Payday loan", "23": "Payday loan, title loan, or personal loan", "24": "Performing Operations; Transporting", "25": 
"Physics", "26": "Play Music", "27": "Prepaid card", "28": "Rate Book", "29": "Refund not showing up", "30": "Search Creative Work", "31": "Search Screening Event", "32": "Student loan", "33": "Textiles; Paper", "34": "Vehicle loan or lease", "35": "Virtual currency", "36": "activate my card", "37": "age limit", "38": "agri-foodstuffs", "39": "agriculture, forestry and fisheries", "40": "alarm query", "41": "alarm remove", "42": "alarm set", "43": "apple pay or google pay", "44": "atm support", "45": "audio volume down", "46": "audio volume mute", "47": "audio volume other", "48": "audio volume up", "49": "automatic top up", "50": "balance not updated after bank transfer", "51": "balance not updated after cheque or cash deposit", "52": "beneficiary not allowed", "53": "business and competition", "54": "calendar query", "55": "calendar remove", "56": "calendar set", "57": "cancel transfer", "58": "card about to expire", "59": "card acceptance", "60": "card arrival", "61": "card delivery estimate", "62": "card linking", "63": "card not working", "64": "card payment fee charged", "65": "card payment not recognised", "66": "card payment wrong exchange rate", "67": "card swallowed", "68": "cash withdrawal charge", "69": "cash withdrawal not recognised", "70": "change pin", "71": "compromised card", "72": "contactless not working", "73": "cooking query", "74": "cooking recipe", "75": "country support", "76": "datetime convert", "77": "datetime query", "78": "declined card payment", "79": "declined cash withdrawal", "80": "declined transfer", "81": "direct debit payment not recognised", "82": "disposable card limits", "83": "economics", "84": "edit personal details", "85": "education and communications", "86": "email addcontact", "87": "email query", "88": "email querycontact", "89": "email sendemail", "90": "employment and working conditions", "91": "energy", "92": "environment", "93": "exchange charge", "94": "exchange rate", "95": "exchange via app", "96": "extra charge on statement", "97": "failed transfer", "98": "fiat currency support", "99": "finance", "100": "general affirm", "101": "general commandstop", "102": "general confirm", "103": "general dontcare", "104": "general explain", "105": "general greet", "106": "general joke", "107": "general negate", "108": "general praise", "109": "general quirky", "110": "general repeat", "111": "geography", "112": "get disposable virtual card", "113": "get physical card", "114": "getting spare card", "115": "getting virtual card", "116": "industry", "117": "international organisations", "118": "international relations", "119": "iot cleaning", "120": "iot coffee", "121": "iot hue lightchange", "122": "iot hue lightdim", "123": "iot hue lightoff", "124": "iot hue lighton", "125": "iot hue lightup", "126": "iot wemo off", "127": "iot wemo on", "128": "law", "129": "lists createoradd", "130": "lists query", "131": "lists remove", "132": "lost or stolen card", "133": "lost or stolen phone", "134": "music dislikeness", "135": "music likeness", "136": "music query", "137": "music settings", "138": "negative", "139": "neutral", "140": "news query", "141": "order physical card", "142": "passcode forgotten", "143": "pending card payment", "144": "pending cash withdrawal", "145": "pending top up", "146": "pending transfer", "147": "pin blocked", "148": "play audiobook", "149": "play game", "150": "play music", "151": "play podcasts", "152": "play radio", "153": "politics", "154": "positive", "155": "production, technology and research", "156": "qa currency", 
"157": "qa definition", "158": "qa factoid", "159": "qa maths", "160": "qa stock", "161": "receiving money", "162": "recommendation events", "163": "recommendation locations", "164": "recommendation movies", "165": "request refund", "166": "reverted card payment?", "167": "science", "168": "social post", "169": "social query", "170": "social questions", "171": "supported cards and currencies", "172": "takeaway order", "173": "takeaway query", "174": "terminate account", "175": "top up by bank transfer charge", "176": "top up by card charge", "177": "top up by cash or cheque", "178": "top up failed", "179": "top up limits", "180": "top up reverted", "181": "topping up by card", "182": "trade", "183": "transaction charged twice", "184": "transfer fee charged", "185": "transfer into account", "186": "transfer not received by recipient", "187": "transfer timing", "188": "transport", "189": "transport query", "190": "transport taxi", "191": "transport ticket", "192": "transport traffic", "193": "unable to verify identity", "194": "verify my identity", "195": "verify source of funds", "196": "verify top up", "197": "virtual card not working", "198": "visa or mastercard", "199": "weather query", "200": "why verify identity", "201": "wrong amount of cash received", "202": "wrong exchange rate for cash withdrawal"}}}}, {"name": "dataset_name", "dtype": {"class_label": {"names": {"0": "amazon_polarity", "1": "finance_sentiment", "2": "yelp", "3": "banking77", "4": "snips", "5": "nlu_evaluation", "6": "multi_eurlex", "7": "patent", "8": "consumer_finance"}}}}], "splits": [{"name": "train", "num_bytes": 3608196895, "num_examples": 4996673}, {"name": "test", "num_bytes": 541174753, "num_examples": 625911}], "download_size": 1744258165, "dataset_size": 4149371648}, {"config_name": "aspect-normalized-out-of-domain", "features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "Add To Playlist", "1": "Bank account or service", "2": "Book Restaurant", "3": "Checking or savings account", "4": "Chemistry; Metallurgy", "5": "Consumer Loan", "6": "Credit card", "7": "Credit card or prepaid card", "8": "Credit reporting", "9": "Credit reporting, credit repair services, or other personal consumer reports", "10": "Debt collection", "11": "EUROPEAN UNION", "12": "Electricity", "13": "Fixed Constructions", "14": "General tagging of new or cross-sectional technology", "15": "Get Weather", "16": "Human Necessities", "17": "Mechanical Engineering; Lightning; Heating; Weapons; Blasting", "18": "Money transfer, virtual currency, or money service", "19": "Money transfers", "20": "Mortgage", "21": "Other financial service", "22": "Payday loan", "23": "Payday loan, title loan, or personal loan", "24": "Performing Operations; Transporting", "25": "Physics", "26": "Play Music", "27": "Prepaid card", "28": "Rate Book", "29": "Refund not showing up", "30": "Search Creative Work", "31": "Search Screening Event", "32": "Student loan", "33": "Textiles; Paper", "34": "Vehicle loan or lease", "35": "Virtual currency", "36": "activate my card", "37": "age limit", "38": "agri-foodstuffs", "39": "agriculture, forestry and fisheries", "40": "alarm query", "41": "alarm remove", "42": "alarm set", "43": "apple pay or google pay", "44": "atm support", "45": "audio volume down", "46": "audio volume mute", "47": "audio volume other", "48": "audio volume up", "49": "automatic top up", "50": "balance not updated after bank transfer", "51": "balance not updated after cheque or cash deposit", 
"52": "beneficiary not allowed", "53": "business and competition", "54": "calendar query", "55": "calendar remove", "56": "calendar set", "57": "cancel transfer", "58": "card about to expire", "59": "card acceptance", "60": "card arrival", "61": "card delivery estimate", "62": "card linking", "63": "card not working", "64": "card payment fee charged", "65": "card payment not recognised", "66": "card payment wrong exchange rate", "67": "card swallowed", "68": "cash withdrawal charge", "69": "cash withdrawal not recognised", "70": "change pin", "71": "compromised card", "72": "contactless not working", "73": "cooking query", "74": "cooking recipe", "75": "country support", "76": "datetime convert", "77": "datetime query", "78": "declined card payment", "79": "declined cash withdrawal", "80": "declined transfer", "81": "direct debit payment not recognised", "82": "disposable card limits", "83": "economics", "84": "edit personal details", "85": "education and communications", "86": "email addcontact", "87": "email query", "88": "email querycontact", "89": "email sendemail", "90": "employment and working conditions", "91": "energy", "92": "environment", "93": "exchange charge", "94": "exchange rate", "95": "exchange via app", "96": "extra charge on statement", "97": "failed transfer", "98": "fiat currency support", "99": "finance", "100": "general affirm", "101": "general commandstop", "102": "general confirm", "103": "general dontcare", "104": "general explain", "105": "general greet", "106": "general joke", "107": "general negate", "108": "general praise", "109": "general quirky", "110": "general repeat", "111": "geography", "112": "get disposable virtual card", "113": "get physical card", "114": "getting spare card", "115": "getting virtual card", "116": "industry", "117": "international organisations", "118": "international relations", "119": "iot cleaning", "120": "iot coffee", "121": "iot hue lightchange", "122": "iot hue lightdim", "123": "iot hue lightoff", "124": "iot hue lighton", "125": "iot hue lightup", "126": "iot wemo off", "127": "iot wemo on", "128": "law", "129": "lists createoradd", "130": "lists query", "131": "lists remove", "132": "lost or stolen card", "133": "lost or stolen phone", "134": "music dislikeness", "135": "music likeness", "136": "music query", "137": "music settings", "138": "negative", "139": "neutral", "140": "news query", "141": "order physical card", "142": "passcode forgotten", "143": "pending card payment", "144": "pending cash withdrawal", "145": "pending top up", "146": "pending transfer", "147": "pin blocked", "148": "play audiobook", "149": "play game", "150": "play music", "151": "play podcasts", "152": "play radio", "153": "politics", "154": "positive", "155": "production, technology and research", "156": "qa currency", "157": "qa definition", "158": "qa factoid", "159": "qa maths", "160": "qa stock", "161": "receiving money", "162": "recommendation events", "163": "recommendation locations", "164": "recommendation movies", "165": "request refund", "166": "reverted card payment?", "167": "science", "168": "social post", "169": "social query", "170": "social questions", "171": "supported cards and currencies", "172": "takeaway order", "173": "takeaway query", "174": "terminate account", "175": "top up by bank transfer charge", "176": "top up by card charge", "177": "top up by cash or cheque", "178": "top up failed", "179": "top up limits", "180": "top up reverted", "181": "topping up by card", "182": "trade", "183": "transaction charged twice", 
"184": "transfer fee charged", "185": "transfer into account", "186": "transfer not received by recipient", "187": "transfer timing", "188": "transport", "189": "transport query", "190": "transport taxi", "191": "transport ticket", "192": "transport traffic", "193": "unable to verify identity", "194": "verify my identity", "195": "verify source of funds", "196": "verify top up", "197": "virtual card not working", "198": "visa or mastercard", "199": "weather query", "200": "why verify identity", "201": "wrong amount of cash received", "202": "wrong exchange rate for cash withdrawal"}}}}, {"name": "dataset_name", "dtype": {"class_label": {"names": {"0": "amazon_polarity", "1": "finance_sentiment", "2": "yelp", "3": "banking77", "4": "snips", "5": "nlu_evaluation", "6": "multi_eurlex", "7": "patent", "8": "consumer_finance"}}}}], "splits": [{"name": "train", "num_bytes": 109566474, "num_examples": 119167}, {"name": "validation", "num_bytes": 12432497, "num_examples": 13263}, {"name": "test", "num_bytes": 541174753, "num_examples": 625911}], "download_size": 1744258165, "dataset_size": 663173724}]}
2023-05-24T16:27:42+00:00
10725454939f2fd5daa175b211ca73aef8d27102
# Dataset Card for "dataset_easy_ocr_v0.3.0_clean" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
fimu-docproc-research/dataset_easy_ocr_v0.3.0_clean
[ "region:us" ]
2023-05-11T15:21:45+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "bboxes", "sequence": {"sequence": "float32"}}, {"name": "image_path", "dtype": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "DIC", "1": "IBAN", "2": "ICO", "3": "O", "4": "account_number", "5": "bank_code", "6": "const_symbol", "7": "contr_address", "8": "contr_name", "9": "due_date", "10": "invoice_date", "11": "invoice_number", "12": "qr_code", "13": "spec_symbol", "14": "total_amount", "15": "var_symbol"}}}}], "splits": [{"name": "train", "num_bytes": 28030910, "num_examples": 3212}, {"name": "val", "num_bytes": 3166612, "num_examples": 356}], "download_size": 9291114, "dataset_size": 31197522}}
2023-06-17T10:03:15+00:00
977f153f7d5fd87ab43234321dfe37e0ea6027b3
RuhamaKhan/youtube_parsed_dataset
[ "license:openrail", "region:us" ]
2023-05-11T15:29:03+00:00
{"license": "openrail"}
2023-05-11T18:45:33+00:00
88a817016764280b3fe65070d6e2c41299bbbc2c
yogesh0502/cuad_v1
[ "license:cc-by-4.0", "region:us" ]
2023-05-11T15:38:20+00:00
{"license": "cc-by-4.0"}
2023-05-12T16:10:20+00:00
ef4f3d5804ffb9209037060e65a10a744dc6f18e
# Ukrainian Hypernymy Pairs Dataset

## Background

Hypernymy is the super-subordinate or ISA semantic relation that links more general terms to more specific ones. For example, *rose* is a hyponym of *flower*, and *flower* is a hypernym of *rose*. Words that are hyponyms of the same hypernym are called co-hyponyms, for instance, *rose* and *tulip*. The hyponymy relation is transitive and asymmetric.

Hypernymy is also differentiated by:

* Types — common nouns: *armchair* is a type (hyponym) of *chair*;
* Instances — specific persons, countries, and geographic entities: *Dnipro river* is an instance (instance hyponym) of *river*.

## Project Description

The Ukrainian Hypernymy Pairs Dataset is a collection of noun pairs that express hypernymy relations between words in the Ukrainian language. The dataset contains pairs of words linked by four different types of relations: hypernym-hyponym, co-hyponyms, hypernym-instance, and co-instances.

An example of such a dataset in English is [BLESS](https://sites.google.com/site/geometricalmodels/shared-evaluation). However, its concepts are linked by one of the following six relations: co-hyponyms, hypernyms, meronyms, attributes, events, and random. Moreover, its hypernymy relation is not divided into terms and instances.

Ukrainian Hypernymy Pairs were constructed utilizing the linkage between [Princeton WordNet](https://wordnet.princeton.edu/), [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page), and the Ukrainian [Wikipedia](https://www.wikipedia.org/). We used the Python [Wn package](https://wn.readthedocs.io/en/latest/), which provides an interface to WordNet data, to extract the relations. All terms in the dataset are Wikipedia article titles, and no preprocessing was applied; therefore, some terms contain additional information in brackets.

## Dataset Statistics

This table presents the number of word pairs obtained for each relation type.

| **Relation Type**     | **# of Pairs** |
|-----------------------|----------------|
| **Hypernym-Hyponym**  | 6,906          |
| **Co-Hyponyms**       | 42,860         |
| **Hypernym-Instance** | 2,971          |
| **Co-Instances**      | 22,927         |
| **Total # of Pairs**  | 275,664        |

## Intended Use

The dataset can be particularly valuable for the hypernym detection task, where a pair of words is presented to a model and it should classify whether the two words are in a hypernymy relation. Other lexico-semantic relations can be added to improve the diversity of the dataset. A minimal usage sketch is given after the license note below.

## License

Copyright: [Nataliia Romanyshyn](https://twitter.com/supersubnat), [Dmytro Chaplynskyi](https://twitter.com/dchaplinsky), [lang-uk project](https://lang.org.ua), 2023
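As an illustration of the intended hypernym-detection use, here is a rough sketch of loading the pairs and framing them as binary classification examples. The column names (`hyponym`, `hypernym`, `relation`) and the split name are hypothetical — the card does not document the schema, so inspect `column_names` after loading and adjust accordingly.

```python
from datasets import load_dataset

# Split and column names below are assumptions, not documented in the card.
pairs = load_dataset('lang-uk/hypernymy_pairs', split='train')
print(pairs.column_names)  # inspect the real schema before relying on any field

def to_example(row):
    # Hypothetical framing: hypernym-hyponym pairs as positives, everything else as negatives.
    return {
        'text': f"{row['hyponym']} [SEP] {row['hypernym']}",   # assumed column names
        'label': 1 if row['relation'] == 'hypernym-hyponym' else 0,  # assumed relation value
    }

examples = pairs.map(to_example)
```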
lang-uk/hypernymy_pairs
[ "task_categories:question-answering", "task_categories:summarization", "size_categories:100K<n<1M", "language:uk", "region:us" ]
2023-05-11T16:03:25+00:00
{"language": ["uk"], "size_categories": ["100K<n<1M"], "task_categories": ["question-answering", "summarization"]}
2023-05-11T16:15:45+00:00
1234134c32065ada53468b25164aa2b16f4d4e17
yacahu/misako
[ "license:other", "region:us" ]
2023-05-11T16:15:51+00:00
{"license": "other"}
2023-05-20T03:32:21+00:00
cccc729272c9fc2b507376776ec68d6fdfec1830
# DAIS-2023 Dataset

This dataset contains scraped text data from the Databricks Data and AI Summit 2023 (DAIS 2023) [homepage](https://www.databricks.com/dataaisummit/), as well as text from any public page that is linked from that page or is a two-hop linked page.

We have used this dataset to fine-tune our [DAIS DLite model](https://huggingface.co/aisquared/dlite-dais-2023), along with our dataset of [AI-generated question-answer pairs](https://huggingface.co/datasets/aisquared/dais-question-answers) generated from this dataset. Feel free to check them out!
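For readers curious how such a two-hop collection can be reproduced, below is a rough sketch using `requests` and `BeautifulSoup`. It is not the authors' collection code — just an illustration of gathering text from a seed page and from pages up to two links away.

```python
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

SEED = "https://www.databricks.com/dataaisummit/"  # seed page from the card
MAX_HOPS = 2

def fetch(url):
    """Return (page text, outgoing links) for a URL, or (None, []) on failure."""
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
    except requests.RequestException:
        return None, []
    soup = BeautifulSoup(resp.text, "html.parser")
    text = soup.get_text(separator=" ", strip=True)
    links = [urljoin(url, a["href"]) for a in soup.find_all("a", href=True)]
    return text, links

def crawl(seed, max_hops=MAX_HOPS):
    """Breadth-first crawl up to max_hops links away from the seed page."""
    seen, corpus = set(), {}
    queue = deque([(seed, 0)])
    while queue:
        url, hops = queue.popleft()
        if url in seen or hops > max_hops:
            continue
        seen.add(url)
        text, links = fetch(url)
        if text:
            corpus[url] = text
        if hops < max_hops:
            queue.extend((link, hops + 1) for link in links)
    return corpus

# corpus = crawl(SEED)  # uncomment to run; respect robots.txt and rate limits
```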
aisquared/dais-2023
[ "task_categories:text-generation", "language:en", "license:apache-2.0", "region:us" ]
2023-05-11T16:27:21+00:00
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-generation"], "pretty_name": "Databricks Data and AI Summit 2023 Website Content"}
2023-06-25T22:55:16+00:00
05bae04ee9a090d2935f0f17a978b94858f39083
# DiffusionDB-Pixelart ## Table of Contents - [DiffusionDB](#diffusiondb) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Subset](#subset) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Dataset Metadata](#dataset-metadata) - [Metadata Schema](#metadata-schema) - [Data Splits](#data-splits) - [Loading Data Subsets](#loading-data-subsets) - [Method 1: Using Hugging Face Datasets Loader](#method-1-using-hugging-face-datasets-loader) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [DiffusionDB homepage](https://poloclub.github.io/diffusiondb) - **Repository:** [DiffusionDB repository](https://github.com/poloclub/diffusiondb) - **Distribution:** [DiffusionDB Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb) - **Paper:** [DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models](https://arxiv.org/abs/2210.14896) ### Dataset Summary **This is a subset of the DiffusionDB 2M dataset which has been turned into pixel-style art.** DiffusionDB is the first large-scale text-to-image prompt dataset. It contains **14 million** images generated by Stable Diffusion using prompts and hyperparameters specified by real users. DiffusionDB is publicly available at [🤗 Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb). ### Supported Tasks and Leaderboards The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models. ### Languages The text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian. ### Subset DiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs. The pixelated version of the data was taken from the DiffusionDB 2M and has 2000 examples only. |Subset|Num of Images|Num of Unique Prompts|Size|Image Directory|Metadata Table| |:--|--:|--:|--:|--:|--:| |DiffusionDB-pixelart|2k|~1.5k|~1.6GB|`images/`|`metadata.parquet`| Images in DiffusionDB-pixelart are stored in `png` format. ## Dataset Structure We use a modularized file structure to distribute DiffusionDB. 
The 2k images in DiffusionDB-pixelart are split into folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters. ```bash # DiffusionDB 2k ./ ├── images │ ├── part-000001 │ │ ├── 3bfcd9cf-26ea-4303-bbe1-b095853f5360.png │ │ ├── 5f47c66c-51d4-4f2c-a872-a68518f44adb.png │ │ ├── 66b428b9-55dc-4907-b116-55aaa887de30.png │ │ ├── [...] │ │ └── part-000001.json │ ├── part-000002 │ ├── part-000003 │ ├── [...] │ └── part-002000 └── metadata.parquet ``` These sub-folders have names `part-0xxxxx`, and each image has a unique name generated by [UUID Version 4](https://en.wikipedia.org/wiki/Universally_unique_identifier). The JSON file in a sub-folder has the same name as the sub-folder. Each image is a `PNG` file (DiffusionDB-pixelart). The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters. ### Data Instances For example, below is the image of `ec9b5e2c-028e-48ac-8857-a52814fd2a06.png` and its key-value pair in `part-000001.json`. <img width="300" src="https://datasets-server.huggingface.co/assets/jainr3/diffusiondb-pixelart/--/2k_all/train/0/image/image.png"> ```json { "ec9b5e2c-028e-48ac-8857-a52814fd2a06.png": { "p": "doom eternal, game concept art, veins and worms, muscular, crustacean exoskeleton, chiroptera head, chiroptera ears, mecha, ferocious, fierce, hyperrealism, fine details, artstation, cgsociety, zbrush, no background ", "se": 3312523387, "c": 7.0, "st": 50, "sa": "k_euler" }, } ``` ### Data Fields - key: Unique image name - `p`: Text ### Dataset Metadata To help you easily access prompts and other attributes of images without downloading all the Zip files, we include a metadata table `metadata.parquet` for DiffusionDB-pixelart. Two tables share the same schema, and each row represents an image. We store these tables in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table. Below are three random rows from `metadata.parquet`. 
| image_name | prompt | part_id | seed | step | cfg | sampler | width | height | user_name | timestamp | image_nsfw | prompt_nsfw | |:-----------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------:|-----------:|-------:|------:|----------:|--------:|---------:|:-----------------------------------------------------------------|:--------------------------|-------------:|--------------:| | 0c46f719-1679-4c64-9ba9-f181e0eae811.png | a small liquid sculpture, corvette, viscous, reflective, digital art | 1050 | 2026845913 | 50 | 7 | 8 | 512 | 512 | c2f288a2ba9df65c38386ffaaf7749106fed29311835b63d578405db9dbcafdb | 2022-08-11 09:05:00+00:00 | 0.0845108 | 0.00383462 | | a00bdeaa-14eb-4f6c-a303-97732177eae9.png | human sculpture of lanky tall alien on a romantic date at italian restaurant with smiling woman, nice restaurant, photography, bokeh | 905 | 1183522603 | 50 | 10 | 8 | 512 | 768 | df778e253e6d32168eb22279a9776b3cde107cc82da05517dd6d114724918651 | 2022-08-19 17:55:00+00:00 | 0.692934 | 0.109437 | | 6e5024ce-65ed-47f3-b296-edb2813e3c5b.png | portrait of barbaric spanish conquistador, symmetrical, by yoichi hatakenaka, studio ghibli and dan mumford | 286 | 1713292358 | 50 | 7 | 8 | 512 | 640 | 1c2e93cfb1430adbd956be9c690705fe295cbee7d9ac12de1953ce5e76d89906 | 2022-08-12 03:26:00+00:00 | 0.0773138 | 0.0249675 | #### Metadata Schema `metadata.parquet` schema: |Column|Type|Description| |:---|:---|:---| |`image_name`|`string`|Image UUID filename.| |`text`|`string`|The text prompt used to generate this image.| > **Warning** > Although the Stable Diffusion model has an NSFW filter that automatically blurs user-generated NSFW images, this NSFW filter is not perfect—DiffusionDB still contains some NSFW images. Therefore, we compute and provide the NSFW scores for images and prompts using the state-of-the-art models. The distribution of these scores is shown below. Please decide an appropriate NSFW score threshold to filter out NSFW images before using DiffusionDB in your projects. <img src="https://i.imgur.com/1RiGAXL.png" width="100%"> ### Data Splits For DiffusionDB-pixelart, we split 2k images into folders where each folder contains 1,000 images and a JSON file. ### Loading Data Subsets DiffusionDB is large! However, with our modularized file structure, you can easily load a desirable number of images and their prompts and hyperparameters. In the [`example-loading.ipynb`](https://github.com/poloclub/diffusiondb/blob/main/notebooks/example-loading.ipynb) notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary. #### Method 1: Using Hugging Face Datasets Loader You can use the Hugging Face [`Datasets`](https://huggingface.co/docs/datasets/quickstart) library to easily load prompts and images from DiffusionDB. We pre-defined 16 DiffusionDB subsets (configurations) based on the number of instances. You can see all subsets in the [Dataset Preview](https://huggingface.co/datasets/poloclub/diffusiondb/viewer/all/train). 
```python import numpy as np from datasets import load_dataset # Load the dataset with the `2k_random_1k` subset dataset = load_dataset('jainr3/diffusiondb-pixelart', '2k_random_1k') ``` ## Dataset Creation ### Curation Rationale Recent diffusion models have gained immense popularity by enabling high-quality and controllable image generation based on text prompts written in natural language. Since the release of these models, people from different domains have quickly applied them to create award-winning artworks, synthetic radiology images, and even hyper-realistic videos. However, generating images with desired details is difficult, as it requires users to write proper prompts specifying the exact expected results. Developing such prompts requires trial and error, and can often feel random and unprincipled. Simon Willison analogizes writing prompts to wizards learning “magical spells”: users do not understand why some prompts work, but they will add these prompts to their “spell book.” For example, to generate highly-detailed images, it has become a common practice to add special keywords such as “trending on artstation” and “unreal engine” in the prompt. Prompt engineering has become a field of study in the context of text-to-text generation, where researchers systematically investigate how to construct prompts to effectively solve different down-stream tasks. As large text-to-image models are relatively new, there is a pressing need to understand how these models react to prompts, how to write effective prompts, and how to design tools to help users generate images. To help researchers tackle these critical challenges, we create DiffusionDB, the first large-scale prompt dataset with 14 million real prompt-image pairs. ### Source Data #### Initial Data Collection and Normalization We construct DiffusionDB by scraping user-generated images on the official Stable Diffusion Discord server. We choose Stable Diffusion because it is currently the only open-source large text-to-image generative model, and all generated images have a CC0 1.0 Universal Public Domain Dedication license that waives all copyright and allows uses for any purpose. We choose the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion) because it is public, and it has strict rules against generating and sharing illegal, hateful, or NSFW (not suitable for work, such as sexual and violent content) images. The server also disallows users to write or share prompts with personal information. #### Who are the source language producers? The language producers are users of the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion). ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information The authors removed the discord usernames from the dataset. We decide to anonymize the dataset because some prompts might include sensitive information: explicitly linking them to their creators can cause harm to creators. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help develop better understanding of large text-to-image generative models. The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models. 
It should note that we collect images and their prompts from the Stable Diffusion Discord server. The Discord server has rules against users generating or sharing harmful or NSFW (not suitable for work, such as sexual and violent content) images. The Stable Diffusion model used in the server also has an NSFW filter that blurs the generated images if it detects NSFW content. However, it is still possible that some users had generated harmful images that were not detected by the NSFW filter or removed by the server moderators. Therefore, DiffusionDB can potentially contain these images. To mitigate the potential harm, we provide a [Google Form](https://forms.gle/GbYaSpRNYqxCafMZ9) on the [DiffusionDB website](https://poloclub.github.io/diffusiondb/) where users can report harmful or inappropriate images and prompts. We will closely monitor this form and remove reported images and prompts from DiffusionDB. ### Discussion of Biases The 14 million images in DiffusionDB have diverse styles and categories. However, Discord can be a biased data source. Our images come from channels where early users could use a bot to use Stable Diffusion before release. As these users had started using Stable Diffusion before the model was public, we hypothesize that they are AI art enthusiasts and are likely to have experience with other text-to-image generative models. Therefore, the prompting style in DiffusionDB might not represent novice users. Similarly, the prompts in DiffusionDB might not generalize to domains that require specific knowledge, such as medical images. ### Other Known Limitations **Generalizability.** Previous research has shown a prompt that works well on one generative model might not give the optimal result when used in other models. Therefore, different models can need users to write different prompts. For example, many Stable Diffusion prompts use commas to separate keywords, while this pattern is less seen in prompts for DALL-E 2 or Midjourney. Thus, we caution researchers that some research findings from DiffusionDB might not be generalizable to other text-to-image generative models. ## Additional Information ### Dataset Curators DiffusionDB is created by [Jay Wang](https://zijie.wang), [Evan Montoya](https://www.linkedin.com/in/evan-montoya-b252391b4/), [David Munechika](https://www.linkedin.com/in/dmunechika/), [Alex Yang](https://alexanderyang.me), [Ben Hoover](https://www.bhoov.com), [Polo Chau](https://faculty.cc.gatech.edu/~dchau/). ### Licensing Information The DiffusionDB dataset is available under the [CC0 1.0 License](https://creativecommons.org/publicdomain/zero/1.0/). The Python code in this repository is available under the [MIT License](https://github.com/poloclub/diffusiondb/blob/main/LICENSE). ### Citation Information ```bibtex @article{wangDiffusionDBLargescalePrompt2022, title = {{{DiffusionDB}}: {{A}} Large-Scale Prompt Gallery Dataset for Text-to-Image Generative Models}, author = {Wang, Zijie J. and Montoya, Evan and Munechika, David and Yang, Haoyang and Hoover, Benjamin and Chau, Duen Horng}, year = {2022}, journal = {arXiv:2210.14896 [cs]}, url = {https://arxiv.org/abs/2210.14896} } ``` ### Contributions If you have any questions, feel free to [open an issue](https://github.com/poloclub/diffusiondb/issues/new) or contact the original author [Jay Wang](https://zijie.wang).
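Following the NSFW warning above, here is a minimal sketch of filtering rows by the NSFW score columns. It assumes the `image_nsfw` and `prompt_nsfw` fields shown in the sample metadata rows are exposed by the loaded subset, and the 0.2 threshold is arbitrary — pick one appropriate for your project.

```python
from datasets import load_dataset

# Config name taken from the loading example earlier in this card.
dataset = load_dataset('jainr3/diffusiondb-pixelart', '2k_random_1k', split='train')

# Assumed columns: 'image_nsfw' and 'prompt_nsfw', as shown in the sample metadata rows.
THRESHOLD = 0.2
safe = dataset.filter(
    lambda row: row['image_nsfw'] < THRESHOLD and row['prompt_nsfw'] < THRESHOLD
)
print(f"kept {len(safe)} of {len(dataset)} rows")
```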
jainr3/diffusiondb-pixelart
[ "task_categories:text-to-image", "task_categories:image-to-text", "task_ids:image-captioning", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:n>1T", "source_datasets:modified", "language:en", "license:cc0-1.0", "stable diffusion", "prompt engineering", "prompts", "arxiv:2210.14896", "region:us" ]
2023-05-11T16:28:21+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["n>1T"], "source_datasets": ["modified"], "task_categories": ["text-to-image", "image-to-text"], "task_ids": ["image-captioning"], "pretty_name": "DiffusionDB-Pixelart", "layout": "default", "title": "Home", "nav_order": 1, "has_children": false, "tags": ["stable diffusion", "prompt engineering", "prompts"]}
2023-05-11T17:59:45+00:00
1949c2edf1c8bacc133ee5015bc65bb7da4f0672
Dataset created from Bittensor's subnet1. It will be updated regularly as I add more Q/A pairs. The dataset is currently in a "raw" format; I would love to have something prettier for loading into `datasets`.
mrseeker87/bittensor_qa
[ "task_categories:question-answering", "size_categories:1K<n<10K", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2023-05-11T16:56:45+00:00
{"language": ["en"], "license": "cc-by-sa-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"]}
2023-05-12T18:19:04+00:00
46cb82ac4cee25379fe69d65f315114902d7e283
harouzie/vietnews
[ "task_categories:summarization", "size_categories:100K<n<1M", "language:vi", "license:apache-2.0", "finance", "legal", "region:us" ]
2023-05-11T17:19:26+00:00
{"language": ["vi"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["summarization"], "pretty_name": "vietnews", "dataset_info": {"features": [{"name": "guid", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "article", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 325418455, "num_examples": 99134}, {"name": "validation", "num_bytes": 73397317, "num_examples": 22184}, {"name": "test", "num_bytes": 74536959, "num_examples": 22498}], "download_size": 246782373, "dataset_size": 473352731}, "tags": ["finance", "legal"]}
2023-07-18T05:42:11+00:00
224a2ab788863a483c10231071f45b666ce5dd3e
ericbalfour1977/eric-balfour
[ "license:openrail", "region:us" ]
2023-05-11T17:28:02+00:00
{"license": "openrail"}
2023-05-11T17:28:02+00:00
b1a0d3551ee061a97ca287f3f361d5fe3d022e2b
This is the same dataset as [`armanc/pubmed-rct20k`](https://huggingface.co/datasets/armanc/pubmed-rct20k). The only differences are

1. Addition of a unique identifier, `uid`
2. Addition of the indices, that is, 3 columns with the embeddings of 3 different sentence-transformers
   - `all-mpnet-base-v2`
   - `multi-qa-mpnet-base-dot-v1`
   - `all-MiniLM-L12-v2`
3. Renaming of the `label` column to `labels` for easier compatibility with the transformers library
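A minimal sketch of using one of the precomputed embedding columns for a cosine-similarity lookup; the repo id, split names, and column names are taken from this card and its dataset info, while the loading call itself is an assumption:

```python
import numpy as np
from datasets import load_dataset

# Assumed to load directly from the Hub with the splits listed in the dataset info.
ds = load_dataset("pietrolesci/pubmed-20k-rct", split="validation")

emb_col = "embedding_all-MiniLM-L12-v2"
embeddings = np.asarray(ds[emb_col], dtype=np.float32)

# Cosine similarity of the first sentence against the rest of the split.
query = embeddings[0]
sims = embeddings @ query / (
    np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query) + 1e-8
)
for i in np.argsort(-sims)[1:6]:
    print(ds[int(i)]["labels"], ds[int(i)]["text"][:80])
```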
pietrolesci/pubmed-20k-rct
[ "task_categories:text-classification", "language:en", "region:us" ]
2023-05-11T17:28:35+00:00
{"language": ["en"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "abstract_id", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "background", "1": "conclusions", "2": "methods", "3": "objective", "4": "results"}}}}, {"name": "text", "dtype": "string"}, {"name": "sentence_id", "dtype": "int64"}, {"name": "uid", "dtype": "int64"}, {"name": "embedding_all-mpnet-base-v2", "sequence": "float32"}, {"name": "embedding_multi-qa-mpnet-base-dot-v1", "sequence": "float32"}, {"name": "embedding_all-MiniLM-L12-v2", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 1392522399, "num_examples": 176642}, {"name": "validation", "num_bytes": 233905609, "num_examples": 29672}, {"name": "test", "num_bytes": 233146005, "num_examples": 29578}], "download_size": 0, "dataset_size": 1859574013}}
2023-05-12T09:04:08+00:00
ad98da7e1ba1e1a40966478ed6d3dbb63e6f5ad9
xedwin23x/SoyLocal
[ "license:unknown", "region:us" ]
2023-05-11T17:29:06+00:00
{"license": "unknown"}
2023-05-11T17:40:44+00:00
f80284d730eca655f7fe0ab2c7ccfd9adab5307c
xedwin23x/Cotton
[ "license:unknown", "region:us" ]
2023-05-11T17:29:38+00:00
{"license": "unknown"}
2023-05-11T17:35:03+00:00
db8147bf4138b60eb4bbe8d50ea160883988fbfb
xedwin23x/MoeImouto
[ "license:unknown", "region:us" ]
2023-05-11T17:30:32+00:00
{"license": "unknown"}
2023-05-11T17:38:23+00:00
3f6e6cfeab8a3894d962f5e922cb6d43520d055d
xedwin23x/SoyGene
[ "license:other", "region:us" ]
2023-05-11T17:31:25+00:00
{"license": "other"}
2023-05-11T17:31:25+00:00
f95accfb6610324313b1700cfb8682a851bd32a6
# Dataset Card for "UA_speech_multiclass" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AravindVadlapudi02/UA_speech_multiclass
[ "region:us" ]
2023-05-11T18:31:07+00:00
{"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "0_control", "1": "1_very_low", "2": "2_low", "3": "3_mid", "4": "4_high"}}}}, {"name": "input_features", "sequence": {"sequence": "float32"}}], "splits": [{"name": "train", "num_bytes": 1910100348, "num_examples": 1989}, {"name": "test", "num_bytes": 3457195200, "num_examples": 3600}], "download_size": 619695502, "dataset_size": 5367295548}}
2023-05-11T18:32:23+00:00
f262dbc3a1d234f500d9173f9ebf8d19996b03ba
# FindZebra case reports

A collection of 3344 case reports fetched from the PubMed API for the Fabry, Gaucher and Familial amyloid cardiomyopathy (FAC) diseases. Articles are labelled using a text segmentation model described in "FindZebra online search delving into rare disease case reports using natural language processing".
findzebra/case-reports
[ "size_categories:1K<n<10K", "language:en", "license:cc-by-4.0", "medical", "region:us" ]
2023-05-11T18:36:24+00:00
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "pretty_name": "FindZebra case reports", "tags": ["medical"]}
2023-05-11T18:44:22+00:00
bab9860e369a2740585ff4678818e1b42660ebf5
# Dataset Card for "UA_speech_multiclass_digits_letters" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AravindVadlapudi02/UA_speech_multiclass_digits_letters
[ "region:us" ]
2023-05-11T19:00:00+00:00
{"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "0_control", "1": "1_very_low", "2": "2_low", "3": "3_mid", "4": "4_high"}}}}, {"name": "input_features", "sequence": {"sequence": "float32"}}], "splits": [{"name": "train", "num_bytes": 351481512, "num_examples": 366}, {"name": "test", "num_bytes": 625176132, "num_examples": 651}], "download_size": 107058713, "dataset_size": 976657644}}
2023-05-11T19:00:17+00:00
76a66b5be588d1ff9704f24790b541a737224b8f
# Dataset Card for "TACO-Reformatted-Full" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
RandyHuynh5815/TACO-Reformatted-Full
[ "region:us" ]
2023-05-11T19:26:11+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "categories", "sequence": "int8"}, {"name": "segmentation", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "bbox", "sequence": {"sequence": "float32"}}], "splits": [{"name": "train", "num_bytes": 2721354265.5, "num_examples": 1500}], "download_size": 2622505060, "dataset_size": 2721354265.5}}
2023-05-11T19:35:00+00:00
0da1751bdb702d9b829e1c881495328c5f21ebd5
# Dataset Card for "TVCG_Paper_NER" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Yamei/TVCG_Paper_NER
[ "region:us" ]
2023-05-11T19:47:21+00:00
{"dataset_info": {"features": [{"name": "data", "struct": [{"name": "adjacentArticles", "struct": [{"name": "__typename", "dtype": "string"}, {"name": "next", "struct": [{"name": "__typename", "dtype": "string"}, {"name": "articleId", "dtype": "string"}, {"name": "fno", "dtype": "string"}]}, {"name": "previous", "struct": [{"name": "__typename", "dtype": "string"}, {"name": "articleId", "dtype": "string"}, {"name": "fno", "dtype": "string"}]}]}, {"name": "article", "struct": [{"name": "__typename", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "abstracts", "list": [{"name": "__typename", "dtype": "string"}, {"name": "abstractType", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "authors", "list": [{"name": "__typename", "dtype": "string"}, {"name": "affiliation", "dtype": "string"}, {"name": "fullName", "dtype": "string"}, {"name": "givenName", "dtype": "string"}, {"name": "surname", "dtype": "string"}]}, {"name": "doi", "dtype": "string"}, {"name": "fno", "dtype": "string"}, {"name": "hasPdf", "dtype": "bool"}, {"name": "id", "dtype": "string"}, {"name": "idPrefix", "dtype": "string"}, {"name": "isOpenAccess", "dtype": "bool"}, {"name": "isbn", "dtype": "null"}, {"name": "issn", "dtype": "string"}, {"name": "issueNum", "dtype": "string"}, {"name": "keywords", "sequence": "string"}, {"name": "normalizedAbstract", "dtype": "string"}, {"name": "normalizedTitle", "dtype": "string"}, {"name": "notes", "dtype": "null"}, {"name": "notesType", "dtype": "null"}, {"name": "pages", "dtype": "string"}, {"name": "pubDate", "dtype": "string"}, {"name": "pubType", "dtype": "string"}, {"name": "replicability", "struct": [{"name": "__typename", "dtype": "string"}, {"name": "codeDownloadUrl", "dtype": "string"}, {"name": "codeRepositoryUrl", "dtype": "string"}, {"name": "isEnabled", "dtype": "bool"}]}, {"name": "showBuyMe", "dtype": "bool"}, {"name": "showRecommendedArticles", "dtype": "bool"}, {"name": "title", "dtype": "string"}, {"name": "year", "dtype": "string"}]}, {"name": "articleVideos", "sequence": "null"}, {"name": "entities", "sequence": {"sequence": "string"}}, {"name": "issue", "struct": [{"name": "__typename", "dtype": "string"}, {"name": "downloadables", "struct": [{"name": "__typename", "dtype": "string"}, {"name": "hasCover", "dtype": "bool"}]}, {"name": "id", "dtype": "string"}, {"name": "idPrefix", "dtype": "string"}, {"name": "issueNum", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "pubType", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "volume", "dtype": "string"}, {"name": "year", "dtype": "string"}]}, {"name": "recommendedArticles", "list": [{"name": "__typename", "dtype": "string"}, {"name": "abstractUrl", "dtype": "string"}, {"name": "doi", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "parentPublication", "struct": [{"name": "__typename", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}]}, {"name": "title", "dtype": "string"}]}, {"name": "webExtras", "list": [{"name": "__typename", "dtype": "string"}, {"name": "extension", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "size", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 42952165, "num_examples": 5178}], "download_size": 17356935, "dataset_size": 42952165}}
2023-05-11T19:47:28+00:00
31b44504d76c98691ecccc8f0b45bac46c84adfd
# Dataset Card for "trump-tweets-ray" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
plural-user/trump-tweets-ray
[ "region:us" ]
2023-05-11T20:09:26+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4865384, "num_examples": 1}, {"name": "test", "num_bytes": 272829, "num_examples": 1}, {"name": "validation", "num_bytes": 271308, "num_examples": 1}], "download_size": 3720431, "dataset_size": 5409521}}
2023-05-11T20:09:51+00:00
337eaa3e04afcc42437f34932da037e1b89102f3
KyonBS/TsubakiAnime
[ "license:openrail", "region:us" ]
2023-05-11T20:40:16+00:00
{"license": "openrail"}
2023-05-11T20:40:59+00:00
d3b600000def952a3a13f051a70f512e6857f467
AugQ-CC is an unsupervised augmented dataset for training retrievers used in `AugTriever: Unsupervised Dense Retrieval by Scalable Data Augmentation`. It consists of 52.4M pseudo query-document pairs based on [Pile-CommonCrawl](https://pile.eleuther.ai/paper.pdf).

```
@article{meng2022augtriever,
  title={AugTriever: Unsupervised Dense Retrieval by Scalable Data Augmentation},
  author={Meng, Rui and Liu, Ye and Yavuz, Semih and Agarwal, Divyansh and Tu, Lifu and Yu, Ning and Zhang, Jianguo and Bhat, Meghana and Zhou, Yingbo},
  journal={arXiv preprint arXiv:2212.08841},
  year={2022}
}
```
memray/AugTriever-AugQ-CC
[ "license:mit", "region:us" ]
2023-05-11T20:46:00+00:00
{"license": "mit"}
2023-05-24T02:28:38+00:00
febdbb44a3320e88483f1316d088c7bcdab12d36
AugQ-Wiki is an unsupervised augmented dataset for training retrievers used in `AugTriever: Unsupervised Dense Retrieval by Scalable Data Augmentation`. It consists of 22.6M pseudo query-document pairs based on Wikipedia. It follows the same license as Wikipedia (Creative Commons Attribution-Share-Alike License 3.0).

```
@article{meng2022augtriever,
  title={AugTriever: Unsupervised Dense Retrieval by Scalable Data Augmentation},
  author={Meng, Rui and Liu, Ye and Yavuz, Semih and Agarwal, Divyansh and Tu, Lifu and Yu, Ning and Zhang, Jianguo and Bhat, Meghana and Zhou, Yingbo},
  journal={arXiv preprint arXiv:2212.08841},
  year={2022}
}
```
memray/AugTriever-AugQ-Wiki
[ "task_categories:text-retrieval", "size_categories:10M<n<100M", "license:cc-by-sa-3.0", "region:us" ]
2023-05-11T20:47:10+00:00
{"license": "cc-by-sa-3.0", "size_categories": ["10M<n<100M"], "task_categories": ["text-retrieval"]}
2023-06-19T21:31:28+00:00
2e76256257321f45217eb15f0eb0f47cdaa13480
edu1313/pp
[ "region:us" ]
2023-05-11T21:33:12+00:00
{}
2023-05-11T21:34:30+00:00
4371f70442bfcd7081f78307c9866c9a4e1f4f00
joelespinozaro/audio_es
[ "license:mit", "region:us" ]
2023-05-11T22:05:50+00:00
{"license": "mit"}
2023-05-11T22:07:29+00:00
7d8ff0dee7352b63b9d3827d8192c53d26e4efbb
# Dataset Card for "sample-hf-github-issues" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cafbr/sample-hf-github-issues
[ "region:us" ]
2023-05-11T22:07:46+00:00
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "labels", "list": [{"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "assignees", "list": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "milestone", "struct": [{"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": 
"string"}, {"name": "creator", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "open_issues", "dtype": "int64"}, {"name": "closed_issues", "dtype": "int64"}, {"name": "state", "dtype": "string"}, {"name": "created_at", "dtype": "timestamp[s]"}, {"name": "updated_at", "dtype": "timestamp[s]"}, {"name": "due_on", "dtype": "null"}, {"name": "closed_at", "dtype": "null"}]}, {"name": "comments", "sequence": "string"}, {"name": "created_at", "dtype": "timestamp[s]"}, {"name": "updated_at", "dtype": "timestamp[s]"}, {"name": "closed_at", "dtype": "timestamp[s]"}, {"name": "author_association", "dtype": "string"}, {"name": "active_lock_reason", "dtype": "null"}, {"name": "body", "dtype": "string"}, {"name": "reactions", "struct": [{"name": "url", "dtype": "string"}, {"name": "total_count", "dtype": "int64"}, {"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "null"}, {"name": "state_reason", "dtype": "string"}, {"name": "draft", "dtype": "bool"}, {"name": "pull_request", "struct": [{"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "diff_url", "dtype": "string"}, {"name": "patch_url", "dtype": "string"}, {"name": "merged_at", "dtype": "timestamp[s]"}]}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 7677601, "num_examples": 1000}], "download_size": 2120805, "dataset_size": 7677601}}
2023-05-11T22:07:54+00:00
3d6bd2e1c44caa7498e396182a01edaf0d68f7db
> This dataset belongs to the ABSA (Aspect-Based Sentiment Analysis) domain. The basic form is to extract from a sentence: aspect terms, aspect categories (term categories), the sentiment polarity of each term in context, and the opinion words targeting that term. Different datasets extract different subsets of this information, as noted in the "instruction" key of each jsonl file. Here the data has been converted into a generation task, requiring the model to generate the extraction results in a specified format.

#### Example of one record from the jsonl file extracted for the acos dataset:

```
{
"task_type": "generation",
"dataset": "acos",
"input": ["the computer has difficulty switching between tablet and computer ."],
"output": "[['computer', 'laptop usability', 'negative', 'difficulty']]",
"situation": "none",
"label": "",
"extra": "",
"instruction": " Task: Extracting aspect terms and their corresponding aspect categories, sentiment polarities, and opinion words. Input: A sentence Output: A list of 4-tuples, where each tuple contains the extracted aspect term, its aspect category, sentiment polarity, and opinion words (if any). Supplement: \"Null\" means that there is no occurrence in the sentence. Example: Sentence: \"Also it's not a true SSD drive in there but eMMC, which makes a difference.\" Output: [['SSD drive', 'hard_disc operation_performance', 'negative', 'NULL']]' "
}
```

> The label and extra fields are not set here. The instruction uses the string template shown above and provides one example for one-shot prompting. The ABSA datasets (absa-quad, acos, arts, aste-data-v2, mams, semeval-2014, semeval-2015, semeval-2016, towe) share the same instruction template with minor differences in content, and for some datasets the instruction content differs across records within the same dataset.

#### Original dataset

- Data [link](https://github.com/IsakZhang/ABSA-QUAD)
- Paper: [Aspect Sentiment Quad Prediction as Paraphrase Generation](https://aclanthology.org/2021.emnlp-main.726.pdf)
- Note: the original dataset consists of data in two folders, Rest15 and Rest16; in this conversion the two were merged and re-split into train, validation, and test.

#### Current SOTA

*Figures from the [paper](https://arxiv.org/abs/2305.09193)*

- Metric: F1 score
- SOTA model: E2H-large (F1 score on Rest15: **52.39**, on Rest16: **61.86**)
- Paper: [Easy-to-Hard Learning for Information Extraction](https://arxiv.org/pdf/2305.09193.pdf)
- Note: this paper is one of the works citing the original ABSA-QUAD paper, found via [Google Scholar](https://scholar.google.com/scholar?hl=zh-CN&as_sdt=2005&sciodt=0,5&cites=13359676136585163616&scipsc=&q=&scisbd=1); after comparing several 2023 papers, the best metric and model were selected.
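Since the `output` field is stored as a stringified Python list, downstream code has to parse it before use. A minimal sketch of reading one record, assuming a local jsonl file (the file name is a placeholder) and the 4-tuple format of the acos example above:

```python
import ast
import json

# Hypothetical path to one of the converted jsonl files described above.
with open("absa_train.jsonl", encoding="utf-8") as f:
    record = json.loads(f.readline())

sentence = record["input"][0]
# "output" is a string, e.g. "[['computer', 'laptop usability', 'negative', 'difficulty']]"
tuples = ast.literal_eval(record["output"])

# Unpacking into four fields assumes acos-style 4-tuples; other ABSA datasets
# in this family extract fewer elements per tuple.
for aspect, category, polarity, opinion in tuples:
    print(f"aspect={aspect!r} category={category!r} polarity={polarity} opinion={opinion!r}")
```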
NEUDM/absa-quad
[ "task_categories:text-generation", "size_categories:1K<n<10K", "language:en", "arxiv:2305.09193", "region:us" ]
2023-05-12T01:01:25+00:00
{"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"]}
2023-05-23T16:31:07+00:00
e07d280765f34629c063116b740ccd29268dbba1
> This dataset belongs to the ABSA (Aspect-Based Sentiment Analysis) domain. The basic form is to extract from a sentence: aspect terms, aspect categories (term categories), the sentiment polarity of each term in context, and the opinion words targeting that term. Different datasets extract different subsets of this information, as noted in the "instruction" key of each jsonl file. Here the data has been converted into a generation task, requiring the model to generate the extraction results in a specified format.

#### Example of one record from the jsonl file extracted for the acos dataset:

```
{
"task_type": "generation",
"dataset": "acos",
"input": ["the computer has difficulty switching between tablet and computer ."],
"output": "[['computer', 'laptop usability', 'negative', 'difficulty']]",
"situation": "none",
"label": "",
"extra": "",
"instruction": " Task: Extracting aspect terms and their corresponding aspect categories, sentiment polarities, and opinion words. Input: A sentence Output: A list of 4-tuples, where each tuple contains the extracted aspect term, its aspect category, sentiment polarity, and opinion words (if any). Supplement: \"Null\" means that there is no occurrence in the sentence. Example: Sentence: \"Also it's not a true SSD drive in there but eMMC, which makes a difference.\" Output: [['SSD drive', 'hard_disc operation_performance', 'negative', 'NULL']]' "
}
```

> The label and extra fields are not set here. The instruction uses the string template shown above and provides one example for one-shot prompting. The ABSA datasets (absa-quad, acos, arts, aste-data-v2, mams, semeval-2014, semeval-2015, semeval-2016, towe) share the same instruction template with minor differences in content, and for some datasets the instruction content differs across records within the same dataset.

#### Original dataset

- Data [link](https://github.com/IsakZhang/ABSA-QUAD)
- Paper: [Aspect-Category-Opinion-Sentiment Quadruple Extraction with Implicit Aspects and Opinions](https://aclanthology.org/2021.acl-long.29.pdf)
- Note: the original dataset consists of data in two folders, Restaurant-ACOS and Laptop-ACOS; in this conversion the two were merged and re-split into train, validation, and test.

#### Current SOTA

*Figures from the [paper](https://arxiv.org/abs/2305.09193)*

- Metric: F1 score
- SOTA model: E2H-large (F1 score on Restaurant-ACOS: **63.50**, on Laptop-ACOS: **44.51**)
- Paper: [Easy-to-Hard Learning for Information Extraction](https://arxiv.org/pdf/2305.09193.pdf)
- Note: this paper is one of the works citing the original ACOS paper, found via [Google Scholar](https://scholar.google.com/scholar?as_ylo=2023&hl=zh-CN&as_sdt=2005&sciodt=0,5&cites=5295149944344120368&scipsc=); after comparing several 2023 papers, the best metric and model were selected.
NEUDM/acos
[ "arxiv:2305.09193", "region:us" ]
2023-05-12T01:09:38+00:00
{}
2023-05-23T16:32:05+00:00
092ec42f74ae885426e184b6d6bfc6894463f6ca
billxbf/sotu2023-qa
[ "license:mit", "region:us" ]
2023-05-12T01:26:52+00:00
{"license": "mit"}
2023-05-12T01:28:34+00:00
47ec1a46d9c5a3a6e4867849027b45cd8b66c045
# Dataset Card for "sql-create-context_alpaca_style" We provide a minor modification of the [sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context) dataset. In particular, we 1) prepend each instruction with the phrase, "Write a SQL query that answers the following question: " and 2) prepend each context with the phrase, "The relevant table was constructed using the following SQL CREATE TABLE statement: ". ## Numbers: Prompts: 78577 Tokens: 6438971 using the EleutherAI/gpt-neox-20b tokenizer (counting instruction+input+output)
lucasmccabe-lmi/sql-create-context_alpaca_style
[ "region:us" ]
2023-05-12T01:32:40+00:00
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28203562.0, "num_examples": 78577}], "download_size": 9312899, "dataset_size": 28203562.0}}
2023-05-15T20:16:51+00:00
f1d8ed63a70832a92a058ed0f71645a65740a209
> This dataset belongs to the ABSA (Aspect-Based Sentiment Analysis) domain. The basic form is to extract from a sentence: aspect terms, aspect categories (term categories), the sentiment polarity of each term in context, and the opinion words targeting that term. Different datasets extract different subsets of this information, as noted in the "instruction" key of each jsonl file. Here the data has been converted into a generation task, requiring the model to generate the extraction results in a specified format.

#### Example of one record from the jsonl file extracted for the acos dataset:

```
{
"task_type": "generation",
"dataset": "acos",
"input": ["the computer has difficulty switching between tablet and computer ."],
"output": "[['computer', 'laptop usability', 'negative', 'difficulty']]",
"situation": "none",
"label": "",
"extra": "",
"instruction": " Task: Extracting aspect terms and their corresponding aspect categories, sentiment polarities, and opinion words. Input: A sentence Output: A list of 4-tuples, where each tuple contains the extracted aspect term, its aspect category, sentiment polarity, and opinion words (if any). Supplement: \"Null\" means that there is no occurrence in the sentence. Example: Sentence: \"Also it's not a true SSD drive in there but eMMC, which makes a difference.\" Output: [['SSD drive', 'hard_disc operation_performance', 'negative', 'NULL']]' "
}
```

> The label and extra fields are not set here. The instruction uses the string template shown above and provides one example for one-shot prompting. The ABSA datasets (absa-quad, acos, arts, aste-data-v2, mams, semeval-2014, semeval-2015, semeval-2016, towe) share the same instruction template with minor differences in content, and for some datasets the instruction content differs across records within the same dataset.

#### Original dataset

- Data [link](https://github.com/zhijing-jin/ARTS_TestSet)
- Paper: [Tasty Burgers, Soggy Fries: Probing Aspect Robustness in Aspect-Based Sentiment Analysis](https://arxiv.org/pdf/2009.07964.pdf)
- Note: the original dataset consists of json data for two domains, laptop and restaurant; in this conversion the two were merged and re-split into train, validation, and test. The dataset was proposed to test model robustness, so works citing it mostly train on data from one domain and test on the other domain of this dataset.

#### Current SOTA

*Figures from the [paper](https://arxiv.org/abs/2303.02846)*

- Metric: macro-averaged F1
- SOTA model: CVIB
  - macro-averaged F1 on the restaurant set after training on other-domain data: **70.29**
  - macro-averaged F1 when trained and evaluated on the restaurant set: **82.03**
  - macro-averaged F1 on the laptop set after training on other-domain data: **69.39**
  - macro-averaged F1 when trained and evaluated on the laptop set: **77.53**
- Paper: [Reducing Spurious Correlations for Aspect-Based Sentiment Analysis with Variational Information Bottleneck and Contrastive Learning](https://arxiv.org/pdf/2303.02846.pdf)
- Note: this paper is one of the works citing the original ARTS paper, found via [Google Scholar](https://scholar.google.com/scholar?as_ylo=2023&q=ABSA+ARTS&hl=zh-CN&as_sdt=0,5); after comparing several 2023 papers, the best metric and model were selected.
NEUDM/arts
[ "arxiv:2009.07964", "arxiv:2303.02846", "region:us" ]
2023-05-12T01:41:01+00:00
{}
2023-05-23T16:30:10+00:00
33aaa9467a91663aedcc1da50e090d7eabad54ac
> This dataset belongs to the ABSA (Aspect-Based Sentiment Analysis) domain. The basic form is to extract from a sentence: aspect terms, aspect categories (term categories), the sentiment polarity of each term in context, and the opinion words targeting that term. Different datasets extract different subsets of this information, as noted in the "instruction" key of each jsonl file. Here the data has been converted into a generation task, requiring the model to generate the extraction results in a specified format.

#### Example of one record from the jsonl file extracted for the acos dataset:

```
{
"task_type": "generation",
"dataset": "acos",
"input": ["the computer has difficulty switching between tablet and computer ."],
"output": "[['computer', 'laptop usability', 'negative', 'difficulty']]",
"situation": "none",
"label": "",
"extra": "",
"instruction": " Task: Extracting aspect terms and their corresponding aspect categories, sentiment polarities, and opinion words. Input: A sentence Output: A list of 4-tuples, where each tuple contains the extracted aspect term, its aspect category, sentiment polarity, and opinion words (if any). Supplement: \"Null\" means that there is no occurrence in the sentence. Example: Sentence: \"Also it's not a true SSD drive in there but eMMC, which makes a difference.\" Output: [['SSD drive', 'hard_disc operation_performance', 'negative', 'NULL']]' "
}
```

> The label and extra fields are not set here. The instruction uses the string template shown above and provides one example for one-shot prompting. The ABSA datasets (absa-quad, acos, arts, aste-data-v2, mams, semeval-2014, semeval-2015, semeval-2016, towe) share the same instruction template with minor differences in content, and for some datasets the instruction content differs across records within the same dataset.

#### Original dataset

- Data [link](https://github.com/xuuuluuu/Position-Aware-Tagging-for-ASTE)
- Paper: [Position-Aware Tagging for Aspect Sentiment Triplet Extraction](https://arxiv.org/abs/2010.02609)
- Note: the original dataset consists of four parts: laptop14, restaurant14, restaurant15, and restaurant16.

#### Current SOTA

*Figures from [Easy-to-Hard Learning for Information Extraction](https://arxiv.org/abs/2305.09193)*

- Metric: F1 score
- SOTA model: E2H-large
  - laptop14: **75.92**
  - restaurant14: **65.98**
  - restaurant15: **68.80**
  - restaurant16: **75.46**
  - average: **71.54**
- Paper: [Easy-to-Hard Learning for Information Extraction](https://arxiv.org/pdf/2305.09193.pdf)
- Note: this paper is one of the works citing the original ASTE-Data-V2 paper, found via [Google Scholar](https://scholar.google.com/scholar?as_ylo=2023&hl=zh-CN&as_sdt=2005&sciodt=0,5&cites=8596892198266513995&scipsc=); after comparing several 2023 papers, the best metric and model were selected.
NEUDM/aste-data-v2
[ "arxiv:2010.02609", "arxiv:2305.09193", "region:us" ]
2023-05-12T01:44:04+00:00
{}
2023-05-23T16:29:01+00:00
292cd37d971c53c08ebbefd969aaf33466f31bb3
> This dataset belongs to the ABSA (Aspect-Based Sentiment Analysis) domain. The basic form is to extract from a sentence: aspect terms, aspect categories (term categories), the sentiment polarity of each term in context, and the opinion words targeting that term. Different datasets extract different subsets of this information, as noted in the "instruction" key of each jsonl file. Here the data has been converted into a generation task, requiring the model to generate the extraction results in a specified format.

#### Example of one record from the jsonl file extracted for the acos dataset:

```
{
"task_type": "generation",
"dataset": "acos",
"input": ["the computer has difficulty switching between tablet and computer ."],
"output": "[['computer', 'laptop usability', 'negative', 'difficulty']]",
"situation": "none",
"label": "",
"extra": "",
"instruction": " Task: Extracting aspect terms and their corresponding aspect categories, sentiment polarities, and opinion words. Input: A sentence Output: A list of 4-tuples, where each tuple contains the extracted aspect term, its aspect category, sentiment polarity, and opinion words (if any). Supplement: \"Null\" means that there is no occurrence in the sentence. Example: Sentence: \"Also it's not a true SSD drive in there but eMMC, which makes a difference.\" Output: [['SSD drive', 'hard_disc operation_performance', 'negative', 'NULL']]' "
}
```

> The label and extra fields are not set here. The instruction uses the string template shown above and provides one example for one-shot prompting. The ABSA datasets (absa-quad, acos, arts, aste-data-v2, mams, semeval-2014, semeval-2015, semeval-2016, towe) share the same instruction template with minor differences in content, and for some datasets the instruction content differs across records within the same dataset.

#### Original dataset

- Data [link](https://github.com/siat-nlp/MAMS-for-ABSA)
- Paper: [A Challenge Dataset and Effective Models for Aspect-Based Sentiment Analysis](https://aclanthology.org/D19-1654.pdf)
- Note: the original data consists of MAMS-ACSA and MAMS-ATSA; the two parts correspond to different tasks and extract different elements.

#### Current SOTA

*Figures from [PaperWithCode](https://paperswithcode.com/sota/aspect-based-sentiment-analysis-on-mams)*

- Metrics: Accuracy, Macro-F1
- Model: RGAT+ (Accuracy: **84.52**, Macro-F1: **83.74**)
- Paper: [Investigating Typed Syntactic Dependencies for Targeted Sentiment Classification Using Graph Attention Neural Network](https://paperswithcode.com/paper/exploiting-typed-syntactic-dependencies-for)
NEUDM/mams
[ "region:us" ]
2023-05-12T01:46:00+00:00
{}
2023-05-23T16:25:04+00:00
5ef163e19093d25c96d90a652b418529c45bd8d4
> This dataset belongs to the ABSA (Aspect-Based Sentiment Analysis) domain. The basic form is to extract from a sentence: aspect terms, aspect categories (term categories), the sentiment polarity of each term in context, and the opinion words targeting that term. Different datasets extract different subsets of this information, as noted in the "instruction" key of each jsonl file. Here the data has been converted into a generation task, requiring the model to generate the extraction results in a specified format. Supplement: the SemEval-2014 dataset folder contains two subfolders, "laptop" and "restaurant", separated according to the main topic of the texts. The extracted elements also differ between the two subfolders: the laptop data extracts aspect categories and sentiment polarities, while the restaurant data extracts {(aspect term, sentiment polarity), (aspect category, sentiment polarity)} elements.

#### Example of one record from the jsonl file extracted for the acos dataset:

```
{
"task_type": "generation",
"dataset": "acos",
"input": ["the computer has difficulty switching between tablet and computer ."],
"output": "[['computer', 'laptop usability', 'negative', 'difficulty']]",
"situation": "none",
"label": "",
"extra": "",
"instruction": " Task: Extracting aspect terms and their corresponding aspect categories, sentiment polarities, and opinion words. Input: A sentence Output: A list of 4-tuples, where each tuple contains the extracted aspect term, its aspect category, sentiment polarity, and opinion words (if any). Supplement: \"Null\" means that there is no occurrence in the sentence. Example: Sentence: \"Also it's not a true SSD drive in there but eMMC, which makes a difference.\" Output: [['SSD drive', 'hard_disc operation_performance', 'negative', 'NULL']]' "
}
```

> The label and extra fields are not set here. The instruction uses the string template shown above and provides one example for one-shot prompting. The ABSA datasets (absa-quad, acos, arts, aste-data-v2, mams, semeval-2014, semeval-2015, semeval-2016, towe) share the same instruction template with minor differences in content, and for some datasets the instruction content differs across records within the same dataset.

#### Original dataset

- Data [link](https://alt.qcri.org/semeval2014/task4/)
- Paper: [SemEval-2014 Task 4: Aspect Based Sentiment Analysis](https://aclanthology.org/S14-2004/)
- Note: the data is divided into two topics, laptop and restaurant, placed in separate folders; the two topics extract different elements.

#### Current SOTA

*Figures from [PaperWithCode](https://paperswithcode.com/sota)*

- [SemEval2014-Laptop](https://paperswithcode.com/sota/aspect-based-sentiment-analysis-on-semeval-5)
  - Metric: F1-score
  - Model: InstructABSA (**79.34**)
  - Paper: [InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis](https://paperswithcode.com/paper/instructabsa-instruction-learning-for-aspect)
- [SemEval2014-Restaurant](https://paperswithcode.com/sota/aspect-based-sentiment-analysis-on-semeval-5)
  - Metric: Accuracy (classification accuracy of the extraction)
  - Model: HGCN (**84.09**)
  - Paper: [Learn from Structural Scope: Improving Aspect-Level Sentiment Analysis with Hybrid Graph Convolutional Networks](https://paperswithcode.com/paper/learn-from-structural-scope-improving-aspect)
NEUDM/semeval-2014
[ "task_categories:text-generation", "language:en", "region:us" ]
2023-05-12T01:48:36+00:00
{"language": ["en"], "task_categories": ["text-generation"]}
2023-05-23T16:18:49+00:00
c43c506d5f8e35445f66cd9b9f22635da3bdae3a
> This dataset belongs to the ABSA (Aspect-Based Sentiment Analysis) domain. The basic form is to extract from a sentence: aspect terms, aspect categories (term categories), the sentiment polarity of each term in context, and the opinion words targeting that term. Different datasets extract different subsets of this information, as noted in the "instruction" key of each jsonl file. Here the data has been converted into a generation task, requiring the model to generate the extraction results in a specified format. Supplement: the SemEval-2015 dataset folder contains two subfolders, "laptop" and "restaurant", separated according to the main topic of the texts. The extracted elements also differ between the two subfolders: the laptop data extracts (aspect category, sentiment polarity) pairs, while the restaurant data extracts (aspect term, aspect category, sentiment polarity) triples.

#### Example of one record from the jsonl file extracted for the acos dataset:

```
{
"task_type": "generation",
"dataset": "acos",
"input": ["the computer has difficulty switching between tablet and computer ."],
"output": "[['computer', 'laptop usability', 'negative', 'difficulty']]",
"situation": "none",
"label": "",
"extra": "",
"instruction": " Task: Extracting aspect terms and their corresponding aspect categories, sentiment polarities, and opinion words. Input: A sentence Output: A list of 4-tuples, where each tuple contains the extracted aspect term, its aspect category, sentiment polarity, and opinion words (if any). Supplement: \"Null\" means that there is no occurrence in the sentence. Example: Sentence: \"Also it's not a true SSD drive in there but eMMC, which makes a difference.\" Output: [['SSD drive', 'hard_disc operation_performance', 'negative', 'NULL']]' "
}
```

> The label and extra fields are not set here. The instruction uses the string template shown above and provides one example for one-shot prompting. The ABSA datasets (absa-quad, acos, arts, aste-data-v2, mams, semeval-2014, semeval-2015, semeval-2016, towe) share the same instruction template with minor differences in content, and for some datasets the instruction content differs across records within the same dataset.

#### Original dataset

- Data [link](https://alt.qcri.org/semeval2015/task12/)
- Paper: [SemEval-2015 Task 12: Aspect Based Sentiment Analysis](https://aclanthology.org/S15-2082/)
- Note: the data is divided into two topics, laptop and restaurant, placed in separate folders; the two topics extract different elements.

#### Current SOTA

*Figures from [PaperWithCode](https://paperswithcode.com/sota)*

- SemEval2015-Laptop: no evaluation found for this part of the data
- [SemEval2015-Restaurant](https://paperswithcode.com/sota/aspect-based-sentiment-analysis-on-semeval-4)
  - Metric: Accuracy (classification accuracy of the extraction)
  - Model: HAABSA++ (**81.7**)
  - Paper: [A Hybrid Approach for Aspect-Based Sentiment Analysis Using Deep Contextual Word Embeddings and Hierarchical Attention](https://paperswithcode.com/paper/a-hybrid-approach-for-aspect-based-sentiment-1)
NEUDM/semeval-2015
[ "language:en", "region:us" ]
2023-05-12T01:52:12+00:00
{"language": ["en"]}
2023-05-23T16:16:33+00:00
868ee486f2d528ad8b03715fc5a7edab64930ed7
> This dataset belongs to the ABSA (Aspect-Based Sentiment Analysis) domain. The basic form is to extract from a sentence: aspect terms, aspect categories (term categories), the sentiment polarity of each term in context, and the opinion words targeting that term. Different datasets extract different subsets of this information, as noted in the "instruction" key of each jsonl file. Here the data has been converted into a generation task, requiring the model to generate the extraction results in a specified format.

#### Example of one record from the jsonl file extracted for the acos dataset:

```
{
"task_type": "generation",
"dataset": "acos",
"input": ["the computer has difficulty switching between tablet and computer ."],
"output": "[['computer', 'laptop usability', 'negative', 'difficulty']]",
"situation": "none",
"label": "",
"extra": "",
"instruction": " Task: Extracting aspect terms and their corresponding aspect categories, sentiment polarities, and opinion words. Input: A sentence Output: A list of 4-tuples, where each tuple contains the extracted aspect term, its aspect category, sentiment polarity, and opinion words (if any). Supplement: \"Null\" means that there is no occurrence in the sentence. Example: Sentence: \"Also it's not a true SSD drive in there but eMMC, which makes a difference.\" Output: [['SSD drive', 'hard_disc operation_performance', 'negative', 'NULL']]' "
}
```

> The label and extra fields are not set here. The instruction uses the string template shown above and provides one example for one-shot prompting. The ABSA datasets (absa-quad, acos, arts, aste-data-v2, mams, semeval-2014, semeval-2015, semeval-2016, towe) share the same instruction template with minor differences in content, and for some datasets the instruction content differs across records within the same dataset.

#### Original dataset

- Data [link](https://alt.qcri.org/semeval2016/task5/)
- Paper: [SemEval-2016 Task 5: Aspect Based Sentiment Analysis](https://aclanthology.org/S16-1002/)
- Note: the data is divided into two topics, laptop and restaurant, placed in separate folders; the two topics extract different elements.

#### Current SOTA

*Figures from [PaperWithCode](https://paperswithcode.com/sota)*

- SemEval2016-Laptop: no related evaluation work found
- [SemEval2016-Restaurant](https://paperswithcode.com/sota/aspect-based-sentiment-analysis-on-semeval-2)
  - Metric: Accuracy (classification accuracy of the extraction)
  - Model: BERT-IL Finetuned (**88.70**)
  - Paper: [Does BERT Understand Sentiment? Leveraging Comparisons Between Contextual and Non-Contextual Embeddings to Improve Aspect-Based Sentiment Models](https://paperswithcode.com/paper/does-bert-understand-sentiment-leveraging)
  - Source: [SemEval-2016](https://paperswithcode.com/sota/aspect-based-sentiment-analysis-on-semeval-2)
NEUDM/semeval-2016
[ "language:en", "region:us" ]
2023-05-12T02:00:52+00:00
{"language": ["en"]}
2023-05-23T16:22:44+00:00
1bbdc2b18518b01c4633abeb8dc1d747dcc8bd63
# Dataset Card for "StatBuddy_Data" This data card will be left for the time being but I can keep it there for later. [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SalimK49/StatBuddy_Data
[ "region:us" ]
2023-05-12T02:01:16+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "sql_query", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10633, "num_examples": 84}], "download_size": 4598, "dataset_size": 10633}}
2023-05-12T03:51:56+00:00
698a9b18496d2ff7dfa9490cdf380bc36fd627de
> This dataset belongs to the ABSA (Aspect-Based Sentiment Analysis) domain. The basic form is to extract from a sentence: aspect terms, aspect categories (term categories), the sentiment polarity of each term in context, and the opinion words targeting that term. Different datasets extract different subsets of this information, as noted in the "instruction" key of each jsonl file. Here the data has been converted into a generation task, requiring the model to generate the extraction results in a specified format.

#### Example of one record from the jsonl file extracted for the acos dataset:

```
{
"task_type": "generation",
"dataset": "acos",
"input": ["the computer has difficulty switching between tablet and computer ."],
"output": "[['computer', 'laptop usability', 'negative', 'difficulty']]",
"situation": "none",
"label": "",
"extra": "",
"instruction": " Task: Extracting aspect terms and their corresponding aspect categories, sentiment polarities, and opinion words. Input: A sentence Output: A list of 4-tuples, where each tuple contains the extracted aspect term, its aspect category, sentiment polarity, and opinion words (if any). Supplement: \"Null\" means that there is no occurrence in the sentence. Example: Sentence: \"Also it's not a true SSD drive in there but eMMC, which makes a difference.\" Output: [['SSD drive', 'hard_disc operation_performance', 'negative', 'NULL']]' "
}
```

> The label and extra fields are not set here. The instruction uses the string template shown above and provides one example for one-shot prompting. The ABSA datasets (absa-quad, acos, arts, aste-data-v2, mams, semeval-2014, semeval-2015, semeval-2016, towe) share the same instruction template with minor differences in content, and for some datasets the instruction content differs across records within the same dataset.

#### Original dataset

- Data [link](https://github.com/NJUNLP/TOWE)
- Paper: [Target-oriented Opinion Words Extraction with Target-fused Neural Sequence Labeling](https://aclanthology.org/N19-1259/)
- Note: the original data consists of four folders: laptop14, restaurant14, restaurant15, and restaurant16; the four folders contain different data but extract the same elements.

#### Current SOTA

*Figures from the [paper](https://aclanthology.org/N19-1259/)*

- Metric: F1-score
- Model: IOG
  - laptop14: **71.35**
  - restaurant14: **80.02**
  - restaurant15: **73.25**
  - restaurant16: **81.69**
- Paper: [Target-oriented Opinion Words Extraction with Target-fused Neural Sequence Labeling](https://aclanthology.org/N19-1259/)
- Note: the contribution of the TOWE paper is to propose a new ABSA subtask (TOWE) and build a new dataset. However, a preliminary survey on [google scholar](https://scholar.google.com/scholar?as_ylo=2023&hl=zh-CN&as_sdt=2005&sciodt=0,5&cites=10978596531168101977&scipsc=) found that, although many later works cite the paper, none of them use the TOWE dataset, so for now the model from the paper that introduced the TOWE dataset is taken as the SOTA model.
NEUDM/towe
[ "language:en", "region:us" ]
2023-05-12T02:04:42+00:00
{"language": ["en"]}
2023-05-23T16:20:24+00:00
59bfc748104055b379caf855af74591a4189ec00
jeremy641/Sovits4.0_Femalesing_G_61000_model
[ "license:openrail", "region:us" ]
2023-05-12T02:15:00+00:00
{"license": "openrail"}
2023-05-12T02:36:58+00:00
0453b03bdf693871351d8952b8e9075adfbc317b
# Dataset Card for "chinese_landscape_paintings_1k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mingyy/chinese_landscape_paintings_1k
[ "region:us" ]
2023-05-12T02:17:07+00:00
{"dataset_info": {"features": [{"name": "target", "dtype": "image"}, {"name": "filename", "dtype": "string"}, {"name": "image_caption", "dtype": "string"}, {"name": "source", "dtype": "image"}, {"name": "hed", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 396424758.0, "num_examples": 1000}], "download_size": 396202283, "dataset_size": 396424758.0}}
2023-05-14T20:25:14+00:00
afb9def5405c7b90c90dd2c65e1d6c8232e56067
danielfein/face_conditioning
[ "license:cc-by-4.0", "region:us" ]
2023-05-12T02:50:32+00:00
{"license": "cc-by-4.0"}
2023-05-12T02:52:05+00:00
e07cf65714b365e47e31df49bc06d094d0fdd3fd
# Dataset Card for "COCO-Text" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
howard-hou/COCO-Text
[ "region:us" ]
2023-05-12T03:17:56+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "coco_file_name", "dtype": "string"}, {"name": "image_id", "dtype": "string"}, {"name": "caption", "sequence": "string"}, {"name": "ocr_tokens", "sequence": "string"}, {"name": "ocr_info", "list": [{"name": "word", "dtype": "string"}, {"name": "bounding_box", "struct": [{"name": "width", "dtype": "float64"}, {"name": "height", "dtype": "float64"}, {"name": "top_left_x", "dtype": "float64"}, {"name": "top_left_y", "dtype": "float64"}]}]}, {"name": "image_width", "dtype": "int64"}, {"name": "image_height", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2230879987.67, "num_examples": 13097}, {"name": "validation", "num_bytes": 526583286.88, "num_examples": 3074}], "download_size": 259904361, "dataset_size": 2757463274.55}}
2023-05-12T04:22:01+00:00
e75750e6cef2bca1efd70515f046180694d01f41
gmonsoon/HSR-SplashArt
[ "license:openrail", "region:us" ]
2023-05-12T03:56:19+00:00
{"license": "openrail"}
2023-05-12T04:29:05+00:00
e18ec1b40c2b9b686ca2a1464c7ca9492481dcbe
openpecha/tibetan_voice
[ "language:bo", "license:other", "audio", "automatic-speech-recognition", "region:us" ]
2023-05-12T04:04:58+00:00
{"language": ["bo"], "license": "other", "tags": ["audio", "automatic-speech-recognition"]}
2023-05-15T04:41:32+00:00
681db7d527942ca9512c568e223fc32b51f95748
Congliu/USPTO-50k-Instruction
[ "license:apache-2.0", "region:us" ]
2023-05-12T04:46:03+00:00
{"license": "apache-2.0"}
2023-05-12T04:50:29+00:00
6f4069e9987b50167fcd069342ef4bcdb131d923
xedwin23x/SoyAgeing
[ "license:other", "region:us" ]
2023-05-12T05:05:32+00:00
{"license": "other"}
2023-05-12T05:05:32+00:00
1b9d5b277e6fe2b8a968961cd0b70baec91c64a3
readerbench/ro-business-emails
[ "license:apache-2.0", "region:us" ]
2023-05-12T05:35:59+00:00
{"license": "apache-2.0", "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "data", "struct": [{"name": "body", "dtype": "string"}]}, {"name": "annotation", "struct": [{"name": "choices", "list": [{"name": "name", "dtype": "string"}, {"name": "value", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 920922, "num_examples": 868}, {"name": "val", "num_bytes": 273464, "num_examples": 289}, {"name": "test", "num_bytes": 284370, "num_examples": 290}], "download_size": 739445, "dataset_size": 1478756}}
2023-05-18T07:46:58+00:00
d1d7692228acbde386a054bcff9e3d0bda907fb3
# NLP: Sentiment Classification Dataset

This is a bundle dataset for an NLP task of sentiment classification in English.

There is a sample project using this dataset: [GURA-gru-unit-for-recognizing-affect](https://github.com/NatLee/GURA-gru-unit-for-recognizing-affect).

## Content

- `myanimelist-sts`: This dataset is derived from MyAnimeList, a social networking and cataloging service for anime and manga fans. The dataset typically includes user reviews with ratings. We used [skip-thoughts](https://pypi.org/project/skip-thoughts/) to summarize them. You can find the original source of the dataset at [myanimelist-comment-dataset](https://www.kaggle.com/datasets/natlee/myanimelist-comment-dataset); the version is `2023-05-11`.
- `aclImdb`: The ACL IMDB dataset is a large movie review dataset collected for sentiment analysis tasks. It contains 50,000 highly polar movie reviews, divided evenly into 25,000 training and 25,000 test sets. Each set includes an equal number of positive and negative reviews. The source is [sentiment](https://ai.stanford.edu/~amaas/data/sentiment/).
- `MR`: Movie Review Data (MR) is a dataset that contains 5,331 positive and 5,331 negative processed sentences/lines. This dataset is suitable for binary sentiment classification tasks, and it's a good starting point for text classification models. You can find the source at [movie-review-data](http://www.cs.cornell.edu/people/pabo/movie-review-data/), under the section `Sentiment scale datasets`.
- `MPQA`: The Multi-Perspective Question Answering (MPQA) dataset is a resource for opinion detection and sentiment analysis research. It consists of news articles from a wide variety of sources annotated for opinions and other private states. You can get the source from [MPQA](https://mpqa.cs.pitt.edu/).
- `SST2`: The Stanford Sentiment Treebank version 2 (SST2) is a popular benchmark for sentence-level sentiment analysis. It includes movie review sentences with corresponding sentiment labels (positive or negative). You can obtain the dataset from [SST2](https://huggingface.co/datasets/sst2).
- `SUBJ`: The Subjectivity dataset is used for sentiment analysis research. It consists of 5000 subjective and 5000 objective processed sentences, which can help a model distinguish between subjective and objective (factual) statements. You can find the source at [movie-review-data](http://www.cs.cornell.edu/people/pabo/movie-review-data/), under the section `Subjectivity datasets`.
# Tokenizer

```python
from pathlib import Path
import pickle
from tensorflow.keras.preprocessing.text import Tokenizer

def check_data_path(file_path: str) -> bool:
    if Path(file_path).exists():
        print(f'[Path][OK] {file_path}')
        return True
    print(f'[Path][FAILED] {file_path}')
    return False

sentences = []

# =====================
# Anime Reviews
# =====================
dataset = './myanimelist-sts.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        X, Y = pickle.load(p)
    sentences.extend(X)
    sentences.extend(Y)

# =====================
# MPQA
# =====================
dataset = './MPQA.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        mpqa = pickle.load(p)
    sentences.extend(list(mpqa.sentence))

# =====================
# IMDB
# =====================
dataset = './aclImdb.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        x_test, y_test, x_train, y_train = pickle.load(p)
    sentences.extend(x_train)
    sentences.extend(y_train)

# =====================
# MR
# =====================
dataset = './MR.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        mr = pickle.load(p)
    sentences.extend(list(mr.sentence))

# =====================
# SST2
# =====================
dataset = './SST2.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        sst2 = pickle.load(p)
    sentences.extend(list(sst2.sentence))

# =====================
# SUBJ
# =====================
dataset = './SUBJ.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        subj = pickle.load(p)
    sentences.extend(list(subj.sentence))

sentences = map(str, sentences)

# Tokenize the sentences
myTokenizer = Tokenizer(
    num_words = 100,
    oov_token="{OOV}"
)
myTokenizer.fit_on_texts(sentences)
print(myTokenizer.word_index)

with open('./big-tokenizer.pkl', 'wb') as p:
    pickle.dump(myTokenizer, p)
```
NatLee/sentiment-classification-dataset-bundle
[ "task_categories:text-classification", "size_categories:100K<n<1M", "language:en", "region:us" ]
2023-05-12T05:50:24+00:00
{"language": ["en"], "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"]}
2023-05-12T06:43:17+00:00
1079bc9089cee17610c79d6e6f0a17691ff5050c
# Dataset Card for Dataset Name

### Dataset Summary

The benchmark datasets for document-level machine translation.

### Supported Tasks

Document-level machine translation tasks.

### Languages

English-German

## Dataset Structure

### Data Instances

TED: iwslt17, News: nc2016, Europarl: europarl7

### Data Fields

Plain text in which each line is a sentence, and groups of lines separated by a '\<d\>' line form a document.

### Data Splits

train, dev, test

### Data Usage

This dataset is created for convenient use by https://github.com/baoguangsheng/g-transformer
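A minimal sketch of splitting such a file back into documents, assuming one sentence per line and a literal `<d>` line between documents (the file name is a placeholder):

```python
from typing import List

def read_documents(path: str) -> List[List[str]]:
    """Group sentence-per-line text into documents using '<d>' separator lines."""
    docs, current = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line == "<d>":          # document boundary
                if current:
                    docs.append(current)
                    current = []
            elif line:
                current.append(line)   # one sentence per line
    if current:
        docs.append(current)
    return docs

docs = read_documents("train.en")      # hypothetical file name
print(len(docs), "documents;", sum(len(d) for d in docs), "sentences")
```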
gshbao/DocNMT
[ "task_categories:translation", "size_categories:100K<n<1M", "language:en", "language:de", "license:afl-3.0", "region:us" ]
2023-05-12T06:00:08+00:00
{"language": ["en", "de"], "license": "afl-3.0", "size_categories": ["100K<n<1M"], "task_categories": ["translation"], "pretty_name": "Doc-Level NMT"}
2023-05-12T06:52:30+00:00
71410267f6529694c42550c9efe21adc35fddd50
deepsynthbody/deepfake-ecg2
[ "license:mit", "region:us" ]
2023-05-12T06:54:12+00:00
{"license": "mit"}
2023-05-12T07:18:12+00:00
465200a85cc852eb990eded867c19b19d4884f2f
# Dataset Card for "hfh4_oasst1_zh" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dltdojo/hfh4_oasst1_zh
[ "region:us" ]
2023-05-12T06:58:16+00:00
{"dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 30744254.277176227, "num_examples": 19034}, {"name": "test", "num_bytes": 3416207.722823774, "num_examples": 2115}, {"name": "train_ift", "num_bytes": 30744254.277176227, "num_examples": 19034}, {"name": "test_ift", "num_bytes": 3416207.722823774, "num_examples": 2115}], "download_size": 37300334, "dataset_size": 68320924.0}}
2023-05-12T08:08:47+00:00
01aa573ce4e0c2574a2e9ba8b4c67e7f56c5d083
# CCAE: A Corpus of Chinese-based Asian Englishes

## Dataset Description

- **Repository:** https://github.com/jacklanda/CCAE
- **Paper:**

### Dataset Summary

Language models have become foundational in various NLP application scenarios, but they have not been well applied to language variety studies, even for the most popular language, English. This paper represents one of the few initial efforts to utilize NLP technology in the paradigm of World Englishes, specifically in creating a multi-variety corpus for studying Asian Englishes. We present an overview of CCAE — Corpus of Chinese-based Asian English, a suite of corpora comprising six Chinese-based Asian English varieties. It is based on 340 million tokens in 448 thousand web documents from six regions. The ontology of the data makes the corpus a helpful resource with enormous research potential for Asian Englishes (especially for Chinese Englishes, for which no publicly accessible corpus has been available so far) and an ideal source for variety-specific language modeling and downstream tasks, thus setting the stage for NLP-based World Englishes studies. Preliminary experiments on this corpus reveal the practical value of CCAE.

### Languages

Six Asian varieties of English: CHE, HKE, MCE, TWE, MYE, SGE

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
CCAE/CCAE-Corpus
[ "task_categories:text-classification", "task_categories:text-generation", "size_categories:100K<n<1M", "language:en", "license:cc-by-nc-nd-4.0", "region:us" ]
2023-05-12T07:17:59+00:00
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification", "text-generation"], "pretty_name": "Colorful Candies Are Exciting"}
2023-10-08T14:12:47+00:00
89bd33e221d4b76845b8eeec240c4e133d23c22e
# AI Horde Aesthetic and Artifact Ratings

A dataset of exported aesthetic and artifact ratings provided by the [AI Horde](https://aihorde.net) community through our [open ratings API](https://ratings.aihorde.net/api).

Each row in this dataset presents the rating for a single image from the [diffusiondb](https://poloclub.github.io/diffusiondb/). Each image UUID in this parquet will match the diffusiondb filename.

Each rating contains an aesthetic rating of 1-10, where 1 represents an image most raters found distasteful, and 10 an image most raters found very pleasing. This is an explicitly subjective rating.

Each rating also contains an artifact rating of 0-5, where 0 represents no artifacts or image disruption, and 5 represents a ruined image. This rating aims to be more objective.

The aim is for each image to be rated at least 5 times, so that a useful average can be ascertained.

While there are countermeasures to avoid bad actors, due to the open nature of the ratings API, some ratings might be random or malicious. However, due to the vast number of other valid ratings, the overarching trend should be towards accuracy. Nevertheless, if you notice any ratings which are obviously malicious, or users who are consistently fake-rating, please let us know and we'll clear them from this dataset.

# Structure

The columns in the dataset are as follows:

* ratings_count: How many times this image has been rated throughout this dataset.
* rating: The aesthetic (1-10) rating.
* kudos: The amount of kudos (i.e. priority) the user had at the moment of rating this image. Higher values represent users who have positively contributed to the AI Horde. This can be used to discover bad actors. (-50 are anonymous ratings.)
* account_age: How old the user account is. This can be used to discover bad actors.
* usage_requests: How many images this user had generated at the moment of rating this image. This can be used to discover bad actors.
* created_at: When this rating was added.
* client_agent: The client which was used to provide this rating. Unknown clients are more suspicious. This can be used to discover bad actors.
* artifacts: The artifact (0-5) rating.
* user_id: The hashed id of the user who provided this rating.
* trusted: If true, this user has been trusted by the horde by generating images or text for others for a long time.
* validated: If true, this user's ratings have been manually validated by one of the AI Horde moderators.
* captchas_failed: How many captchas this user has failed. This can be used to discover bad actors. This is cumulative with succeeded captchas, so a negative amount means that many more captchas succeeded than failed.
* country: From which country the rating originated. This can be used to create location-based rating models.

# Use cases

* [Clip-based aesthetic scorer](https://github.com/kenjiqq/aesthetics-scorer) ([Huggingface Demo](https://huggingface.co/spaces/kenjiqq/aesthetics-scorer))
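A minimal sketch of filtering out likely bad actors and summarising the two rating scales, assuming the table has been downloaded as a local parquet file (the path is a placeholder; only columns listed above are used):

```python
import pandas as pd

df = pd.read_parquet("ratings.parquet")  # hypothetical local export of this dataset

# Keep ratings from trusted or moderator-validated accounts that have not failed captchas.
filtered = df[(df["trusted"] | df["validated"]) & (df["captchas_failed"] <= 0)]

# Distribution of aesthetic and artifact scores after filtering.
print(filtered[["rating", "artifacts"]].describe())

# Mean aesthetic rating per country, e.g. for location-based rating models.
print(filtered.groupby("country")["rating"].mean().sort_values(ascending=False).head())
```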
Haidra-Org/AI-Horde-Ratings
[ "language:en", "license:cc-by-sa-4.0", "ratings", "stable diffusion", "aesthetic", "artifacts", "region:us" ]
2023-05-12T07:37:47+00:00
{"language": ["en"], "license": "cc-by-sa-4.0", "pretty_name": "AI Horde Ratings", "tags": ["ratings", "stable diffusion", "aesthetic", "artifacts"]}
2024-02-16T23:04:43+00:00
c9b3269c03db2a04a86bc229c6d447dc562d0201
Prost Aktiv - Throughout Italy there are millions of men who suffer from prostate complaints such as inflammation (prostatitis) and enlargement (benign prostatic hypertrophy) of the prostate, and although the initial treatment must be prescribed by your doctor, there are also dietary supplements that can support the well-being of this gland, which is so important to the human body. Note that dietary supplements never replace the medication prescribed by a specialist, but in many cases they can help, on an ad hoc basis, to regain lost well-being. When the burning, the pain, and the frequency and urgency of urination become unbearable, it is time to act.

➢ Product name – Prost Aktiv
➢ Side effects – No major side effects
➢ Results – In 1-2 months
➢ Availability – Online
➢ Official website: ►► CLICK HERE ◄

Buy directly here...

https://www.facebook.com/prostaktiverfahrungen/
https://www.facebook.com/groups/prostaktiv
https://www.facebook.com/groups/prostaktiverfahrungen
https://www.facebook.com/groups/prostaktivkapseln
https://www.apsense.com/article/prost-aktiv-prost-aktiv-erfahrungen-prost-aktiv-kapseln.html
https://www.apsense.com/page/prost-aktiv-erfahrungen-prost-aktiv-aktiv-kapseln
https://groups.google.com/g/prost-aktiv/c/OGeMcbxbvfY
https://sites.google.com/view/prost-aktiv-erfahrungen/home
https://soundcloud.com/aditi-sharma-326650335/prost-aktiv
https://soundcloud.com/aditi-sharma-326650335/prost-aktiv-erfahrungen
https://soundcloud.com/aditi-sharma-326650335/prost-aktiv-kapseln
https://www.linkedin.com/pulse/prost-aktiv-erfahrungen-kapseln-web-press-global/
https://in.pinterest.com/pin/898890406858780353/
https://in.pinterest.com/pin/898890406858780293/
https://in.pinterest.com/pin/898890406858780376
https://www.scoop.it/topic/prost-aktiv-erfahrungen/p/4143598067/2023/05/12/prost-aktiv-erfahrungen
https://www.scoop.it/topic/prost-aktiv-erfahrungen/p/4143597097/2023/05/12/prost-aktiv
https://www.scoop.it/topic/prost-aktiv-erfahrungen/p/4143596820/2023/05/12/prost-aktiv-kapseln
prostaktiverfahrungen/prostaktiverfahrungen
[ "region:us" ]
2023-05-12T07:54:29+00:00
{}
2023-05-12T07:55:39+00:00
0808de3ea0b531799ce5f97ed00955adc12036ff
# Dataset Card for fifa_2022

### Dataset Summary

A text corpus dataset about the 2022 FIFA World Cup.

## Additional Information

### Citation Information

```
@misc{ enwiki:1154298520,
    author = "{Wikipedia contributors}",
    title = "2022 FIFA World Cup --- {Wikipedia}{,} The Free Encyclopedia",
    year = "2023",
    url = "https://en.wikipedia.org/w/index.php?title=2022_FIFA_World_Cup&oldid=1154298520"
}
```
krinal/fifa_2022
[ "task_categories:summarization", "task_categories:text-generation", "task_categories:question-answering", "language:en", "license:apache-2.0", "region:us" ]
2023-05-12T08:41:41+00:00
{"language": ["en"], "license": "apache-2.0", "task_categories": ["summarization", "text-generation", "question-answering"]}
2023-05-12T09:05:11+00:00
72f6829ee3d9c663f74021715ddadbf5415234b2
nxgiz/test
[ "license:mit", "region:us" ]
2023-05-12T10:13:51+00:00
{"license": "mit"}
2023-05-12T10:15:29+00:00
9a1634bbbe27215c7e3d5476857b188645d98119
ggxxii-AI/testing
[ "region:us" ]
2023-05-12T10:15:22+00:00
{}
2023-05-12T10:41:52+00:00
6c3a4ae18c7f07fda99312814e0bf4f7cab3fd85
# Dataset Card for "cup_it_ds_split_with_lang" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ummagumm-a/cup_it_ds_split_with_lang
[ "region:us" ]
2023-05-12T10:33:41+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "comments", "list": [{"name": "score", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "lang", "dtype": "string"}, {"name": "lang_score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 217538069, "num_examples": 79296}, {"name": "validation", "num_bytes": 24388917, "num_examples": 8811}, {"name": "test", "num_bytes": 39959748, "num_examples": 14004}], "download_size": 178372606, "dataset_size": 281886734}}
2023-05-12T12:31:18+00:00
bffeeb05c932c2dfb7b3ec2f8e9643cfaa0d5b53
# Dataset Card for "medsam-vit-base-cancer-dummy-data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
masapasa/medsam-vit-base-cancer-dummy-data
[ "region:us" ]
2023-05-12T10:34:57+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 42431652.0, "num_examples": 130}], "download_size": 10004142, "dataset_size": 42431652.0}}
2023-05-12T10:35:03+00:00
5383e9c41b64eddc605d4792c4749c603648f6cd
alex-medvedev-msc/chromatin3D
[ "license:apache-2.0", "region:us" ]
2023-05-12T10:38:34+00:00
{"license": "apache-2.0"}
2023-05-12T10:38:34+00:00