Columns (dataset-viewer schema):
- sha: string (length 40)
- text: string (length 0 to 13.4M)
- id: string (length 2 to 117)
- tags: list
- created_at: string (length 25)
- metadata: string (length 2 to 31.7M)
- last_modified: string (length 25)
85afd7905aff4cb28943721ba03a719544460e03
This dataset is HuggingFaceH4/instruction-dataset, unfiltered: 10 instances of blatant alignment were removed, leaving 317 instructions. clean.py was first run on https://huggingface.co/datasets/HuggingFaceH4/instruction-dataset/blob/fc50699d275ed86a072f1166f7390473176719fd/step3-eval.jsonl, and the output was renamed from .jsonl to .json. The usual dedupe.py script found no duplicates here. Inspired by https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered. All credit to anon8231489123 for the cleanup script, which I adapted into wizardlm_clean.py and then adapted again into clean.py.
ewof/hf-instruction-unfiltered
[ "region:us" ]
2023-05-10T01:37:57+00:00
{}
2023-05-13T02:53:15+00:00
7debe71f9ec341529b302a5987b25c3d52062c0c
This dataset is HuggingFaceH4/asss, unfiltered and deduped: 29 instances of blatant alignment and 21 duplicates were removed, leaving 398 instructions. clean.py was first run on https://huggingface.co/datasets/HuggingFaceH4/asss/blob/122c06526f06a25c54614c1f5c63b4c3f51e96fb/surge-scale-si-aws-dataset.jsonl, then dedupe.py was run on the result, and the output was renamed from .jsonl to .json. Inspired by https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered. All credit to anon8231489123 for the cleanup script, which I adapted into wizardlm_clean.py and then adapted again into clean.py.
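A minimal sketch of the kind of exact-match deduplication a script like dedupe.py might perform; the field name `instruction` and the matching criterion are assumptions for illustration, not the actual script:

```python
def dedupe(records, key="instruction"):
    """Keep only the first record for each exact value of `key`."""
    seen = set()
    unique = []
    for rec in records:
        value = rec.get(key, "")
        if value not in seen:
            seen.add(value)
            unique.append(rec)
    return unique

records = [
    {"instruction": "Name three primary colors."},
    {"instruction": "Name three primary colors."},
    {"instruction": "Translate 'hello' into French."},
]
print(len(dedupe(records)))  # 2
```

Fuzzy duplicates (same question, different phrasing) would need a stronger criterion, e.g. normalized text or embedding similarity.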
ewof/asss-unfiltered-deduped
[ "region:us" ]
2023-05-10T01:47:43+00:00
{}
2023-05-13T02:54:20+00:00
b272a70d18dfe10ded959dc8a31ea89461355acb
# Dataset Card for "THUCNewsText"

This is a clone of [seamew/THUCNewsText](https://huggingface.co/datasets/seamew/THUCNewsText), made to work around Google Drive being unreachable from mainland China (requests fail with 443 connection errors).

```python
from datasets import load_dataset
datasets = load_dataset("seamew/THUCNewsText")
datasets.push_to_hub("oyxy2019/THUCNewsText")
```
oyxy2019/THUCNewsText
[ "region:us" ]
2023-05-10T01:59:44+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "education", "1": "entertainment", "2": "fashion", "3": "finance", "4": "game", "5": "politic", "6": "society", "7": "sport", "8": "stock", "9": "technology"}}}}], "splits": [{"name": "train", "num_bytes": 126435258, "num_examples": 50000}, {"name": "validation", "num_bytes": 12851939, "num_examples": 5000}, {"name": "test", "num_bytes": 25321290, "num_examples": 9890}], "download_size": 110495565, "dataset_size": 164608487}}
2023-05-10T02:05:21+00:00
0d95099ea03b2766b5c4894dd90b7835aed08cf2
az8720255/stable_diffusion_models
[ "license:other", "region:us" ]
2023-05-10T02:10:07+00:00
{"license": "other"}
2023-05-11T12:32:47+00:00
4d0740249de27f836d772618e5211bd32c25cffc
billxbf/rewoo-instruction-finetuning
[ "license:mit", "region:us" ]
2023-05-10T02:23:29+00:00
{"license": "mit"}
2023-05-15T01:43:54+00:00
321b3c39b74ac3b0c08ec825a788514416c0544f
evelyn168/gept
[ "license:cc-by-3.0", "region:us" ]
2023-05-10T03:21:32+00:00
{"license": "cc-by-3.0"}
2023-05-10T06:07:33+00:00
81f736360aeb38b71b116a8ff3d542a3a9ab91c7
skrishna/coin_flip
[ "license:mit", "region:us" ]
2023-05-10T03:54:00+00:00
{"license": "mit"}
2023-05-12T11:58:10+00:00
aab7a4872cbb52bc1e9f0579d7cc0ba758925b38
yanmc/lora-dataset
[ "license:openrail", "region:us" ]
2023-05-10T04:17:41+00:00
{"license": "openrail"}
2023-05-10T04:17:41+00:00
f4d03d5ee3d1e302a9b4200e71231c46013eaeb2
wics/strategy-qa
[ "license:other", "region:us" ]
2023-05-10T04:53:26+00:00
{"license": "other"}
2023-05-10T05:12:13+00:00
d548acf47f1f6b23bbceff696046c4e87b86ae7e
ShawnGGG/models
[ "license:openrail", "region:us" ]
2023-05-10T04:55:19+00:00
{"license": "openrail"}
2023-05-15T10:22:26+00:00
8323a5248b9abfb9e5aa12293cea1977cf8ee203
# Dataset Card for "IMDB_helm" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lighteval/IMDB_helm
[ "region:us" ]
2023-05-10T06:26:09+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "reference", "dtype": "string"}, {"name": "contrast_input", "dtype": "string"}, {"name": "contrast_references", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 32956831, "num_examples": 25000}, {"name": "train", "num_bytes": 34049134, "num_examples": 25000}], "download_size": 43257347, "dataset_size": 67005965}}
2023-05-10T07:17:01+00:00
dfecd585c9b5c8fe56e8d7809d5b3cf710668dff
# RVL-CDIP Small-200 Dataset ## Dataset Summary This is a subset of the RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset, containing 200 samples per class for a total of 3,200 samples. The dataset consists of scanned document images in TIFF format, collected from various sources. The documents belong to 16 different categories, such as letter, memo, email, and more. The purpose of this dataset is to facilitate document classification tasks using NLP and computer vision techniques. ## Supported Tasks and Leaderboards - **Document Classification**: This dataset can be used for document classification tasks where the goal is to predict the correct category for each document image. No specific leaderboard is associated with this dataset. ## Languages The dataset contains documents in English. ## Dataset Structure ### Data Instances A data instance consists of a TIFF image file representing a scanned document and its corresponding label indicating the document category. ### Data Fields - `image`: A TIFF image file representing a scanned document. - `label`: A string representing the category of the document (e.g., "letter", "memo", "email", etc.). ### Data Splits The dataset is split into two subsets: - Training set: Contains 200 samples per class, totaling 3,200 samples. - Validation set: Contains a smaller number of samples per class. ## Dataset Creation ### Curation Rationale This subset of the RVL-CDIP dataset was created to provide a smaller and more manageable dataset for researchers and practitioners who want to experiment with document classification tasks without the computational overhead of the full dataset. ### Source Data The dataset is a subset of the [RVL-CDIP dataset](https://www.cs.cmu.edu/~aharley/rvl-cdip/), which contains 400,000 grayscale images in 16 classes, with 25,000 images per class. ### Annotations The dataset labels were derived from the original RVL-CDIP dataset. 
Each image file is associated with a label indicating its document category. ## Personal and Sensitive Information The dataset may contain personal or sensitive information, such as names, addresses, phone numbers, or email addresses. Users should take this into consideration when using the dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset can be used to develop models for document classification tasks, which can benefit a wide range of applications, such as document management systems, content analysis, and information retrieval. ### Discussion of Biases The dataset may contain biases due to the limited number of samples per class and the fact that the documents are sourced from different domains. These biases may affect the generalizability of models trained on this dataset. ### Other Known Limitations As this dataset is a small subset of the RVL-CDIP dataset, it may not be as representative or diverse as the full dataset. Additionally, the dataset only contains English documents, which may limit its applicability to other languages. ## Additional Information ### Licensing Please refer to the [RVL-CDIP dataset website](https://www.cs.cmu.edu/~aharley/rvl-cdip/) for information on licensing and usage restrictions. ### Citation Information If you use this dataset, please cite the following paper: @inproceedings{harley2015evaluation, title={An evaluation of deep learning techniques for document image classification}, author={Harley, Adam W and Ufkes, Alex and Derpanis, Konstantinos G}, booktitle={2015 13th International Conference on Document Analysis and Recognition (ICDAR)}, pages={991--995}, year={2015}, organization={IEEE} } ### Contact Information For questions regarding the dataset, please refer to the [RVL-CDIP dataset website](https://www.cs.cmu.edu/~aharley/rvl-cdip/) for contact information. ### Acknowledgements This dataset is a subset of the RVL-CDIP dataset created by Adam W. Harley, Alex Ufkes, and Konstantinos G. 
Derpanis at the Ryerson Vision Lab (RVL), Ryerson University. The dataset creation was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC).
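The 200-samples-per-class construction described above amounts to a stratified subsampling pass over the full corpus; the sketch below is a hypothetical reconstruction (file names and the seed are invented), not the curators' actual script:

```python
import random
from collections import defaultdict

def subsample_per_class(samples, per_class=200, seed=0):
    """Pick up to `per_class` items for each label, shuffled within classes."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for path, label in samples:
        by_label[label].append(path)
    subset = []
    for label, paths in sorted(by_label.items()):
        rng.shuffle(paths)
        subset.extend((p, label) for p in paths[:per_class])
    return subset

# Synthetic stand-in for the full corpus: 3 of the 16 classes, 500 images each.
full = [(f"img_{label}_{i}.tif", label)
        for label in ("letter", "memo", "email")
        for i in range(500)]
small = subsample_per_class(full, per_class=200)
print(len(small))  # 600
```

With all 16 RVL-CDIP classes this yields the 3,200-sample training set described in the card.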
vaclavpechtor/rvl_cdip-small-200
[ "region:us" ]
2023-05-10T06:34:29+00:00
{}
2023-05-10T06:36:15+00:00
8d8e6c99165038ccd453d3125ced500cad97203f
``` @inproceedings{chakrabarty-etal-2021-figurative, title = "Figurative Language in Recognizing Textual Entailment", author = "Chakrabarty, Tuhin and Ghosh, Debanjan and Poliak, Adam and Muresan, Smaranda", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.297", pages = "3354--3361", } ```
metaeval/figurative-nli
[ "region:us" ]
2023-05-10T06:34:56+00:00
{}
2023-05-10T06:35:44+00:00
16b3faeff2312f237cea9e3e57f39193dcb5fe7c
# Dataset Card for "rmh_tokenized_2048" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
stoddur/rmh_tokenized_2048
[ "region:us" ]
2023-05-10T06:38:20+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 22982206896, "num_examples": 2804076}], "download_size": 3578440618, "dataset_size": 22982206896}}
2023-05-10T07:41:19+00:00
e3277973eb38ce39e2f2b6ed22d14909d3dba2c6
## Contents

Contains roughly 3.5 million Chinese instruction-following examples generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.

## Samples

```
{ "id":"66623028", "conversations":[ { "from":"human", "value":"归纳介绍三种不同类型的电影题材" }, { "from":"assistant", "value":"1. 动作片:动作片通常强调动作和战斗场面,故事情节紧凑,充满了紧张和兴奋的氛围。主角一般是具有超凡实力或特殊技能的英雄人物,往往会与反派角色进行一系列激烈的格斗,追逐和爆破等酷炫场面。例如《速度与激情》和《复仇者联盟》等。\n2. 爱情片:爱情片是以爱情为主题的电影,着重呈现角色之间的感情纠葛,展现了爱情的美好和悲壮。情节与角色之间的发展令人深思,增加了观众的感性体验。这种电影的故事情节主要以人类与人类之间的情感变化为主题。例如《泰坦尼克号》和《让子弹飞》等。\n3. 科幻片:科幻片是以未来科技和未知世界为主题的电影,通常包括时间旅行、外星人、虚拟现实、未来社会等各种奇幻的元素。这种电影描绘了一种比现实更加宏伟、更加神秘和惊奇的世界。例如《星际穿越》和《发条橙》等。" } ] } { "id":"18205437", "conversations":[ { "from":"human", "value":"根据给定的食材,编写一道美食菜谱。\n鸡腿肉、蘑菇、洋葱" }, { "from":"assistant", "value":"鸡肉炒蘑菇洋葱\n食材:\n- 鸡腿肉 400克,去骨去皮切成小块\n- 蘑菇 200克,切片\n- 洋葱 1个,切丝\n- 生姜 适量,切丝\n- 酱油 2勺\n- 料酒 1勺\n- 盐 适量\n- 白胡椒粉 适量\n- 植物油 适量\n做法:\n1. 在一个碗里放入切好的鸡腿肉块,加入酱油、料酒和适量的白胡椒粉,搅拌均匀,腌制10分钟。\n2. 热锅凉油,加入姜丝翻炒,放入腌好的鸡肉块翻炒至变色。\n3. 加入洋葱丝和蘑菇,翻炒2-3分钟至蘑菇变软。\n4. 加入适量的盐和少量的水,盖上锅盖焖煮5分钟直至鸡肉熟透。\n5. 最后翻炒几下,使汤汁变浓稠即可。\n温馨提示:\n1. 在腌制鸡肉时,如果想让肉质更嫩滑,可以加入少量的淀粉和蛋清。\n2. 可以根据个人口味加入其他调料,如蒜末、辣椒等。\n3. 洋葱和蘑菇可以用其他蔬菜代替,如青椒、胡萝卜等。" }, { "from":"human", "value":"太好了,我会试着照着你教的做一次,但是我最近买了一部新相机,想让你推荐一些好的拍照地点。" }, { "from":"assistant", "value":"当地的风景区、公园、景点等等都是比较适合拍照的地点。如果你想要一些特别的地方,可以考虑一些建筑群,街道巷子、咖啡馆、书店等具有文艺气息的地方。" }, { "from":"human", "value":"嗯,这些地方都很不错,但是我还想找一些可以体验当地文化和风俗的地点,你有什么好的建议吗?" }, { "from":"assistant", "value":"如果你想要更深入地了解当地文化和风俗,可以考虑参加一些游学活动,比如参观当地的博物馆、参加民俗节庆、尝试当地美食等等。此外,也可以去一些小城镇,感受那里的古朴风情。" } ] }
```

### Fields:

```
id: example id
conversations: the example content, given as a conversation; includes both multi-turn and single-turn dialogues
```

### Addendum:

An instruction-category field covering 13 categories has been added for this dataset; for details see [train_3.5M_CN_With_Category](https://huggingface.co/datasets/BELLE-2/train_3.5M_CN_With_Category).

## Usage restrictions

This dataset, and anything derived from it, may be used for research purposes only; commercial use, and any other use that could harm society, is prohibited.

This dataset does not represent the position, interests, or views of any party, and is unrelated to claims of any kind by any group. This project accepts no liability for any damage or dispute arising from the use of this dataset.
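The `conversations` structure above can be flattened into (prompt, response) training pairs; a minimal sketch, where the field names match the card but the turn-pairing logic is an assumption about how the data is typically consumed:

```python
def to_pairs(record):
    """Pair each human turn with the assistant turn that follows it."""
    turns = record["conversations"]
    pairs = []
    for human, assistant in zip(turns[::2], turns[1::2]):
        if human["from"] == "human" and assistant["from"] == "assistant":
            pairs.append((human["value"], assistant["value"]))
    return pairs

record = {
    "id": "0",
    "conversations": [
        {"from": "human", "value": "Q1"},
        {"from": "assistant", "value": "A1"},
        {"from": "human", "value": "Q2"},
        {"from": "assistant", "value": "A2"},
    ],
}
print(to_pairs(record))  # [('Q1', 'A1'), ('Q2', 'A2')]
```

For multi-turn training one would usually keep the earlier turns as context rather than splitting them into independent pairs.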
BelleGroup/train_3.5M_CN
[ "task_categories:text2text-generation", "size_categories:1M<n<10M", "language:zh", "license:gpl-3.0", "region:us" ]
2023-05-10T06:45:23+00:00
{"language": ["zh"], "license": "gpl-3.0", "size_categories": ["1M<n<10M"], "task_categories": ["text2text-generation"]}
2023-08-16T05:13:59+00:00
08f13e7e63ab9bfbd8c0b40955defe3bb7f68c2b
# Dataset Card for ParlamentoPT ### Dataset Summary The ParlamentoPT is a **Portuguese** language data set obtained by collecting publicly available documents containing transcriptions of debates in the Portuguese Parliament. The data was collected from the Portuguese Parliament portal in accordance with its [open data policy](https://www.parlamento.pt/Cidadania/Paginas/DadosAbertos.aspx). This dataset was collected with the purpose of creating the [Albertina-PT*](https://huggingface.co/PORTULAN/albertina-ptpt) language model, and it serves as training data for model development. The development of the model is a collaborative effort between the University of Lisbon and the University of Porto in Portugal </br> # Citation When using or citing this data set, kindly cite the following [publication](https://arxiv.org/abs/2305.06721): ``` latex @misc{albertina-pt, title={Advancing Neural Encoding of Portuguese with Transformer Albertina PT-*}, author={João Rodrigues and Luís Gomes and João Silva and António Branco and Rodrigo Santos and Henrique Lopes Cardoso and Tomás Osório}, year={2023}, eprint={2305.06721}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <br> # Acknowledgments The research reported here was partially supported by: PORTULAN CLARIN—Research Infrastructure for the Science and Technology of Language, funded by Lisboa 2020, Alentejo 2020 and FCT—Fundação para a Ciência e Tecnologia under the grant PINFRA/22117/2016; research project ALBERTINA - Foundation Encoder Model for Portuguese and AI, funded by FCT—Fundação para a Ciência e Tecnologia under the grant CPCA-IAC/AV/478394/2022; innovation project ACCELERAT.AI - Multilingual Intelligent Contact Centers, funded by IAPMEI, I.P. 
- Agência para a Competitividade e Inovação under the grant C625734525-00462629, of Plano de Recuperação e Resiliência, call RE-C05-i01.01 – Agendas/Alianças Mobilizadoras para a Reindustrialização; and LIACC - Laboratory for AI and Computer Science, funded by FCT—Fundação para a Ciência e Tecnologia under the grant FCT/UID/CEC/0027/2020.
PORTULAN/parlamento-pt
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:pt", "license:other", "parlamentopt", "parlamento", "parlamento-pt", "albertina-pt*", "albertina-ptpt", "albertina-ptbr", "fill-mask", "bert", "deberta", "portuguese", "encoder", "foundation model", "arxiv:2305.06721", "region:us" ]
2023-05-10T07:15:20+00:00
{"annotations_creators": ["no-annotation"], "language": ["pt"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "ParlamentoPT", "tags": ["parlamentopt", "parlamento", "parlamento-pt", "albertina-pt*", "albertina-ptpt", "albertina-ptbr", "fill-mask", "bert", "deberta", "portuguese", "encoder", "foundation model"]}
2023-05-12T05:34:53+00:00
629a0cb211e14a6a2be16b478d8db80d509ed654
# Modern Chinese Word Frequency Statistics

Word-frequency statistics for the Academia Sinica Balanced Corpus of Modern Chinese, which covers modern Chinese texts of many genres (about 5 million words, over 200,000 sentences, roughly 140,000 entries), together with each word's part-of-speech tag, sorted by frequency of occurrence.

Source: [Global Chinese Digital Teaching and Learning Resource Center, Institute of Linguistics, Academia Sinica](https://elearning.ling.sinica.edu.tw/index.html). For personal research use only.

## Fields

* `no` — sequence number
* `rank` — frequency rank
* `word` — the word
* `pos` — part of speech; see the table below
* `frequency` — frequency (number of occurrences)
* `percent` — frequency as a percentage
* `cumulation` — cumulative frequency percentage

### POS tags

- `A` — non-predicative adjective
- `D` — adverb
- `Da` — quantity adverb
- `Dfa` — pre-verbal adverb of degree
- `Dfb` — post-verbal adverb of degree
- `Dk` — sentential adverb
- `Di` — aspect marker
- `Caa` — coordinating conjunction, e.g. 和, 跟
- `Cbb` — correlative conjunction
- `Nep` — anaphoric determiner
- `Neqa` — quantitative determiner
- `Nes` — specific determiner
- `Neu` — numeral determiner
- `FW` — foreign word
- `Nf` — measure word (classifier)
- `Na` — common noun
- `Nb` — proper noun
- `Nc` — place noun
- `Ncd` — localizer
- `Nd` — time noun
- `Nh` — pronoun
- `P` — preposition
- `Cab` — conjunction, e.g. 等等
- `Cba` — conjunction, e.g. 的話
- `Neqb` — postposed quantitative determiner
- `Ng` — postposition
- `DE` — 的, 之, 得, 地
- `I` — interjection
- `T` — particle
- `VA` — active intransitive verb
- `VB` — active pseudo-transitive verb
- `VH` — stative intransitive verb
- `VI` — stative pseudo-transitive verb
- `SHI` — 是 (copula)
- `VAC` — active causative verb
- `VC` — active transitive verb
- `VCL` — active verb taking a locative object
- `VD` — ditransitive verb
- `VE` — active verb taking a sentential object
- `VF` — active verb taking a verbal object
- `VG` — classificatory verb
- `VHC` — stative causative verb
- `VJ` — stative transitive verb
- `VK` — stative verb taking a sentential object
- `VL` — stative verb taking a verbal object
- `V_2` — 有 (to have)
- `Nv` — nominalized verb
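The `percent` and `cumulation` columns can be reproduced from the raw counts; a minimal sketch of that computation (not the original tooling, and the toy counts below are invented):

```python
def add_frequency_stats(counts):
    """Given (word, count) pairs sorted by count descending, attach
    rank, percent, and cumulative-percent columns."""
    total = sum(c for _, c in counts)
    rows, cumulative = [], 0.0
    for rank, (word, count) in enumerate(counts, start=1):
        percent = 100.0 * count / total
        cumulative += percent
        rows.append({"rank": rank, "word": word, "frequency": count,
                     "percent": percent, "cumulation": cumulative})
    return rows

rows = add_frequency_stats([("的", 50), ("是", 30), ("有", 20)])
print(rows[0]["percent"], rows[-1]["cumulation"])  # 50.0 100.0
```

The last row's `cumulation` is always 100% by construction, which is a handy sanity check on the dump.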
zetavg/tw-sinica-corpus-word-frequency
[ "language:zh", "region:us" ]
2023-05-10T07:17:01+00:00
{"language": ["zh"]}
2023-05-10T12:27:35+00:00
2989899dd5bed83cf9bd17cf9fff9889705436e2
# Dataset Card for Huatuo_encyclopedia_qa

## Dataset Description

- **Homepage:** https://www.huatuogpt.cn/
- **Repository:** https://github.com/FreedomIntelligence/HuatuoGPT
- **Paper:** https://arxiv.org/abs/2305.01526
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset has a total of 364,420 pieces of medical QA data, some of which include the same question phrased in multiple ways. We extract medical QA pairs from plain texts (e.g., medical encyclopedias and medical articles). We collected 8,699 encyclopedia entries for diseases and 2,736 encyclopedia entries for medicines from Chinese Wikipedia. Moreover, we crawled 226,432 high-quality medical articles from the Qianwen Health website.

## Dataset Creation

### Source Data

https://zh.wikipedia.org/wiki/ https://51zyzy.com/

## Citation

```
@misc{li2023huatuo26m, title={Huatuo-26M, a Large-scale Chinese Medical QA Dataset}, author={Jianquan Li and Xidong Wang and Xiangbo Wu and Zhiyi Zhang and Xiaolong Xu and Jie Fu and Prayag Tiwari and Xiang Wan and Benyou Wang}, year={2023}, eprint={2305.01526}, archivePrefix={arXiv}, primaryClass={cs.CL} }
```
FreedomIntelligence/huatuo_encyclopedia_qa
[ "task_categories:text-generation", "size_categories:100K<n<1M", "language:zh", "license:apache-2.0", "medical", "arxiv:2305.01526", "region:us" ]
2023-05-10T07:30:14+00:00
{"language": ["zh"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "tags": ["medical"]}
2023-05-17T02:20:55+00:00
e9c5be05a3fa288c43a8c3d182ead2494b7c0721
# Eurlex multilang pile Status: WIP. All the EUR-Lex data has been crawled in the "crawl" branch; the next step is to parse, dedupe, and clean it up for training usage. **Acknowledgments** Compute for the crawling process was sponsored by [https://www.izx.ai/](https://www.izx.ai/)
Babel-Pile/eurlex-multilang-pile
[ "license:apache-2.0", "region:us" ]
2023-05-10T07:35:15+00:00
{"license": "apache-2.0"}
2023-06-20T06:22:13+00:00
18876b0aba2ddb7e21cf5afb6256a5b33eed3351
qq997978569/WebUI
[ "license:other", "region:us" ]
2023-05-10T07:46:54+00:00
{"license": "other"}
2023-05-14T08:47:36+00:00
39f2b2f75ea9240327e316b25f88df99b7c92bbe
qherreros/gpteach-dataset
[ "size_categories:10K<n<100K", "language:en", "region:us" ]
2023-05-10T08:07:01+00:00
{"language": ["en"], "size_categories": ["10K<n<100K"], "dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6245894, "num_examples": 15998}], "download_size": 2483510, "dataset_size": 6245894}}
2023-05-11T06:38:25+00:00
908f272e4b8d93f08a4a559c920cc43146c8cbc0
# Dataset Card for "thai_wikipedia_clean_20230101" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) Thai Wikipedia database dumps converted to plain text for NLP work. This dataset was dumped on 1 January 2023 from [Thai Wikipedia](https://th.wikipedia.org). - GitHub: [PyThaiNLP / ThaiWiki-clean](https://github.com/PyThaiNLP/ThaiWiki-clean) - Notebook for upload to HF: [https://github.com/PyThaiNLP/ThaiWiki-clean/blob/main/thai_wikipedia_clean_20230101_hf.ipynb](https://github.com/PyThaiNLP/ThaiWiki-clean/blob/main/thai_wikipedia_clean_20230101_hf.ipynb)
pythainlp/thai_wikipedia_clean_20230101
[ "task_categories:text-generation", "language:th", "license:cc-by-sa-3.0", "region:us" ]
2023-05-10T08:26:27+00:00
{"language": ["th"], "license": "cc-by-sa-3.0", "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 686139541, "num_examples": 1436054}], "download_size": 260540997, "dataset_size": 686139541}}
2023-05-10T08:34:48+00:00
3a46d58baf7336f4de9399daaf957845fafc04e0
# Dataset Card for "myst_pf_ot50" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
rishabhjain16/myst_pf_ot50
[ "region:us" ]
2023-05-10T08:26:59+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8509570768.06, "num_examples": 19332}, {"name": "test", "num_bytes": 1447570290.631, "num_examples": 3317}], "download_size": 8974808612, "dataset_size": 9957141058.691}}
2023-05-10T11:18:19+00:00
dac9ae69414e94768d58464c9f0c7ec0842d8068
# Dataset Card for "condition_to_drug" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
P3ps/condition_to_drug
[ "region:us" ]
2023-05-10T08:31:49+00:00
{"dataset_info": {"features": [{"name": "drugName", "dtype": "string"}, {"name": "condition", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3438399.8732642303, "num_examples": 100587}, {"name": "validation", "num_bytes": 534429.7245546674, "num_examples": 15399}, {"name": "test", "num_bytes": 1111779.385698873, "num_examples": 32553}], "download_size": 1068870, "dataset_size": 5084608.983517771}}
2023-05-10T08:31:58+00:00
258b35c5a58e5f6ea3218afcc32223c3c82e6f1a
## Akatsuki's Cantonese Singing Dataset ##

----

Please read the [License](https://huggingface.co/datasets/Akatsuki-Amemiya/Akatsuki_Cantonese_Singing#license) before use.

After submitting an application, please send an email to [email protected] so it can be reviewed and approved manually. HF does notify me by email when you apply, but I will ignore that notification; an application accompanied only by the HF email will be ignored.

----

### License ###

----

(Originally written in Chinese, with English and Japanese translations provided by ChatGPT; in case of any discrepancy, the Chinese version prevails.)

Before using this dataset, you must strictly abide by the following terms. If you do not agree to these terms, do not use this dataset.

1. Rights Authorization: The owner of this dataset (hereinafter "the author") grants you a non-exclusive, non-transferable, and non-divisible license to use this dataset, and all results generated by using it, for commercial and non-commercial purposes. However, regardless of whether the use is commercial, you must indicate the source and author of the dataset, to allow others to obtain usage rights.

2. Sharing Returns: All public results generated by using this dataset (including published papers, research reports, software, algorithms, etc.) must be fully shared with the dataset author at no charge, including the complete local workflow, so that the dataset author can reproduce the public results locally.

3. Commercial Use: If you intend to use this dataset for commercial activities, you must inform the dataset author in advance and obtain the written consent of the dataset author. Commercial use includes but is not limited to selling the dataset or using the dataset for product development.

4. Usage Restrictions: Inferring the real identity of the data providers from the dataset is prohibited, as is using this dataset to produce any work promoting any political ideology. In case of violation, the dataset author has the right to take legal measures.

5. Disclaimer: This dataset is provided as-is, and the author makes no express or implied warranties as to the quality, applicability, and reliability of this dataset and the results generated by using it.
Akatsuki-Amemiya/Akatsuki_Cantonese_Singing
[ "size_categories:100B<n<1T", "language:zh", "license:other", "music", "region:us" ]
2023-05-10T08:36:41+00:00
{"language": ["zh"], "license": "other", "size_categories": ["100B<n<1T"], "tags": ["music"]}
2023-07-02T17:35:34+00:00
01aaada3bb52c10018dacabb48d132f3f47f9a5d
# Dataset Card for "wnut_17_syp" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jakot/wnut_17_syp
[ "region:us" ]
2023-05-10T08:40:04+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-corporation", "2": "I-corporation", "3": "B-creative-work", "4": "I-creative-work", "5": "B-group", "6": "I-group", "7": "B-location", "8": "I-location", "9": "B-person", "10": "I-person", "11": "B-product", "12": "I-product"}}}}], "splits": [{"name": "train", "num_bytes": 1060556, "num_examples": 3394}, {"name": "validation", "num_bytes": 259353, "num_examples": 1009}, {"name": "test", "num_bytes": 388062, "num_examples": 1287}, {"name": "train1", "num_bytes": 530278.0, "num_examples": 1697}, {"name": "train2", "num_bytes": 530278.0, "num_examples": 1697}, {"name": "val1", "num_bytes": 129547.98017839446, "num_examples": 504}, {"name": "val2", "num_bytes": 129805.01982160554, "num_examples": 505}], "download_size": 844238, "dataset_size": 3027880.0}}
2023-05-10T09:00:20+00:00
b6484243845f3b78ae6669c0d02e50fee1ea85e3
xedwin23x/DAFB
[ "license:unknown", "region:us" ]
2023-05-10T08:49:12+00:00
{"license": "unknown"}
2023-05-10T11:19:11+00:00
c585b845a6089f2239c3cff34018deebdc8e3149
bourneyz/dae3d
[ "license:openrail", "region:us" ]
2023-05-10T09:22:00+00:00
{"license": "openrail"}
2023-05-10T09:22:37+00:00
898149ae83133894f6b8e4654ef7a76ac1f7fd4b
Vanilla Luotuo (骆驼) is the first dataset and model launched by the Luotuo project, on March 21, 2023. We will gradually release more datasets on HF, including:

- [ ] a Chinese translation of COCO Caption
- [ ] a Chinese translation of CoQA
- [ ] embedding data for CNewSum
- [ ] augmented open-domain QA data
- [ ] a Chinese translation of WizardLM

If you are also preparing any of these datasets, please contact us so we can avoid paying for duplicated work.

# Luotuo (骆驼): Open-Source Chinese Large Language Models

[https://github.com/LC1332/Luotuo-Chinese-LLM](https://github.com/LC1332/Luotuo-Chinese-LLM)

The Luotuo project is an open-source Chinese large-language-model project comprising a series of language models, initiated by [冷子昂 (Ziang Leng)](https://blairleng.github.io) @ SenseTime, 陈启源 (Qiyuan Chen) @ Central China Normal University, and 李鲁鲁 (Cheng Li) @ SenseTime. (Note: [陈启源 (Qiyuan Chen)](https://qiyuan-chen.github.io/) is looking for an advisor for 2024 recommendation-based graduate admission; feel free to get in touch.)

The Luotuo project is **not** an official SenseTime product.

## Citation

Please cite the repo if you use the data or code in this repo.

```
@misc{alpaca, author={Ziang Leng, Qiyuan Chen and Cheng Li}, title = {Luotuo: An Instruction-following Chinese Language model, LoRA tuning on LLaMA}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/LC1332/Luotuo-Chinese-LLM}}, }
```
silk-road/Vanilla-chinese-alpaca-luotuo
[ "size_categories:10K<n<100K", "language:zh", "license:apache-2.0", "region:us" ]
2023-05-10T09:50:05+00:00
{"language": ["zh"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "pretty_name": "f"}
2023-05-12T22:17:41+00:00
c2501d5b5d3a70a0a37a9a733a692e6b76d8131f
Lucy from Cyberpunk: Edgerunners, at 512x512 resolution. This dataset is suitable for fine-tuning Stable Diffusion models with methods such as DreamBooth, LoRA, hypernetworks, textual inversion, and so on.
seasnake/lucy
[ "license:mit", "region:us" ]
2023-05-10T09:52:03+00:00
{"license": "mit"}
2023-05-10T14:51:01+00:00
4ad6075ad237a7a951d27172d13e3a48dbc6ceea
**CTMatch Information Retrieval Dataset**

This is a dataset of processed clinical trial documents, somewhat a duplicate of the one found in `datasets/ir_datasets`, except that these have been preprocessed with `ctproc` to clean the clinical trial documents and extract useful fields from them. Note: they are currently saved as text files because of the downstream task in ctmatch, though in the future they may be converted to .csv. Each .txt file has exactly 374648 lines of corresponding data:

`doc_texts.txt`: texts extracted from documents processed with `ctproc`, using the eligibility criteria fields only, structured as in this example from NCT00000102: "Inclusion Criteria: diagnosed with Congenital Adrenal Hyperplasia (CAH) normal ECG during baseline evaluation, Exclusion Criteria: history of liver disease, or elevated liver function tests history of cardiovascular disease"

`doc_categories.txt`: 1 x 14 vectors of somewhat arbitrarily chosen topic probabilities (softmax output), generated by the zero-shot classification model `facebook/bart-large-mnli` via CTMatch.category_model(doc['condition']), lexically ordered as: cancer, cardiac, endocrine, gastrointestinal, genetic, healthy, infection, neurological, other, pediatric, psychological, pulmonary, renal, reproductive

`doc_embeddings.txt`: 1 x 384 vectors of embeddings taken from the last hidden state of the model-encoded doc_text, using SentenceTransformers (`sentence-transformers/all-MiniLM-L6-v2`)

`index2docid.txt`: a simple mapping from index to NCT IDs, for filtering/reference throughout the IR program, in the same order as the vector and text representations
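Because the four files are line-aligned, a record can be assembled by reading them in parallel. A minimal sketch of that assembly; the file names come from the card, but the separators (commas for vectors, whitespace for the id mapping) are assumptions about the serialization:

```python
def load_ir_records(doc_texts, doc_categories, doc_embeddings, index2docid):
    """Zip four line-aligned sequences into one record per document."""
    records = []
    for text, cats, emb, docid in zip(doc_texts, doc_categories,
                                      doc_embeddings, index2docid):
        records.append({
            "nct_id": docid.strip().split()[-1],
            "text": text.strip(),
            "categories": [float(x) for x in cats.split(",")],
            "embedding": [float(x) for x in emb.split(",")],
        })
    return records

# In-memory stand-ins for one document's worth of lines:
recs = load_ir_records(
    ["Inclusion Criteria: diagnosed with CAH ..."],
    [",".join(["0.0714"] * 14)],   # 14 topic probabilities
    [",".join(["0.0"] * 384)],     # 384-dim sentence embedding
    ["0 NCT00000102"],             # index -> NCT ID
)
```

In practice each argument would be an open file handle rather than a list, which keeps the pass over 374,648 lines streaming.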
semaj83/ctmatch_ir
[ "license:mit", "region:us" ]
2023-05-10T09:55:24+00:00
{"license": "mit"}
2023-05-15T12:19:52+00:00
5ea84423e143da5d482351358e333427b83d8de5
# Dataset Card for "image2poetry_ru" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AnyaSchen/image2poetry_ru
[ "region:us" ]
2023-05-10T10:22:48+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "poetry", "dtype": "string"}, {"name": "author", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3662906662.07, "num_examples": 7755}], "download_size": 3661571473, "dataset_size": 3662906662.07}}
2023-05-19T09:03:37+00:00
6b471f8873a7005a58cbd9f877971e4bfe47c173
# Dataset Card for "LexFiles" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Specifications](#supported-tasks-and-leaderboards) ## Dataset Description - **Homepage:** https://github.com/coastalcph/lexlms - **Repository:** https://github.com/coastalcph/lexlms - **Paper:** https://arxiv.org/abs/2305.07507 - **Point of Contact:** [Ilias Chalkidis](mailto:[email protected]) ### Dataset Summary The LeXFiles is a new diverse English multinational legal corpus that we created including 11 distinct sub-corpora that cover legislation and case law from 6 primarily English-speaking legal systems (EU, CoE, Canada, US, UK, India). The corpus contains approx. 19 billion tokens. In comparison, the "Pile of Law" corpus released by Hendersons et al. (2022) comprises 32 billion in total, where the majority (26/30) of sub-corpora come from the United States of America (USA), hence the corpus as a whole is biased towards the US legal system in general, and the federal or state jurisdiction in particular, to a significant extent. ### Dataset Specifications | Corpus | Corpus alias | Documents | Tokens | Pct. | Sampl. (a=0.5) | Sampl. (a=0.2) | |-----------------------------------|----------------------|-----------|--------|--------|----------------|----------------| | EU Legislation | `eu-legislation` | 93.7K | 233.7M | 1.2% | 5.0% | 8.0% | | EU Court Decisions | `eu-court-cases` | 29.8K | 178.5M | 0.9% | 4.3% | 7.6% | | ECtHR Decisions | `ecthr-cases` | 12.5K | 78.5M | 0.4% | 2.9% | 6.5% | | UK Legislation | `uk-legislation` | 52.5K | 143.6M | 0.7% | 3.9% | 7.3% | | UK Court Decisions | `uk-court-cases` | 47K | 368.4M | 1.9% | 6.2% | 8.8% | | Indian Court Decisions | `indian-court-cases` | 34.8K | 111.6M | 0.6% | 3.4% | 6.9% | | Canadian Legislation | `canadian-legislation` | 6K | 33.5M | 0.2% | 1.9% | 5.5% | | Canadian Court Decisions | `canadian-court-cases` | 11.3K | 33.1M | 0.2% | 1.8% | 5.4% | | U.S. 
Court Decisions [1] | `us-court-cases` | 4.6M | 11.4B | 59.2% | 34.7% | 17.5% | | U.S. Legislation | `us-legislation` | 518 | 1.4B | 7.4% | 12.3% | 11.5% | | U.S. Contracts | `us-contracts` | 622K | 5.3B | 27.3% | 23.6% | 15.0% | | Total | `lexlms/lex_files` | 5.8M | 18.8B | 100% | 100% | 100% | [1] We consider only U.S. Court Decisions from 1965 onwards (cf. post Civil Rights Act), as a hard threshold for cases relying on severely outdated and in many cases harmful law standards. The rest of the corpora include more recent documents. [2] Sampling (Sampl.) ratios are computed following the exponential sampling introduced by Lample et al. (2019). Additional corpora not considered for pre-training, since they do not represent factual legal knowledge. | Corpus | Corpus alias | Documents | Tokens | |----------------------------------------|------------------------|-----------|--------| | Legal web pages from C4 | `legal-c4` | 284K | 340M | ### Usage Load a specific sub-corpus, given the corpus alias, as presented above. ```python from datasets import load_dataset dataset = load_dataset('lexlms/lex_files', name='us-court-cases') ``` ### Citation [*Ilias Chalkidis\*, Nicolas Garneau\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.* *LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.* *2023. In the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. 
Toronto, Canada.*](https://aclanthology.org/2023.acl-long.865/) ``` @inproceedings{chalkidis-etal-2023-lexfiles, title = "{L}e{XF}iles and {L}egal{LAMA}: Facilitating {E}nglish Multinational Legal Language Model Development", author = "Chalkidis, Ilias and Garneau, Nicolas and Goanta, Catalina and Katz, Daniel and S{\o}gaard, Anders", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-long.865", pages = "15513--15535", } ```
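The sampling ratios in the table above follow the exponential sampling of Lample et al. (2019): a sub-corpus with token share p_i is drawn with probability q_i = p_i^a / Σ_j p_j^a. A minimal sketch reproducing the U.S. Court Decisions column from the `Pct.` values in the table (small rounding differences against the table are expected):

```python
# Token shares (the "Pct." column) for the 11 LeXFiles sub-corpora, in table order.
token_pct = {
    "eu-legislation": 1.2, "eu-court-cases": 0.9, "ecthr-cases": 0.4,
    "uk-legislation": 0.7, "uk-court-cases": 1.9, "indian-court-cases": 0.6,
    "canadian-legislation": 0.2, "canadian-court-cases": 0.2,
    "us-court-cases": 59.2, "us-legislation": 7.4, "us-contracts": 27.3,
}

def sampling_ratios(shares, a):
    """Exponential sampling (Lample et al., 2019): q_i = p_i**a / sum_j p_j**a."""
    weights = {name: share ** a for name, share in shares.items()}
    total = sum(weights.values())
    return {name: 100 * w / total for name, w in weights.items()}

for a in (0.5, 0.2):
    q = sampling_ratios(token_pct, a)
    print(f"a={a}: us-court-cases -> {q['us-court-cases']:.1f}%")
# a=0.5 yields ~34.7% and a=0.2 yields ~17.5%, matching the table:
# smaller exponents flatten the distribution toward the low-resource corpora.
```

This also makes the design choice visible: at a=0.2 the US case-law share drops from 59.2% of tokens to 17.5% of sampled batches, counteracting the US bias discussed in the summary.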
lexlms/lex_files
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:extended", "language:en", "license:cc-by-nc-sa-4.0", "legal", "law", "arxiv:2305.07507", "region:us" ]
2023-05-10T10:40:26+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["extended"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "LexFiles", "tags": ["legal", "law"]}
2023-07-24T10:59:58+00:00
5ed8014370b99304c0c9e19dd008e3c875b1a699
xiudu/testdata
[ "license:apache-2.0", "doi:10.57967/hf/0668", "region:us" ]
2023-05-10T10:41:07+00:00
{"license": "apache-2.0"}
2023-05-10T10:41:07+00:00
b0f5723c7ec234f947823408a103c0dc07a52e1d
# Dataset Card for huatuo_consultation_qa ## Dataset Description - **Homepage: https://www.huatuogpt.cn/** - **Repository: https://github.com/FreedomIntelligence/HuatuoGPT** - **Paper: https://arxiv.org/abs/2305.01526** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary We collected data from a website for medical consultation, consisting of many online consultation records by medical experts. Each record is a QA pair: a patient raises a question and a medical doctor answers the question. The basic information of doctors (including name, hospital organization, and department) was recorded. We directly crawled patients' questions and doctors' answers as QA pairs, obtaining 32,708,346 pairs. Subsequently, we removed the QA pairs containing special characters and removed the repeated pairs. Finally, we got 25,341,578 QA pairs. **Please note that for some reasons we cannot directly provide text data, so the answer part of our dataset is a URL. If you want to use text data, you can refer to the other two parts of our open-source datasets ([huatuo_encyclopedia_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_encyclopedia_qa)、[huatuo_knowledge_graph_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_knowledge_graph_qa)), or use the URLs for data collection.** ## Dataset Creation ### Source Data .... ## Citation ``` @misc{li2023huatuo26m, title={Huatuo-26M, a Large-scale Chinese Medical QA Dataset}, author={Jianquan Li and Xidong Wang and Xiangbo Wu and Zhiyi Zhang and Xiaolong Xu and Jie Fu and Prayag Tiwari and Xiang Wan and Benyou Wang}, year={2023}, eprint={2305.01526}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
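The cleanup described above (dropping QA pairs that contain special characters, then removing exact repeats) can be sketched as follows. This is an illustration only: the character whitelist and the toy pairs are assumptions, not the authors' actual pipeline.

```python
import re

# Hypothetical whitelist: CJK characters, ASCII letters/digits, common punctuation.
ALLOWED = re.compile(r"^[\u4e00-\u9fffA-Za-z0-9，。？！、,.?! \-:;（）()]+$")

def clean_qa_pairs(pairs):
    """Drop pairs containing special characters, then drop exact repeats."""
    seen = set()
    kept = []
    for question, answer in pairs:
        if not (ALLOWED.match(question) and ALLOWED.match(answer)):
            continue  # contains a character outside the whitelist -> discard
        key = (question, answer)
        if key in seen:
            continue  # exact repeat of an earlier pair -> discard
        seen.add(key)
        kept.append((question, answer))
    return kept

toy = [
    ("感冒了怎么办?", "多喝水, 注意休息."),
    ("感冒了怎么办?", "多喝水, 注意休息."),  # exact repeat
    ("头痛\x00怎么办?", "尽快就医."),         # contains a control character
]
print(len(clean_qa_pairs(toy)))  # 1
```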
FreedomIntelligence/huatuo_consultation_qa
[ "task_categories:text-generation", "size_categories:1M<n<10M", "language:zh", "license:apache-2.0", "medical", "arxiv:2305.01526", "region:us" ]
2023-05-10T10:41:08+00:00
{"language": ["zh"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "tags": ["medical"]}
2023-05-17T02:21:36+00:00
54a60ab79208012ffaa072328e9cb676240f98ef
instruct_chat_50k.jsonl is composed of 30k examples from a Chinese ShareGPT dataset and 20k examples from the [alpaca-instruction-Chinese-dataset](https://github.com/hikariming/alpaca_chinese_dataset).
Chinese-Vicuna/instruct_chat_50k.jsonl
[ "task_categories:question-answering", "language:zh", "license:apache-2.0", "region:us" ]
2023-05-10T11:32:11+00:00
{"language": ["zh"], "license": "apache-2.0", "task_categories": ["question-answering"]}
2023-05-12T02:27:55+00:00
06c08710a6fb836cc5e21fa5b8b96714bfae622d
mooncakex/tetsul123
[ "license:openrail", "region:us" ]
2023-05-10T11:41:05+00:00
{"license": "openrail", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 365443040.275, "num_examples": 2225}], "download_size": 364741605, "dataset_size": 365443040.275}}
2023-05-10T11:49:35+00:00
4af98afc6612b6321c96c1c80b02144fb5be1373
leon-cvetkovski/test_dataset
[ "license:unknown", "region:us" ]
2023-05-10T11:53:24+00:00
{"license": "unknown"}
2023-05-11T10:33:45+00:00
81a84809cdab70597ce9791b3792e514ceda3d60
# Dataset Card for "UA_speech_very-low" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AravindVadlapudi02/UA_speech_very-low
[ "region:us" ]
2023-05-10T12:48:31+00:00
{"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "control", "1": "pathology"}}}}, {"name": "input_features", "sequence": {"sequence": "float32"}}], "splits": [{"name": "train", "num_bytes": 766344936, "num_examples": 798}, {"name": "test", "num_bytes": 4599029948, "num_examples": 4789}], "download_size": 619863392, "dataset_size": 5365374884}}
2023-05-10T12:49:56+00:00
152624d44e5a7e5be47033c3a97fcdd8d640a489
# Dataset Card for "UA_speech_low" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AravindVadlapudi02/UA_speech_low
[ "region:us" ]
2023-05-10T13:09:06+00:00
{"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "control", "1": "pathology"}}}}, {"name": "input_features", "sequence": {"sequence": "float32"}}], "splits": [{"name": "train", "num_bytes": 768265600, "num_examples": 800}, {"name": "test", "num_bytes": 4615355592, "num_examples": 4806}], "download_size": 622015552, "dataset_size": 5383621192}}
2023-05-10T13:10:40+00:00
1a3eb4ddaf7b02fe0d27334b60f5f98c9d99833b
# AutoTrain Dataset for project: textclassifier_307samples ## Dataset Description This dataset has been automatically processed by AutoTrain for project textclassifier_307samples. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "SURESH\nfrom\nFALTILBE MEINENT\nVOC Schenker in\nDEATH CERTIFICATE\ndan 12'17 of the\n1 and fuls 213\nmess\nStore\n", "target": 4 }, { "text": "III Manulife\nACCIDENT AND HEALTH CLAIM\nDear Claimant,\nWe are sorry to learn of your accident/illness.\nIn order for us to process your claim, we require the following:\n1. Completed Accident and Health Claim Form.\n2. Hospital discharge summary, doctor's memo, medical report, MRI/X-ray results or Attending Physician's Statement (APS) to\nsupport the diagnosis.\n3. Final hospital/clinic bills:\n- For bills that indicate any payment by CPF MediSave and/or CPF MediShield Life, please provide statement from CPF\nBoard showing the deductions and Hospital Registration/Reference Number.\nFor bills from Traditional Chinese Medicine (TCM) or Chiropractic clinics, please ensure that the practitioner's name is\nindicated on the bills.\n(Do not submit original bills. However, keep the originals for 6 months from the submission date as we may require you to\nprovide them to us.)\n4. Medical certificates if claiming weekly indemnity (i.e. medical leave).\n5. Child's birth certificate if claiming Baby Bonus Benefit.\n6. Policy Owner's bank statement or passbook with name & account number if preferred payment is Electronic Fund Transfer (EFT)\nto a Singapore bank account, if an existing EFT has not been set up for the same bank account.\n7. Police report (if any).\nTo avoid any delay in processing your claim, please ensure that all required documents are completed and submitted. We may\nrequire further information/document(s) from you in certain circumstances.\nNotes:\nI. 
The fee for obtaining the hospital discharge summary, doctor's memo, medical report, MRI/X-ray reports or Attending\nPhysician's Statement shall be borne by the Policy Owner.\nII. All documents in foreign languages must be officially translated to English by a certified translator/interpreter.\nOnline\nsubmission\nWe encourage you to submit your claim to us via our online eClaim platform at www.manulife.com.sg/en/self-serve/file-a-claim.html.\nThis will help us process your claim more swiftly. There is no need to complete this claim form if you are submitting the claim online.\nManual\nYou may submit the completed and signed form with all relevant documents to us through any of the following modes:\nsubmission Email - SGP A&[email protected]\nMail - 8 Cross Street #15-01, Manulife Tower, Singapore 048424\nNeed Help? Please contact your Financial Representative if you require assistance. Alternatively, you may email us at\[email protected] or call our Client Service Officers at 6833 8188.\nINTERNAL USE - FOR STAFFs\nIf there is a follow-up claim number, do not create a claim number.\nFollow-up Claim No.\nPolicy No.\nClaim Type\nCL-105\n\u2610 CL-106\nCL-107\nAHCF-1221-3\nNo. of pages\nDate\nPage 1 of 5\n111\nManulife\nACCIDENT AND HEALTH CLAIM\ni\nPlease note that:\n1. The mere issue of this form or any other form(s) does not represent any admission of liability by Manulife (Singapore) Pte. Ltd.\n2. This form is to be completed by the Policy Owner, or a Financial Representative on behalf of the Policy Owner.\n3. You will receive the outcome of your claim within 10 working days.\nBEFORE you submit this claim form, do ensure your latest mailing address, mobile and email have been updated with us.\nLog in to our secured customer portal, MyManulife, at www.mymanulife.com.sg for an immediate update. You will NOT\nreceive claim updates if your particulars are outdated.\nAlternatively, you may fill in the Personal Details Update form and send it to us. 
Go to www.manulife.com.sg to download\nthis form. This method is NOT preferred though, as we will not be able to update your particulars if your signature does not\nmatch that in our records. This will cause a delay in the processing of your claim.\nPart 1\nPOLICY INFORMATION\nA. About the Policy Owner\nPolicy number(s)\n602200/46/15\nFull name\nSURESH CHANDRASEKHARAN\nNRIC/Passport number Z4308909\nB. Life Insured's Details\nSURESH CHANDRASEKHARAN\nFull name (if different from Policy Owner)\nNRIC/Passport no. (if different from Policy Owner)\nZ4308909\n\u2610 Self-employed\nLife Insured's Employment (compulsory to be completed)\nCurrent employment status\nUnemployed \u2611 Employed\nCurrent occupation/title\nSOFTWARE ENGINEER\nCurrent employer's name\nNEUTRINOS TECHNOLOGIES PTE. LTD.\nCurrent employer's address\n#2 SIMEI STREET 1, SIMEI SINGAPORE\nPolicy Owner's relationship with the Life Insured \u2611 Self\n\u2610 Spouse\nParent\nPart 2\nCLAIM DETAILS\nIllness\n\u2611 Accident\nBaby Bonus Benefit\nWhat is the cause of this claim?\nA. Claim details (for illness)\nWAS DRIVING ON N-91, CRASHED INTO A TREE. SEVERE PAIN IN THE LEFT\nLEG.\nDescribe the symptoms\nDoctor's diagnosis\nAHCF-1221-3\nON EXAMINATION FOUND SHATTERED PATELLA, WITH FRACTURES IN TIBIA\nAND FIBULA.\nDate of diagnosis\n14-Jan-2022\n(DD-MMM-YYYY)\nPage 2 of 5\nIII Manulife\nACCIDENT AND HEALTH CLAIM\nB. 
Claim details (for accident)\nAccident details\nLocation of accident\nDate of accident (DD-MMM-YYYY)\n13-Jan-2022\nTime of accident \u2610 am \u2713 pm\n11:20\nN-91\nActivity Insured was doing at the time of accident\nExercising in a gym\nRunning/walking outdoors\nPlaying racquet game/golf Swimming\nOthers (Please specify)\nDriving/travelling in a vehicle\nCarrying out home chores\nPlaying football/soccer\nCarrying out work duties\nAccident description\nInsured lifted an object\nInsured fell down\nOthers (Please specify)\nInsured collided with\nsomeone/an object\nInsured suffered a blow/impact\nfrom an external object\nInjury description\nFracture\nOthers (Please specify)\nExternal wound like cuts/bruises\nFood poisoning\nWas TCM Practitioner or Chiropractor consulted?\n\u2610 Yes\n\u2611 No\nName of TCM Practitioner or Chiropractor\nWas any imaging test (such as X-ray or MRI) done?\n\u2611 Yes\nNo\nIf yes, please provide report.\nWas there a diagnosis of gastroenteritis or stomach flu?\nYes\n\u2611 No\nWas the insured pregnant when the accident happened?\nYes\n\u2611 No\n\u2610 Not applicable\nWas the accident reported to police?\n\u2611 Yes\n\u2610 No\nIf yes, please provide police report.\nAdditional details you may wish to provide on the accident\nTHE CAB DRIVER WAS OVERSPEEDING AND NOT UNDER INFLUENCE. POLICE REPORT ATTACHED.\n\u2611 Yes\n\u2610 No\nC. 
Details of any hospitalisation or medical leave\nWas a day surgery performed?\nDate of day surgery\nWas the Insured hospitalised?\n15-Jan-2022\n(DD-MMM-YYYY)\n\u2611 Yes\n\u2610 No\nFrom 14-Jan-2022\nto 22-Jan-2022\nPeriod of hospitalisation\n(DD-MMM-YYYY)\n(DD-MMM-YYYY)\nFrom\nto\nWas medical leave taken due to this incident?\n\u2611 Yes\nNo\nFrom 13-Jan-2022\nto 21-Jan-2022\nPeriod of medical leave\n(DD-MMM-YYYY)\n(DD-MMM-YYYY)\nFrom\nto\nHas the insured resumed work?\n\u2610 Yes\n\u2611 No\n(DD-MMM-YYYY)\nDate of return to work\nWere light duties given at work?\n\u2610 Yes\nNo\nFrom\nto\nPeriod of light duties\n(DD-MMM-YYYY)\n(DD-MMM-YYYY)\nFrom\nto\nAHCF-1221-3\nD. Other insurance covering the same plan\nIs this hospitalisation bill partially/fully\nreimbursed by Integrated Shield Policy?\n\u2610 Yes\n\u2611 No\nNot applicable\nIf yes, please provide settlement letter.\nPage 3 of 5\nIII Manulife\nACCIDENT AND HEALTH CLAIM\nIs this claim submitted or to be submitted\nto other insurer/employer/party?\nYes\n\u2610 No\nIf yes, please provide settlement letter.\nName of insurer/employer/party\nSURESH CHANDRASEKHARAN\nPart 3\nPAYOUT OPTION\nBy default, we will pay to the Policy Owner's bank account linked to their PayNow registered with their Singapore NRIC/FIN.\nIf PayNow transaction is unsuccessful because the Policy Owner does not have a PayNow account, or if the PayNow account is\nregistered with a mobile number, we will send a cheque to the Policy Owner's mailing address as per our record. 
Register or add\nSingapore NRIC/FIN to the PayNow account via internet banking or mobile banking application.\nExceptions to Pay Now\n\u2022 If the policies have existing Electronic Fund Transfer (EFT) arrangements, the payout will be via EFT.\nIf a new EFT arrangement is opted by selecting the EFT option below, the payout will be via EFT.\nIf the claim payout exceeds S$200,000, or it is to a non-Policy Owner, the payout will be via cheque.\n\u2022 If the policy is subject to a trust created under Section 49L of the Insurance Act (Cap 142), or Section 73 of the Conveyancing\nand Law of Property Act (Cap 61), both PayNow or EFT will not apply and the payout will be via cheque.\nIf the Policy Owner does not have an existing EFT arrangement and wish to set up one, tick the option below. This will apply to\nall future payouts for all policies that qualify for this claim.\n\u2611 Electronic Fund Transfer (EFT)\nPlease fill in the table below and submit a copy of bank statement OR bank passbook showing account holder's name &\naccount number.\nThis must be a Singapore bank account denominated in Singapore Dollar that belongs to the Policy Owner.\nBank account number\n456-1-006307-4\nBank name\nSTANDARD CHARTERED BANK\nIf the requirements for EFT are not met, we will send a cheque to Policy Owner's mailing address as per our record.\nPart 4\nDECLARATION & AUTHORISATION BY POLICY OWNER OR CLAIMANT\n1. I/We declare, represent and warrant that all answers, information and supporting documents given by me/us in/with this form are,\nto the best of my/our knowledge and belief, correct, true and complete; and no material information has been withheld nor omitted.\n2. I/We consent to Manulife (Singapore) Pte. Ltd. 
(\"Manulife\") seeking/providing information about the life insured and this claim form\nfrom/to any medical practitioners, health care providers, insurers, organisations, investigation agencies, governmental\norganisations, regulators and any other parties in Singapore or any other country for purposes reasonably required by Manulife to\nprocess and administer my/our claims (\"Purposes\u201d). A photocopy or electronic copy of this authorisation shall be as valid as the\noriginal.\n3. I/We confirm that I/we have read and understood Manulife Statement of Personal Data Protection which may be amended by\nManulife from time to time (\"Manulife Statement\u201d). I/We consent to the collection, use, disclosure and processing of my/our, and\nlife insured's personal data in accordance with Manulife Statement and agree to be bound by Manulife Statement. I/We have\nobtained a hard copy of Manulife Statement from Manulife and/or downloaded a soft copy of it from www.manulife.com.sg.\n4. I/We agree that the personal data collected in this form and supporting documents will be used by Manulife for the purpose of\ncomplying with my request and other purposes reasonably required by Manulife to process and administer my/our claims.\n5. I/We authorise any person, party, organisation, company, corporation, body and partnership, including but not limited to, any\nmedical practitioners, health care providers, insurers, and investigative agencies in Singapore or any other country, to release,\ndisclose or exchange any information (including personal data or personal health information) to or with Manulife for the Purposes.\n6. I/We confirm that I/we am/are not an undischarged bankrupt, in winding up, receivership or judicial management and there is\ncurrently no pending or threatened bankruptcy or winding up proceeding, receivership or judicial management proceeding against\nme/us.\nAHCF-1221-3\nPage 4 of 5\nIII Manulife\nACCIDENT AND HEALTH CLAIM\n7. 
I/We authorise Manulife to assess the completed claim form and supporting documents received via electronic mail or online portal\nprovided by Manulife (\"Electronic Services\"). I/We agree that Manulife is not responsible for verifying the authenticity of the\ninstructions given or purported to be given by me/us. Manulife reserves the right (but not obliged) to suspend or disallow the claims\nprocessing for verification or other purposes as Manulife deems fit and shall not be liable for any losses incurred in consequence.\nI/We agree that Manulife shall not be liable for any losses arising from any submissions or instructions lost in transmission whether\ndue to breakdown in the system or otherwise. Manulife retains full authority and discretion to amend the terms and manner of use\nof the Electronic Services at all times. I/We understand that transmission of submissions or instructions via Electronic Services\nshall be evidenced by the receipt of a successful message.\n8. I/We agree to indemnify and hold harmless Manulife from and against any and all demands, claims, actions, damages, suits\nproceedings, assessments, judgments, costs, losses (whether direct, indirect, special or consequential) including legal costs, and\nother expenses arising from or in connection with Manulife accepting and acting on these submissions or instructions (including\nwhere relevant, the use of the Electronic Services).\n9. I/We am/are aware that this form will not be effective until it is formally accepted and approved by Manulife.\n10. 
If this claim form is submitted by a financial representative or third party on my/our behalf, I/we acknowledge and authorise\nfinancial representative or third party to provide the declarations, representations and warranties stated under the Declaration &\nAuthorisation by Financial Representative or Third Party heading on my/our behalf.\nI/We confirm and represent that the electronic medical invoice(s) submitted is a true copy issued by the medical institution. I/We\nunderstand and agree that I/we can claim or be reimbursed for the medical invoice(s) that I/we have incurred one time only\nregardless of the number of medical insurance policies I/we may have. I/We will not claim from my/our employer, any other\ninsurer or party for the same medical invoice(s) on the portion that will be reimbursed by Manulife. Otherwise, it may amount to\nfraud. I/We will keep the original or certified true copy of medical invoice(s) for a period of 6 months from the date of submission,\nand provide the same to Manulife upon request. I/We agree that Manulife may recover any excess amount paid to me/us.\nName\nNRIC/Passport No.\nContact No.\nE-mail\nDate\nSURESH CHANDRASEKHARAN\nZ4308909\n86439925\nSURESH.CHANDRASEKHARAN@GMAIL\n21-Jan-2022\n(DD-MMM-YYYY)\nSignature is not required for this form. 
By submitting this form to Manulife, the policy owner is deemed to have read, understood\nand agreed to the terms and conditions stated in this part 4.\nPart 5 (applicable if financial representative or third party is submitting this claim)\nDECLARATION & AUTHORISATION BY FINANCIAL REPRESENTATIVE OR THIRD PARTY\nI declare, represent and warrant that:\n(a) I am completing and submitting this claim form and supporting documents to Manulife (Singapore) Pte Ltd on behalf of the\npolicy owner/claimant based on the instruction, information (including the personal data) and supporting documents provided\nby the policy owner/claimant;\n(b) to the best of my knowledge and belief, such information and supporting documents stated in subclause (a) above are correct,\ntrue and complete;\n(c) the policy owner/claimant has authorised me to agree on the terms and conditions under the Declaration & Authorisation by\nPolicy Owner/Claimant heading, on policy owner/claimant's behalf; and\n(d) I have explained to the policy owner that (i) only true copy of medical invoice(s) issued by the medical institution may be\nsubmitted to Manulife; (ii) the incurred medical invoice(s) can be reimbursed one time only regardless of the number of medical\ninsurance policies policy owner may have; (iii) policy owner cannot claim from his/her employer, any other insurer or party for\nthe same medical invoice(s) on the portion that will be reimbursed by Manulife. 
Otherwise, it may amount to fraud; (iv) policy\nowner will need to keep original or certified true copy of medical invoice(s) for a period of 6 months from the date of submission,\nand provide the same to Manulife upon request; and (v) Manulife may recover any excess amount paid to policy owner.\nAHCF-1221-3\nName & Code\nBranch\nDate\n(DD-MMM-YYYY)\nPage 5 of 5\nIII Manulife\nThe fastest way to receive\npayment from us!\nCredited directly into your\nregistered bank account via PayNow.\nRegister PayNow with your NRIC or FIN.\nPAY\nNOW\nRegister for\nPayNow\nYou need to have a bank account with one of the\nparticipating banks\u00b9\nTo register for PayNow, use participating bank's internet\nbanking platform, mobile banking app, or SMS\u00b2\nLink your Singapore NRIC or FIN to your bank account\u00b3\nRegister via Internet banking\nor mobile banking app\nRegister via SMS\nStep 1. Log in to your bank's internet banking\nplatform or mobile banking app.\nStep 2. At the PayNow registration screen, link\nyour Singapore NRIC or FIN to your bank\naccount number. An SMS OTP verification\nprocess may be required.\nSimply send an SMS to your bank in\nthe required formats. Please check\nwith your bank for details.\nThe list of participating banks can be found from https://www.abs.org.sg/consumer-banking/pay-now\n\u00b2 Applicable to some banks only\n3 For existing PayNow users who have earlier linked your mobile number to your bank account, you need\nnot delink your mobile number. 
You just need to follow the steps to also link your Singapore NRIC or FIN\nto your preferred bank account.\n", "target": 3 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(names=['Aadhaar_Card', 'BANK STATEMENT', 'CBC', 'Claim Form', 'Death_Certificate', 'Discharge_Summary', 'Invoice', 'MILITARY ID', 'NRIC', 'PASSPORT', 'Policy_Document', 'URINALYSIS', 'USG', 'VN ID CARD (NEW)', 'VN RESIDENCE CARD (OLD)'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 240 | | valid | 67 |
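The `target` field is an integer index into the `ClassLabel` names listed above. A small sketch mapping the two sample targets back to label names (plain Python here, so it runs without the `datasets` library; `int2str` mirrors `datasets.ClassLabel.int2str`):

```python
# Label names exactly as in the ClassLabel feature above, in index order.
LABEL_NAMES = [
    "Aadhaar_Card", "BANK STATEMENT", "CBC", "Claim Form", "Death_Certificate",
    "Discharge_Summary", "Invoice", "MILITARY ID", "NRIC", "PASSPORT",
    "Policy_Document", "URINALYSIS", "USG", "VN ID CARD (NEW)",
    "VN RESIDENCE CARD (OLD)",
]

def int2str(target: int) -> str:
    """Map an integer target back to its class name."""
    return LABEL_NAMES[target]

# The two samples above carry target=4 and target=3:
print(int2str(4))  # Death_Certificate
print(int2str(3))  # Claim Form

# Split sizes from the table: 240 train + 67 valid = the 307 samples in the project name.
assert 240 + 67 == 307
```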
keshavkmr076/autotrain-data-textclassifier_307samples
[ "task_categories:text-classification", "language:en", "region:us" ]
2023-05-10T13:21:54+00:00
{"language": ["en"], "task_categories": ["text-classification"]}
2023-05-10T13:22:21+00:00
ecd4d15dcff7d0649517000e799741b904e4e161
# Dataset Card for "go-fuzzing-inputs" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
elwint/go-fuzzing-inputs
[ "region:us" ]
2023-05-10T13:26:08+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 517066, "num_examples": 52}, {"name": "test", "num_bytes": 116975, "num_examples": 10}], "download_size": 275948, "dataset_size": 634041}}
2023-06-30T10:07:19+00:00
22b6dd18f9116607e09188ae37ca854916d4fb17
# Dataset Card for "UA_speech_mid" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AravindVadlapudi02/UA_speech_mid
[ "region:us" ]
2023-05-10T13:26:26+00:00
{"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "control", "1": "pathology"}}}}, {"name": "input_features", "sequence": {"sequence": "float32"}}], "splits": [{"name": "train", "num_bytes": 757701948, "num_examples": 789}, {"name": "test", "num_bytes": 4640324224, "num_examples": 4832}], "download_size": 622894650, "dataset_size": 5398026172}}
2023-05-10T13:27:53+00:00
96e70d392d5167ee33fcf7c251119411c9c8e820
> Not sure about the license. > Source: https://www.robots.ox.ac.uk/~vgg/data/dtd/ # Describable Textures Dataset (DTD) <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/banded/banded_0023.jpg" alt="banded" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/blotchy/blotchy_0027.jpg" alt="blotchy" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/braided/braided_0120.jpg" alt="braided" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/bubbly/bubbly_0173.jpg" alt="bubbly" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/bumpy/bumpy_0203.jpg" alt="bumpy" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/chequered/chequered_0215.jpg" alt="chequered" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/cobwebbed/cobwebbed_0161.jpg" alt="cobwebbed" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/cracked/cracked_0167.jpg" alt="cracked" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/crosshatched/crosshatched_0172.jpg" alt="crosshatched" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/crystalline/crystalline_0220.jpg" alt="crystalline" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/dotted/dotted_0208.jpg" alt="dotted" width="100" height="100"> <img 
src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/fibrous/fibrous_0217.jpg" alt="fibrous" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/flecked/flecked_0169.jpg" alt="flecked" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/freckled/freckled_0178.jpg" alt="freckled" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/frilly/frilly_0126.jpg" alt="frilly" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/gauzy/gauzy_0203.jpg" alt="gauzy" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/grid/grid_0135.jpg" alt="grid" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/grooved/grooved_0169.jpg" alt="grooved" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/honeycombed/honeycombed_0175.jpg" alt="honeycombed" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/interlaced/interlaced_0201.jpg" alt="interlaced" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/knitted/knitted_0204.jpg" alt="knitted" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/lacelike/lacelike_0121.jpg" alt="lacelike" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/lined/lined_0177.jpg" alt="lined" width="100" 
height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/marbled/marbled_0194.jpg" alt="marbled" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/matted/matted_0166.jpg" alt="matted" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/meshed/meshed_0191.jpg" alt="meshed" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/paisley/paisley_0131.jpg" alt="paisley" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/perforated/perforated_0172.jpg" alt="perforated" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/pitted/pitted_0162.jpg" alt="pitted" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/pleated/pleated_0173.jpg" alt="pleated" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/polka-dotted/polka-dotted_0232.jpg" alt="polka-dotted" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/porous/porous_0176.jpg" alt="porous" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/potholed/potholed_0171.jpg" alt="potholed" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/scaly/scaly_0225.jpg" alt="scaly" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/smeared/smeared_0150.jpg" 
alt="smeared" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/spiralled/spiralled_0140.jpg" alt="spiralled" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/sprinkled/sprinkled_0144.jpg" alt="sprinkled" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/stained/stained_0140.jpg" alt="stained" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/stratified/stratified_0174.jpg" alt="stratified" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/striped/striped_0125.jpg" alt="striped" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/studded/studded_0217.jpg" alt="studded" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/swirly/swirly_0178.jpg" alt="swirly" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/veined/veined_0189.jpg" alt="veined" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/waffled/waffled_0204.jpg" alt="waffled" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/woven/woven_0131.jpg" alt="woven" width="100" height="100"> <img src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/wrinkled/wrinkled_0145.jpg" alt="wrinkled" width="100" height="100"> <img 
src="https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD/resolve/main/images/zigzagged/zigzagged_0133.jpg" alt="zigzagged" width="100" height="100"> The **_Describable Textures Dataset (DTD)_** is an evolving collection of textural images in the wild, annotated with a series of human-centric attributes, inspired by the perceptual properties of textures. This data is made available to the computer vision community for research purposes. [Download dataset](download/dtd-r1.0.1.tar.gz) [Download code](download/desctex.tar.gz) [Evaluation](eval.html) [Citation](#citation) ## Overview Our ability to vividly describe the content of images is a clear demonstration of the power of the human visual system. Not only can we recognise objects in images (e.g. a cat, a person, or a car), but we can also describe them down to the most minute details, extracting an impressive amount of information at a glance. But visual perception is not limited to the recognition and description of objects. Prior to high-level semantic understanding, most textural patterns elicit a rich array of visual impressions. We could describe a texture as "polka dotted, regular, sparse, with blue dots on a white background"; or as "noisy, line-like, and irregular". Our aim is to reproduce this capability in machines. Scientifically, the aim is to gain further insight into how textural information may be processed, analysed, and represented by an intelligent system. Compared to classic tasks of texture analysis such as material recognition, such perceptual properties are much richer in variety and structure, inviting new technical challenges. **DTD** is a texture database, consisting of 5640 images, organized according to a list of 47 terms (categories) inspired by human perception. There are 120 images for each category. Image sizes range between 300x300 and 640x640, and the images contain at least 90% of the surface representing the category attribute.
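The images are organized by these 47 category terms, and the released archive lists each image by a relative path of the form `<category>/<filename>.jpg` (e.g. in split files such as `labels/train1.txt`). The sketch below groups such paths by their category folder; the split-file layout is an assumption based on the archive description, and the sample paths are illustrative.

```python
from collections import defaultdict

# Inline sample mimicking a DTD split file: one relative image path per line.
sample_split = """\
banded/banded_0002.jpg
banded/banded_0004.jpg
dotted/dotted_0011.jpg
zigzagged/zigzagged_0133.jpg
"""

def group_by_category(split_text):
    """Group relative image paths by their key-attribute (category) folder."""
    per_category = defaultdict(list)
    for line in split_text.strip().splitlines():
        category, filename = line.split("/", 1)
        per_category[category].append(filename)
    return dict(per_category)

groups = group_by_category(sample_split)
print(sorted(groups))         # ['banded', 'dotted', 'zigzagged']
print(len(groups["banded"]))  # 2
```

The same grouping applied to a real split file would yield 40 filenames per category, matching the 40-image-per-class splits described below.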
The images were collected from [Google](https://images.google.com/) and [Flickr](https://www.flickr.com/) by entering our proposed attributes and related terms as search queries. The images were annotated using Amazon Mechanical Turk in several iterations. For each image we provide a **key attribute** (main category) and a list of **joint attributes**. The data is split into three equal parts (train, validation, and test), with 40 images per class in each split. We provide the ground truth annotation for both key and joint attributes, as well as the 10 splits of the data we used for evaluation. ## Related paper - M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, A. Vedaldi, **_"Describing Textures in the Wild"_** ( [PDF](http://www.robots.ox.ac.uk/~vgg/publications/2014/Cimpoi14/cimpoi14.pdf) | [Poster](doc/textures-poster-cvpr14.pdf) ) @InProceedings{cimpoi14describing, Author = {M. Cimpoi and S. Maji and I. Kokkinos and S. Mohamed and A. Vedaldi}, Title = {Describing Textures in the Wild}, Booktitle = {Proceedings of the {IEEE} Conf. on Computer Vision and Pattern Recognition ({CVPR})}, Year = {2014}} ## Downloads | | | | | --------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ | -------- | | **Filename** | **Description** | **Size** | | [README.txt](download/README.txt) | README file describing:<br><br>- Dataset structure.<br>- Ground truth annotations: key attributes and joint attributes. | 185K | | [dtd-r1.0.1.tar.gz](download/dtd-r1.0.1.tar.gz) | The package contains:<br><br>- Dataset images, train, validation and test.<br>- Ground truth annotations and splits used for evaluation.<br>- imdb.mat file, containing a struct holding file names and ground truth labels.
| 625M | | [dtd-r1.0.1-labels.tar.gz](download/dtd-r1.0.1-labels.tar.gz) | Annotations and splits<br><br>- Ground truth annotations: key attributes, joint attributes.<br>- Splits of the data into train, val and test, as used in our experiments. | 1.4M | | [dtd-r1-decaf_feats.tar.gz](download/dtd-r1-decaf_feats.tar.gz) | Compressed decaf_feats.mat, containing a 5640x4096 matrix of DeCAF features for the images from DTD. <br>Each row represents the 4096-dimensional feature vector for one image, assuming images are sorted by name. | 82M | ## Acknowledgements This research is based on work done at the 2012 CLSP Summer Workshop, and was partially supported by NSF Grant #1005411, ODNI via the JHU-HLTCOE, and Google Research. Mircea Cimpoi was supported by the ERC grant VisRec no. 228180 and Iasonas Kokkinos by ANR-10-JCJC-0205. The development of the Describable Textures Dataset started in June and July 2012 at the Johns Hopkins Centre for Language and Speech Processing (CLSP) Summer Workshop. The authors are most grateful to Prof. Sanjeev Khudanpur and Prof. Greg Hager.
cansa/Describable-Textures-Dataset-DTD
[ "region:us" ]
2023-05-10T13:27:24+00:00
{}
2023-05-11T06:36:27+00:00
cff6e1b62df90425dc7c39216ca30809d8227b65
# Dataset Card for "go-fuzzing-inputs-all" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
elwint/go-fuzzing-inputs-all
[ "region:us" ]
2023-05-10T13:30:09+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1598918, "num_examples": 193}, {"name": "test", "num_bytes": 116975, "num_examples": 10}], "download_size": 507898, "dataset_size": 1715893}}
2023-06-30T10:06:58+00:00
6ab500a335e9b299d90cf99a3ec165a7a05aa252
# Dataset Card for "CSQA_preprocessed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
skrishna/CSQA_preprocessed
[ "region:us" ]
2023-05-10T13:31:46+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "question_concept", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "label", "dtype": "string"}, {"name": "text", "dtype": "string"}]}, {"name": "answerKey", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3875948, "num_examples": 9741}, {"name": "validation", "num_bytes": 480334, "num_examples": 1221}, {"name": "test", "num_bytes": 452620, "num_examples": 1140}], "download_size": 2706083, "dataset_size": 4808902}}
2023-05-10T17:01:33+00:00
281b5c04ce92e8e5fd3c4be51714067d224b92f3
# Dataset Card for "UA_speech_high" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AravindVadlapudi02/UA_speech_high
[ "region:us" ]
2023-05-10T13:44:13+00:00
{"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "control", "1": "pathology"}}}}, {"name": "input_features", "sequence": {"sequence": "float32"}}], "splits": [{"name": "train", "num_bytes": 768265600, "num_examples": 800}, {"name": "test", "num_bytes": 4599029948, "num_examples": 4789}], "download_size": 619976569, "dataset_size": 5367295548}}
2023-05-10T13:45:29+00:00
7bddaa4453c6db3cccd200d37f42e0d67febe6b3
# Dataset Card for "miniwob_plusplus_v2_raw" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
LucasThil/miniwob_plusplus_v2_raw
[ "region:us" ]
2023-05-10T13:51:00+00:00
{"dataset_info": {"features": [{"name": "task_name", "dtype": "string"}, {"name": "utterance", "dtype": "string"}, {"name": "reward", "dtype": "float64"}, {"name": "raw_reward", "dtype": "float64"}, {"name": "processed_states", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5781242512, "num_examples": 18124}], "download_size": 537245885, "dataset_size": 5781242512}}
2023-05-10T13:51:35+00:00
c3ac0a3cffc71990079d2ddb6982d8267eca5cef
# Dataset Card for "pixel_squad" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Nadav/pixel_squad
[ "region:us" ]
2023-05-10T14:46:20+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"array2_d": {"shape": [23, 23], "dtype": "uint8"}}}], "splits": [{"name": "train", "num_bytes": 55685358534.612, "num_examples": 781914}, {"name": "test", "num_bytes": 451699396.126, "num_examples": 11873}], "download_size": 56750962124, "dataset_size": 56137057930.738}}
2023-06-17T18:06:43+00:00
d97384d795945b69b3e06f866f1d25bfe3b8a459
# Bandori Card Dataset ![Sample Image](https://huggingface.co/datasets/Linaqruf/bandori-card-dataset/resolve/main/samples/sample.png) The Bandori Card Dataset is a comprehensive collection of cards from the game "BanG Dream! Girls Band Party!" (Bandori). It aims to provide card metadata and corresponding artwork images for analysis, research, and other purposes. ## Dataset Description The dataset includes the following information for each card: - Card ID: Unique identifier for each card in the game. - Rarity: The rarity level of the card, ranging from 1 to 5. - Name: The name or title of the card. - Artwork URLs: URLs pointing to the artwork images associated with each card. The dataset is organized based on card rarity, with separate folders for each rarity level. Within each folder, you can find the artwork images corresponding to the respective rarity. ## Dataset Structure The dataset follows the following file structure: ``` / ├── images/ │ ├── rarity1/ │ │ ├── Bushido%21.png │ │ ├── Cat-Eared_Producer.png │ │ ├── Cool_%26_Fierce_Singer.png │ │ └── ... │ ├── rarity2/ │ │ ├── %E2%99%A1For_you.png │ │ ├── A_Caring_Princess.png │ │ ├── A_Crucial_Member.png │ │ └── ... │ ├── rarity3/ │ │ ├── %22Just_a_Bite%22_Surprise.png │ │ ├── 1%2C_2%2C_3%2C_to_the_Light.png │ │ ├── 180_Degrees%2C_12_Minutes.png │ │ └── ... │ ├── rarity4/ │ │ ├── %22Dramatic%22_Daily_Life.png │ │ ├── %22The_School_Prince%22.png │ │ ├── %E2%88%9ESurprise%21.png │ │ └── ... │ └── rarity5/ │ ├── Afternoon_Coffee_T.png │ ├── Autumn_Dream_T.png │ ├── Blue-Tinted_Stroll_T.png │ └── ... └── metadata/ ├── rarity1.json ├── rarity2.json ├── rarity3.json ├── rarity4.json └── rarity5.json ``` The `images` folder contains subfolders for each rarity level (rarity1, rarity2, etc.), and within each rarity folder, you can find the corresponding card artwork images. 
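Given this layout, the artwork file names appear to be percent-encoded card names with spaces replaced by underscores (e.g. "Bushido!" becomes `Bushido%21.png`). A minimal sketch of mapping a card name and rarity to its expected relative path; treat the naming scheme as an assumption inferred from the tree above, not a guaranteed rule for every file.

```python
from urllib.parse import quote, unquote

def card_image_path(name: str, rarity: int) -> str:
    """Build the expected relative path of a card's artwork image.

    Assumes spaces become underscores and all remaining special characters
    are percent-encoded, matching the file names shown in the tree above.
    """
    encoded = quote(name.replace(" ", "_"), safe="")
    return f"images/rarity{rarity}/{encoded}.png"

print(card_image_path("Bushido!", 1))        # images/rarity1/Bushido%21.png
print(unquote("Cat-Eared_Producer.png"))     # Cat-Eared_Producer.png
```

The inverse direction (`unquote`) recovers a readable name from an encoded file name, which is handy when cross-referencing files against the metadata.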
The `metadata` folder contains JSON files for each rarity level, containing card metadata in the following format: ```json [ { "id": 1, "rarity": 1, "name": "Bushido!", "art_url": "https://example.com/images/rarity1/Bushido%21.png" }, { "id": 2, "rarity": 1, "name": "Cat-Eared Producer", "art_url": "https://example.com/images/rarity1/Cat-Eared_Producer.png" }, ... ] ``` ## Usage To use this dataset, you can download a zipfile from this [direct link](https://huggingface.co/datasets/Linaqruf/bandori-card-dataset/resolve/main/bandori-card-dataset.zip) or download the artwork images and access the metadata for each card by using [Bandori Wiki Scraper](https://colab.research.google.com/github/Linaqruf/Scraper/blob/main/Bandori_Wiki_Scraper.ipynb) notebook. The metadata provides information such as card ID, rarity level, name, and the URL of the artwork image. ## Image Count The total number of image files is `1454`, distributed across the different rarity levels as follows: ``` Rarity1: 35 images Rarity2: 404 images Rarity3: 448 images Rarity4: 546 images Rarity5: 21 images ``` Please note that the numbers may vary based on the dataset version. # License The Bandori Card Dataset is provided under the [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/) license. # Acknowledgments We would like to acknowledge the [BanG Dream! Girls Band Party! Fandom](https://bandori.fandom.com/) website for providing the card data and artwork used in this dataset. ## Limitations While the Bandori Card Dataset strives to provide comprehensive card data and artwork from the BanG Dream! Girls Band Party! game, there are a few limitations to be aware of: 1. **Incomplete Image Coverage**: The dataset includes card data and artwork from a range of card rarities and IDs up to an index of 1-1800. However, it's important to note that only 1453 images are available in the dataset. 
This means that there might be gaps in the image coverage for certain card IDs within the specified index range. 2. **Incomplete Coverage**: Additionally, the dataset covers a range of card rarities and IDs up to a specific maximum index. However, there may be newer cards or updates to the game that are not included in the dataset. It's advisable to cross-reference the dataset with the latest game data for the most up-to-date information. 3. **Website Structure Dependency**: The scraper code relies on the specific HTML structure of the BanG Dream! Girls Band Party! Fandom website. Any changes to the website structure may impact the scraper's ability to extract card data accurately. It's important to monitor and adjust the scraper code accordingly if there are changes to the website. 4. **Data Integrity**: While efforts have been made to ensure the accuracy and completeness of the dataset, errors or discrepancies in the card data and artwork may exist. Users are encouraged to verify and validate the dataset independently for their specific use cases. ## Data Quality The Bandori Card Dataset aims to provide high-quality card data and artwork. Here are a few considerations regarding the dataset's organization and image files: 1. **Image File Separation by Rarity**: The artwork images in the dataset are organized and stored in separate folders based on the card rarity, rather than the index. Each rarity level has its own folder in the `images` directory, making it easier to locate images based on their rarity. Please refer to the directory structure provided in the repository for more details. 2. **Data Source**: The card data and artwork are sourced from the BanG Dream! Girls Band Party! Fandom website. The dataset quality is dependent on the accuracy and consistency of the information available on the website. It's advisable to refer to official game sources for authoritative data. 3. 
**Artwork Resolution**: The artwork images in the dataset are obtained from the website and may vary in resolution and quality. The provided images are the best available representations within the limitations of the source material. 4. **Metadata Accuracy**: The metadata provided in the dataset, including card ID, rarity, and name, is extracted from the website. While efforts have been made to ensure accuracy, there may be instances of typographical errors or discrepancies between the dataset and the actual game data. If you encounter any significant issues with the dataset's quality or have suggestions for improvement, please feel free to contribute or reach out to us using the contact information provided below. # Citation If you use this dataset in your research or any other work, please consider citing it as: ``` @dataset{linaqruf/bandori-card-dataset, author = {Furqanil Taqwa}, title = {Bandori Card Dataset}, year = {2023}, publisher = {Hugging Face}, version = {1.0}, url = {https://huggingface.co/datasets/Linaqruf/bandori-card-dataset} } ```
Linaqruf/bandori-card-dataset
[ "size_categories:100M<n<1B", "language:en", "license:cc-by-4.0", "region:us" ]
2023-05-10T14:47:34+00:00
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["100M<n<1B"], "thumbnail": "https://huggingface.co/datasets/Linaqruf/bandori-card-dataset/resolve/main/samples/thumbnail.png"}
2023-05-10T23:55:41+00:00
52f7645f084a8d3fd0705384918e8c0300714e4e
# Dataset Card for "old_push" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
polinaeterna/old_push
[ "region:us" ]
2023-05-10T15:03:24+00:00
{"dataset_info": {"features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 800, "num_examples": 50}], "download_size": 1763, "dataset_size": 800}}
2023-05-10T15:03:27+00:00
b05048f144680a7aeda333821fd4c38fcc47225e
Poupou/Regen-Rangers-Hackathon
[ "license:mit", "region:us" ]
2023-05-10T15:05:49+00:00
{"license": "mit"}
2023-05-11T20:48:09+00:00
5b6b152615afae7042778a79d25aa810091a74c7
# Dataset Card for [EDGAR-CORPUS] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [References](#references) - [Contributions](#contributions) ## Dataset Description - **Point of Contact: Lefteris Loukas** ### Dataset Summary This dataset card is based on the paper **EDGAR-CORPUS: Billions of Tokens Make The World Go Round** authored by _Lefteris Loukas et al._, as published in the _ECONLP 2021_ workshop. This dataset contains the annual reports of public companies from 1993-2020 from SEC EDGAR filings. There is supported functionality to load a specific year. Note: since this is a corpus dataset, the `train/val/test` splits carry no special meaning; they simply follow the default HF card format. If you wish to load specific year(s) of specific companies, you probably want to use the open-source software which generated this dataset, EDGAR-CRAWLER: https://github.com/nlpaueb/edgar-crawler. ### Supported Tasks This is a raw dataset/corpus for financial NLP. As such, there are no annotations or labels. ### Languages The EDGAR Filings are in English. ## Dataset Structure ### Data Instances Refer to the dataset preview.
### Data Fields **filename**: Name of file on EDGAR from which the report was extracted.<br> **cik**: EDGAR identifier for a firm.<br> **year**: Year of report.<br> **section_1**: Corresponding section of the Annual Report.<br> **section_1A**: Corresponding section of the Annual Report.<br> **section_1B**: Corresponding section of the Annual Report.<br> **section_2**: Corresponding section of the Annual Report.<br> **section_3**: Corresponding section of the Annual Report.<br> **section_4**: Corresponding section of the Annual Report.<br> **section_5**: Corresponding section of the Annual Report.<br> **section_6**: Corresponding section of the Annual Report.<br> **section_7**: Corresponding section of the Annual Report.<br> **section_7A**: Corresponding section of the Annual Report.<br> **section_8**: Corresponding section of the Annual Report.<br> **section_9**: Corresponding section of the Annual Report.<br> **section_9A**: Corresponding section of the Annual Report.<br> **section_9B**: Corresponding section of the Annual Report.<br> **section_10**: Corresponding section of the Annual Report.<br> **section_11**: Corresponding section of the Annual Report.<br> **section_12**: Corresponding section of the Annual Report.<br> **section_13**: Corresponding section of the Annual Report.<br> **section_14**: Corresponding section of the Annual Report.<br> **section_15**: Corresponding section of the Annual Report.<br> ```python import datasets # Load the entire dataset raw_dataset = datasets.load_dataset("eloukas/edgar-corpus", "full") # Load a specific year and split year_1993_training_dataset = datasets.load_dataset("eloukas/edgar-corpus", "year_1993", split="train") ``` ### Data Splits | Config | Training | Validation | Test | | --------- | -------- | ---------- | ------ | | full | 176,289 | 22,050 | 22,036 | | year_1993 | 1,060 | 133 | 133 | | year_1994 | 2,083 | 261 | 260 | | year_1995 | 4,110 | 514 | 514 | | 
year_1997 | 8,084 | 1,011 | 1,011 | | year_1998 | 8,040 | 1,006 | 1,005 | | year_1999 | 7,864 | 984 | 983 | | year_2000 | 7,589 | 949 | 949 | | year_2001 | 7,181 | 898 | 898 | | year_2002 | 6,636 | 830 | 829 | | year_2003 | 6,672 | 834 | 834 | | year_2004 | 7,111 | 889 | 889 | | year_2005 | 7,113 | 890 | 889 | | year_2006 | 7,064 | 883 | 883 | | year_2007 | 6,683 | 836 | 835 | | year_2008 | 7,408 | 927 | 926 | | year_2009 | 7,336 | 917 | 917 | | year_2010 | 7,013 | 877 | 877 | | year_2011 | 6,724 | 841 | 840 | | year_2012 | 6,479 | 810 | 810 | | year_2013 | 6,372 | 797 | 796 | | year_2014 | 6,261 | 783 | 783 | | year_2015 | 6,028 | 754 | 753 | | year_2016 | 5,812 | 727 | 727 | | year_2017 | 5,635 | 705 | 704 | | year_2018 | 5,508 | 689 | 688 | | year_2019 | 5,354 | 670 | 669 | | year_2020 | 5,480 | 686 | 685 | ## Dataset Creation ### Source Data #### Initial Data Collection and Normalization Initial data was collected and processed by the authors of the research paper **EDGAR-CORPUS: Billions of Tokens Make The World Go Round**. #### Who are the source language producers? Public firms filing with the SEC. ### Annotations #### Annotation process NA #### Who are the annotators? NA ### Personal and Sensitive Information The dataset contains public filings data from SEC. ## Considerations for Using the Data ### Social Impact of Dataset Low to none. ### Discussion of Biases The dataset is about financial information of public companies and as such the tone and style of text is in line with financial literature. ### Other Known Limitations The dataset needs further cleaning for improved performance. ## Additional Information ### Licensing Information EDGAR data is publicly available. ### Shoutout Huge shoutout to [@JanosAudran](https://huggingface.co/JanosAudran) for the HF Card setup! 
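For downstream corpus work it is often convenient to recombine the per-item section fields into a single document string. A minimal sketch, assuming the flat record layout described under Data Fields; the stub record below is fabricated for illustration and contains no real filing text.

```python
# Item sections of a 10-K, in filing order (field names per the Data Fields section).
SECTION_FIELDS = [
    "section_1", "section_1A", "section_1B", "section_2", "section_3",
    "section_4", "section_5", "section_6", "section_7", "section_7A",
    "section_8", "section_9", "section_9A", "section_9B", "section_10",
    "section_11", "section_12", "section_13", "section_14", "section_15",
]

def filing_text(record):
    """Join the non-empty item sections of one record into a single string."""
    parts = (record.get(field, "") for field in SECTION_FIELDS)
    return "\n\n".join(part for part in parts if part)

stub = {
    "cik": "0000000000",
    "year": "2020",
    "section_1": "Item 1. Business ...",
    "section_1A": "",
    "section_7": "Item 7. Management's Discussion and Analysis ...",
}
doc = filing_text(stub)
print(doc.count("\n\n"))  # 1  (two non-empty sections, joined once)
```

The same function applies unchanged to records loaded via `datasets.load_dataset("eloukas/edgar-corpus", ...)`, since each row is a flat mapping from field name to string.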
## Citation If this work helps or inspires you in any way, please consider citing the relevant paper published at the [3rd Economics and Natural Language Processing (ECONLP) workshop](https://lt3.ugent.be/econlp/) at EMNLP 2021 (Punta Cana, Dominican Republic): ``` @inproceedings{loukas-etal-2021-edgar, title = "{EDGAR}-{CORPUS}: Billions of Tokens Make The World Go Round", author = "Loukas, Lefteris and Fergadiotis, Manos and Androutsopoulos, Ion and Malakasiotis, Prodromos", booktitle = "Proceedings of the Third Workshop on Economics and Natural Language Processing", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.econlp-1.2", pages = "13--18", } ``` ### References - [Research Paper] Lefteris Loukas, Manos Fergadiotis, Ion Androutsopoulos, and Prodromos Malakasiotis. EDGAR-CORPUS: Billions of Tokens Make The World Go Round. Third Workshop on Economics and Natural Language Processing (ECONLP). https://arxiv.org/abs/2109.14394 - Punta Cana, Dominican Republic, November 2021. - [Software] Lefteris Loukas, Manos Fergadiotis, Ion Androutsopoulos, and Prodromos Malakasiotis. EDGAR-CRAWLER. https://github.com/nlpaueb/edgar-crawler (2021) - [EDGAR CORPUS, but in zip files] EDGAR CORPUS: A corpus for financial NLP research, built from SEC's EDGAR. https://zenodo.org/record/5528490 (2021) - [Word Embeddings] EDGAR-W2V: Word2vec Embeddings trained on EDGAR-CORPUS. https://zenodo.org/record/5524358 (2021) - [Applied Research paper where EDGAR-CORPUS is used] Lefteris Loukas, Manos Fergadiotis, Ilias Chalkidis, Eirini Spyropoulou, Prodromos Malakasiotis, Ion Androutsopoulos, and George Paliouras. FiNER: Financial Numeric Entity Recognition for XBRL Tagging. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). https://doi.org/10.18653/v1/2022.acl-long.303 (2022)
c3po-ai/edgar-corpus
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|other", "language:en", "license:apache-2.0", "research papers", "edgar", "sec", "finance", "financial", "filings", "10K", "10-K", "nlp", "research", "econlp", "economics", "business", "arxiv:2109.14394", "region:us" ]
2023-05-10T15:07:12+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["other"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|other"], "task_categories": ["other"], "task_ids": [], "pretty_name": "EDGAR-CORPUS (10-K Filings from 1999 to 2020)", "dataset_info": [{"config_name": ".", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 40306320885, "num_examples": 220375}], "download_size": 10734208660, "dataset_size": 40306320885}, {"config_name": "full", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": 
"section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32237457024, "num_examples": 176289}, {"name": "validation", "num_bytes": 4023129683, "num_examples": 22050}, {"name": "test", "num_bytes": 4045734178, "num_examples": 22036}], "download_size": 40699852536, "dataset_size": 40306320885}, {"config_name": "year_1993", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 112714537, "num_examples": 1060}, {"name": "validation", "num_bytes": 13584432, "num_examples": 133}, {"name": "test", "num_bytes": 14520566, "num_examples": 133}], 
"download_size": 141862572, "dataset_size": 140819535}, {"config_name": "year_1994", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 198955093, "num_examples": 2083}, {"name": "validation", "num_bytes": 23432307, "num_examples": 261}, {"name": "test", "num_bytes": 26115768, "num_examples": 260}], "download_size": 250411041, "dataset_size": 248503168}, {"config_name": "year_1995", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": 
"section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 356959049, "num_examples": 4110}, {"name": "validation", "num_bytes": 42781161, "num_examples": 514}, {"name": "test", "num_bytes": 45275568, "num_examples": 514}], "download_size": 448617549, "dataset_size": 445015778}, {"config_name": "year_1996", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 738506135, "num_examples": 7589}, {"name": "validation", "num_bytes": 89873905, "num_examples": 949}, {"name": "test", "num_bytes": 91248882, "num_examples": 949}], "download_size": 926536700, "dataset_size": 919628922}, {"config_name": "year_1997", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": 
"year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 854201733, "num_examples": 8084}, {"name": "validation", "num_bytes": 103167272, "num_examples": 1011}, {"name": "test", "num_bytes": 106843950, "num_examples": 1011}], "download_size": 1071898139, "dataset_size": 1064212955}, {"config_name": "year_1998", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": 
"section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 904075497, "num_examples": 8040}, {"name": "validation", "num_bytes": 112630658, "num_examples": 1006}, {"name": "test", "num_bytes": 113308750, "num_examples": 1005}], "download_size": 1137887615, "dataset_size": 1130014905}, {"config_name": "year_1999", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 911374885, "num_examples": 7864}, {"name": "validation", "num_bytes": 118614261, "num_examples": 984}, {"name": "test", "num_bytes": 116706581, "num_examples": 983}], "download_size": 1154736765, "dataset_size": 1146695727}, {"config_name": "year_2000", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, 
{"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 926444625, "num_examples": 7589}, {"name": "validation", "num_bytes": 113264749, "num_examples": 949}, {"name": "test", "num_bytes": 114605470, "num_examples": 949}], "download_size": 1162526814, "dataset_size": 1154314844}, {"config_name": "year_2001", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], 
"splits": [{"name": "train", "num_bytes": 964631161, "num_examples": 7181}, {"name": "validation", "num_bytes": 117509010, "num_examples": 898}, {"name": "test", "num_bytes": 116141097, "num_examples": 898}], "download_size": 1207790205, "dataset_size": 1198281268}, {"config_name": "year_2002", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1049271720, "num_examples": 6636}, {"name": "validation", "num_bytes": 128339491, "num_examples": 830}, {"name": "test", "num_bytes": 128444184, "num_examples": 829}], "download_size": 1317817728, "dataset_size": 1306055395}, {"config_name": "year_2003", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, 
{"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1027557690, "num_examples": 6672}, {"name": "validation", "num_bytes": 126684704, "num_examples": 834}, {"name": "test", "num_bytes": 130672979, "num_examples": 834}], "download_size": 1297227566, "dataset_size": 1284915373}, {"config_name": "year_2004", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1129657843, "num_examples": 7111}, {"name": "validation", "num_bytes": 147499772, "num_examples": 889}, {"name": "test", 
"num_bytes": 147890092, "num_examples": 889}], "download_size": 1439663100, "dataset_size": 1425047707}, {"config_name": "year_2005", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1200714441, "num_examples": 7113}, {"name": "validation", "num_bytes": 161003977, "num_examples": 890}, {"name": "test", "num_bytes": 160727195, "num_examples": 889}], "download_size": 1538876195, "dataset_size": 1522445613}, {"config_name": "year_2006", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": 
"string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1286566049, "num_examples": 7064}, {"name": "validation", "num_bytes": 160843494, "num_examples": 883}, {"name": "test", "num_bytes": 163270601, "num_examples": 883}], "download_size": 1628452618, "dataset_size": 1610680144}, {"config_name": "year_2007", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1296737173, "num_examples": 6683}, {"name": "validation", "num_bytes": 166735560, "num_examples": 836}, {"name": "test", "num_bytes": 156399535, "num_examples": 835}], "download_size": 1637502176, "dataset_size": 1619872268}, {"config_name": "year_2008", "features": [{"name": 
"filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1525698198, "num_examples": 7408}, {"name": "validation", "num_bytes": 190034435, "num_examples": 927}, {"name": "test", "num_bytes": 187659976, "num_examples": 926}], "download_size": 1924164839, "dataset_size": 1903392609}, {"config_name": "year_2009", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", 
"dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1547816260, "num_examples": 7336}, {"name": "validation", "num_bytes": 188897783, "num_examples": 917}, {"name": "test", "num_bytes": 196463897, "num_examples": 917}], "download_size": 1954076983, "dataset_size": 1933177940}, {"config_name": "year_2010", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1493505900, "num_examples": 7013}, {"name": "validation", "num_bytes": 192695567, "num_examples": 877}, {"name": "test", "num_bytes": 191482640, "num_examples": 877}], "download_size": 1897687327, "dataset_size": 1877684107}, {"config_name": "year_2011", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": 
"section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1481486551, "num_examples": 6724}, {"name": "validation", "num_bytes": 190781558, "num_examples": 841}, {"name": "test", "num_bytes": 185869151, "num_examples": 840}], "download_size": 1877396421, "dataset_size": 1858137260}, {"config_name": "year_2012", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": 
"section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1463496224, "num_examples": 6479}, {"name": "validation", "num_bytes": 186247306, "num_examples": 810}, {"name": "test", "num_bytes": 185923601, "num_examples": 810}], "download_size": 1854377191, "dataset_size": 1835667131}, {"config_name": "year_2013", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1468172419, "num_examples": 6372}, {"name": "validation", "num_bytes": 183570866, "num_examples": 797}, {"name": "test", "num_bytes": 182495750, "num_examples": 796}], "download_size": 1852839009, "dataset_size": 1834239035}, {"config_name": "year_2014", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": 
"section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1499451593, "num_examples": 6261}, {"name": "validation", "num_bytes": 181568907, "num_examples": 783}, {"name": "test", "num_bytes": 181046535, "num_examples": 783}], "download_size": 1880963095, "dataset_size": 1862067035}, {"config_name": "year_2015", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1472346721, "num_examples": 6028}, {"name": 
"validation", "num_bytes": 180128910, "num_examples": 754}, {"name": "test", "num_bytes": 189210252, "num_examples": 753}], "download_size": 1860303134, "dataset_size": 1841685883}, {"config_name": "year_2016", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1471605426, "num_examples": 5812}, {"name": "validation", "num_bytes": 178310005, "num_examples": 727}, {"name": "test", "num_bytes": 177481471, "num_examples": 727}], "download_size": 1845967492, "dataset_size": 1827396902}, {"config_name": "year_2017", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, 
{"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1459021126, "num_examples": 5635}, {"name": "validation", "num_bytes": 174360913, "num_examples": 705}, {"name": "test", "num_bytes": 184398250, "num_examples": 704}], "download_size": 1836306408, "dataset_size": 1817780289}, {"config_name": "year_2018", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1433409319, "num_examples": 5508}, {"name": "validation", "num_bytes": 181466460, "num_examples": 689}, {"name": "test", "num_bytes": 182594965, "num_examples": 688}], "download_size": 1815810567, 
"dataset_size": 1797470744}, {"config_name": "year_2019", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": "string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1421232269, "num_examples": 5354}, {"name": "validation", "num_bytes": 175603562, "num_examples": 670}, {"name": "test", "num_bytes": 176336174, "num_examples": 669}], "download_size": 1791237155, "dataset_size": 1773172005}, {"config_name": "year_2020", "features": [{"name": "filename", "dtype": "string"}, {"name": "cik", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "section_1", "dtype": "string"}, {"name": "section_1A", "dtype": "string"}, {"name": "section_1B", "dtype": "string"}, {"name": "section_2", "dtype": "string"}, {"name": "section_3", "dtype": "string"}, {"name": "section_4", "dtype": "string"}, {"name": "section_5", "dtype": "string"}, {"name": "section_6", "dtype": "string"}, {"name": "section_7", "dtype": "string"}, {"name": "section_7A", "dtype": "string"}, {"name": "section_8", "dtype": "string"}, {"name": "section_9", "dtype": "string"}, {"name": "section_9A", "dtype": 
"string"}, {"name": "section_9B", "dtype": "string"}, {"name": "section_10", "dtype": "string"}, {"name": "section_11", "dtype": "string"}, {"name": "section_12", "dtype": "string"}, {"name": "section_13", "dtype": "string"}, {"name": "section_14", "dtype": "string"}, {"name": "section_15", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1541847387, "num_examples": 5480}, {"name": "validation", "num_bytes": 193498658, "num_examples": 686}, {"name": "test", "num_bytes": 192600298, "num_examples": 685}], "download_size": 1946916132, "dataset_size": 1927946343}], "tags": ["research papers", "edgar", "sec", "finance", "financial", "filings", "10K", "10-K", "nlp", "research", "econlp", "economics", "business"], "duplicated_from": "eloukas/edgar-corpus"}
2023-07-18T15:27:41+00:00
e62bb3d77ac4797e86ad9779ced416a33db0fe96
# Dataset Card for "LegalLAMA" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Specifications](#supported-tasks-and-leaderboards) ## Dataset Description - **Homepage:** https://github.com/coastalcph/lexlms - **Repository:** https://github.com/coastalcph/lexlms - **Paper:** https://arxiv.org/abs/2305.07507 - **Point of Contact:** [Ilias Chalkidis](mailto:[email protected]) ### Dataset Summary LegalLAMA is a diverse probing benchmark suite comprising 8 sub-tasks that aims to assess the legal knowledge that PLMs acquired during pre-training. ### Dataset Specifications | Corpus | Corpus alias | Examples | Avg. Tokens | Labels | |--------------------------------------|----------------------|-----------|-------------|--------| | Criminal Code Sections (Canada) | `canadian_sections` | 321 | 72 | 144 | | Legal Terminology (EU) | `cjeu_term` | 2,127 | 164 | 23 | | Contractual Section Titles (US) | `contract_sections` | 1,527 | 85 | 20 | | Contract Types (US) | `contract_types` | 1,089 | 150 | 15 | | ECHR Articles (CoE) | `ecthr_articles` | 5,072 | 69 | 13 | | Legal Terminology (CoE) | `ecthr_terms` | 6,803 | 97 | 250 | | Crime Charges (US) | `us_crimes` | 4,518 | 118 | 59 | | Legal Terminology (US) | `us_terms` | 5,829 | 308 | 7 | ### Usage Load a specific sub-corpus, given the corpus alias, as presented above. ```python from datasets import load_dataset dataset = load_dataset('lexlms/legal_lama', name='ecthr_terms') ``` ### Citation [*Ilias Chalkidis\*, Nicolas Garneau\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.* *LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.* *2023. In the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. 
Toronto, Canada.*](https://aclanthology.org/2023.acl-long.865/) ``` @inproceedings{chalkidis-etal-2023-lexfiles, title = "{L}e{XF}iles and {L}egal{LAMA}: Facilitating {E}nglish Multinational Legal Language Model Development", author = "Chalkidis, Ilias and Garneau, Nicolas and Goanta, Catalina and Katz, Daniel and S{\o}gaard, Anders", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-long.865", pages = "15513--15535", } ```
lexlms/legal_lama
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended", "language:en", "license:cc-by-nc-sa-4.0", "legal", "law", "arxiv:2305.07507", "region:us" ]
2023-05-10T15:07:14+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["masked-language-modeling"], "pretty_name": "LegalLAMA", "tags": ["legal", "law"]}
2023-07-24T12:13:15+00:00
0bad5f3446008d646877c6c51586383091155793
# Dataset Card for "fr-crawle-reduced" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
edanigoben/fr-crawle-reduced
[ "region:us" ]
2023-05-10T15:13:16+00:00
{"dataset_info": {"features": [{"name": "labels", "dtype": {"class_label": {"names": {"0": "business analyst", "1": "data analyst", "2": "data engineer", "3": "full stack", "4": "data scientist", "5": "software engineer", "6": "devops engineer", "7": "front end", "8": "business intelligence analyst", "9": "machine learning engineer"}}}}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13994632.751735482, "num_examples": 80000}, {"name": "val", "num_bytes": 1749329.0939669353, "num_examples": 10000}, {"name": "test", "num_bytes": 1749329.0939669353, "num_examples": 10000}], "download_size": 10098323, "dataset_size": 17493290.939669352}}
2023-05-16T13:28:16+00:00
ffbc4532d4ff1e2aaa6cb38a5df5aac400725301
# Dataset Card for "old_parquet_2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
polinaeterna/old_parquet_2
[ "region:us" ]
2023-05-10T15:19:34+00:00
{"dataset_info": {"features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 160, "num_examples": 10}], "download_size": 1371, "dataset_size": 160}}
2023-05-10T15:19:36+00:00
0138ba0b5158d2326e4287fce3fe567fa81179e8
bandoos/conll2003-mini
[ "region:us" ]
2023-05-10T16:21:47+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "\"", "1": "''", "2": "#", "3": "$", "4": "(", "5": ")", "6": ",", "7": ".", "8": ":", "9": "``", "10": "CC", "11": "CD", "12": "DT", "13": "EX", "14": "FW", "15": "IN", "16": "JJ", "17": "JJR", "18": "JJS", "19": "LS", "20": "MD", "21": "NN", "22": "NNP", "23": "NNPS", "24": "NNS", "25": "NN|SYM", "26": "PDT", "27": "POS", "28": "PRP", "29": "PRP$", "30": "RB", "31": "RBR", "32": "RBS", "33": "RP", "34": "SYM", "35": "TO", "36": "UH", "37": "VB", "38": "VBD", "39": "VBG", "40": "VBN", "41": "VBP", "42": "VBZ", "43": "WDT", "44": "WP", "45": "WP$", "46": "WRB"}}}}, {"name": "chunk_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-ADJP", "2": "I-ADJP", "3": "B-ADVP", "4": "I-ADVP", "5": "B-CONJP", "6": "I-CONJP", "7": "B-INTJ", "8": "I-INTJ", "9": "B-LST", "10": "I-LST", "11": "B-NP", "12": "I-NP", "13": "B-PP", "14": "I-PP", "15": "B-PRT", "16": "I-PRT", "17": "B-SBAR", "18": "I-SBAR", "19": "B-UCP", "20": "I-UCP", "21": "B-VP", "22": "I-VP"}}}}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC", "7": "B-MISC", "8": "I-MISC"}}}}], "config_name": "conll2003-mini", "splits": [{"name": "train", "num_bytes": 8025, "num_examples": 10}, {"name": "validation", "num_bytes": 9653, "num_examples": 10}, {"name": "test", "num_bytes": 6638, "num_examples": 10}], "download_size": 982975, "dataset_size": 24316}}
2023-05-10T17:03:13+00:00
5482190e81aaecc25a18740ce3080cdd663d4310
# so13m so13m is a dataset containing 13m discussion threads from StackOverflow. The data originate from the StackExchange data dump, covering January 2014 through December 2022. The threads cover a multitude of topics. This dataset serves as a source of natural language and (often) accompanying code in the domain of software engineering. It could help downstream tasks that depend on generating or understanding natural language. --- ## so13m file list - so13m.pkl -- a pickle file containing a dictionary of Stack Overflow posts with key = post id and value = post text - so13m.json.gz -- a compressed JSON file containing the same dictionary of Stack Overflow posts with key = post id and value = post text - stackoverflow_txtfiles.pkl -- a pickle file containing a list of the ids of the Stack Overflow posts - train.bin; val.bin -- bin files for training and fine-tuning models --- ## so13m dataset details We provide the size of our dataset in the following table: | Config | Value | | ------- | ------- | |number of tokens | 10,495,518,108| |number of Stack Overflow Posts | 13,071,148| |megabytes after processing |16,695 | We tokenize our data using scripts provided in our [github repository](https://github.com/apcl-research/jam/blob/main/data/jam_so13m/prepare_stackoverflow.py).
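As a sketch of the key = post id, value = post layout described above (the post contents below are made-up stand-ins; in practice you would open the downloaded so13m.json.gz directly):

```python
import gzip
import json

# Illustrative round-trip of the layout described above: a dictionary
# mapping post id -> Stack Overflow post text. The ids and texts here
# are invented for the example.
posts = {"101": "How do I reverse a list in Python?",
         "102": "What does a NullPointerException mean in Java?"}

# Write and re-read a tiny stand-in for so13m.json.gz.
with gzip.open("so13m_sample.json.gz", "wt", encoding="utf-8") as f:
    json.dump(posts, f)

with gzip.open("so13m_sample.json.gz", "rt", encoding="utf-8") as f:
    loaded = json.load(f)

print(len(loaded), loaded["101"])
```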
apcl/so13m
[ "task_categories:text-generation", "size_categories:10M<n<100M", "license:bigscience-openrail-m", "region:us" ]
2023-05-10T16:27:06+00:00
{"license": "bigscience-openrail-m", "size_categories": ["10M<n<100M"], "task_categories": ["text-generation"], "pretty_name": "SO13M"}
2023-05-12T04:06:30+00:00
07b6d3a88b6dc17f89f081392a6084244937fcc3
# Dataset Card for "image2music_abc" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AnyaSchen/image2music_abc
[ "region:us" ]
2023-05-10T16:29:56+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "music", "dtype": "string"}, {"name": "genre", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 439438910.011, "num_examples": 1003}], "download_size": 438955468, "dataset_size": 439438910.011}}
2023-05-10T17:29:28+00:00
d0b0ce9444c8f895d416185f4e8b0a1e805190e6
Cr0mwell/DATA
[ "license:other", "region:us" ]
2023-05-10T17:23:12+00:00
{"license": "other"}
2023-05-10T17:23:12+00:00
ea0fea25475a0df22fd4bb0d1527af7cbc444aeb
sahilur/hyper-kvasir-labeled-images
[ "task_categories:image-classification", "size_categories:1K<n<10K", "license:cc-by-4.0", "medical", "region:us" ]
2023-05-10T17:42:55+00:00
{"license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["image-classification"], "pretty_name": "hyper-kvasir-labeled-images", "tags": ["medical"]}
2023-05-11T06:15:58+00:00
1531f0922670725d9ff9cc51c9fce40b6f50a828
# Dataset Card for "ontonotes5.0-pt-harem-selective" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
arubenruben/ontonotes5.0-pt-harem-selective
[ "region:us" ]
2023-05-10T18:01:48+00:00
{"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PESSOA", "2": "I-PESSOA", "3": "B-ORGANIZACAO", "4": "I-ORGANIZACAO", "5": "B-LOCAL", "6": "I-LOCAL", "7": "B-TEMPO", "8": "I-TEMPO", "9": "B-VALOR", "10": "I-VALOR"}}}}], "splits": [{"name": "train", "num_bytes": 16511400, "num_examples": 1898}, {"name": "validation", "num_bytes": 2417378, "num_examples": 279}, {"name": "test", "num_bytes": 1564609, "num_examples": 163}], "download_size": 3181837, "dataset_size": 20493387}}
2023-05-12T09:02:20+00:00
f8c4b38f13442defbd97c27b4db00dc4b4aa1c92
# Dataset Card for "ontonotes5.0-pt-harem-default" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
arubenruben/ontonotes5.0-pt-harem-default
[ "region:us" ]
2023-05-10T18:02:05+00:00
{"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PESSOA", "2": "I-PESSOA", "3": "B-ORGANIZACAO", "4": "I-ORGANIZACAO", "5": "B-LOCAL", "6": "I-LOCAL", "7": "B-TEMPO", "8": "I-TEMPO", "9": "B-VALOR", "10": "I-VALOR", "11": "B-ABSTRACCAO", "12": "I-ABSTRACCAO", "13": "B-ACONTECIMENTO", "14": "I-ACONTECIMENTO", "15": "B-COISA", "16": "I-COISA", "17": "B-OBRA", "18": "I-OBRA", "19": "B-OUTRO", "20": "I-OUTRO"}}}}], "splits": [{"name": "train", "num_bytes": 16511400, "num_examples": 1898}, {"name": "validation", "num_bytes": 2417378, "num_examples": 279}, {"name": "test", "num_bytes": 1564609, "num_examples": 163}], "download_size": 3182791, "dataset_size": 20493387}}
2023-05-12T09:02:51+00:00
bd4020d0a4a6c0337e57e7c24ec83f17191e87fb
# Dataset Card for "imsdb-500tokenaction-movie-scripts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aneeshas/imsdb-500tokenaction-movie-scripts
[ "region:us" ]
2023-05-10T18:36:24+00:00
{"dataset_info": {"features": [{"name": "Action", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 155714, "num_examples": 336}], "download_size": 95251, "dataset_size": 155714}}
2023-05-10T18:36:27+00:00
388934b550beab017a2904b1b4c87d15c789bae6
# Dataset Card for "imsdb-500tokenhorror-movie-scripts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aneeshas/imsdb-500tokenhorror-movie-scripts
[ "region:us" ]
2023-05-10T18:36:41+00:00
{"dataset_info": {"features": [{"name": "Horror", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 76307, "num_examples": 158}], "download_size": 50645, "dataset_size": 76307}}
2023-05-10T18:36:43+00:00
6bc69a8681d2e5ad5b4a2bf46d4fadfd6df45814
# Dataset Card for "imsdb-500tokensci-fi-movie-scripts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aneeshas/imsdb-500tokensci-fi-movie-scripts
[ "region:us" ]
2023-05-10T18:37:03+00:00
{"dataset_info": {"features": [{"name": "Sci-Fi", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 82670, "num_examples": 180}], "download_size": 53226, "dataset_size": 82670}}
2023-05-10T18:37:05+00:00
cb4af104f984ab868e84142ddbb250eda6adda5d
# Dataset Card for "imsdb-500tokencomedy-movie-scripts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aneeshas/imsdb-500tokencomedy-movie-scripts
[ "region:us" ]
2023-05-10T18:37:15+00:00
{"dataset_info": {"features": [{"name": "Comedy", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 183726, "num_examples": 394}], "download_size": 112420, "dataset_size": 183726}}
2023-05-10T18:37:17+00:00
d01cc6c156488159eb377faed61a7204458d8484
# Dataset Card for "imsdb-500tokendrama-movie-scripts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aneeshas/imsdb-500tokendrama-movie-scripts
[ "region:us" ]
2023-05-10T18:37:25+00:00
{"dataset_info": {"features": [{"name": "Drama", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 307903, "num_examples": 652}], "download_size": 189402, "dataset_size": 307903}}
2023-05-10T18:37:26+00:00
aa9a18226b86bdadbb74d119b71ab9fb343fb7ee
# funcom-java-long funcom-java-long is a dataset for code summarization presented by Bansal et al. It includes 170k Java methods and summaries for training and 8,192 Java methods in the test set. This dataset is extracted from jm52m and is cleaned and deduplicated. The example prompt for fine-tuning is as follows. - TDAT: \<method code\> COMMENT: \<comment\> \<!endofdoc\> ### funcom_test.tar.gz -- a compressed directory for the test set; each text file inside is named by its function id ### funcom_train.tar.gz -- a compressed directory for the training set; each text file inside is named by its function id ### funcom_val.tar.gz -- a compressed directory for the validation set; each text file inside is named by its function id ### train.bin; val.bin -- bin files for training in the format above. ### test.bin -- bin file with the test data in the format above.
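As an illustrative sketch, the prompt layout above can be reproduced with a short helper (the method and comment strings here are made up for the example):

```python
def make_prompt(method_code: str, comment: str) -> str:
    # Format a (method code, summary) pair in the
    # "TDAT: ... COMMENT: ... <!endofdoc>" layout described above.
    return f"TDAT: {method_code} COMMENT: {comment} <!endofdoc>"

example = make_prompt(
    "public int add(int a, int b) { return a + b; }",
    "adds two integers and returns the sum",
)
print(example)
```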
apcl/funcom-java-long
[ "region:us" ]
2023-05-10T19:27:12+00:00
{}
2023-05-11T19:25:43+00:00
55f5b29dd7bb936201ba39cb273999413e3477d6
ucalyptus/TheRanveerShow
[ "license:mit", "region:us" ]
2023-05-10T19:34:05+00:00
{"license": "mit"}
2023-05-10T20:13:48+00:00
440e264ed0060e5fad5564fc196282b0a31755e4
These are the model files for [ControlNet 1.1](https://github.com/lllyasviel/ControlNet-v1-1-nightly). This model card will be filled in more detail after 1.1 is officially merged into ControlNet.
disty/seait_ControlNet1-1-modules-safetensors
[ "license:openrail", "region:us" ]
2023-05-10T19:44:18+00:00
{"license": "openrail"}
2023-05-10T20:54:10+00:00
aeb4c4194508bba5d5afe898ab7fe45445605610
# Summary This is a question-answer dataset for the Grade 12 (M6) Social subject of the Thailand Ordinary National Educational Test (ONET). The dataset was human-extracted by my team from the official release of publicly available exams by the [National Institute of Educational Testing Service](https://www.niets.or.th/th/catalog/view/630) for the years 2016-2022. The exam consists of 510 multiple-choice questions with corresponding answer keys. It is important to note that only two questions, Q71 and Q85, from the year 2018, require image interpretation, which is not available in this dataset's format. Supported Tasks: - Training LLMs - Synthetic Data Generation - Data Augmentation Languages: Thai Version: 1.0 ---
Thaweewat/onet-m6-social
[ "task_categories:question-answering", "size_categories:n<1K", "language:th", "license:cc-by-sa-3.0", "social", "instruction-finetuning", "region:us" ]
2023-05-10T20:12:45+00:00
{"language": ["th"], "license": "cc-by-sa-3.0", "size_categories": ["n<1K"], "task_categories": ["question-answering"], "pretty_name": "onet-m6", "tags": ["social", "instruction-finetuning"]}
2023-05-10T23:42:33+00:00
2bef62ccb5ea671af60bf9b9d1612ccf388fd97d
# Dataset Card for "turkishReviews-ds-mini" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AzerKBU/turkishReviews-ds-mini
[ "region:us" ]
2023-05-10T20:30:44+00:00
{"dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1252876.2642514652, "num_examples": 3378}, {"name": "validation", "num_bytes": 139455.7357485349, "num_examples": 376}], "download_size": 0, "dataset_size": 1392332.0}}
2023-05-12T19:25:07+00:00
48ac594c05f0bad72dbefd7cd927a83e7a92986f
This is the same dataset as [`dbpedia_14`](https://huggingface.co/datasets/dbpedia_14). The only differences are 1. Addition of a unique identifier, `uid` 1. Addition of the indices, that is 3 columns with the embeddings of 3 different sentence-transformers - `all-mpnet-base-v2` - `multi-qa-mpnet-base-dot-v1` - `all-MiniLM-L12-v2` 1. Renaming of the `label` column to `labels` for easier compatibility with the transformers library
pietrolesci/dbpedia_14_indexed
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "region:us" ]
2023-05-10T21:11:57+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["topic-classification"], "paperswithcode_id": "dbpedia", "pretty_name": "DBpedia", "dataset_info": {"features": [{"name": "labels", "dtype": {"class_label": {"names": {"0": "Company", "1": "EducationalInstitution", "2": "Artist", "3": "Athlete", "4": "OfficeHolder", "5": "MeanOfTransportation", "6": "Building", "7": "NaturalPlace", "8": "Village", "9": "Animal", "10": "Plant", "11": "Album", "12": "Film", "13": "WrittenWork"}}}}, {"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "uid", "dtype": "int64"}, {"name": "embedding_all-mpnet-base-v2", "sequence": "float32"}, {"name": "embedding_multi-qa-mpnet-base-dot-v1", "sequence": "float32"}, {"name": "embedding_all-MiniLM-L12-v2", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 4490428970, "num_examples": 560000}, {"name": "test", "num_bytes": 561310285, "num_examples": 70000}], "download_size": 0, "dataset_size": 5051739255}}
2023-05-11T12:34:45+00:00
4e885ac286270259ce8802257868e240c9ee62f1
rihoyn/atesting
[ "license:osl-3.0", "region:us" ]
2023-05-10T22:07:53+00:00
{"license": "osl-3.0"}
2023-05-10T22:10:33+00:00
e2bc28fcea70d805c9aa3d9f6845e9652cf42078
h2oai/dai_docs
[ "license:other", "region:us" ]
2023-05-10T22:19:06+00:00
{"license": "other"}
2023-05-15T16:55:59+00:00
cc92bbf8b8286c51259020cb25bdd47263a306f5
XNLI consists of 10k English sentences translated into 14 languages: ar: Arabic bg: Bulgarian de: German el: Greek es: Spanish fr: French hi: Hindi ru: Russian sw: Swahili th: Thai tr: Turkish ur: Urdu vi: Vietnamese zh: Chinese (Simplified) The XNLI 15-way parallel corpus can be used for Machine Translation as evaluation sets, in particular for low-resource languages such as Swahili or Urdu. We provide two files: xnli.15way.orig.tsv and xnli.15way.tok.tsv containing respectively the original and the tokenized version of the corpus. The files consist of 15 tab-separated columns, each corresponding to one language as indicated by the header. Please consider citing the following paper if using this dataset: @InProceedings{conneau2018xnli, author = "Conneau, Alexis and Rinott, Ruty and Lample, Guillaume and Williams, Adina and Bowman, Samuel R. and Schwenk, Holger and Stoyanov, Veselin", title = "XNLI: Evaluating Cross-lingual Sentence Representations", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", year = "2018", publisher = "Association for Computational Linguistics", location = "Brussels, Belgium", }
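A minimal sketch of reading the 15-column tab-separated format described above (shown on a tiny in-memory sample carrying only three of the language columns; the real files have one column per language as given in their header):

```python
import csv
import io

# Tiny stand-in for xnli.15way.orig.tsv: a header row of language codes,
# then one tab-separated row per parallel sentence.
sample = "en\tfr\tde\nHello .\tBonjour .\tHallo .\n"

reader = csv.DictReader(io.StringIO(sample), delimiter="\t")
rows = list(reader)
print(rows[0]["fr"])  # the French side of the first sentence
```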
alvations/xnli-15way
[ "region:us" ]
2023-05-10T22:19:18+00:00
{}
2023-05-10T22:20:01+00:00
6a76f637b03278dc286ea6bd3e70df5fdfc8bbf4
# Summary This is a 🇹🇭 Thai-translated dataset based on [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) using Google Cloud Translation. This repository provides access to: - 161K Train dataset Anthropic/hh-rlhf (Thai-translated) - (Soon) 8K Test dataset Anthropic/hh-rlhf (Thai-translated) **Disclaimer:** The data contain content that may be offensive or upsetting. Topics include, but are not limited to, discriminatory language and discussions of abuse, violence, self-harm, exploitation, and other potentially upsetting subject matter. Please only engage with the data in accordance with your own personal risk tolerance. The data are intended for research purposes, especially research that can make models less harmful. The views expressed in the data do not reflect the views of Anthropic or any of its employees.
Thaweewat/hh-rlhf-th
[ "size_categories:100K<n<1M", "language:th", "license:mit", "human-feedback", "region:us" ]
2023-05-10T22:31:18+00:00
{"language": ["th"], "license": "mit", "size_categories": ["100K<n<1M"], "tags": ["human-feedback"]}
2023-05-10T22:44:07+00:00
664b653d4a0216cb3ee47abfd21158d2a314d5ec
# Dataset Card for "open-instruct-v1-oasst-dolly-hhrlhf" This dataset is a combination of: 1. Filtered subset of [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) 2. train split of [Mosaic-dolly-hhrlhf](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) (which consists of [Databrick's dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset and a filtered subset of [Anthropic's HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf)). ## Dataset The dataset consists of 3 columns: 1. instruction: the natural language instruction without any prompt templates (we extracted them out of the alpaca format in Mosaic-dolly-hhrlhf) 2. alpaca_prompt: the Alpaca prompt-template version of the instruction 3. response: the response to the instruction ## License - It is usable for commercial purposes so long as you follow the terms of the license. - Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license: - Wikipedia (various pages) - https://www.wikipedia.org/ - Copyright © Wikipedia editors and contributors. - Databricks (https://www.databricks.com) - Copyright © Databricks - Mosaic ML (https://www.mosaicml.com/) - Copyright © Mosaic ML - VMware - Copyright © VMware [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
VMware/open-instruct-v1-oasst-dolly-hhrlhf
[ "language:en", "region:us" ]
2023-05-10T22:36:12+00:00
{"language": "en", "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "alpaca_prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 60252132, "num_examples": 62971}], "download_size": 33232110, "dataset_size": 60252132}}
2023-07-13T13:21:14+00:00
8a0f1358edab233dbdb9553b772a4018f56af405
This dataset is https://github.com/teknium1/GPTeacher unfiltered, removing 1489 instances of blatant alignment. 23073 instructions remain. https://github.com/teknium1/GPTeacher/blob/8afcaaa7a11dd980162d861bd6be970f95eb7174/Codegen/codegen-instruct.json https://github.com/teknium1/GPTeacher/blob/e3b7aba886c6c0c8ad30a650edfa7a3093fbf57c/Instruct/gpt4-instruct-dedupe-only-dataset.json https://github.com/teknium1/GPTeacher/blob/5b040645528a38bfa81a258e7646f8c92ad7d0dd/Roleplay/roleplay-simple-deduped-roleplay-instruct.json I combined all of the files above into gpteacher.json and ran clean.py on it. The normal dedupe.py script didn't find any duplicates here. Inspired by https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered All credit to anon8231489123 for the cleanup script that I adapted to wizardlm_clean.py; I then took that script and adapted it to clean.py
ewof/gpteacher-unfiltered
[ "region:us" ]
2023-05-10T22:49:06+00:00
{}
2023-05-13T02:54:31+00:00
21ffd88b35043a4b3bd3569e442adbd7874efe9c
# Dataset Card for "covidqa" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Sree1994/covidqa
[ "region:us" ]
2023-05-10T23:30:49+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}, {"name": "question", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 48653509, "num_examples": 1417}, {"name": "test", "num_bytes": 11608421, "num_examples": 375}, {"name": "valid", "num_bytes": 4314598, "num_examples": 203}], "download_size": 2241429, "dataset_size": 64576528}}
2023-05-13T15:17:01+00:00
aafc8f18d6f543e30d4ac656a699b926bafa5dd7
This dataset is HuggingFaceH4/CodeAlpaca_20K unfiltered, removing 36 instances of blatant alignment. 19986 instructions remain. https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K/blob/29ba7b7fdf0c55e5435c848cf6bbf9782fef62a6/data/test-00000-of-00001.parquet https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K/blob/a123ae447f02484d83c3457438b4422cd8417ad5/data/train-00000-of-00001.parquet I combined the files above into code_alpaca_data.jsonl with parquet2json and ran clean.py on it, then renamed the file to .json (not .jsonl). The normal dedupe.py script didn't find any duplicates here. Inspired by https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered All credit to anon8231489123 for the cleanup script that I adapted to wizardlm_clean.py; I then took that script and adapted it to clean.py
ewof/code-alpaca-instruct-unfiltered
[ "region:us" ]
2023-05-10T23:52:51+00:00
{}
2023-05-13T02:53:37+00:00
e78a2eff422c20d1196dd7216093eb9f6fcd0c70
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** https://arxiv.org/abs/2209.00465 - **Leaderboard:** - **Point of Contact:** [email protected] ### Dataset Summary This **G-PlanET** dataset is built on AI2 [ALFRED](https://leaderboard.allenai.org/alfred/submissions/get-started). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
yuchenlin/G-PlanET
[ "task_categories:text-generation", "task_categories:table-to-text", "task_categories:table-question-answering", "language:en", "license:apache-2.0", "arxiv:2209.00465", "region:us" ]
2023-05-10T23:54:50+00:00
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-generation", "table-to-text", "table-question-answering"]}
2023-07-15T06:33:33+00:00
4a10bae703bba188cbbcec5183679ec77e0614be
daasd/model_json
[ "region:us" ]
2023-05-10T23:59:47+00:00
{}
2023-05-11T19:58:21+00:00
75b1fbf07ff3470634765d469d9d177e03918fd7
This dataset is Dahoas/synthetic-instruct-gptj-pairwise, removing 2453 instances of blatant alignment and 44 duplicates. 30646 instructions remain. https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise/blob/987e7db3ef7e68c93950bc61edc10ab9b7c239ef/data/train-00000-of-00001-1e5d57b93c448e7a.parquet I converted the parquet file above to JSONL with parquet2json and ran clean.py on it, then renamed the file to .json (not .jsonl). Inspired by https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered All credit to anon8231489123 for the cleanup script that I adapted to wizardlm_clean.py; I then took that script and adapted it to clean.py
ewof/synthetic-instruct-unfiltered-deduped
[ "region:us" ]
2023-05-11T00:02:30+00:00
{}
2023-05-13T02:54:41+00:00
9f1b7bfaa3c7fa609300ee2c8ca34f4efc40f7ab
# Dataset Card for "globalvoices-de-es" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
alvations/globalvoices-de-es
[ "region:us" ]
2023-05-11T00:08:14+00:00
{"dataset_info": {"features": [{"name": "de", "dtype": "string"}, {"name": "es", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18269102, "num_examples": 66782}], "download_size": 11604428, "dataset_size": 18269102}}
2023-05-11T00:14:27+00:00
0c6c09c2ae99b59404ac91b3010e28d8823cf7ac
# Dataset Card for "globalvoices-de-en" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
alvations/globalvoices-de-en
[ "region:us" ]
2023-05-11T00:14:27+00:00
{"dataset_info": {"features": [{"name": "de", "dtype": "string"}, {"name": "en", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18073774, "num_examples": 70547}], "download_size": 11610471, "dataset_size": 18073774}}
2023-05-11T00:14:29+00:00
cc493a886b3cfc7003bee3d407b09ba3dc076ce2
# Dataset Card for "globalvoices-en-es" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
alvations/globalvoices-en-es
[ "region:us" ]
2023-05-11T00:14:29+00:00
{"dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "es", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 89033765, "num_examples": 355136}], "download_size": 57678468, "dataset_size": 89033765}}
2023-05-11T00:14:34+00:00
e68e5ea09b38b52df3d06500cbf5850598759946
# Dataset Card for "Recommended_Proceeding" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Yamei/Recommended_Proceeding
[ "region:us" ]
2023-05-11T00:32:27+00:00
{"dataset_info": {"features": [{"name": "data", "struct": [{"name": "proceeding", "struct": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "acronym", "dtype": "string"}, {"name": "groupId", "dtype": "string"}, {"name": "volume", "dtype": "string"}, {"name": "displayVolume", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "article", "struct": [{"name": "id", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "normalizedTitle", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "abstracts", "list": [{"name": "abstractType", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "normalizedAbstract", "dtype": "string"}, {"name": "fno", "dtype": "string"}, {"name": "keywords", "list": "string"}, {"name": "authors", "list": [{"name": "affiliation", "dtype": "string"}, {"name": "fullName", "dtype": "string"}, {"name": "givenName", "dtype": "string"}, {"name": "surname", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "idPrefix", "dtype": "string"}, {"name": "isOpenAccess", "dtype": "bool"}, {"name": "showRecommendedArticles", "dtype": "bool"}, {"name": "showBuyMe", "dtype": "bool"}, {"name": "hasPdf", "dtype": "bool"}, {"name": "pubDate", "dtype": "timestamp[s]"}, {"name": "pubType", "dtype": "string"}, {"name": "pages", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "issn", "dtype": "string"}, {"name": "isbn", "dtype": "string"}, {"name": "notes", "dtype": "string"}, {"name": "notesType", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "webExtras", "list": [{"name": "id", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "size", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": 
"adjacentArticles", "struct": [{"name": "previous", "struct": [{"name": "fno", "dtype": "string"}, {"name": "articleId", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "next", "struct": [{"name": "fno", "dtype": "string"}, {"name": "articleId", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "__typename", "dtype": "string"}]}, {"name": "recommendedArticles", "list": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "abstractUrl", "dtype": "string"}, {"name": "parentPublication", "struct": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "__typename", "dtype": "string"}]}, {"name": "articleVideos", "list": [{"name": "id", "dtype": "string"}, {"name": "videoExt", "dtype": "string"}, {"name": "videoType", "struct": [{"name": "featured", "dtype": "bool"}, {"name": "recommended", "dtype": "bool"}, {"name": "sponsored", "dtype": "bool"}, {"name": "__typename", "dtype": "string"}]}, {"name": "article", "struct": [{"name": "id", "dtype": "string"}, {"name": "fno", "dtype": "string"}, {"name": "issueNum", "dtype": "string"}, {"name": "pubType", "dtype": "string"}, {"name": "volume", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "idPrefix", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "channel", "struct": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "featured", "dtype": "bool"}, {"name": "defaultVideoId", "dtype": "string"}, {"name": "category", "struct": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "__typename", "dtype": "string"}]}, {"name": "year", "dtype": 
"string"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "keywords", "list": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "speakers", "list": [{"name": "firstName", "dtype": "string"}, {"name": "lastName", "dtype": "string"}, {"name": "affiliation", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "created", "dtype": "timestamp[s]"}, {"name": "updated", "dtype": "timestamp[s]"}, {"name": "imageThumbnailUrl", "dtype": "string"}, {"name": "runningTime", "dtype": "string"}, {"name": "aspectRatio", "dtype": "string"}, {"name": "metrics", "struct": [{"name": "views", "dtype": "string"}, {"name": "likes", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "notShowInVideoLib", "dtype": "bool"}, {"name": "__typename", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 154207098, "num_examples": 21043}], "download_size": 62572749, "dataset_size": 154207098}}
2023-05-11T17:27:02+00:00
b0e88c490b0ca3f82f31a1316b53aae30b4e88e9
# Dataset Card for "Recommended_Issue" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Yamei/Recommended_Issue
[ "region:us" ]
2023-05-11T00:40:45+00:00
{"dataset_info": {"features": [{"name": "data", "struct": [{"name": "issue", "struct": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "issueNum", "dtype": "string"}, {"name": "idPrefix", "dtype": "string"}, {"name": "pubType", "dtype": "string"}, {"name": "volume", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "downloadables", "struct": [{"name": "hasCover", "dtype": "bool"}, {"name": "__typename", "dtype": "string"}]}, {"name": "__typename", "dtype": "string"}]}, {"name": "article", "struct": [{"name": "id", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "abstracts", "list": [{"name": "abstractType", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "normalizedAbstract", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "normalizedTitle", "dtype": "string"}, {"name": "fno", "dtype": "string"}, {"name": "hasPdf", "dtype": "bool"}, {"name": "idPrefix", "dtype": "string"}, {"name": "keywords", "list": "string"}, {"name": "authors", "list": [{"name": "givenName", "dtype": "string"}, {"name": "surname", "dtype": "string"}, {"name": "fullName", "dtype": "string"}, {"name": "affiliation", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "replicability", "struct": [{"name": "isEnabled", "dtype": "bool"}, {"name": "codeDownloadUrl", "dtype": "string"}, {"name": "codeRepositoryUrl", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "showBuyMe", "dtype": "bool"}, {"name": "showRecommendedArticles", "dtype": "bool"}, {"name": "isOpenAccess", "dtype": "bool"}, {"name": "issueNum", "dtype": "string"}, {"name": "pubDate", "dtype": "string"}, {"name": "pubType", "dtype": "string"}, {"name": "pages", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "issn", "dtype": "string"}, {"name": 
"isbn", "dtype": "string"}, {"name": "notes", "dtype": "string"}, {"name": "notesType", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "recommendedArticles", "list": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "abstractUrl", "dtype": "string"}, {"name": "parentPublication", "struct": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "__typename", "dtype": "string"}]}, {"name": "adjacentArticles", "struct": [{"name": "previous", "struct": [{"name": "fno", "dtype": "string"}, {"name": "articleId", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "next", "struct": [{"name": "fno", "dtype": "string"}, {"name": "articleId", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "__typename", "dtype": "string"}]}, {"name": "webExtras", "list": [{"name": "id", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "extension", "dtype": "string"}, {"name": "size", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "articleVideos", "list": [{"name": "id", "dtype": "string"}, {"name": "videoExt", "dtype": "string"}, {"name": "videoType", "struct": [{"name": "featured", "dtype": "bool"}, {"name": "recommended", "dtype": "bool"}, {"name": "sponsored", "dtype": "bool"}, {"name": "__typename", "dtype": "string"}]}, {"name": "article", "struct": [{"name": "id", "dtype": "string"}, {"name": "fno", "dtype": "string"}, {"name": "issueNum", "dtype": "string"}, {"name": "pubType", "dtype": "string"}, {"name": "volume", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "idPrefix", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "channel", "struct": [{"name": "id", "dtype": 
"string"}, {"name": "title", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "featured", "dtype": "bool"}, {"name": "defaultVideoId", "dtype": "string"}, {"name": "category", "struct": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "__typename", "dtype": "string"}]}, {"name": "year", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "keywords", "list": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "speakers", "list": [{"name": "firstName", "dtype": "string"}, {"name": "lastName", "dtype": "string"}, {"name": "affiliation", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "created", "dtype": "timestamp[s]"}, {"name": "updated", "dtype": "timestamp[s]"}, {"name": "imageThumbnailUrl", "dtype": "string"}, {"name": "runningTime", "dtype": "string"}, {"name": "aspectRatio", "dtype": "string"}, {"name": "metrics", "struct": [{"name": "views", "dtype": "string"}, {"name": "likes", "dtype": "string"}, {"name": "__typename", "dtype": "string"}]}, {"name": "notShowInVideoLib", "dtype": "bool"}, {"name": "__typename", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 51563733, "num_examples": 6791}], "download_size": 21059507, "dataset_size": 51563733}}
2023-05-11T00:41:10+00:00
65dca657db7222c12721b7f2156a28730bb7c327
clone from https://huggingface.co/datasets/lj_speech and supports setting data_files # use ```python dataset = load_dataset('SeanSleat/lj_speech', data_files='/path/to/LJSpeech-1.1.tar.bz2') ```
SeanSleat/lj_speech
[ "region:us" ]
2023-05-11T00:41:22+00:00
{}
2023-05-11T00:52:11+00:00
20d04cd6a3078eb7cdd968a8c4833f44952a0ad6
StefanH/utcd
[ "license:mit", "region:us" ]
2023-05-11T00:46:22+00:00
{"license": "mit"}
2023-05-11T14:35:46+00:00
4a0b7133bd88521ae878677e2bbad7c4a0245e1b
reply1 = "What is your question?" print(reply1) user_response = input() reply2 = "What do you want the answer to be?" print(reply2) user_response = input() reply3 = "That is the answer, as your intuition is simply what you want. What is your next question?" print(reply3)
Holycanolies123/Bittt
[ "region:us" ]
2023-05-11T01:05:19+00:00
{}
2023-05-11T01:05:31+00:00
d1b444ae606c9ac87296f538ef3cce2f8aa9dd66
# flower dataset Used for testing mmlab training
jobcher/flower-datasets
[ "license:mit", "region:us" ]
2023-05-11T01:29:36+00:00
{"license": "mit"}
2023-05-11T01:51:40+00:00
2a7ebbeebf8fff2a36196ac073363bb098719543
zhenyu123/test-ds
[ "region:us" ]
2023-05-11T02:13:27+00:00
{}
2023-05-11T02:13:40+00:00
ab1cb9fa3820a1c1e79de4fad5bfda81404d2f87
# Dataset Card for "sidewalk-imagery12" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
G12345/sidewalk-imagery12
[ "region:us" ]
2023-05-11T02:29:22+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3139971.0, "num_examples": 10}], "download_size": 3141481, "dataset_size": 3139971.0}}
2023-05-11T02:56:16+00:00
835e07ebcafb48994a06bdb1cc186d31ac85fb1a
# Dataset Card for "processed_hbert_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
artem9k/processed_hbert_dataset
[ "region:us" ]
2023-05-11T02:29:54+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 2700406800.0, "num_examples": 750113}], "download_size": 716008415, "dataset_size": 2700406800.0}}
2023-05-11T02:39:47+00:00
93e34b51a9eb50ce0542d6d55cbf2f8543cbdce4
tgokhale/vqa_lol
[ "license:cc-by-nc-nd-4.0", "region:us" ]
2023-05-11T02:31:29+00:00
{"license": "cc-by-nc-nd-4.0"}
2023-05-11T03:50:32+00:00
9d78d94400c083a5dacea69ea3bf9338a892b7e5
# Dataset Card for "sdg-single" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dspoka/sdg-single
[ "region:us" ]
2023-05-11T02:55:36+00:00
{"dataset_info": {"features": [{"name": "iso3", "dtype": "string"}, {"name": "country", "dtype": "string"}, {"name": "goal", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "sector", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "infotype", "dtype": "string"}, {"name": "start", "dtype": "float64"}, {"name": "end", "dtype": "float64"}, {"name": "filename", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "full", "num_bytes": 4297968, "num_examples": 14219}], "download_size": 0, "dataset_size": 4297968}}
2023-05-15T04:14:42+00:00
7a661fb158f56206b20b246ce8b78e7ed4b5d888
# Dataset Card for "texto_osal_mexico_tag" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
agusnieto77/texto_osal_mexico_tag
[ "region:us" ]
2023-05-11T03:21:25+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "prediction", "dtype": "null"}, {"name": "prediction_agent", "dtype": "null"}, {"name": "annotation", "list": [{"name": "end", "dtype": "int64"}, {"name": "label", "dtype": "string"}, {"name": "start", "dtype": "int64"}]}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "annotated", "struct": [{"name": "mentions", "list": [{"name": "capitalness", "dtype": "string"}, {"name": "chars_length", "dtype": "int64"}, {"name": "density", "dtype": "float64"}, {"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "tokens_length", "dtype": "int64"}, {"name": "value", "dtype": "string"}]}, {"name": "tags", "list": [{"name": "tag", "dtype": "string"}, {"name": "value", "dtype": "string"}]}]}, {"name": "predicted", "struct": [{"name": "mentions", "sequence": "null"}, {"name": "tags", "sequence": "null"}]}, {"name": "text_length", "dtype": "int64"}, {"name": "tokens", "list": [{"name": "capitalness", "dtype": "string"}, {"name": "char_end", "dtype": "int64"}, {"name": "char_start", "dtype": "int64"}, {"name": "custom", "dtype": "null"}, {"name": "idx", "dtype": "int64"}, {"name": "length", "dtype": "int64"}, {"name": "score", "dtype": "null"}, {"name": "tag", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "tokens_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 78790, "num_examples": 20}], "download_size": 40720, "dataset_size": 78790}}
2023-05-11T03:45:30+00:00