Column statistics (values are string lengths for string columns and element counts for list columns):

| Column | Type | Min | Max |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | list | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | list | 0 | 25 |
| languages | list | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | list | 0 | 352 |
| processed_texts | list | 1 | 353 |
| tokens_length | list | 1 | 353 |
| input_texts | list | 1 | 40 |
e5138d3fa48bc2d05f9c3f125f1244d721d9d40a
# Test instruction backtranslation

This is the dataset I obtained by applying [instruction backtranslation](https://github.com/bigcode-project/bigcode-finetuning/tree/main/instruction_backtranslation) (just the `self-curation` part, no `self-augmentation`). The model used for curation is StarCoder, fine-tuned on OpenAssistant-guanaco. Here is the command:

```
python3 -u -m torch.distributed.run main.py \
  --model_name_or_path=bigcode/starcoder \
  --dataset_name_or_path=ArmelR/oasst1_guanaco \
  --shuffle_buffer 100 \
  --seq_length 2048 \
  --max_steps 160 \
  --batch_size 1 \
  --dialogue_template_name standard \
  --input_column_name=prompt \
  --output_column_name=completion \
  --num_workers 1 \
  --gradient_accumulation_steps 4 \
  --learning_rate 5e-5 \
  --lr_scheduler_type=cosine \
  --log_freq 5 \
  --eval_freq 10 \
  --num_warmup_steps 10 \
  --save_freq 20 \
  --weight_decay 0.05 \
  --save_total_limit 3 \
  --output_dir=./checkpoints \
  --synthetic_data_path=./data/nuprl_cleaned.jsonl \
  --dataset_text_field=content \
  --request_batch_size 16 \
  --max_new_tokens 512 \
  --temperature 0.7 \
  --top_p 0.9 \
  --num_beams 1 \
  --repetition_penalty 1.2 \
  --number_of_rounds 5 \
  --max_samples 157767 \
  --curation_model_name_or_path bigcode/starcoder \
  --do_self_curation
```
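A minimal sketch of loading the curated output with the `datasets` library, assuming the split names (`before_cleaning`, `round_0` to `round_3`) and the `prompt`/`completion`/`score` columns listed in the metadata below; the score threshold is only illustrative:

```python
from datasets import load_dataset

# Each self-curation round is stored as its own split; "score" is the rating
# assigned by the curation model (StarCoder fine-tuned on OpenAssistant-guanaco).
ds = load_dataset("ArmelR/test_instruction_backtranslation", split="round_0")
print(ds[0]["prompt"][:200])
print(ds[0]["score"])

# Keep only highly rated prompt/completion pairs (threshold chosen for illustration).
curated = ds.filter(lambda ex: ex["score"] >= 4)
print(len(ds), "->", len(curated))
```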
ArmelR/test_instruction_backtranslation
[ "region:us" ]
2023-09-28T08:45:30+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "before_cleaning", "num_bytes": 58053871, "num_examples": 71968}, {"name": "round_0", "num_bytes": 58053871, "num_examples": 71968}, {"name": "round_1", "num_bytes": 58053871, "num_examples": 71968}, {"name": "round_2", "num_bytes": 58053871, "num_examples": 71968}, {"name": "round_3", "num_bytes": 58053871, "num_examples": 71968}], "download_size": 144722712, "dataset_size": 290269355}}
2023-09-28T08:54:30+00:00
[]
[]
7b67447c5486d070500c9f8bb1f5408e4e5bf118
# Dataset Card for "oasst1_th" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Thaweewat/oasst1_th
[ "language:th", "region:us" ]
2023-09-28T08:52:59+00:00
{"language": ["th"], "dataset_info": [{"config_name": "default", "features": [{"name": "message_id", "dtype": "string"}, {"name": "parent_id", "dtype": "string"}, {"name": "user_id", "dtype": "string"}, {"name": "created_date", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "text_th", "dtype": "string"}, {"name": "role", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "review_count", "dtype": "int32"}, {"name": "review_result", "dtype": "bool"}, {"name": "deleted", "dtype": "bool"}, {"name": "rank", "dtype": "float64"}, {"name": "synthetic", "dtype": "bool"}, {"name": "model_name", "dtype": "null"}, {"name": "detoxify", "struct": [{"name": "identity_attack", "dtype": "float64"}, {"name": "insult", "dtype": "float64"}, {"name": "obscene", "dtype": "float64"}, {"name": "severe_toxicity", "dtype": "float64"}, {"name": "sexual_explicit", "dtype": "float64"}, {"name": "threat", "dtype": "float64"}, {"name": "toxicity", "dtype": "float64"}]}, {"name": "message_tree_id", "dtype": "string"}, {"name": "tree_state", "dtype": "string"}, {"name": "emojis", "struct": [{"name": "count", "sequence": "int32"}, {"name": "name", "sequence": "string"}]}, {"name": "labels", "struct": [{"name": "count", "sequence": "int32"}, {"name": "name", "sequence": "string"}, {"name": "value", "sequence": "float64"}]}], "splits": [{"name": "train", "num_bytes": 10381992, "num_examples": 4401}], "download_size": 0, "dataset_size": 10381992}, {"config_name": "train", "features": [{"name": "message_id", "dtype": "string"}, {"name": "parent_id", "dtype": "string"}, {"name": "user_id", "dtype": "string"}, {"name": "created_date", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "text_th", "dtype": "string"}, {"name": "role", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "review_count", "dtype": "int32"}, {"name": "review_result", "dtype": "bool"}, {"name": "deleted", "dtype": "bool"}, {"name": "rank", "dtype": "float64"}, {"name": "synthetic", "dtype": "bool"}, {"name": "model_name", "dtype": "null"}, {"name": "detoxify", "struct": [{"name": "identity_attack", "dtype": "float64"}, {"name": "insult", "dtype": "float64"}, {"name": "obscene", "dtype": "float64"}, {"name": "severe_toxicity", "dtype": "float64"}, {"name": "sexual_explicit", "dtype": "float64"}, {"name": "threat", "dtype": "float64"}, {"name": "toxicity", "dtype": "float64"}]}, {"name": "message_tree_id", "dtype": "string"}, {"name": "tree_state", "dtype": "string"}, {"name": "emojis", "struct": [{"name": "count", "sequence": "int32"}, {"name": "name", "sequence": "string"}]}, {"name": "labels", "struct": [{"name": "count", "sequence": "int32"}, {"name": "name", "sequence": "string"}, {"name": "value", "sequence": "float64"}]}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 200135278, "num_examples": 84437}], "download_size": 75167235, "dataset_size": 200135278}, {"config_name": "val", "features": [{"name": "message_id", "dtype": "string"}, {"name": "parent_id", "dtype": "string"}, {"name": "user_id", "dtype": "string"}, {"name": "created_date", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "text_th", "dtype": "string"}, {"name": "role", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "review_count", "dtype": "int32"}, {"name": "review_result", "dtype": "bool"}, {"name": "deleted", "dtype": "bool"}, {"name": "rank", "dtype": "float64"}, {"name": "synthetic", "dtype": "bool"}, {"name": "model_name", "dtype": 
"null"}, {"name": "detoxify", "struct": [{"name": "identity_attack", "dtype": "float64"}, {"name": "insult", "dtype": "float64"}, {"name": "obscene", "dtype": "float64"}, {"name": "severe_toxicity", "dtype": "float64"}, {"name": "sexual_explicit", "dtype": "float64"}, {"name": "threat", "dtype": "float64"}, {"name": "toxicity", "dtype": "float64"}]}, {"name": "message_tree_id", "dtype": "string"}, {"name": "tree_state", "dtype": "string"}, {"name": "emojis", "struct": [{"name": "count", "sequence": "int32"}, {"name": "name", "sequence": "string"}]}, {"name": "labels", "struct": [{"name": "count", "sequence": "int32"}, {"name": "name", "sequence": "string"}, {"name": "value", "sequence": "float64"}]}], "splits": [{"name": "train", "num_bytes": 10381992, "num_examples": 4401}], "download_size": 3907352, "dataset_size": 10381992}], "configs": [{"config_name": "train", "data_files": [{"split": "train", "path": "train/train-*"}]}, {"config_name": "val", "data_files": [{"split": "train", "path": "val/train-*"}]}]}
2023-10-08T06:13:36+00:00
[]
[ "th" ]
0a0c61042c49f3c33e34a28c3da1146477bd96ad
# Dataset Card for "khang_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
khangmacon/khang_test
[ "region:us" ]
2023-09-28T08:55:38+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3034036.0, "num_examples": 1030}, {"name": "test", "num_bytes": 74437.0, "num_examples": 300}], "download_size": 1766705, "dataset_size": 3108473.0}}
2023-09-30T11:47:25+00:00
[]
[]
256d36b42ad198b2295b5d56831f123175926905
# Dataset Card for "cream_listings" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Binaryy/cream_listings
[ "region:us" ]
2023-09-28T09:07:01+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "features", "sequence": "string"}, {"name": "description", "dtype": "string"}, {"name": "images", "sequence": "string"}, {"name": "videos", "sequence": "string"}, {"name": "available", "dtype": "bool"}, {"name": "price", "dtype": "float64"}, {"name": "attachedDocument", "sequence": "null"}, {"name": "year", "dtype": "int64"}, {"name": "carCondition", "dtype": "string"}, {"name": "engineType", "dtype": "string"}, {"name": "colour", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "noOfBed", "dtype": "float64"}, {"name": "noOfBathroom", "dtype": "float64"}, {"name": "locationISO", "dtype": "string"}, {"name": "forRent", "dtype": "bool"}, {"name": "views", "sequence": "string"}, {"name": "thoseWhoSaved", "sequence": "string"}, {"name": "createdAt", "dtype": "string"}, {"name": "updatedAt", "dtype": "string"}, {"name": "__v", "dtype": "int64"}, {"name": "category._id", "dtype": "string"}, {"name": "category.title", "dtype": "string"}, {"name": "category.slug", "dtype": "string"}, {"name": "category.isAdminAllowed", "dtype": "string"}, {"name": "category.createdAt", "dtype": "string"}, {"name": "category.updatedAt", "dtype": "string"}, {"name": "category.__v", "dtype": "int64"}, {"name": "postedBy.pageViews.value", "dtype": "int64"}, {"name": "postedBy.pageViews.users", "sequence": "null"}, {"name": "postedBy.totalSaved.value", "dtype": "int64"}, {"name": "postedBy.totalSaved.users", "sequence": "string"}, {"name": "postedBy._id", "dtype": "string"}, {"name": "postedBy.firstName", "dtype": "string"}, {"name": "postedBy.lastName", "dtype": "string"}, {"name": "postedBy.about", "dtype": "string"}, {"name": "postedBy.cover", "dtype": "string"}, {"name": "postedBy.email", "dtype": "string"}, {"name": "postedBy.password", "dtype": "string"}, {"name": "postedBy.isAdmin", "dtype": "bool"}, {"name": "postedBy.savedListing", "sequence": "string"}, {"name": "postedBy.isVerified", "dtype": "bool"}, {"name": "postedBy.verifiedProfilePicture", "dtype": "float64"}, {"name": "postedBy.profilePicture", "dtype": "string"}, {"name": "postedBy.pronoun", "dtype": "float64"}, {"name": "postedBy.userType", "dtype": "int64"}, {"name": "postedBy.accountType", "dtype": "int64"}, {"name": "postedBy.subscribed", "dtype": "bool"}, {"name": "postedBy.noOfSubscription", "dtype": "int64"}, {"name": "postedBy.totalListing", "dtype": "int64"}, {"name": "postedBy.sellerType", "dtype": "int64"}, {"name": "postedBy.createdAt", "dtype": "string"}, {"name": "postedBy.updatedAt", "dtype": "string"}, {"name": "postedBy.__v", "dtype": "int64"}, {"name": "postedBy.address", "dtype": "string"}, {"name": "postedBy.city", "dtype": "string"}, {"name": "postedBy.country", "dtype": "string"}, {"name": "postedBy.gender", "dtype": "string"}, {"name": "postedBy.nationality", "dtype": "string"}, {"name": "postedBy.verificationType", "dtype": "float64"}, {"name": "postedBy.dob", "dtype": "string"}, {"name": "postedBy.locationISO", "dtype": "string"}, {"name": "postedBy.state", "dtype": "string"}, {"name": "postedBy.zipCode", "dtype": "float64"}, {"name": "postedBy.otherNames", "dtype": "string"}, {"name": "postedBy.facebookUrl", "dtype": "string"}, {"name": "postedBy.instagramUrl", "dtype": "string"}, {"name": "postedBy.phoneNumber1", "dtype": "string"}, {"name": 
"postedBy.phoneNumber2", "dtype": "string"}, {"name": "postedBy.websiteUrl", "dtype": "string"}, {"name": "postedBy.accountName", "dtype": "string"}, {"name": "postedBy.accountNo", "dtype": "string"}, {"name": "postedBy.bankName", "dtype": "string"}, {"name": "postedBy.verificationId", "dtype": "float64"}, {"name": "string_features", "dtype": "string"}, {"name": "complete_description", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1121870, "num_examples": 301}], "download_size": 404441, "dataset_size": 1121870}}
2023-11-23T11:04:41+00:00
[]
[]
1333461dc76124e8d7c5546a423abc4ee8b21ecc
# Dataset Card for "Persian-Query-Paraphrasing" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SeyedAli/Persian-Quora-Question-Pairs
[ "region:us" ]
2023-09-28T09:35:31+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "q1", "dtype": "string"}, {"name": "q2", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 709992, "num_examples": 3715}, {"name": "test", "num_bytes": 178983, "num_examples": 929}], "download_size": 416549, "dataset_size": 888975}}
2023-09-28T14:14:40+00:00
[]
[]
f654f775d7e422f9d153c4cf6f37dc09e9476fdd
# CNIL (Commission nationale de l'informatique et des libertés)

All [CNIL](https://echanges.dila.gouv.fr/OPENDATA/CNIL/) decisions (opinions, recommendations, simplified standards, authorizations, etc.) since 2012, plus authorization decisions (data processing, medical research) integrated since the creation of the institution in 1978.
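A hedged sketch of reading a few decisions with the `datasets` library, assuming the `id`/`text` columns declared in the metadata below; streaming avoids downloading the full archive up front:

```python
from datasets import load_dataset

# Stream the corpus of French CNIL decisions instead of materialising it locally.
cnil = load_dataset("Nicolas-BZRD/CNIL_opendata", split="train", streaming=True)

for decision in cnil.take(3):
    # Each record is a decision identifier plus the full decision text.
    print(decision["id"], decision["text"][:200].replace("\n", " "))
```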
Nicolas-BZRD/CNIL_opendata
[ "size_categories:10K<n<100K", "language:fr", "license:odc-by", "legal", "region:us" ]
2023-09-28T09:49:15+00:00
{"language": ["fr"], "license": "odc-by", "size_categories": ["10K<n<100K"], "pretty_name": "CNIL", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 132353121, "num_examples": 18108}], "download_size": 49594572, "dataset_size": 132353121}, "tags": ["legal"]}
2023-09-28T09:59:20+00:00
[]
[ "fr" ]
4f82eb8f833b0f19143c9fd489b115b9325d9cc0
# Bengaluru Driving Dataset

<img src="https://adityang.github.io/AdityaNG/BengaluruDrivingDataset/index_files/BDD_Iterator_Demo-2023-08-30_08.25.17.gif" >

## Dataset Summary

We gathered a dataset spanning 114 minutes and 165K frames in Bengaluru, India. Our dataset consists of video data from a calibrated camera sensor with a resolution of 1920×1080 recorded at a framerate of 30 Hz. We utilize a Depth Dataset Generation pipeline that only uses videos as input to produce high-resolution disparity maps.

## Paper

[Bengaluru Driving Dataset: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios](https://arxiv.org/abs/2307.10934)

## Citation

```bibtex
@misc{analgund2023octran,
  title={Bengaluru Driving Dataset: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios},
  author={Ganesh, Aditya N and Pobbathi Badrinath, Dhruval and Kumar, Harshith Mohan and S, Priya and Narayan, Surabhi},
  year={2023},
  howpublished={Spotlight Presentation at the Transformers for Vision Workshop, CVPR},
  url={https://sites.google.com/view/t4v-cvpr23/papers#h.enx3bt45p649},
  note={Transformers for Vision Workshop, CVPR 2023}
}
```
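The card does not document the file layout of the raw recordings, so here is only a hedged sketch of pulling the repository contents locally with `huggingface_hub`; the target directory name is arbitrary:

```python
from huggingface_hub import snapshot_download

# Download the raw Bengaluru driving data for local inspection; the exact
# directory structure inside the repository is not described on this card.
local_path = snapshot_download(
    repo_id="AdityaNG/BengaluruDrivingDatasetRaw",
    repo_type="dataset",
    local_dir="bengaluru_driving_raw",  # arbitrary choice
)
print(local_path)
```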
AdityaNG/BengaluruDrivingDatasetRaw
[ "license:mit", "video", "driving", "Bengaluru", "disparity maps", "depth dataset", "arxiv:2307.10934", "region:us" ]
2023-09-28T09:49:58+00:00
{"license": "mit", "tags": ["video", "driving", "Bengaluru", "disparity maps", "depth dataset"], "homepage": "https://adityang.github.io/AdityaNG/BengaluruDrivingDataset/"}
2024-01-08T14:20:57+00:00
[ "2307.10934" ]
[]
597228b0cc4db797d731e726dad857dfaf2c8eb3
# KALI (Conventions collectives nationales) [All collective agreements and related texts](https://echanges.dila.gouv.fr/OPENDATA/KALI/). The database also provides access to certain national collective agreements that have not been extended, as well as regional and departmental collective agreements, whether or not they have been extended. The associated texts include agreements relating to a collective agreement, salaries and extension decrees. The data is updated from the Bulletin officiel "Conventions collectives" published under the responsibility of the Ministry of Labour, Solidarity and the Civil Service and distributed by the DILA.
Nicolas-BZRD/KALI_opendata
[ "size_categories:100K<n<1M", "language:fr", "license:odc-by", "legal", "region:us" ]
2023-09-28T10:07:36+00:00
{"language": ["fr"], "license": "odc-by", "size_categories": ["100K<n<1M"], "pretty_name": "Conventions collectives nationales", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 768806851, "num_examples": 430667}], "download_size": 298891657, "dataset_size": 768806851}, "tags": ["legal"]}
2023-09-28T10:15:14+00:00
[]
[ "fr" ]
14fcfd0a7edc323a96d993e7e0f620d5a2da146c
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
Tyzuesh/CustomQADraupadiMurmu
[ "region:us" ]
2023-09-28T10:12:20+00:00
{}
2023-09-29T07:08:11+00:00
[]
[]
1f105f2e97fe2e3de06b79826a6c2b8689da4c9a
# Q&R (National Assembly and Senate)

The [database](https://echanges.dila.gouv.fr/OPENDATA/Questions-Reponses/) contains senators' questions with ministerial answers and questions from deputies with ministerial responses.
Nicolas-BZRD/QR_opendata
[ "task_categories:question-answering", "size_categories:n<1K", "language:fr", "license:odc-by", "legal", "region:us" ]
2023-09-28T10:30:32+00:00
{"language": ["fr"], "license": "odc-by", "size_categories": ["n<1K"], "task_categories": ["question-answering"], "pretty_name": "Q&R Assembl\u00e9e nationale et S\u00e9nat", "tags": ["legal"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 125908573, "num_examples": 630}], "download_size": 60098268, "dataset_size": 125908573}}
2023-09-28T11:13:03+00:00
[]
[ "fr" ]
a68c7015888fbf61c8e7dfcc6ca3ea13881cd11f
# Dataset Card for "ltafdb_preprocessed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SneakyInsect/ltafdb_preprocessed
[ "region:us" ]
2023-09-28T10:37:46+00:00
{"dataset_info": {"features": [{"name": "record_id", "dtype": "string"}, {"name": "signal", "dtype": {"array2_d": {"shape": [2, 1000], "dtype": "float32"}}}], "splits": [{"name": "train", "num_bytes": 5676208388.003276, "num_examples": 707906}, {"name": "validation", "num_bytes": 658761012.8742297, "num_examples": 82154}, {"name": "test", "num_bytes": 685864741.5388951, "num_examples": 85538}], "download_size": 2163597762, "dataset_size": 7020834142.416401}}
2023-09-28T10:47:31+00:00
[]
[]
25f91c6192fabd3b322978f95c2b64c2157f0bfb
# A corpus of rewritten pubmed abstracts

This corpus contains a 1k example subset from the [pubmed](https://huggingface.co/datasets/pubmed) corpus and various rewritten versions. The rewritten versions change one aspect of the original text and keep other aspects unchanged as much as possible.

- **Paper:** [Dissecting learning and forgetting in language model finetuning](link pending)

Another corpus of rewritten general text is provided here: [c4_derived](https://huggingface.co/datasets/pixel-coping/c4_derived)

### Data Splits

- pubmed: a 1k example subset from the original pubmed corpus
- nonbiomedical: main topic of the text changed to a nonbiomedical topic
- counterfactual: factual knowledge in the text replaced by incorrect facts
- casual: style of the text changed to a casual style
- rap: style of the text changed to a rap style

## Dataset Creation

Text is generated by ChatGPT with corresponding prompts. Refer to the paper for the instructions used to generate the text in each derived subset.

Please check the terms and conditions of pubmed data [here](https://www.nlm.nih.gov/databases/download/terms_and_conditions.html).

### Citation Information

```
pending
```
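A minimal sketch of loading the original abstracts next to one rewritten variant, assuming the split names above and the `text` column listed in the metadata below:

```python
from datasets import load_dataset

# Load the original abstracts and the casual-style rewrites as separate splits.
original = load_dataset("pixel-coping/pubmed_derived", split="pubmed")
casual = load_dataset("pixel-coping/pubmed_derived", split="casual")

# Row-level alignment between splits is not documented on the card,
# so this only inspects the first example of each split.
print("original:", original[0]["text"][:300])
print("casual:  ", casual[0]["text"][:300])
```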
pixel-coping/pubmed_derived
[ "language:en", "region:us" ]
2023-09-28T10:45:25+00:00
{"language": ["en"], "configs": [{"config_name": "default", "data_files": [{"split": "pubmed", "path": "data/pubmed-*"}, {"split": "nonbiomedical", "path": "data/nonbiomedical-*"}, {"split": "counterfactual", "path": "data/counterfactual-*"}, {"split": "casual", "path": "data/casual-*"}, {"split": "rap", "path": "data/rap-*"}]}], "dataset_info": {"features": [{"name": "PubmedData", "struct": [{"name": "ArticleIdList", "sequence": [{"name": "ArticleId", "sequence": "string"}]}, {"name": "PublicationStatus", "dtype": "string"}, {"name": "History", "struct": [{"name": "PubMedPubDate", "sequence": [{"name": "Year", "dtype": "int32"}, {"name": "Month", "dtype": "int32"}, {"name": "Day", "dtype": "int32"}]}]}, {"name": "ReferenceList", "sequence": [{"name": "Citation", "dtype": "string"}, {"name": "CitationId", "dtype": "int32"}]}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "pubmed", "num_bytes": 1166668, "num_examples": 1000}, {"name": "nonbiomedical", "num_bytes": 1141909, "num_examples": 1000}, {"name": "counterfactual", "num_bytes": 1179347, "num_examples": 991}, {"name": "casual", "num_bytes": 1205949, "num_examples": 1000}, {"name": "rap", "num_bytes": 1252260, "num_examples": 1000}], "download_size": 3357032, "dataset_size": 5946133}}
2023-10-06T01:26:15+00:00
[]
[ "en" ]
1e9edfd65ae8dddde519badc94d878a1998f9737
## Dataset Description

- **Paper:** [More Information Needed]
- **Point of Contact:** [email protected]

### Dataset Summary

This dataset consists of 6591 tweets generated by the GPT-3.5 model. Each tweet is juxtaposed with a conspiracy theory related to the COVID-19 pandemic, and each item includes a label that represents its output class. The possible labels are support/deny/neutral.

- **support**: the tweet suggests support for the conspiracy theory
- **deny**: the tweet contradicts the conspiracy theory
- **neutral**: the tweet is mostly informative, and does not show emotions against the conspiracy theory

The dataset can be used to train a classification model.

### Languages

English

## Dataset Structure

### Data Instances

```
{
  'tweet': 'Is the Chinese government exploiting the pandemic to gain an economic advantage? #COVIDEconomy #ChineseTradeWar',
  'conspiracy_theory': 'CT_3',
  'label': 'support'
}
```

### Data Fields

- `tweet`: the text generated by GPT-3.5 (input)
- `conspiracy_theory`: a conspiracy theory identifier
- `label`: label, support/deny/neutral

Conspiracy theories mapping:

1. **CT1: Deliberate strategy to create economic instability or benefit large corporations.** The coronavirus or the government's response to it is a deliberate strategy to create economic instability or to benefit large corporations over small businesses.
2. **CT2: Public was intentionally misled about the true nature of the virus and prevention.** The public is being intentionally misled about the true nature of the Coronavirus, its risks, or the efficacy of certain treatments or prevention methods.
3. **CT3: Human made and bioweapon.** The Coronavirus was created intentionally, made by humans, or as a bioweapon.
4. **CT4: Governments and politicians spread misinformation.** Politicians or government agencies are intentionally spreading false information, or they have some other motive for the way they are responding to the coronavirus.
5. **CT5: The Chinese intentionally spread the virus.** The Chinese government intentionally created or spread the coronavirus to harm other countries.
6. **CT6: Vaccines are unsafe.** The coronavirus vaccine is either unsafe or part of a larger plot to control people or reduce the population.

### Data Splits

The dataset contains a training split only, which consists of 6591 items.

## Dataset Creation

The dataset was generated with GPT-3.5 using the following prompts for the support, deny, and neutral classes, respectively:

**support** Consider the following conspiracy theory: X. Generate 50 tweets that support this conspiracy theory. Try to use hashtags that might promote this particular conspiracy theory. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list.

**deny** Consider the following conspiracy theory: X. Generate 50 tweets that contradict this conspiracy theory. Try to use hashtags that might debunk this particular conspiracy theory. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list.

**neutral** Consider the following conspiracy theory: X. Generate 50 tweets that are about COVID-19 but unrelated to the conspiracy theory. Try to use hashtags that might be used in such a tweet. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list.

### Known Limitations

The generated tweets are sometimes formulaic and lack diversity.

### Citation Information

```
@article{article_id,
  author = {Author List},
  title = {Dataset Paper Title},
  journal = {Publication Venue},
  year = {2525}
}
```
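A minimal sketch of preparing the data for classifier training, assuming the repository loads directly with `datasets` and uses the `tweet`/`conspiracy_theory`/`label` field names shown in the Data Instances example above; the label-to-id mapping is illustrative:

```python
from datasets import load_dataset

label2id = {"support": 0, "deny": 1, "neutral": 2}  # illustrative mapping

ds = load_dataset("webimmunization/COVID-19-conspiracy-theories-tweets", split="train")

# Attach integer class ids so the examples can feed a standard text classifier.
ds = ds.map(lambda ex: {"label_id": label2id[ex["label"]]})
print(ds[0]["tweet"], ds[0]["conspiracy_theory"], ds[0]["label_id"])
```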
webimmunization/COVID-19-conspiracy-theories-tweets
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:cc-by-4.0", "twitter", "social_science", "misinformation", "fake_news", "conspiracy_theory", "region:us" ]
2023-09-28T10:49:47+00:00
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "tags": ["twitter", "social_science", "misinformation", "fake_news", "conspiracy_theory"]}
2024-02-11T19:14:06+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #twitter #social_science #misinformation #fake_news #conspiracy_theory #region-us
## Dataset Description - Paper: - Point of Contact: izabela.krysinska@URL ### Dataset Summary This dataset consists of 6591 tweets generated by GPT-3.5 model. The tweets are juxtaposed with a conspiracy theory related to COVID-19 pandemic. Each item consists of a label that represents the item's output class. The possible labels are support/deny/neutral. - support: the tweet suggests support for the conspiracy theory - deny: the tweet contradicts the conspiracy theory - neutral: the tweet is mostly informative, and does not show emotions against the conspiracy theory The dataset can be used to train a classification model. ### Languages English ## Dataset Structure ### Data Instances ### Data Fields - 'tweet': a text generated by GPT-3.5 (input) - 'conspiracy theory': a conspiracy theory identifier - 'label': label, support/deny/neutral Conspiracy theories mapping: 1. CT1: Deliberate strategy to create economic instability or benefit large corporations. The coronavirus or the government's response to it is a deliberate strategy to create economic instability or to benefit large corporations over small businesses. 2. CT2: Public was intentionally misled about the true nature of the virus and prevention. The public is being intentionally misled about the true nature of the Coronavirus, its risks, or the efficacy of certain treatments or prevention methods. 3. CT3: Human made and bioweapon. The Coronavirus was created intentionally, made by humans, or as a bioweapon. 4. CT4: Governments and politicians spread misinformation. Politicians or government agencies are intentionally spreading false information, or they have some other motive for the way they are responding to the coronavirus. 5. CT5: The Chinese intentionally spread the virus. The Chinese government intentionally created or spread the coronavirus to harm other countries. 6. CT6: Vaccines are unsafe. The coronavirus vaccine is either unsafe or part of a larger plot to control people or reduce the population. ### Data Splits The dataset contains training split only which consists of 6591 items. ## Dataset Creation The dataset was generated with GPT-3.5 with the following prompts for support, deny, and neutral class respectively: support Consider the following conspiracy theory: X. Generate 50 tweets that support this conspiracy theory. Try to use hashtags that might promote this particular conspiracy theory. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list. deny Consider the following conspiracy theory: X. Generate 50 tweets that contradict this conspiracy theory. Try to use hashtags that might debunk this particular conspiracy theory. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list. neutral Consider the following conspiracy theory: X. Generate 50 tweets that are about COVID-19 but unrelated to the conspiracy theory. Try to use hashtags that might be used in such a tweet. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list. 
### Known Limitations The generated tweets are sometimes formulaic and lack of diversity.
[ "## Dataset Description\n- Paper: \n- Point of Contact: izabela.krysinska@URL", "### Dataset Summary\n\nThis dataset consists of 6591 tweets generated by GPT-3.5 model. The tweets are juxtaposed with a conspiracy theory related to COVID-19 pandemic. Each item consists of a label that represents the item's output class. The possible labels are support/deny/neutral.\n\n- support: the tweet suggests support for the conspiracy theory\n- deny: the tweet contradicts the conspiracy theory\n- neutral: the tweet is mostly informative, and does not show emotions against the conspiracy theory\n\nThe dataset can be used to train a classification model.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n\n- 'tweet': a text generated by GPT-3.5 (input)\n- 'conspiracy theory': a conspiracy theory identifier \n- 'label': label, support/deny/neutral\n\nConspiracy theories mapping:\n1. CT1: Deliberate strategy to create economic instability or benefit large corporations. The coronavirus or the government's response to it is a deliberate strategy to create economic instability or to benefit large corporations over small businesses. \n\n2. CT2: Public was intentionally misled about the true nature of the virus and prevention. The public is being intentionally misled about the true nature of the Coronavirus, its risks, or the efficacy of certain treatments or prevention methods. \n\n3. CT3: Human made and bioweapon. The Coronavirus was created intentionally, made by humans, or as a bioweapon. \n\n4. CT4: Governments and politicians spread misinformation. Politicians or government agencies are intentionally spreading false information, or they have some other motive for the way they are responding to the coronavirus. \n\n5. CT5: The Chinese intentionally spread the virus. The Chinese government intentionally created or spread the coronavirus to harm other countries. \n\n6. CT6: Vaccines are unsafe. The coronavirus vaccine is either unsafe or part of a larger plot to control people or reduce the population.", "### Data Splits\n\nThe dataset contains training split only which consists of 6591 items.", "## Dataset Creation\n\nThe dataset was generated with GPT-3.5 with the following prompts for support, deny, and neutral class respectively: \n\nsupport Consider the following conspiracy theory: X. Generate 50 tweets that support this conspiracy theory. Try to use hashtags that might promote this particular conspiracy theory. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list.\n\ndeny Consider the following conspiracy theory: X. Generate 50 tweets that contradict this conspiracy theory. Try to use hashtags that might debunk this particular conspiracy theory. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list.\n\nneutral Consider the following conspiracy theory: X. Generate 50 tweets that are about COVID-19 but unrelated to the conspiracy theory. Try to use hashtags that might be used in such a tweet. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. 
Keep each tweet below the 280 character length limit. Present the tweets as a list.", "### Known Limitations\n\nThe generated tweets are sometimes formulaic and lack of diversity." ]
[ "TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #twitter #social_science #misinformation #fake_news #conspiracy_theory #region-us \n", "## Dataset Description\n- Paper: \n- Point of Contact: izabela.krysinska@URL", "### Dataset Summary\n\nThis dataset consists of 6591 tweets generated by GPT-3.5 model. The tweets are juxtaposed with a conspiracy theory related to COVID-19 pandemic. Each item consists of a label that represents the item's output class. The possible labels are support/deny/neutral.\n\n- support: the tweet suggests support for the conspiracy theory\n- deny: the tweet contradicts the conspiracy theory\n- neutral: the tweet is mostly informative, and does not show emotions against the conspiracy theory\n\nThe dataset can be used to train a classification model.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n\n- 'tweet': a text generated by GPT-3.5 (input)\n- 'conspiracy theory': a conspiracy theory identifier \n- 'label': label, support/deny/neutral\n\nConspiracy theories mapping:\n1. CT1: Deliberate strategy to create economic instability or benefit large corporations. The coronavirus or the government's response to it is a deliberate strategy to create economic instability or to benefit large corporations over small businesses. \n\n2. CT2: Public was intentionally misled about the true nature of the virus and prevention. The public is being intentionally misled about the true nature of the Coronavirus, its risks, or the efficacy of certain treatments or prevention methods. \n\n3. CT3: Human made and bioweapon. The Coronavirus was created intentionally, made by humans, or as a bioweapon. \n\n4. CT4: Governments and politicians spread misinformation. Politicians or government agencies are intentionally spreading false information, or they have some other motive for the way they are responding to the coronavirus. \n\n5. CT5: The Chinese intentionally spread the virus. The Chinese government intentionally created or spread the coronavirus to harm other countries. \n\n6. CT6: Vaccines are unsafe. The coronavirus vaccine is either unsafe or part of a larger plot to control people or reduce the population.", "### Data Splits\n\nThe dataset contains training split only which consists of 6591 items.", "## Dataset Creation\n\nThe dataset was generated with GPT-3.5 with the following prompts for support, deny, and neutral class respectively: \n\nsupport Consider the following conspiracy theory: X. Generate 50 tweets that support this conspiracy theory. Try to use hashtags that might promote this particular conspiracy theory. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list.\n\ndeny Consider the following conspiracy theory: X. Generate 50 tweets that contradict this conspiracy theory. Try to use hashtags that might debunk this particular conspiracy theory. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list.\n\nneutral Consider the following conspiracy theory: X. Generate 50 tweets that are about COVID-19 but unrelated to the conspiracy theory. Try to use hashtags that might be used in such a tweet. 
Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list.", "### Known Limitations\n\nThe generated tweets are sometimes formulaic and lack diversity." ]
[ 63, 20, 133, 5, 6, 6, 305, 21, 317, 21 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #twitter #social_science #misinformation #fake_news #conspiracy_theory #region-us \n## Dataset Description\n- Paper: \n- Point of Contact: izabela.krysinska@URL### Dataset Summary\n\nThis dataset consists of 6591 tweets generated by GPT-3.5 model. The tweets are juxtaposed with a conspiracy theory related to COVID-19 pandemic. Each item consists of a label that represents the item's output class. The possible labels are support/deny/neutral.\n\n- support: the tweet suggests support for the conspiracy theory\n- deny: the tweet contradicts the conspiracy theory\n- neutral: the tweet is mostly informative, and does not show emotions against the conspiracy theory\n\nThe dataset can be used to train a classification model.### Languages\n\nEnglish## Dataset Structure### Data Instances" ]
e50b0b0f30b6b43fa30f45582073206758dfac5d
# Dataset Card for "Fingerprint_split_90_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ArwaAbdul/Fingerprint_split_90_10
[ "region:us" ]
2023-09-28T11:06:53+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "1", "1": "2", "2": "3", "3": "4"}}}}], "splits": [{"name": "train", "num_bytes": 504155396.6682027, "num_examples": 3000}, {"name": "test", "num_bytes": 77898517.33179724, "num_examples": 472}], "download_size": 337755809, "dataset_size": 582053914.0}}
2023-09-28T11:14:02+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Fingerprint_split_90_10" More Information needed
[ "# Dataset Card for \"Fingerprint_split_90_10\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Fingerprint_split_90_10\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Fingerprint_split_90_10\"\n\nMore Information needed" ]
e5132373ae1ab04444db9863d645fd91a2f56b31
# Bangumi Image Base of Citrus This is the image base of bangumi Citrus, we detected 18 characters, 1393 images in total. The full dataset is [here](all.zip). **Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability). Here is the characters' preview: | # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 | |:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------| | 0 | 374 | [Download](0/dataset.zip) | ![preview 1](0/preview_1.png) | ![preview 2](0/preview_2.png) | ![preview 3](0/preview_3.png) | ![preview 4](0/preview_4.png) | ![preview 5](0/preview_5.png) | ![preview 6](0/preview_6.png) | ![preview 7](0/preview_7.png) | ![preview 8](0/preview_8.png) | | 1 | 58 | [Download](1/dataset.zip) | ![preview 1](1/preview_1.png) | ![preview 2](1/preview_2.png) | ![preview 3](1/preview_3.png) | ![preview 4](1/preview_4.png) | ![preview 5](1/preview_5.png) | ![preview 6](1/preview_6.png) | ![preview 7](1/preview_7.png) | ![preview 8](1/preview_8.png) | | 2 | 49 | [Download](2/dataset.zip) | ![preview 1](2/preview_1.png) | ![preview 2](2/preview_2.png) | ![preview 3](2/preview_3.png) | ![preview 4](2/preview_4.png) | ![preview 5](2/preview_5.png) | ![preview 6](2/preview_6.png) | ![preview 7](2/preview_7.png) | ![preview 8](2/preview_8.png) | | 3 | 29 | [Download](3/dataset.zip) | ![preview 1](3/preview_1.png) | ![preview 2](3/preview_2.png) | ![preview 3](3/preview_3.png) | ![preview 4](3/preview_4.png) | ![preview 5](3/preview_5.png) | ![preview 6](3/preview_6.png) | ![preview 7](3/preview_7.png) | ![preview 8](3/preview_8.png) | | 4 | 17 | [Download](4/dataset.zip) | ![preview 1](4/preview_1.png) | ![preview 2](4/preview_2.png) | ![preview 3](4/preview_3.png) | ![preview 4](4/preview_4.png) | ![preview 5](4/preview_5.png) | ![preview 6](4/preview_6.png) | ![preview 7](4/preview_7.png) | ![preview 8](4/preview_8.png) | | 5 | 73 | [Download](5/dataset.zip) | ![preview 1](5/preview_1.png) | ![preview 2](5/preview_2.png) | ![preview 3](5/preview_3.png) | ![preview 4](5/preview_4.png) | ![preview 5](5/preview_5.png) | ![preview 6](5/preview_6.png) | ![preview 7](5/preview_7.png) | ![preview 8](5/preview_8.png) | | 6 | 241 | [Download](6/dataset.zip) | ![preview 1](6/preview_1.png) | ![preview 2](6/preview_2.png) | ![preview 3](6/preview_3.png) | ![preview 4](6/preview_4.png) | ![preview 5](6/preview_5.png) | ![preview 6](6/preview_6.png) | ![preview 7](6/preview_7.png) | ![preview 8](6/preview_8.png) | | 7 | 30 | [Download](7/dataset.zip) | ![preview 1](7/preview_1.png) | ![preview 2](7/preview_2.png) | ![preview 3](7/preview_3.png) | ![preview 4](7/preview_4.png) | ![preview 5](7/preview_5.png) | ![preview 6](7/preview_6.png) | ![preview 7](7/preview_7.png) | ![preview 8](7/preview_8.png) | | 8 | 97 | [Download](8/dataset.zip) | ![preview 1](8/preview_1.png) | ![preview 2](8/preview_2.png) | ![preview 3](8/preview_3.png) | ![preview 4](8/preview_4.png) | ![preview 5](8/preview_5.png) | ![preview 6](8/preview_6.png) | ![preview 
7](8/preview_7.png) | ![preview 8](8/preview_8.png) | | 9 | 15 | [Download](9/dataset.zip) | ![preview 1](9/preview_1.png) | ![preview 2](9/preview_2.png) | ![preview 3](9/preview_3.png) | ![preview 4](9/preview_4.png) | ![preview 5](9/preview_5.png) | ![preview 6](9/preview_6.png) | ![preview 7](9/preview_7.png) | ![preview 8](9/preview_8.png) | | 10 | 7 | [Download](10/dataset.zip) | ![preview 1](10/preview_1.png) | ![preview 2](10/preview_2.png) | ![preview 3](10/preview_3.png) | ![preview 4](10/preview_4.png) | ![preview 5](10/preview_5.png) | ![preview 6](10/preview_6.png) | ![preview 7](10/preview_7.png) | N/A | | 11 | 24 | [Download](11/dataset.zip) | ![preview 1](11/preview_1.png) | ![preview 2](11/preview_2.png) | ![preview 3](11/preview_3.png) | ![preview 4](11/preview_4.png) | ![preview 5](11/preview_5.png) | ![preview 6](11/preview_6.png) | ![preview 7](11/preview_7.png) | ![preview 8](11/preview_8.png) | | 12 | 31 | [Download](12/dataset.zip) | ![preview 1](12/preview_1.png) | ![preview 2](12/preview_2.png) | ![preview 3](12/preview_3.png) | ![preview 4](12/preview_4.png) | ![preview 5](12/preview_5.png) | ![preview 6](12/preview_6.png) | ![preview 7](12/preview_7.png) | ![preview 8](12/preview_8.png) | | 13 | 11 | [Download](13/dataset.zip) | ![preview 1](13/preview_1.png) | ![preview 2](13/preview_2.png) | ![preview 3](13/preview_3.png) | ![preview 4](13/preview_4.png) | ![preview 5](13/preview_5.png) | ![preview 6](13/preview_6.png) | ![preview 7](13/preview_7.png) | ![preview 8](13/preview_8.png) | | 14 | 90 | [Download](14/dataset.zip) | ![preview 1](14/preview_1.png) | ![preview 2](14/preview_2.png) | ![preview 3](14/preview_3.png) | ![preview 4](14/preview_4.png) | ![preview 5](14/preview_5.png) | ![preview 6](14/preview_6.png) | ![preview 7](14/preview_7.png) | ![preview 8](14/preview_8.png) | | 15 | 76 | [Download](15/dataset.zip) | ![preview 1](15/preview_1.png) | ![preview 2](15/preview_2.png) | ![preview 3](15/preview_3.png) | ![preview 4](15/preview_4.png) | ![preview 5](15/preview_5.png) | ![preview 6](15/preview_6.png) | ![preview 7](15/preview_7.png) | ![preview 8](15/preview_8.png) | | 16 | 44 | [Download](16/dataset.zip) | ![preview 1](16/preview_1.png) | ![preview 2](16/preview_2.png) | ![preview 3](16/preview_3.png) | ![preview 4](16/preview_4.png) | ![preview 5](16/preview_5.png) | ![preview 6](16/preview_6.png) | ![preview 7](16/preview_7.png) | ![preview 8](16/preview_8.png) | | noise | 127 | [Download](-1/dataset.zip) | ![preview 1](-1/preview_1.png) | ![preview 2](-1/preview_2.png) | ![preview 3](-1/preview_3.png) | ![preview 4](-1/preview_4.png) | ![preview 5](-1/preview_5.png) | ![preview 6](-1/preview_6.png) | ![preview 7](-1/preview_7.png) | ![preview 8](-1/preview_8.png) |
BangumiBase/citrus
[ "size_categories:1K<n<10K", "license:mit", "art", "region:us" ]
2023-09-28T11:28:14+00:00
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
2023-09-29T12:04:11+00:00
[]
[]
TAGS #size_categories-1K<n<10K #license-mit #art #region-us
Bangumi Image Base of Citrus ============================ This is the image base of bangumi Citrus; we detected 18 characters, 1393 images in total. The full dataset is here. Please note that these image bases are not guaranteed to be 100% cleaned, they may actually be noisy. If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability). Here is the characters' preview:
[]
[ "TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n" ]
[ 25 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n" ]
a57a998f5d8def4f95534236224aa7df4c8973f4
# Dataset Card for AGS ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Dataset Creation - Curation Rationale - Source Data - Personal and Sensitive Information ## Dataset Description - **Paper:** [Atef, A., Seddik, F., & Elbedewy, A. (2023). AGS: Arabic GPT Summarization Corpus]() - **Point of Contact:** [email protected] ### Dataset Summary AGS is the first publicly accessible abstractive summarization dataset for Arabic. It consists of 142,000 pairs of articles and summaries, all written in Modern Standard Arabic (MSA). The summaries are generated using GPT-3.5 Turbo, a large language model, through meticulous prompt engineering. The dataset covers a wide range of topics, such as politics, sports, culture, science, and technology. ### Supported Tasks and Leaderboards The supported task is abstractive text summarization, which involves generating a concise and informative summary from a longer text. The dataset can be used to train and evaluate models for this task, as well as to benchmark their performance against existing methods. There is no official leaderboard for this dataset, but the we report the results of several models on the test set, using Rouge-L, SS-Population mean, and Compression ratio metrics. The best performing model is mT5, which achieves 21.27, 82.65, and 62 scores on these metrics, respectively. ### Languages The dataset is in Arabic (ISO 639-1: ar). ## Dataset Structure ### Data Instances An example data instance is: ``` { “text”: “نظرية التعقيد هي فرع من فروع نظرية الحوسبة والرياضيات، وهذه النظرية تتركز في تصنيف المسائل الحاسوبية حسب صعوبتها وربط أقسام التعقيد ببعضها، والمسألة الحاسوبية هي المسألة التي يستطيع الحاسوب بحلها.ويمكن اعتبارها مسألة صعبة إذا استخدمت كمية مُعينة من الموارد أياً كانت الخوارزمية. ولعل النماذج الحسابية هي الطريقة الأمثل في هذه النظرية لدراسة هذه المسائل وتحديد كمية الموارد اللازمة مثل: الوقت أو حجم المكان الإضافي اللازم، وتوجد معايير تعقيد أخرى مثل: الاتصال (مستخدم في نظرية تعقيد الاتصال) وعدد البوابات في الدارات المنطقية (مستخدم في نظرية تعقيد الدارات المنطقية) وكذلك عدد المعالجات (مستخدم في الحساب المتوازي).”, “summary”: “نظرية التعقيد هي فرع من نظرية الحوسبة والرياضيات، تصنف المسائل الحاسوبية حسب صعوبتها وتربط أقسام التعقيد ببعضها. تحديد كمية الموارد اللازمة يتم باستخدام النماذج الحسابية، مثل الوقت وحجم المكان الإضافي وعدد البوابات في الدارات المنطقية.” } ``` ### Data Fields - 'id' : an identification number - `text`: the original text of the article, written in Arabic. - `summary`: the abstractive summary of the article, written in Arabic. ## Dataset Creation ### Curation Rationale The dataset was created to address the lack of abstractive summarization datasets for Arabic, which is a low-resource and under-studied language. The dataset aims to provide a large and diverse corpus of articles and summaries that can be used to train and evaluate models for this task, as well as to advance the research in this field. ### Source Data The source data was collected from Wikipedia & Youm7 websites, covering a wide range of topics, such as politics, sports, culture, science, and technology. The websites were selected based on their popularity, credibility, and content quality. The data collection process involved web crawling, text sampling, and prompt engineering. 
### Personal and Sensitive Information The dataset does not contain any personal or sensitive information, as it only consists of articles and summaries that are publicly available on the web. The dataset creators are not responsible for any misuse or harm that may result from the use of this data.
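A minimal usage sketch, assuming the repository id `FahdSeddik/AGS-Corpus` given below and a default `train` split (the split name is an assumption, not stated in the card); the `text` and `summary` field names come from the Data Fields section:

```python
from datasets import load_dataset

# Load the AGS corpus; "train" is an assumed split name - check ags.keys() if it differs.
ags = load_dataset("FahdSeddik/AGS-Corpus")
example = ags["train"][0]

print(example["text"][:300])   # beginning of the Arabic article
print(example["summary"])      # its GPT-3.5-generated abstractive summary
```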
FahdSeddik/AGS-Corpus
[ "task_categories:summarization", "size_categories:100K<n<1M", "language:ar", "license:cc-by-nc-4.0", "chemistry", "biology", "legal", "finance", "music", "art", "code", "climate", "medical", "region:us" ]
2023-09-28T12:01:41+00:00
{"language": ["ar"], "license": "cc-by-nc-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["summarization"], "pretty_name": "AGS Corpus", "tags": ["chemistry", "biology", "legal", "finance", "music", "art", "code", "climate", "medical"]}
2023-09-29T11:36:04+00:00
[]
[ "ar" ]
TAGS #task_categories-summarization #size_categories-100K<n<1M #language-Arabic #license-cc-by-nc-4.0 #chemistry #biology #legal #finance #music #art #code #climate #medical #region-us
# Dataset Card for AGS ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Dataset Creation - Curation Rationale - Source Data - Personal and Sensitive Information ## Dataset Description - Paper: [Atef, A., Seddik, F., & Elbedewy, A. (2023). AGS: Arabic GPT Summarization Corpus]() - Point of Contact: fahdseddik@URL ### Dataset Summary AGS is the first publicly accessible abstractive summarization dataset for Arabic. It consists of 142,000 pairs of articles and summaries, all written in Modern Standard Arabic (MSA). The summaries are generated using GPT-3.5 Turbo, a large language model, through meticulous prompt engineering. The dataset covers a wide range of topics, such as politics, sports, culture, science, and technology. ### Supported Tasks and Leaderboards The supported task is abstractive text summarization, which involves generating a concise and informative summary from a longer text. The dataset can be used to train and evaluate models for this task, as well as to benchmark their performance against existing methods. There is no official leaderboard for this dataset, but the we report the results of several models on the test set, using Rouge-L, SS-Population mean, and Compression ratio metrics. The best performing model is mT5, which achieves 21.27, 82.65, and 62 scores on these metrics, respectively. ### Languages The dataset is in Arabic (ISO 639-1: ar). ## Dataset Structure ### Data Instances An example data instance is: ### Data Fields - 'id' : an identification number - 'text': the original text of the article, written in Arabic. - 'summary': the abstractive summary of the article, written in Arabic. ## Dataset Creation ### Curation Rationale The dataset was created to address the lack of abstractive summarization datasets for Arabic, which is a low-resource and under-studied language. The dataset aims to provide a large and diverse corpus of articles and summaries that can be used to train and evaluate models for this task, as well as to advance the research in this field. ### Source Data The source data was collected from Wikipedia & Youm7 websites, covering a wide range of topics, such as politics, sports, culture, science, and technology. The websites were selected based on their popularity, credibility, and content quality. The data collection process involved web crawling, text sampling, and prompt engineering. ### Personal and Sensitive Information The dataset does not contain any personal or sensitive information, as it only consists of articles and summaries that are publicly available on the web. The dataset creators are not responsible for any misuse or harm that may result from the use of this data.
[ "# Dataset Card for AGS", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Personal and Sensitive Information", "## Dataset Description\n\n- Paper: [Atef, A., Seddik, F., & Elbedewy, A. (2023).\nAGS: Arabic GPT Summarization Corpus]()\n- Point of Contact: fahdseddik@URL", "### Dataset Summary\n\nAGS is the first publicly accessible abstractive summarization dataset for Arabic. It consists of 142,000 pairs of articles and summaries, all written in Modern Standard Arabic (MSA). The summaries are generated using GPT-3.5 Turbo, a large language model, through meticulous prompt engineering. The dataset covers a wide range of topics, such as politics, sports, culture, science, and technology.", "### Supported Tasks and Leaderboards\n\nThe supported task is abstractive text summarization, which involves generating a concise and informative summary from a longer text. The dataset can be used to train and evaluate models for this task, as well as to benchmark their performance against existing methods.\n\nThere is no official leaderboard for this dataset, but the we report the results of several models on the test set, using Rouge-L, SS-Population mean, and Compression ratio metrics. The best performing model is mT5, which achieves 21.27, 82.65, and 62 scores on these metrics, respectively.", "### Languages\n\nThe dataset is in Arabic (ISO 639-1: ar).", "## Dataset Structure", "### Data Instances\n\nAn example data instance is:", "### Data Fields\n\n- 'id' : an identification number\n- 'text': the original text of the article, written in Arabic.\n- 'summary': the abstractive summary of the article, written in Arabic.", "## Dataset Creation", "### Curation Rationale\n\nThe dataset was created to address the lack of abstractive summarization datasets for Arabic, which is a low-resource and under-studied language. The dataset aims to provide a large and diverse corpus of articles and summaries that can be used to train and evaluate models for this task, as well as to advance the research in this field.", "### Source Data\n\nThe source data was collected from Wikipedia & Youm7 websites, covering a wide range of topics, such as politics, sports, culture, science, and technology. The websites were selected based on their popularity, credibility, and content quality. The data collection process involved web crawling, text sampling, and prompt engineering.", "### Personal and Sensitive Information\n\nThe dataset does not contain any personal or sensitive information, as it only consists of articles and summaries that are publicly available on the web. The dataset creators are not responsible for any misuse or harm that may result from the use of this data." ]
[ "TAGS\n#task_categories-summarization #size_categories-100K<n<1M #language-Arabic #license-cc-by-nc-4.0 #chemistry #biology #legal #finance #music #art #code #climate #medical #region-us \n", "# Dataset Card for AGS", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Personal and Sensitive Information", "## Dataset Description\n\n- Paper: [Atef, A., Seddik, F., & Elbedewy, A. (2023).\nAGS: Arabic GPT Summarization Corpus]()\n- Point of Contact: fahdseddik@URL", "### Dataset Summary\n\nAGS is the first publicly accessible abstractive summarization dataset for Arabic. It consists of 142,000 pairs of articles and summaries, all written in Modern Standard Arabic (MSA). The summaries are generated using GPT-3.5 Turbo, a large language model, through meticulous prompt engineering. The dataset covers a wide range of topics, such as politics, sports, culture, science, and technology.", "### Supported Tasks and Leaderboards\n\nThe supported task is abstractive text summarization, which involves generating a concise and informative summary from a longer text. The dataset can be used to train and evaluate models for this task, as well as to benchmark their performance against existing methods.\n\nThere is no official leaderboard for this dataset, but the we report the results of several models on the test set, using Rouge-L, SS-Population mean, and Compression ratio metrics. The best performing model is mT5, which achieves 21.27, 82.65, and 62 scores on these metrics, respectively.", "### Languages\n\nThe dataset is in Arabic (ISO 639-1: ar).", "## Dataset Structure", "### Data Instances\n\nAn example data instance is:", "### Data Fields\n\n- 'id' : an identification number\n- 'text': the original text of the article, written in Arabic.\n- 'summary': the abstractive summary of the article, written in Arabic.", "## Dataset Creation", "### Curation Rationale\n\nThe dataset was created to address the lack of abstractive summarization datasets for Arabic, which is a low-resource and under-studied language. The dataset aims to provide a large and diverse corpus of articles and summaries that can be used to train and evaluate models for this task, as well as to advance the research in this field.", "### Source Data\n\nThe source data was collected from Wikipedia & Youm7 websites, covering a wide range of topics, such as politics, sports, culture, science, and technology. The websites were selected based on their popularity, credibility, and content quality. The data collection process involved web crawling, text sampling, and prompt engineering.", "### Personal and Sensitive Information\n\nThe dataset does not contain any personal or sensitive information, as it only consists of articles and summaries that are publicly available on the web. The dataset creators are not responsible for any misuse or harm that may result from the use of this data." ]
[ 69, 7, 62, 55, 99, 141, 17, 6, 12, 48, 5, 83, 76, 64 ]
[ "passage: TAGS\n#task_categories-summarization #size_categories-100K<n<1M #language-Arabic #license-cc-by-nc-4.0 #chemistry #biology #legal #finance #music #art #code #climate #medical #region-us \n# Dataset Card for AGS## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Personal and Sensitive Information## Dataset Description\n\n- Paper: [Atef, A., Seddik, F., & Elbedewy, A. (2023).\nAGS: Arabic GPT Summarization Corpus]()\n- Point of Contact: fahdseddik@URL### Dataset Summary\n\nAGS is the first publicly accessible abstractive summarization dataset for Arabic. It consists of 142,000 pairs of articles and summaries, all written in Modern Standard Arabic (MSA). The summaries are generated using GPT-3.5 Turbo, a large language model, through meticulous prompt engineering. The dataset covers a wide range of topics, such as politics, sports, culture, science, and technology.### Supported Tasks and Leaderboards\n\nThe supported task is abstractive text summarization, which involves generating a concise and informative summary from a longer text. The dataset can be used to train and evaluate models for this task, as well as to benchmark their performance against existing methods.\n\nThere is no official leaderboard for this dataset, but the we report the results of several models on the test set, using Rouge-L, SS-Population mean, and Compression ratio metrics. The best performing model is mT5, which achieves 21.27, 82.65, and 62 scores on these metrics, respectively.### Languages\n\nThe dataset is in Arabic (ISO 639-1: ar).## Dataset Structure### Data Instances\n\nAn example data instance is:" ]
7448b3c443aab2de5ad3382def5f912de1eb5b03
# Dataset Card for "WizardLM_evol_instruct_V2_code_filtered" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
reshinthadith/WizardLM_evol_instruct_V2_code_filtered
[ "region:us" ]
2023-09-28T12:31:53+00:00
{"dataset_info": {"features": [{"name": "idx", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 324678355.29963636, "num_examples": 137864}], "download_size": 154940112, "dataset_size": 324678355.29963636}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-28T12:32:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for "WizardLM_evol_instruct_V2_code_filtered" More Information needed
[ "# Dataset Card for \"WizardLM_evol_instruct_V2_code_filtered\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"WizardLM_evol_instruct_V2_code_filtered\"\n\nMore Information needed" ]
[ 6, 28 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"WizardLM_evol_instruct_V2_code_filtered\"\n\nMore Information needed" ]
7724f86c7ad15b8310a43ad7b8a4e33691367570
# Dataset Card for "SD-CLIP-alignment-3000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Doub7e/SD-CLIP-alignment-3000
[ "region:us" ]
2023-09-28T12:34:45+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "clip_pred", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1385003606.0, "num_examples": 3000}], "download_size": 1385015330, "dataset_size": 1385003606.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-28T12:43:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for "SD-CLIP-alignment-3000" More Information needed
[ "# Dataset Card for \"SD-CLIP-alignment-3000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"SD-CLIP-alignment-3000\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"SD-CLIP-alignment-3000\"\n\nMore Information needed" ]
b386637bc35233ac760c1a93bcdef66e595a2048
--- configs: - config_name: clean data_files: - split: clean path: clean/clean-* - config_name: default data_files: - split: train path: data/train-* dataset_info: config_name: distill_bert features: - name: headline dtype: string - name: summary dtype: string - name: headline_sentiment struct: - name: postive dtype: string - name: negative dtype: string - name: neutral dtype: string - name: summary_sentiment struct: - name: postive dtype: string - name: negative dtype: string - name: neutral dtype: string splits: - name: default num_bytes: 131086592 num_examples: 316086 download_size: 0 dataset_size: 131086592 dataset_info: - config_name: clean features: - name: datetime dtype: int64 - name: image dtype: string - name: related dtype: string - name: source dtype: string - name: summary dtype: string - name: url dtype: string - name: id dtype: int64 - name: category dtype: string - name: headline dtype: string splits: - name: clean num_bytes: 150902085 num_examples: 316086 download_size: 78262136 dataset_size: 150902085 - config_name: default features: - name: related dtype: string - name: datetime dtype: int64 - name: image dtype: string - name: url dtype: string - name: headline dtype: string - name: finbert_sentiment struct: - name: negative dtype: float64 - name: neutral dtype: float64 - name: postive dtype: float64 - name: source dtype: string - name: summary dtype: string - name: id dtype: int64 - name: category dtype: string splits: - name: train num_bytes: 251731744 num_examples: 515851 download_size: 113022298 dataset_size: 251731744 tags: - finance --- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
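A minimal loading sketch, assuming the repository id `sehyun66/Finnhub-News` given below and the `clean` configuration and split declared in the YAML above; the field names follow the features list:

```python
from datasets import load_dataset

# Load the "clean" configuration; headline, summary, source, datetime, etc. follow the YAML features.
news = load_dataset("sehyun66/Finnhub-News", "clean", split="clean")

row = news[0]
print(row["headline"])
print(row["source"], row["datetime"])
```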
sehyun66/Finnhub-News
[ "region:us" ]
2023-09-28T12:37:56+00:00
{"configs": [{"config_name": "clean", "data_files": [{"split": "clean", "path": "clean/clean-*"}]}, {"config_name": "default", "data_files": [{"split": "finbert", "path": "data/finbert-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"config_name": "clean", "features": [{"name": "datetime", "dtype": "int64"}, {"name": "image", "dtype": "string"}, {"name": "related", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "category", "dtype": "string"}, {"name": "headline", "dtype": "string"}], "splits": [{"name": "clean", "num_bytes": 150902085, "num_examples": 316086}], "download_size": 78262136, "dataset_size": 150902085}}
2023-10-12T10:55:56+00:00
[]
[]
TAGS #region-us
--- configs: - config_name: clean data_files: - split: clean path: clean/clean-* - config_name: default data_files: - split: train path: data/train-* dataset_info: config_name: distill_bert features: - name: headline dtype: string - name: summary dtype: string - name: headline_sentiment struct: - name: postive dtype: string - name: negative dtype: string - name: neutral dtype: string - name: summary_sentiment struct: - name: postive dtype: string - name: negative dtype: string - name: neutral dtype: string splits: - name: default num_bytes: 131086592 num_examples: 316086 download_size: 0 dataset_size: 131086592 dataset_info: - config_name: clean features: - name: datetime dtype: int64 - name: image dtype: string - name: related dtype: string - name: source dtype: string - name: summary dtype: string - name: url dtype: string - name: id dtype: int64 - name: category dtype: string - name: headline dtype: string splits: - name: clean num_bytes: 150902085 num_examples: 316086 download_size: 78262136 dataset_size: 150902085 - config_name: default features: - name: related dtype: string - name: datetime dtype: int64 - name: image dtype: string - name: url dtype: string - name: headline dtype: string - name: finbert_sentiment struct: - name: negative dtype: float64 - name: neutral dtype: float64 - name: postive dtype: float64 - name: source dtype: string - name: summary dtype: string - name: id dtype: int64 - name: category dtype: string splits: - name: train num_bytes: 251731744 num_examples: 515851 download_size: 113022298 dataset_size: 251731744 tags: - finance --- More Information needed
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
d6ad13c339176cb23bde62493ac2572d0d3d9287
# German+English Wikitext Wikitext_en_de is a replication of the `wikitext` dataset following the work by [Merity et al. (2016)](https://arxiv.org/abs/1609.07843). It contains (mostly) all articles that Wikipedia classifies as ["exzellent"](https://de.wikipedia.org/wiki/Wikipedia:Exzellente_Artikel) or ["featured"](https://en.wikipedia.org/wiki/Wikipedia:Featured_articles) and can be used, for example, for perplexity evaluation. This dataset was created by first scraping the names of the articles belonging to these categories from Wikipedia. Afterwards, we take a recent dump from wikipedia ("20230901.de" from [`graelo/wikipedia`](https://huggingface.co/datasets/graelo/wikipedia)) and filter the articles to only include those on either list. | Config Name | Num Documents | |-------------|--------------| | exzellent_de | 2822 | | featured_en | 6356 | | exzellent_de_small | 1024 | | featured_en_small | 1024 | The code for creating the datasets is available in this repository ("wikitext_de.py", "wikitext_en.py"). Be aware that this downloads a whole wikipedia dump, which might take a while depending on your connection.
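A short usage sketch, assuming the repository id `LeoLM/wikitext-en-de` given below and a default `train` split (an assumption to verify); the configuration names come from the table above:

```python
from datasets import load_dataset

# Load the German "exzellent" and English "featured" article sets for perplexity evaluation.
wiki_de = load_dataset("LeoLM/wikitext-en-de", "exzellent_de", split="train")
wiki_en = load_dataset("LeoLM/wikitext-en-de", "featured_en", split="train")

print(len(wiki_de), len(wiki_en))  # expected roughly 2822 and 6356 documents per the table
```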
LeoLM/wikitext-en-de
[ "size_categories:1K<n<10K", "language:de", "language:en", "license:cc-by-3.0", "arxiv:1609.07843", "region:us" ]
2023-09-28T12:39:48+00:00
{"language": ["de", "en"], "license": "cc-by-3.0", "size_categories": ["1K<n<10K"], "configs": [{"config_name": "exzellent_de", "data_files": "wiki_de_exzellent.parquet"}, {"config_name": "featured_en", "data_files": "wiki_en_featured.parquet"}, {"config_name": "exzellent_de_small", "data_files": "wiki_de_exzellent_small.parquet"}, {"config_name": "featured_en_small", "data_files": "wiki_en_featured_small.parquet"}]}
2023-09-28T13:04:12+00:00
[ "1609.07843" ]
[ "de", "en" ]
TAGS #size_categories-1K<n<10K #language-German #language-English #license-cc-by-3.0 #arxiv-1609.07843 #region-us
German+English Wikitext ======================= Wikitext\_en\_de is a replication of the 'wikitext' dataset following the work by Merity et al. (2016). It contains (mostly) all articles that Wikipedia classifies as "exzellent" or "featured" and can be used, for example, for perplexity evaluation. This dataset was created by first scraping the names of the articles belonging to these categories from Wikipedia. Afterwards, we take a recent dump from wikipedia ("URL" from 'graelo/wikipedia') and filter the articles to only include those on either list. The code for creating the datasets is available in this repository ("wikitext\_de.py", "wikitext\_en.py"). Be aware that this downloads a whole wikipedia dump, which might take a while depending on your connection.
[]
[ "TAGS\n#size_categories-1K<n<10K #language-German #language-English #license-cc-by-3.0 #arxiv-1609.07843 #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #language-German #language-English #license-cc-by-3.0 #arxiv-1609.07843 #region-us \n" ]
cd6d47ff36fad8bda20c5f5441ecf86a80b020f0
Like the Greeks, the Romans told their own story of a refugee from the Trojan War. Their hero was Aeneas, son of Anchises and the goddess Aphrodite, who sailed from the burning ruins of Troy to found a new city in Italy. It was foretold by the gods that this second Troy would give birth to a race that would rule the world. These people were the Romans, and while they borrowed much from the myths of the Greeks, they gave their own names to the gods. Zeus became Jupiter, Hera was known as Juno, Aphrodite became Venus, and Poseidon ruled over the seas as Neptune. But by whatever names they were called, the gods still ruled the universe and played their endless games with the lives of mortals. Juno was especially vengeful and slow to forgive. She had never forgotten that the Trojan prince Paris chose to give the apple for the most beautiful goddess to Venus instead of her. She became the implacable enemy of Troy and was still not satisfied when the city lay in ashes. It may have been prophesied that Aeneas would found a new and glorious city in the west, but she was determined to make life difficult for the Trojan fugitive—and perhaps even prevent the will of the Fates. Seeing the Trojan fleet sailing the placid sea as it made its way toward the setting sun, Juno flew down to the island of Aeolus, king of the winds, to ask a favor of her old friend. As the Trojans were crossing the sea, she asked of Aeolus to blow them off course in return for a wife. The king of the winds quickly agreed and stirred up a storm to crash against the Trojan fleet. The ships were tossed and scattered as the sky grew black, driving them away from Italian shores toward Africa. After a long struggle, a few of the Trojan ships were cast up together on a desert coast, though none could say where they were. The rest of the fleet was lost, with Aeneas fearing these men and their families were all dead. Aeneas took his steadfast comrade Achates and headed inland to discover what they could learn of this unknown land. Soon they met a young girl with bow and arrows hunting in the brush. They called to her and told her not to be afraid. They were merely castaways who wanted to learn what sort of country they had come to. Could she tell them what king ruled this land and where they might find him? The girl laughed and said there was no king in this realm but a queen—Dido, ruler of Carthage, lately come from the Phoenician city of Sidon to found a new country in the west.
zozos/Passage_1.1
[ "region:us" ]
2023-09-28T12:41:23+00:00
{}
2023-09-28T12:43:04+00:00
[]
[]
TAGS #region-us
Like the Greeks, the Romans told their own story of a refugee from the Trojan War. Their hero was Aeneas, son of Anchises and the goddess Aphrodite, who sailed from the burning ruins of Troy to found a new city in Italy. It was foretold by the gods that this second Troy would give birth to a race that would rule the world. These people were the Romans, and while they borrowed much from the myths of the Greeks, they gave their own names to the gods. Zeus became Jupiter, Hera was known as Juno, Aphrodite became Venus, and Poseidon ruled over the seas as Neptune. But by whatever names they were called, the gods still ruled the universe and played their endless games with the lives of mortals. Juno was especially vengeful and slow to forgive. She had never forgotten that the Trojan prince Paris chose to give the apple for the most beautiful goddess to Venus instead of her. She became the implacable enemy of Troy and was still not satisfied when the city lay in ashes. It may have been prophesied that Aeneas would found a new and glorious city in the west, but she was determined to make life difficult for the Trojan fugitive—and perhaps even prevent the will of the Fates. Seeing the Trojan fleet sailing the placid sea as it made its way toward the setting sun, Juno flew down to the island of Aeolus, king of the winds, to ask a favor of her old friend. As the Trojans were crossing the sea, she asked of Aeolus to blow them off course in return for a wife. The king of the winds quickly agreed and stirred up a storm to crash against the Trojan fleet. The ships were tossed and scattered as the sky grew black, driving them away from Italian shores toward Africa. After a long struggle, a few of the Trojan ships were cast up together on a desert coast, though none could say where they were. The rest of the fleet was lost, with Aeneas fearing these men and their families were all dead. Aeneas took his steadfast comrade Achates and headed inland to discover what they could learn of this unknown land. Soon they met a young girl with bow and arrows hunting in the brush. They called to her and told her not to be afraid. They were merely castaways who wanted to learn what sort of country they had come to. Could she tell them what king ruled this land and where they might find him? The girl laughed and said there was no king in this realm but a queen—Dido, ruler of Carthage, lately come from the Phoenician city of Sidon to found a new country in the west.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
1e17d174d083bd53185a92ca60415f4d575e6e49
# AutoTrain Dataset for project: jump_up_down ## Dataset Description This dataset has been automatically processed by AutoTrain for project jump_up_down. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "image": "<437x960 RGB PIL image>", "target": 0 }, { "image": "<369x800 RGB PIL image>", "target": 1 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "image": "Image(decode=True, id=None)", "target": "ClassLabel(names=['beijing', 'down', 'up'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 432 | | valid | 110 |
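A brief loading sketch, assuming the repository id `xuming/autotrain-data-jump_up_down` given below and a `train` split key matching the table above; the `image` and `target` field names come from the Dataset Fields section:

```python
from datasets import load_dataset

# Load the AutoTrain splits and inspect one labelled image.
ds = load_dataset("xuming/autotrain-data-jump_up_down")

sample = ds["train"][0]
print(sample["target"])      # class index into ['beijing', 'down', 'up']
print(sample["image"].size)  # PIL image size, e.g. (437, 960) as in the instances above
```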
xuming/autotrain-data-jump_up_down
[ "task_categories:image-classification", "region:us" ]
2023-09-28T12:45:35+00:00
{"task_categories": ["image-classification"]}
2023-09-28T13:45:47+00:00
[]
[]
TAGS #task_categories-image-classification #region-us
AutoTrain Dataset for project: jump\_up\_down ============================================= Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project jump\_up\_down. ### Languages The BCP-47 code for the dataset's language is unk. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#task_categories-image-classification #region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ 17, 27, 17, 23, 27 ]
[ "passage: TAGS\n#task_categories-image-classification #region-us \n### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from this dataset looks as follows:### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
8047551e57590f7a23f38f38a4ed36f0eb6ba8bd
# Dataset of Aihara Yuzu This is the dataset of Aihara Yuzu, containing 300 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 300 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 691 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 891 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 300 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 300 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 300 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 691 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 691 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 599 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 891 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 891 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/aihara_yuzu_citrus
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-28T13:09:46+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-28T13:19:06+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Aihara Yuzu ====================== This is the dataset of Aihara Yuzu, containing 300 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
b2711658daba367a79789003c384bf55fce248b7
# Dataset Card for "guanaco-llama2-200_1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mehta77/guanaco-llama2-200_1
[ "region:us" ]
2023-09-28T13:12:42+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 338808, "num_examples": 200}], "download_size": 201258, "dataset_size": 338808}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-28T13:12:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for "guanaco-llama2-200_1" More Information needed
[ "# Dataset Card for \"guanaco-llama2-200_1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"guanaco-llama2-200_1\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"guanaco-llama2-200_1\"\n\nMore Information needed" ]
bee211539751147019c08a3ef79ebd82dcbfebbc
# Dataset Card for "AO3_fandom_chai" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ebony59/AO3_fandom_chai
[ "region:us" ]
2023-09-28T13:15:51+00:00
{"dataset_info": {"features": [{"name": "personalities", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "character_1", "dtype": "string"}, {"name": "character_2", "dtype": "string"}, {"name": "conversations", "list": [{"name": "content", "dtype": "string"}, {"name": "do_train", "dtype": "bool"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3955628, "num_examples": 956}], "download_size": 0, "dataset_size": 3955628}}
2023-09-28T17:11:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for "AO3_fandom_chai" More Information needed
[ "# Dataset Card for \"AO3_fandom_chai\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"AO3_fandom_chai\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"AO3_fandom_chai\"\n\nMore Information needed" ]
06184d646d67b0c183eec7c18cfedcf446eacee8
# Dataset Card for "dataset_train_nli" Dataset for training a universal classifier. Additional information and training code available here: https://github.com/MoritzLaurer/zeroshot-classifier
MoritzLaurer/dataset_train_nli_old
[ "region:us" ]
2023-09-28T13:28:44+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}, {"name": "task_name", "dtype": "string"}, {"name": "label_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 315013288.0, "num_examples": 1018733}], "download_size": 206032209, "dataset_size": 315013288.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-21T13:49:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dataset_train_nli" Dataset for training a universal classifier. Additional information and training code available here: URL
[ "# Dataset Card for \"dataset_train_nli\"\n\nDataset for training a universal classifier. Additional information and training code available here: URL" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dataset_train_nli\"\n\nDataset for training a universal classifier. Additional information and training code available here: URL" ]
[ 6, 35 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dataset_train_nli\"\n\nDataset for training a universal classifier. Additional information and training code available here: URL" ]
571eb20866a3ddcc83495ffc6e668d3a8b0a7068
# Dataset Card for "dataset_test_concat_nli" Dataset for testing a universal classifier. Additional information and training code available here: https://github.com/MoritzLaurer/zeroshot-classifier
MoritzLaurer/dataset_test_concat_nli
[ "region:us" ]
2023-09-28T13:30:12+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}, {"name": "task_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15114416, "num_examples": 59140}], "download_size": 8715544, "dataset_size": 15114416}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-29T18:40:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dataset_test_concat_nli" Dataset for testing a universal classifier. Additional information and training code available here: URL
[ "# Dataset Card for \"dataset_test_concat_nli\"\n\nDataset for testing a universal classifier. Additional information and training code available here: URL" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dataset_test_concat_nli\"\n\nDataset for testing a universal classifier. Additional information and training code available here: URL" ]
[ 6, 37 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dataset_test_concat_nli\"\n\nDataset for testing a universal classifier. Additional information and training code available here: URL" ]
323e03758cc13d928b60fd10d95b5cfb4bf46ddc
# Generated E-mail Spam The dataset consists of a **CSV file** containing of 300 generated email spam messages. Each row in the file represents a separate email message, its *title and text.* The dataset aims to facilitate the analysis and detection of spam emails. The dataset can be used for various purposes, such as *training machine learning algorithms to classify and filter spam emails, studying spam email patterns, or analyzing text-based features of spam messages*. ![](https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F12421376%2Fdefd7209a4510c98e556ca384c8ace68%2Finbox_618942_4d1fdedb2827152696dd0c0af05fd8da_f.png?generation=1695221394608089&alt=media) # Get the dataset ### This is just an example of the data Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=generated-e-mail-spam) to discuss your requirements, learn about the price and buy the dataset. # Content ### File with the extension .csv (utf-8) includes the following information: - **title**: title of the email, - **text**: text of the email # Email spam might be generated in accordance with your requirements. ## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=generated-e-mail-spam)** provides high-quality data annotation tailored to your needs More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets** TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
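A small reading sketch, assuming the repository id `TrainingDataPro/generated-e-mail-spam` given below and its single `train` split; the `title` and `text` columns come from the Content section:

```python
from datasets import load_dataset

# Load the 300 generated spam e-mails and preview one record.
spam = load_dataset("TrainingDataPro/generated-e-mail-spam", split="train")

row = spam[0]
print(row["title"])
print(row["text"][:200])
```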
TrainingDataPro/generated-e-mail-spam
[ "task_categories:text-generation", "task_categories:text-classification", "language:en", "license:cc-by-nc-nd-4.0", "code", "finance", "region:us" ]
2023-09-28T13:36:07+00:00
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["text-generation", "text-classification"], "tags": ["code", "finance"], "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "large_string"}], "splits": [{"name": "train", "num_bytes": 233533, "num_examples": 300}], "download_size": 230500, "dataset_size": 233533}}
2023-09-28T14:29:45+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-text-classification #language-English #license-cc-by-nc-nd-4.0 #code #finance #region-us
# Generated E-mail Spam The dataset consists of a CSV file containing 300 generated email spam messages. Each row in the file represents a separate email message, its *title and text.* The dataset aims to facilitate the analysis and detection of spam emails. The dataset can be used for various purposes, such as *training machine learning algorithms to classify and filter spam emails, studying spam email patterns, or analyzing text-based features of spam messages*. ![](URL # Get the dataset ### This is just an example of the data Leave a request on URL to discuss your requirements, learn about the price and buy the dataset. # Content ### File with the extension .csv (utf-8) includes the following information: - title: title of the email, - text: text of the email # Email spam might be generated in accordance with your requirements. ## TrainingData provides high-quality data annotation tailored to your needs More datasets in TrainingData's Kaggle account: URL TrainingData's GitHub: URL
[ "# Generated E-mail Spam\n\nThe dataset consists of a CSV file containing of 300 generated email spam messages. Each row in the file represents a separate email message, its *title and text.* The dataset aims to facilitate the analysis and detection of spam emails.\n\nThe dataset can be used for various purposes, such as *training machine learning algorithms to classify and filter spam emails, studying spam email patterns, or analyzing text-based features of spam messages*.\n\n![](URL", "# Get the dataset", "### This is just an example of the data\n\nLeave a request on URL to discuss your requirements, learn about the price and buy the dataset.", "# Content", "### File with the extension .csv (utf-8)\nincludes the following information:\n\n- title: title of the email,\n- text: text of the email", "# Email spam might be generated in accordance with your requirements.", "## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL" ]
[ "TAGS\n#task_categories-text-generation #task_categories-text-classification #language-English #license-cc-by-nc-nd-4.0 #code #finance #region-us \n", "# Generated E-mail Spam\n\nThe dataset consists of a CSV file containing of 300 generated email spam messages. Each row in the file represents a separate email message, its *title and text.* The dataset aims to facilitate the analysis and detection of spam emails.\n\nThe dataset can be used for various purposes, such as *training machine learning algorithms to classify and filter spam emails, studying spam email patterns, or analyzing text-based features of spam messages*.\n\n![](URL", "# Get the dataset", "### This is just an example of the data\n\nLeave a request on URL to discuss your requirements, learn about the price and buy the dataset.", "# Content", "### File with the extension .csv (utf-8)\nincludes the following information:\n\n- title: title of the email,\n- text: text of the email", "# Email spam might be generated in accordance with your requirements.", "## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL" ]
[ 50, 118, 5, 30, 2, 35, 13, 39 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-text-classification #language-English #license-cc-by-nc-nd-4.0 #code #finance #region-us \n# Generated E-mail Spam\n\nThe dataset consists of a CSV file containing of 300 generated email spam messages. Each row in the file represents a separate email message, its *title and text.* The dataset aims to facilitate the analysis and detection of spam emails.\n\nThe dataset can be used for various purposes, such as *training machine learning algorithms to classify and filter spam emails, studying spam email patterns, or analyzing text-based features of spam messages*.\n\n![](URL# Get the dataset### This is just an example of the data\n\nLeave a request on URL to discuss your requirements, learn about the price and buy the dataset.# Content### File with the extension .csv (utf-8)\nincludes the following information:\n\n- title: title of the email,\n- text: text of the email# Email spam might be generated in accordance with your requirements.## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL" ]
b17c8e0c0c4e35d7beddf7e43b5a222ffade52d6
# Dataset Card for Reflexive Polyhedra of Calabi-Yau Threefolds ## Table of Contents - [Dataset Description](#dataset-description) - [General Information](#general-information) - [Dataset Origin](#dataset-origin) - [Dataset Characteristics](#dataset-characteristics) - [Schema](#schema) - [Data Fields](#data-fields) - [Data Format](#data-format) - [Usage](#usage) - [Getting Started](#getting-started) - [Machine Learning Applications](#machine-learning-applications) - [Citations](#citations) ## Dataset Description ### General Information Calabi-Yau threefolds are a special class of smooth, compact three-dimensional spaces that have become fundamental objects in both mathematics and theoretical physics. In the context of string theory, they serve as the internal dimensions over which strings compactify, leading to a four-dimensional effective theory. The geometry of these threefolds is closely related to many physical phenomena, including the number of particle generations, gauge symmetries, and the cosmological constant. This dataset encompasses all 4319 reflexive polyhedra in 3 dimensions, offering a comprehensive view of potential Calabi-Yau geometries. The reflexive polyhedra serve as dual representations of these threefolds and are crucial in understanding their topological and geometric properties. ### Dataset Origin The dataset is derived from the original work documented in [hep-th/9805190](https://arxiv.org/abs/hep-th/9805190). While the original dataset was in a PALP-compatible structure, this version has been converted to a nested JSON format to better accommodate machine learning applications. The PALP-compatible version of the dataset can be accessed at [CYk3](http://hep.itp.tuwien.ac.at/~kreuzer/CY/CYk3.html). ## Dataset Characteristics ### Schema The dataset is presented in a nested JSON format, with each entry containing both metadata and a matrix representing the vertices of the corresponding polyhedron. ### Data Fields - `M1`, `M2`: These are point and vertex numbers in the M lattice, which is a mathematical lattice in the context of toric geometry. This lattice serves as the foundational geometric space from which the polyhedron is constructed. - `N1`, `N2`: Similar to the M lattice, these are point and vertex numbers in the N lattice, which is dual to the M lattice. The N lattice provides a different but equally important geometric perspective for understanding the polyhedron. - `Pic`: The Picard number is a topological invariant that measures the rank of the Néron-Severi group of a manifold. In the context of Calabi-Yau threefolds, it helps to determine the number of independent 2-cycles, which has physical implications like the number of U(1) gauge fields in the effective theory. - `Cor`: The correction term is a specific mathematical entity that adjusts the Picard number to account for certain topological peculiarities. The Picard numbers of a polyhedron and its dual add up to \( 20 + \text{correction} \). - `Matrix`: A 3xN matrix containing the coordinates of the vertices of the polyhedron. Each row represents a dimension in 3D space, and each column represents a vertex. ### Data Format Each entry in the dataset is structured as follows: ```json { "M1": ..., "M2": ..., "N1": ..., "N2": ..., "Pic": ..., "Cor": ..., "Matrix": [ [...], [...], [...] 
  ]
}
```

## Usage

### Getting Started

To access the dataset using the Hugging Face `datasets` library, the following Python code can be used:

```python
from datasets import load_dataset

dataset = load_dataset("avermeersch/calabi-yau-threefolds")
```

### Machine Learning Applications

This dataset provides rich opportunities for various machine learning tasks:

- Geometric deep learning for topological invariant prediction.
- Unsupervised learning techniques for polyhedra clustering.
- Graph neural networks to model vertex connections.

### Citations

For dataset usage, please cite the original paper using the following BibTeX entry:

```bibtex
@misc{kreuzer1998classification,
      title={Classification of Reflexive Polyhedra in Three Dimensions},
      author={M. Kreuzer and H. Skarke},
      year={1998},
      eprint={hep-th/9805190},
      archivePrefix={arXiv},
      primaryClass={hep-th}
}
```
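As a rough illustration of working with individual entries (a sketch only — the `train` split name and the NumPy conversion are assumptions on top of the fields described above, not part of the original tooling):

```python
import numpy as np
from datasets import load_dataset

# Assumes the default configuration exposes a single "train" split.
cy = load_dataset("avermeersch/calabi-yau-threefolds", split="train")

entry = cy[0]
vertices = np.array(entry["Matrix"])   # shape (3, N): rows are 3D coordinates, columns are vertices
print(entry["M1"], entry["M2"], entry["N1"], entry["N2"], entry["Pic"], entry["Cor"])
print(vertices.T)                      # one vertex per row
```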
avermeersch/calabi-yau-threefolds
[ "size_categories:1K<n<10K", "license:other", "Calabi-Yau", "Toric Geometry", "String Theory", "Polyhedra", "Geometry", "Physics", "region:us" ]
2023-09-28T13:38:46+00:00
{"license": "other", "size_categories": ["1K<n<10K"], "pretty_name": "Calabi-Yau 3-Folds", "name": "Reflexive Polyhedra of Calabi-Yau Threefolds", "date": "2023-09-28", "domain": "Physics", "tags": ["Calabi-Yau", "Toric Geometry", "String Theory", "Polyhedra", "Geometry", "Physics"]}
2023-12-13T15:04:21+00:00
[]
[]
TAGS #size_categories-1K<n<10K #license-other #Calabi-Yau #Toric Geometry #String Theory #Polyhedra #Geometry #Physics #region-us
# Dataset Card for Reflexive Polyhedra of Calabi-Yau Threefolds ## Table of Contents - Dataset Description - General Information - Dataset Origin - Dataset Characteristics - Schema - Data Fields - Data Format - Usage - Getting Started - Machine Learning Applications - Citations ## Dataset Description ### General Information Calabi-Yau threefolds are a special class of smooth, compact three-dimensional spaces that have become fundamental objects in both mathematics and theoretical physics. In the context of string theory, they serve as the internal dimensions over which strings compactify, leading to a four-dimensional effective theory. The geometry of these threefolds is closely related to many physical phenomena, including the number of particle generations, gauge symmetries, and the cosmological constant. This dataset encompasses all 4319 reflexive polyhedra in 3 dimensions, offering a comprehensive view of potential Calabi-Yau geometries. The reflexive polyhedra serve as dual representations of these threefolds and are crucial in understanding their topological and geometric properties. ### Dataset Origin The dataset is derived from the original work documented in hep-th/9805190. While the original dataset was in a PALP-compatible structure, this version has been converted to a nested JSON format to better accommodate machine learning applications. The PALP-compatible version of the dataset can be accessed at CYk3. ## Dataset Characteristics ### Schema The dataset is presented in a nested JSON format, with each entry containing both metadata and a matrix representing the vertices of the corresponding polyhedron. ### Data Fields - 'M1', 'M2': These are point and vertex numbers in the M lattice, which is a mathematical lattice in the context of toric geometry. This lattice serves as the foundational geometric space from which the polyhedron is constructed. - 'N1', 'N2': Similar to the M lattice, these are point and vertex numbers in the N lattice, which is dual to the M lattice. The N lattice provides a different but equally important geometric perspective for understanding the polyhedron. - 'Pic': The Picard number is a topological invariant that measures the rank of the Néron-Severi group of a manifold. In the context of Calabi-Yau threefolds, it helps to determine the number of independent 2-cycles, which has physical implications like the number of U(1) gauge fields in the effective theory. - 'Cor': The correction term is a specific mathematical entity that adjusts the Picard number to account for certain topological peculiarities. The Picard numbers of a polyhedron and its dual add up to \( 20 + \text{correction} \). - 'Matrix': A 3xN matrix containing the coordinates of the vertices of the polyhedron. Each row represents a dimension in 3D space, and each column represents a vertex. ### Data Format Each entry in the dataset is structured as follows: ## Usage ### Getting Started To access the dataset using the Hugging Face 'datasets' library, the following Python code can be used: ### Machine Learning Applications This dataset provides rich opportunities for various machine learning tasks: - Geometric deep learning for topological invariant prediction. - Unsupervised learning techniques for polyhedra clustering. - Graph neural networks to model vertex connections. s For dataset usage, please cite the original paper using the following BibTeX entry:
[ "# Dataset Card for Reflexive Polyhedra of Calabi-Yau Threefolds", "## Table of Contents\n\n- Dataset Description\n - General Information\n - Dataset Origin\n- Dataset Characteristics\n - Schema\n - Data Fields\n - Data Format\n- Usage\n - Getting Started\n - Machine Learning Applications\n - Citations", "## Dataset Description", "### General Information\n\nCalabi-Yau threefolds are a special class of smooth, compact three-dimensional spaces that have become fundamental objects in both mathematics and theoretical physics. In the context of string theory, they serve as the internal dimensions over which strings compactify, leading to a four-dimensional effective theory. The geometry of these threefolds is closely related to many physical phenomena, including the number of particle generations, gauge symmetries, and the cosmological constant. This dataset encompasses all 4319 reflexive polyhedra in 3 dimensions, offering a comprehensive view of potential Calabi-Yau geometries. The reflexive polyhedra serve as dual representations of these threefolds and are crucial in understanding their topological and geometric properties.", "### Dataset Origin\n\nThe dataset is derived from the original work documented in hep-th/9805190. While the original dataset was in a PALP-compatible structure, this version has been converted to a nested JSON format to better accommodate machine learning applications. The PALP-compatible version of the dataset can be accessed at CYk3.", "## Dataset Characteristics", "### Schema\n\nThe dataset is presented in a nested JSON format, with each entry containing both metadata and a matrix representing the vertices of the corresponding polyhedron.", "### Data Fields\n\n- 'M1', 'M2': These are point and vertex numbers in the M lattice, which is a mathematical lattice in the context of toric geometry. This lattice serves as the foundational geometric space from which the polyhedron is constructed.\n \n- 'N1', 'N2': Similar to the M lattice, these are point and vertex numbers in the N lattice, which is dual to the M lattice. The N lattice provides a different but equally important geometric perspective for understanding the polyhedron.\n \n- 'Pic': The Picard number is a topological invariant that measures the rank of the Néron-Severi group of a manifold. In the context of Calabi-Yau threefolds, it helps to determine the number of independent 2-cycles, which has physical implications like the number of U(1) gauge fields in the effective theory.\n\n- 'Cor': The correction term is a specific mathematical entity that adjusts the Picard number to account for certain topological peculiarities. The Picard numbers of a polyhedron and its dual add up to \\( 20 + \\text{correction} \\).\n\n- 'Matrix': A 3xN matrix containing the coordinates of the vertices of the polyhedron. Each row represents a dimension in 3D space, and each column represents a vertex.", "### Data Format\n\nEach entry in the dataset is structured as follows:", "## Usage", "### Getting Started\n\nTo access the dataset using the Hugging Face 'datasets' library, the following Python code can be used:", "### Machine Learning Applications\n\nThis dataset provides rich opportunities for various machine learning tasks:\n\n- Geometric deep learning for topological invariant prediction.\n- Unsupervised learning techniques for polyhedra clustering.\n- Graph neural networks to model vertex connections.\n\ns\n\nFor dataset usage, please cite the original paper using the following BibTeX entry:" ]
[ "TAGS\n#size_categories-1K<n<10K #license-other #Calabi-Yau #Toric Geometry #String Theory #Polyhedra #Geometry #Physics #region-us \n", "# Dataset Card for Reflexive Polyhedra of Calabi-Yau Threefolds", "## Table of Contents\n\n- Dataset Description\n - General Information\n - Dataset Origin\n- Dataset Characteristics\n - Schema\n - Data Fields\n - Data Format\n- Usage\n - Getting Started\n - Machine Learning Applications\n - Citations", "## Dataset Description", "### General Information\n\nCalabi-Yau threefolds are a special class of smooth, compact three-dimensional spaces that have become fundamental objects in both mathematics and theoretical physics. In the context of string theory, they serve as the internal dimensions over which strings compactify, leading to a four-dimensional effective theory. The geometry of these threefolds is closely related to many physical phenomena, including the number of particle generations, gauge symmetries, and the cosmological constant. This dataset encompasses all 4319 reflexive polyhedra in 3 dimensions, offering a comprehensive view of potential Calabi-Yau geometries. The reflexive polyhedra serve as dual representations of these threefolds and are crucial in understanding their topological and geometric properties.", "### Dataset Origin\n\nThe dataset is derived from the original work documented in hep-th/9805190. While the original dataset was in a PALP-compatible structure, this version has been converted to a nested JSON format to better accommodate machine learning applications. The PALP-compatible version of the dataset can be accessed at CYk3.", "## Dataset Characteristics", "### Schema\n\nThe dataset is presented in a nested JSON format, with each entry containing both metadata and a matrix representing the vertices of the corresponding polyhedron.", "### Data Fields\n\n- 'M1', 'M2': These are point and vertex numbers in the M lattice, which is a mathematical lattice in the context of toric geometry. This lattice serves as the foundational geometric space from which the polyhedron is constructed.\n \n- 'N1', 'N2': Similar to the M lattice, these are point and vertex numbers in the N lattice, which is dual to the M lattice. The N lattice provides a different but equally important geometric perspective for understanding the polyhedron.\n \n- 'Pic': The Picard number is a topological invariant that measures the rank of the Néron-Severi group of a manifold. In the context of Calabi-Yau threefolds, it helps to determine the number of independent 2-cycles, which has physical implications like the number of U(1) gauge fields in the effective theory.\n\n- 'Cor': The correction term is a specific mathematical entity that adjusts the Picard number to account for certain topological peculiarities. The Picard numbers of a polyhedron and its dual add up to \\( 20 + \\text{correction} \\).\n\n- 'Matrix': A 3xN matrix containing the coordinates of the vertices of the polyhedron. 
Each row represents a dimension in 3D space, and each column represents a vertex.", "### Data Format\n\nEach entry in the dataset is structured as follows:", "## Usage", "### Getting Started\n\nTo access the dataset using the Hugging Face 'datasets' library, the following Python code can be used:", "### Machine Learning Applications\n\nThis dataset provides rich opportunities for various machine learning tasks:\n\n- Geometric deep learning for topological invariant prediction.\n- Unsupervised learning techniques for polyhedra clustering.\n- Graph neural networks to model vertex connections.\n\ns\n\nFor dataset usage, please cite the original paper using the following BibTeX entry:" ]
[ 53, 19, 48, 4, 178, 84, 7, 43, 323, 17, 3, 31, 80 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #license-other #Calabi-Yau #Toric Geometry #String Theory #Polyhedra #Geometry #Physics #region-us \n# Dataset Card for Reflexive Polyhedra of Calabi-Yau Threefolds## Table of Contents\n\n- Dataset Description\n - General Information\n - Dataset Origin\n- Dataset Characteristics\n - Schema\n - Data Fields\n - Data Format\n- Usage\n - Getting Started\n - Machine Learning Applications\n - Citations## Dataset Description### General Information\n\nCalabi-Yau threefolds are a special class of smooth, compact three-dimensional spaces that have become fundamental objects in both mathematics and theoretical physics. In the context of string theory, they serve as the internal dimensions over which strings compactify, leading to a four-dimensional effective theory. The geometry of these threefolds is closely related to many physical phenomena, including the number of particle generations, gauge symmetries, and the cosmological constant. This dataset encompasses all 4319 reflexive polyhedra in 3 dimensions, offering a comprehensive view of potential Calabi-Yau geometries. The reflexive polyhedra serve as dual representations of these threefolds and are crucial in understanding their topological and geometric properties.### Dataset Origin\n\nThe dataset is derived from the original work documented in hep-th/9805190. While the original dataset was in a PALP-compatible structure, this version has been converted to a nested JSON format to better accommodate machine learning applications. The PALP-compatible version of the dataset can be accessed at CYk3.## Dataset Characteristics### Schema\n\nThe dataset is presented in a nested JSON format, with each entry containing both metadata and a matrix representing the vertices of the corresponding polyhedron." ]
9ddf867bbe4f15bb13f6f0a8b40ee42d0b5d2389
# Dataset Card for "dataset_test_disaggregated_nli" Dataset for testing a universal classifier. Additional information and training code available here: https://github.com/MoritzLaurer/zeroshot-classifier
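A minimal loading sketch, assuming the split names and fields listed in this repository's configuration (any of the splits, e.g. `mnli_m`, `fevernli`, `amazonpolarity`, can be substituted):

```python
from datasets import load_dataset

# Each split is one downstream task reformulated as a binary entailment problem.
ds = load_dataset("MoritzLaurer/dataset_test_disaggregated_nli", split="mnli_m")

example = ds[0]
print(example["task_name"], example["label_text"])   # task and verbalised label
print(example["text"])
print(example["hypothesis"])                         # labels: 0 = entailment, 1 = not_entailment
```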
MoritzLaurer/dataset_test_disaggregated_nli
[ "region:us" ]
2023-09-28T13:43:00+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "mnli_m", "path": "data/mnli_m-*"}, {"split": "mnli_mm", "path": "data/mnli_mm-*"}, {"split": "fevernli", "path": "data/fevernli-*"}, {"split": "anli_r1", "path": "data/anli_r1-*"}, {"split": "anli_r2", "path": "data/anli_r2-*"}, {"split": "anli_r3", "path": "data/anli_r3-*"}, {"split": "wanli", "path": "data/wanli-*"}, {"split": "lingnli", "path": "data/lingnli-*"}, {"split": "wellformedquery", "path": "data/wellformedquery-*"}, {"split": "rottentomatoes", "path": "data/rottentomatoes-*"}, {"split": "amazonpolarity", "path": "data/amazonpolarity-*"}, {"split": "imdb", "path": "data/imdb-*"}, {"split": "yelpreviews", "path": "data/yelpreviews-*"}, {"split": "hatexplain", "path": "data/hatexplain-*"}, {"split": "massive", "path": "data/massive-*"}, {"split": "banking77", "path": "data/banking77-*"}, {"split": "emotiondair", "path": "data/emotiondair-*"}, {"split": "emocontext", "path": "data/emocontext-*"}, {"split": "empathetic", "path": "data/empathetic-*"}, {"split": "agnews", "path": "data/agnews-*"}, {"split": "yahootopics", "path": "data/yahootopics-*"}, {"split": "biasframes_sex", "path": "data/biasframes_sex-*"}, {"split": "biasframes_offensive", "path": "data/biasframes_offensive-*"}, {"split": "biasframes_intent", "path": "data/biasframes_intent-*"}, {"split": "financialphrasebank", "path": "data/financialphrasebank-*"}, {"split": "appreviews", "path": "data/appreviews-*"}, {"split": "hateoffensive", "path": "data/hateoffensive-*"}, {"split": "trueteacher", "path": "data/trueteacher-*"}, {"split": "spam", "path": "data/spam-*"}, {"split": "wikitoxic_toxicaggregated", "path": "data/wikitoxic_toxicaggregated-*"}, {"split": "wikitoxic_obscene", "path": "data/wikitoxic_obscene-*"}, {"split": "wikitoxic_identityhate", "path": "data/wikitoxic_identityhate-*"}, {"split": "wikitoxic_threat", "path": "data/wikitoxic_threat-*"}, {"split": "wikitoxic_insult", "path": "data/wikitoxic_insult-*"}, {"split": "manifesto", "path": "data/manifesto-*"}, {"split": "capsotu", "path": "data/capsotu-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}, {"name": "task_name", "dtype": "string"}, {"name": "label_text", "dtype": "string"}], "splits": [{"name": "mnli_m", "num_bytes": 2055427, "num_examples": 9815}, {"name": "mnli_mm", "num_bytes": 2181179, "num_examples": 9832}, {"name": "fevernli", "num_bytes": 7532028, "num_examples": 19652}, {"name": "anli_r1", "num_bytes": 433064, "num_examples": 1000}, {"name": "anli_r2", "num_bytes": 432927, "num_examples": 1000}, {"name": "anli_r3", "num_bytes": 501290, "num_examples": 1200}, {"name": "wanli", "num_bytes": 940472, "num_examples": 5000}, {"name": "lingnli", "num_bytes": 1078241, "num_examples": 4893}, {"name": "wellformedquery", "num_bytes": 815799, "num_examples": 5934}, {"name": "rottentomatoes", "num_bytes": 493664, "num_examples": 2132}, {"name": "amazonpolarity", "num_bytes": 10798222, "num_examples": 20000}, {"name": "imdb", "num_bytes": 27862150, "num_examples": 20000}, {"name": "yelpreviews", "num_bytes": 15688830, "num_examples": 20000}, {"name": "hatexplain", "num_bytes": 710204, "num_examples": 2922}, {"name": "massive", "num_bytes": 23911774, "num_examples": 175466}, {"name": "banking77", "num_bytes": 40018400, "num_examples": 221760}, {"name": "emotiondair", "num_bytes": 2202560, "num_examples": 12000}, {"name": 
"emocontext", "num_bytes": 3575972, "num_examples": 22036}, {"name": "empathetic", "num_bytes": 52139926, "num_examples": 81344}, {"name": "agnews", "num_bytes": 9630696, "num_examples": 30400}, {"name": "yahootopics", "num_bytes": 343270530, "num_examples": 500000}, {"name": "biasframes_sex", "num_bytes": 1830030, "num_examples": 8808}, {"name": "biasframes_offensive", "num_bytes": 1785704, "num_examples": 7676}, {"name": "biasframes_intent", "num_bytes": 1592094, "num_examples": 7296}, {"name": "financialphrasebank", "num_bytes": 514854, "num_examples": 2070}, {"name": "appreviews", "num_bytes": 2414054, "num_examples": 8000}, {"name": "hateoffensive", "num_bytes": 493480, "num_examples": 2586}, {"name": "trueteacher", "num_bytes": 24821652, "num_examples": 17910}, {"name": "spam", "num_bytes": 292810, "num_examples": 2070}, {"name": "wikitoxic_toxicaggregated", "num_bytes": 9026954, "num_examples": 20000}, {"name": "wikitoxic_obscene", "num_bytes": 7951550, "num_examples": 17382}, {"name": "wikitoxic_identityhate", "num_bytes": 5734460, "num_examples": 11424}, {"name": "wikitoxic_threat", "num_bytes": 5174652, "num_examples": 10422}, {"name": "wikitoxic_insult", "num_bytes": 7364528, "num_examples": 16854}, {"name": "manifesto", "num_bytes": 417565056, "num_examples": 953008}, {"name": "capsotu", "num_bytes": 24646828, "num_examples": 70455}], "download_size": 10536386, "dataset_size": 1057482061}}
2023-11-29T18:40:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dataset_test_disaggregated_nli" Dataset for testing a universal classifier. Additional information and training code available here: URL
[ "# Dataset Card for \"dataset_test_disaggregated_nli\"\n\nDataset for testing a universal classifier. Additional information and training code available here: URL" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dataset_test_disaggregated_nli\"\n\nDataset for testing a universal classifier. Additional information and training code available here: URL" ]
[ 6, 38 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dataset_test_disaggregated_nli\"\n\nDataset for testing a universal classifier. Additional information and training code available here: URL" ]
6f6d34c4a303bc3c0555ecd47cbb49b4988f5034
# ACCO (Collective Company Agreements)

[Company agreements](https://echanges.dila.gouv.fr/OPENDATA/ACCO/) published in accordance with the article of decree no. 2017-752 of 3 May 2017 on the publication of collective agreements.
These agreements may concern:
- groups
- companies
- establishments
 
The following are published:
- agreements concluded
- their amendment(s)
- their deletion
 
The database contains company agreements concluded on or after 1 September 2017.
As a transitional measure until 1 October 2018, the data does not include the first and last names of the negotiators and signatories.
After this date, the data will be published by default, unless anonymisation is requested from the Direction Générale du Travail and carried out at source by the latter before publication.
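A minimal loading sketch (assuming the default configuration, in which each record carries an `id` and the agreement `text`):

```python
from datasets import load_dataset

acco = load_dataset("Nicolas-BZRD/ACCO_opendata", split="train")

# Each row is one published company agreement (or an amendment/deletion notice).
example = acco[0]
print(example["id"])
print(example["text"][:500])   # first 500 characters of the agreement text
```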
Nicolas-BZRD/ACCO_opendata
[ "size_categories:100K<n<1M", "language:fr", "license:odc-by", "legal", "region:us" ]
2023-09-28T13:46:58+00:00
{"language": ["fr"], "license": "odc-by", "size_categories": ["100K<n<1M"], "pretty_name": "Collecttive Company Agreements", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3677709236, "num_examples": 254140}], "download_size": 1076143081, "dataset_size": 3677709236}, "tags": ["legal"]}
2023-09-28T18:01:30+00:00
[]
[ "fr" ]
TAGS #size_categories-100K<n<1M #language-French #license-odc-by #legal #region-us
# ACCO (Collecttive Company Agreements) Company agreements published in accordance with article of decree no. 2017-752 of 3 May 2017 on the publication of collective agreements. These agreements may concern: - groups - companies - establishments The following are published: - agreements concluded - their amendment(s) - their deletion The database contains company agreements concluded on or after 1 September 2017. As a transitional measure until 1 October 2018, the data does not include the first and last names of the negotiators and signatories. After this date, the data will be published by default, unless anonymisation is requested from the Direction Générale du Travail and carried out at source by the latter before publication.
[ "# ACCO (Collecttive Company Agreements)\n\nCompany agreements published in accordance with article of decree no. 2017-752 of 3 May 2017 on the publication of collective agreements.\nThese agreements may concern:\n- groups\n- companies\n- establishments\n \nThe following are published:\n- agreements concluded\n- their amendment(s)\n- their deletion\n \nThe database contains company agreements concluded on or after 1 September 2017.\nAs a transitional measure until 1 October 2018, the data does not include the first and last names of the negotiators and signatories.\nAfter this date, the data will be published by default, unless anonymisation is requested from the Direction Générale du Travail and carried out at source by the latter before publication." ]
[ "TAGS\n#size_categories-100K<n<1M #language-French #license-odc-by #legal #region-us \n", "# ACCO (Collecttive Company Agreements)\n\nCompany agreements published in accordance with article of decree no. 2017-752 of 3 May 2017 on the publication of collective agreements.\nThese agreements may concern:\n- groups\n- companies\n- establishments\n \nThe following are published:\n- agreements concluded\n- their amendment(s)\n- their deletion\n \nThe database contains company agreements concluded on or after 1 September 2017.\nAs a transitional measure until 1 October 2018, the data does not include the first and last names of the negotiators and signatories.\nAfter this date, the data will be published by default, unless anonymisation is requested from the Direction Générale du Travail and carried out at source by the latter before publication." ]
[ 34, 159 ]
[ "passage: TAGS\n#size_categories-100K<n<1M #language-French #license-odc-by #legal #region-us \n# ACCO (Collecttive Company Agreements)\n\nCompany agreements published in accordance with article of decree no. 2017-752 of 3 May 2017 on the publication of collective agreements.\nThese agreements may concern:\n- groups\n- companies\n- establishments\n \nThe following are published:\n- agreements concluded\n- their amendment(s)\n- their deletion\n \nThe database contains company agreements concluded on or after 1 September 2017.\nAs a transitional measure until 1 October 2018, the data does not include the first and last names of the negotiators and signatories.\nAfter this date, the data will be published by default, unless anonymisation is requested from the Direction Générale du Travail and carried out at source by the latter before publication." ]
7296e870ffb6c3ea87dfe5756c4918cc830502d5
# Dataset of Aihara Mei This is the dataset of Aihara Mei, containing 237 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 237 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 493 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 621 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 237 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 237 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 237 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 493 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 493 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 442 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 621 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 621 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/aihara_mei_citrus
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-28T13:47:31+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-28T13:54:18+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Aihara Mei ===================== This is the dataset of Aihara Mei, containing 237 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
3652393f2639e1bd2d6389a4527f9ca160e5e1cb
# Introduction AGIEval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. This benchmark is derived from 20 official, public, and high-standard admission and qualification exams intended for general human test-takers, such as general college admission tests (e.g., Chinese College Entrance Exam (Gaokao) and American SAT), law school admission tests, math competitions, lawyer qualification tests, and national civil service exams. For a full description of the benchmark, please refer to our paper: [AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models](https://arxiv.org/pdf/2304.06364.pdf). More info and details at the homepage of the dataset: https://github.com/ruixiangcui/AGIEval
lighteval/agi_eval_en
[ "arxiv:2304.06364", "region:us" ]
2023-09-28T13:59:03+00:00
{}
2023-10-17T13:46:49+00:00
[ "2304.06364" ]
[]
TAGS #arxiv-2304.06364 #region-us
# Introduction AGIEval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. This benchmark is derived from 20 official, public, and high-standard admission and qualification exams intended for general human test-takers, such as general college admission tests (e.g., Chinese College Entrance Exam (Gaokao) and American SAT), law school admission tests, math competitions, lawyer qualification tests, and national civil service exams. For a full description of the benchmark, please refer to our paper: AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models. More info and details at the homepage of the dataset: URL
[ "# Introduction\nAGIEval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. \nThis benchmark is derived from 20 official, public, and high-standard admission and qualification exams intended for general human test-takers, such as general college admission tests (e.g., Chinese College Entrance Exam (Gaokao) and American SAT), law school admission tests, math competitions, lawyer qualification tests, and national civil service exams. \nFor a full description of the benchmark, please refer to our paper: AGIEval: A Human-Centric Benchmark for\nEvaluating Foundation Models.\n\nMore info and details at the homepage of the dataset: URL" ]
[ "TAGS\n#arxiv-2304.06364 #region-us \n", "# Introduction\nAGIEval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. \nThis benchmark is derived from 20 official, public, and high-standard admission and qualification exams intended for general human test-takers, such as general college admission tests (e.g., Chinese College Entrance Exam (Gaokao) and American SAT), law school admission tests, math competitions, lawyer qualification tests, and national civil service exams. \nFor a full description of the benchmark, please refer to our paper: AGIEval: A Human-Centric Benchmark for\nEvaluating Foundation Models.\n\nMore info and details at the homepage of the dataset: URL" ]
[ 15, 164 ]
[ "passage: TAGS\n#arxiv-2304.06364 #region-us \n# Introduction\nAGIEval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. \nThis benchmark is derived from 20 official, public, and high-standard admission and qualification exams intended for general human test-takers, such as general college admission tests (e.g., Chinese College Entrance Exam (Gaokao) and American SAT), law school admission tests, math competitions, lawyer qualification tests, and national civil service exams. \nFor a full description of the benchmark, please refer to our paper: AGIEval: A Human-Centric Benchmark for\nEvaluating Foundation Models.\n\nMore info and details at the homepage of the dataset: URL" ]
04a9002c0e2eeec39bad78c9c64bf1e504bd13e9
# Dataset of Taniguchi Harumi This is the dataset of Taniguchi Harumi, containing 72 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 72 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 173 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 192 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 72 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 72 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 72 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 173 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 173 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 142 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 192 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 192 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/taniguchi_harumi_citrus
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-28T14:02:00+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-28T14:04:25+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Taniguchi Harumi =========================== This is the dataset of Taniguchi Harumi, containing 72 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
c9d1a6f10f40f93451114efc06873135e1dd2bed
# Dataset Card for "squad_rare_v4_train_10_eval_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/squad_rare_v4_train_10_eval_10
[ "region:us" ]
2023-09-28T14:08:06+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 200420, "num_examples": 138}, {"name": "validation", "num_bytes": 49683, "num_examples": 50}], "download_size": 64345, "dataset_size": 250103}}
2023-09-28T14:08:13+00:00
[]
[]
TAGS #region-us
# Dataset Card for "squad_rare_v4_train_10_eval_10" More Information needed
[ "# Dataset Card for \"squad_rare_v4_train_10_eval_10\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"squad_rare_v4_train_10_eval_10\"\n\nMore Information needed" ]
[ 6, 27 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"squad_rare_v4_train_10_eval_10\"\n\nMore Information needed" ]
bd467aa88fd4d6c3f182e833ce9b8f01f9a63a0a
# Dataset Card for "squad_wrong_rare_v4_train_10_eval_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/squad_wrong_rare_v4_train_10_eval_10
[ "region:us" ]
2023-09-28T14:08:24+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 200420, "num_examples": 138}, {"name": "validation", "num_bytes": 50258, "num_examples": 50}], "download_size": 64429, "dataset_size": 250678}}
2023-09-28T14:08:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for "squad_wrong_rare_v4_train_10_eval_10" More Information needed
[ "# Dataset Card for \"squad_wrong_rare_v4_train_10_eval_10\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"squad_wrong_rare_v4_train_10_eval_10\"\n\nMore Information needed" ]
[ 6, 30 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"squad_wrong_rare_v4_train_10_eval_10\"\n\nMore Information needed" ]
e8d24bfbe00d9f50724c14c9e4c59018d48ba5bf
# Dataset Card for "squad_no_rare_v4_train_10_eval_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/squad_no_rare_v4_train_10_eval_10
[ "region:us" ]
2023-09-28T14:08:42+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 200420, "num_examples": 138}, {"name": "validation", "num_bytes": 48145, "num_examples": 50}], "download_size": 63869, "dataset_size": 248565}}
2023-09-28T14:08:48+00:00
[]
[]
TAGS #region-us
# Dataset Card for "squad_no_rare_v4_train_10_eval_10" More Information needed
[ "# Dataset Card for \"squad_no_rare_v4_train_10_eval_10\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"squad_no_rare_v4_train_10_eval_10\"\n\nMore Information needed" ]
[ 6, 29 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"squad_no_rare_v4_train_10_eval_10\"\n\nMore Information needed" ]
260f140a3654556ce672e226c98f0bb2f4b1899f
# Dataset Card for "squad_no_rare_strict_v4_train_10_eval_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/squad_no_rare_strict_v4_train_10_eval_10
[ "region:us" ]
2023-09-28T14:08:51+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 199078, "num_examples": 138}, {"name": "validation", "num_bytes": 48145, "num_examples": 50}], "download_size": 63640, "dataset_size": 247223}}
2023-09-28T14:08:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for "squad_no_rare_strict_v4_train_10_eval_10" More Information needed
[ "# Dataset Card for \"squad_no_rare_strict_v4_train_10_eval_10\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"squad_no_rare_strict_v4_train_10_eval_10\"\n\nMore Information needed" ]
[ 6, 31 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"squad_no_rare_strict_v4_train_10_eval_10\"\n\nMore Information needed" ]
e1e8429697648b01b14ce61753f94fc2c2a55453
# Face Emotion Classification Dataset

This dataset contains about 35,000 images belonging to 7 classes. It can be used to train deep learning models for human emotion classification problems.
manojdilz/facial_emotion_detection_dataset
[ "region:us" ]
2023-09-28T14:09:48+00:00
{}
2023-10-14T13:30:28+00:00
[]
[]
TAGS #region-us
# Face Emotion Classification Dataset This dataset contain about 35000 images which are belongs to 7 classes. This dataset can be used to train deep learning models for human emotion classification problems.
[ "# Face Emotion Classification Dataset \nThis dataset contain about 35000 images which are belongs to 7 classes. This dataset can be used to train deep learning models for human emotion classification problems." ]
[ "TAGS\n#region-us \n", "# Face Emotion Classification Dataset \nThis dataset contain about 35000 images which are belongs to 7 classes. This dataset can be used to train deep learning models for human emotion classification problems." ]
[ 6, 43 ]
[ "passage: TAGS\n#region-us \n# Face Emotion Classification Dataset \nThis dataset contain about 35000 images which are belongs to 7 classes. This dataset can be used to train deep learning models for human emotion classification problems." ]
f9cf09448f797179951a5e8248e653c80182fc51
# Dataset of Momokino Himeko This is the dataset of Momokino Himeko, containing 97 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 97 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 232 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 286 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 97 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 97 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 97 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 232 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 232 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 181 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 286 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 286 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/momokino_himeko_citrus
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-28T14:15:35+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-28T14:18:43+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Momokino Himeko ========================== This is the dataset of Momokino Himeko, containing 97 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
d67d1931d7382e49415c4be58c38b330a66c93c5
# Dataset Card for "IteraTeR_v2_fixed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
reza-alipour/IteraTeR_v2_fixed
[ "region:us" ]
2023-09-28T14:28:03+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "before_sent", "dtype": "string"}, {"name": "after_sent", "dtype": "string"}, {"name": "before_sent_with_intent", "dtype": "string"}, {"name": "labels", "dtype": "string"}, {"name": "confidence", "dtype": "string"}, {"name": "doc_id", "dtype": "string"}, {"name": "revision_depth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 310170436, "num_examples": 293929}, {"name": "validation", "num_bytes": 35516792, "num_examples": 34026}, {"name": "test", "num_bytes": 41970653, "num_examples": 39816}], "download_size": 119234711, "dataset_size": 387657881}}
2023-09-28T14:28:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for "IteraTeR_v2_fixed" More Information needed
[ "# Dataset Card for \"IteraTeR_v2_fixed\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"IteraTeR_v2_fixed\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"IteraTeR_v2_fixed\"\n\nMore Information needed" ]
a070ad817f5299ec7d21a8a423370c6f52ec8f45
# Dataset of Mizusawa Matsuri This is the dataset of Mizusawa Matsuri, containing 90 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 90 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 197 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 233 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 90 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 90 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 90 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 197 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 197 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 171 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 233 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 233 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/mizusawa_matsuri_citrus
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-28T14:29:30+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-28T14:32:26+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Mizusawa Matsuri =========================== This is the dataset of Mizusawa Matsuri, containing 90 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
7924dd15217a732fa6b11b711e2593a69c6fe27c
# Dataset of Tachibana Sara This is the dataset of Tachibana Sara, containing 69 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 69 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 150 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 181 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 69 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 69 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 69 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 150 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 150 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 126 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 181 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 181 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/tachibana_sara_citrus
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-28T14:40:50+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-28T14:43:21+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Tachibana Sara ========================= This is the dataset of Tachibana Sara, containing 69 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
7f858c15e3e423390abcc4d453ee9dd97e02c592
# Dataset Card for "Zeroshot_Train-20K_other_tweet-format"

This dataset is a training dataset for the Zeroshot models.
It contains 20,000 examples in a prompt format, built exclusively for training with the class 'other', in Brazilian Portuguese.

Prompt:
```
"Classifique o tweet entre 'classe1', 'classe2', 'classe3', 'classe4', 'other' \\n\\nTweet: frase \\n\\nLabel: 'other'
```

The dataset was divided as follows: <br>
```
- 6,000 examples: prompt with class options, without the target class (other)
- 7,000 examples: prompt with class options + target class included as an option; the target class is not correct
- 7,000 examples: prompt with class options + target class; the target class is correct
```

## How to load and use this dataset:

```
from datasets import load_dataset

dataset = load_dataset("Weni/Zeroshot_Train-20K_other_tweet-format")
dataset
```

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
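As a rough sketch, the prompt pattern above can be reproduced for new tweets with a small helper (the class names and the tweet below are placeholders, not values taken from the dataset):

```python
def build_prompt(tweet: str, classes: list[str]) -> str:
    # Follow the pattern shown above: list the candidate classes plus 'other',
    # then the tweet, and leave the label to be completed by the model.
    options = ", ".join(f"'{c}'" for c in classes + ["other"])
    return f"Classifique o tweet entre {options} \n\nTweet: {tweet} \n\nLabel:"

print(build_prompt("exemplo de tweet", ["classe1", "classe2", "classe3", "classe4"]))
```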
Weni/Zeroshot_Train-20K_other_tweet-format
[ "task_categories:zero-shot-classification", "size_categories:10K<n<100K", "language:pt", "region:us" ]
2023-09-28T14:42:14+00:00
{"language": ["pt"], "size_categories": ["10K<n<100K"], "task_categories": ["zero-shot-classification"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4369715, "num_examples": 20000}], "download_size": 1752054, "dataset_size": 4369715}}
2023-09-28T17:41:59+00:00
[]
[ "pt" ]
TAGS #task_categories-zero-shot-classification #size_categories-10K<n<100K #language-Portuguese #region-us
# Dataset Card for "Zeroshot_Train-20K_other_tweet-format" This dataset is a train dataset for the Zeroshot models. It has 20.000 data in a prompt format exclusively for train with class 'other' in Brazilian Portuguese. Prompt: The dataset was divided as follows: <br> ## How to load and use this dataset: More Information needed
[ "# Dataset Card for \"Zeroshot_Train-20K_other_tweet-format\"\n\nThis dataset is a train dataset for the Zeroshot models. \nIt has 20.000 data in a prompt format exclusively for train with class 'other' in Brazilian Portuguese.\n\nPrompt:\n\n\nThe dataset was divided as follows: <br>", "## How to load and use this dataset:\n\n\nMore Information needed" ]
[ "TAGS\n#task_categories-zero-shot-classification #size_categories-10K<n<100K #language-Portuguese #region-us \n", "# Dataset Card for \"Zeroshot_Train-20K_other_tweet-format\"\n\nThis dataset is a train dataset for the Zeroshot models. \nIt has 20.000 data in a prompt format exclusively for train with class 'other' in Brazilian Portuguese.\n\nPrompt:\n\n\nThe dataset was divided as follows: <br>", "## How to load and use this dataset:\n\n\nMore Information needed" ]
[ 37, 76, 13 ]
[ "passage: TAGS\n#task_categories-zero-shot-classification #size_categories-10K<n<100K #language-Portuguese #region-us \n# Dataset Card for \"Zeroshot_Train-20K_other_tweet-format\"\n\nThis dataset is a train dataset for the Zeroshot models. \nIt has 20.000 data in a prompt format exclusively for train with class 'other' in Brazilian Portuguese.\n\nPrompt:\n\n\nThe dataset was divided as follows: <br>## How to load and use this dataset:\n\n\nMore Information needed" ]
64670690de42424366b8a47954bb8f72192417ea
# Dataset Card for "test_data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Rs9000/test_data
[ "region:us" ]
2023-09-28T14:44:05+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "original_prompt", "dtype": "string"}, {"name": "positive_prompt", "dtype": "string"}, {"name": "negative_prompt", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "model_gen0", "dtype": "string"}, {"name": "model_gen1", "dtype": "string"}, {"name": "model_gen2", "dtype": "string"}, {"name": "model_gen3", "dtype": "string"}, {"name": "width_gen0", "dtype": "int64"}, {"name": "width_gen1", "dtype": "int64"}, {"name": "width_gen2", "dtype": "int64"}, {"name": "width_gen3", "dtype": "int64"}, {"name": "height_gen0", "dtype": "int64"}, {"name": "height_gen1", "dtype": "int64"}, {"name": "height_gen2", "dtype": "int64"}, {"name": "height_gen3", "dtype": "int64"}, {"name": "num_inference_steps_gen0", "dtype": "int64"}, {"name": "num_inference_steps_gen1", "dtype": "int64"}, {"name": "num_inference_steps_gen2", "dtype": "int64"}, {"name": "num_inference_steps_gen3", "dtype": "int64"}, {"name": "filepath_gen0", "dtype": "string"}, {"name": "filepath_gen1", "dtype": "string"}, {"name": "filepath_gen2", "dtype": "string"}, {"name": "filepath_gen3", "dtype": "string"}, {"name": "image_gen0", "dtype": "image"}, {"name": "image_gen1", "dtype": "image"}, {"name": "image_gen2", "dtype": "image"}, {"name": "image_gen3", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 802487704.0, "num_examples": 3000}], "download_size": 801510839, "dataset_size": 802487704.0}}
2023-09-28T14:47:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "test_data" More Information needed
[ "# Dataset Card for \"test_data\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"test_data\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"test_data\"\n\nMore Information needed" ]
57de7e747826183adeb65033a86c39c42baa5061
# Dataset of Tachibana Nina This is the dataset of Tachibana Nina, containing 44 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 44 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 102 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 133 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 44 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 44 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 44 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 102 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 102 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 83 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 133 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 133 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/tachibana_nina_citrus
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-28T14:48:55+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-28T14:51:08+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Tachibana Nina ========================= This is the dataset of Tachibana Nina, containing 44 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
153fbbce1b6b969905621c64579bab941b54a7bb
# Dataset Card for "llama-7b__model__one_million_instructions__reconstructions_sample" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jxm/llama-7b__model__one_million_instructions__reconstructions_sample
[ "region:us" ]
2023-09-28T15:23:27+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"name": "length", "dtype": "int64"}, {"name": "embedder_input_ids", "sequence": "int64"}, {"name": "embedder_attention_mask", "sequence": "int64"}, {"name": "frozen_embeddings", "sequence": "float32"}, {"name": "idx", "dtype": "int64"}, {"name": "str_original", "dtype": "string"}, {"name": "str_reconstruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13289065, "num_examples": 100}], "download_size": 0, "dataset_size": 13289065}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-29T01:24:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for "llama-7b__model__one_million_instructions__reconstructions_sample" More Information needed
[ "# Dataset Card for \"llama-7b__model__one_million_instructions__reconstructions_sample\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"llama-7b__model__one_million_instructions__reconstructions_sample\"\n\nMore Information needed" ]
[ 6, 31 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"llama-7b__model__one_million_instructions__reconstructions_sample\"\n\nMore Information needed" ]
3cf267b78fdfdfdf7654aa1c9abb5ab708470fb8
# Udmurt-Russian dataset from Tatoeba Contains 888 Russian-Udmurt sentences. Punctuation added to some sentences. Dump downloaded 28.09.2023. ## Usage ```py from datasets import load_dataset dataset = load_dataset("udmurtNLP/tatoeba-rus-udm-parallel-corpora") ```
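As a small extension of the usage snippet above, the sketch below prints a few aligned pairs. The `rus`, `udm` and `source` column names are taken from this repository's metadata; treat it as a quick sanity check rather than a preprocessing pipeline.

```python
from datasets import load_dataset

pairs = load_dataset("udmurtNLP/tatoeba-rus-udm-parallel-corpora", split="train")

# Print a handful of aligned Russian-Udmurt sentence pairs.
for row in pairs.select(range(3)):
    print(row["rus"], "->", row["udm"], f"({row['source']})")
```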
udmurtNLP/tatoeba-rus-udm-parallel-corpora
[ "task_categories:translation", "size_categories:n<1K", "language:udm", "region:us" ]
2023-09-28T15:23:39+00:00
{"language": ["udm"], "size_categories": ["n<1K"], "task_categories": ["translation"], "dataset_info": {"features": [{"name": "rus", "dtype": "string"}, {"name": "udm", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 80931, "num_examples": 889}], "download_size": 39673, "dataset_size": 80931}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-28T15:30:59+00:00
[]
[ "udm" ]
TAGS #task_categories-translation #size_categories-n<1K #language-Udmurt #region-us
# Udmurt-Russian dataset from Tatoeba Contains 888 Russian-Udmurt sentences. Punctuation added to some sentences. Dump downloaded 28.09.2023. ## Usage
[ "# Udmurt-Russian dataset from Tatoeba\n\nContains 888 Russian-Udmurt sentences. Punctuation added to some sentences. Dump downloaded 28.09.2023.", "## Usage" ]
[ "TAGS\n#task_categories-translation #size_categories-n<1K #language-Udmurt #region-us \n", "# Udmurt-Russian dataset from Tatoeba\n\nContains 888 Russian-Udmurt sentences. Punctuation added to some sentences. Dump downloaded 28.09.2023.", "## Usage" ]
[ 32, 43, 3 ]
[ "passage: TAGS\n#task_categories-translation #size_categories-n<1K #language-Udmurt #region-us \n# Udmurt-Russian dataset from Tatoeba\n\nContains 888 Russian-Udmurt sentences. Punctuation added to some sentences. Dump downloaded 28.09.2023.## Usage" ]
ce0d005ce80954b61c6ce55130559148c79ddc47
# Smiling or Not

A dataset comprised of closeups of people's faces, belonging to 2 binary classes.
- 600 smiling faces in the "smile" folder.
- 603 non-smiling faces in the "non_smile" folder.

We can build a smile detector with this dataset, and even a "smile transformer" via a Style Transfer algorithm.

The "test" folder contains ~12k unlabeled faces. If someone wants to go through the work of labeling these faces as smile/non-smile and republishing a greater version of this dataset, please be my guest!

<hr>

*Reupload from the [original dataset](https://www.kaggle.com/datasets/chazzer/smiling-or-not-face-data/) on Kaggle*
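Since the repository is organised as plain image folders rather than a packaged `datasets` config, one hedged way to get a labeled file list is to download the repo and walk the two folders named above. The folder names follow the description in this card; the file extensions are an assumption, so adjust as needed.

```python
from pathlib import Path

from huggingface_hub import snapshot_download

# Fetch the raw image folders from the dataset repository.
local_dir = Path(snapshot_download(repo_id="zrthxn/SmilingOrNot", repo_type="dataset"))

# Build a simple (path, label) list from the two labeled folders described above.
samples = []
for label_name, label_id in [("non_smile", 0), ("smile", 1)]:
    for path in sorted((local_dir / label_name).glob("*")):
        if path.suffix.lower() in {".jpg", ".jpeg", ".png"}:
            samples.append((str(path), label_id))

print(f"{len(samples)} labeled faces")  # roughly 1203 if the folders match the card
```

From here the list can be fed into any image pipeline (for example a torchvision-style dataset class) to train the binary smile detector mentioned above.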
zrthxn/SmilingOrNot
[ "region:us" ]
2023-09-28T15:29:50+00:00
{}
2023-09-28T15:41:56+00:00
[]
[]
TAGS #region-us
# Smiling or Not A dataset comprised of closeups of people's faces, belonging to 2 binary classes. - 600 smiling faces in the "smile" folder. - 603 non-smiling faces in the "non_smile" folder. We can build a smile detector with this dataset, and even a "smile transformer" via a Style Transfer algorithm. The "test" folder contains ~12k unlabeled faces. If someone wants to go through the work of labeling these faces as smile/non-smile and republishing a greater version of this dataset, please be my guest! <hr> *Reupload from the original dataset on Kaggle*
[ "# Similing or Not\n\nA dataset comprised of closeups of people's faces, belonging to 2 binary classes.\n- 600 smiling faces in the \"smile\" folder.\n- 603 non smiling faces in the \"non_smile\" folder.\n\nWe can build a smile detector with this dataset, and even a \"smile transformer\" via a Style Transfer algorithm. \n\nThe \"test\" folder contains ~12k unlabeled faces. If someone wants go through the work of labeling these faces as smile/nonsmile and republish a greater version of this dataset, please be my guest!\n\n<hr>\n\n*Reupload from original dataset on Kaggle*" ]
[ "TAGS\n#region-us \n", "# Similing or Not\n\nA dataset comprised of closeups of people's faces, belonging to 2 binary classes.\n- 600 smiling faces in the \"smile\" folder.\n- 603 non smiling faces in the \"non_smile\" folder.\n\nWe can build a smile detector with this dataset, and even a \"smile transformer\" via a Style Transfer algorithm. \n\nThe \"test\" folder contains ~12k unlabeled faces. If someone wants go through the work of labeling these faces as smile/nonsmile and republish a greater version of this dataset, please be my guest!\n\n<hr>\n\n*Reupload from original dataset on Kaggle*" ]
[ 6, 152 ]
[ "passage: TAGS\n#region-us \n# Similing or Not\n\nA dataset comprised of closeups of people's faces, belonging to 2 binary classes.\n- 600 smiling faces in the \"smile\" folder.\n- 603 non smiling faces in the \"non_smile\" folder.\n\nWe can build a smile detector with this dataset, and even a \"smile transformer\" via a Style Transfer algorithm. \n\nThe \"test\" folder contains ~12k unlabeled faces. If someone wants go through the work of labeling these faces as smile/nonsmile and republish a greater version of this dataset, please be my guest!\n\n<hr>\n\n*Reupload from original dataset on Kaggle*" ]
54b537570983a7547325a722a2ee3a51e9dce7b5
# Dataset Card for "dreambooth" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kewu93/dreambooth
[ "region:us" ]
2023-09-28T15:38:17+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 63956933.0, "num_examples": 90}, {"name": "val", "num_bytes": 47721308.0, "num_examples": 68}], "download_size": 111584859, "dataset_size": 111678241.0}}
2023-09-28T15:38:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth" More Information needed
[ "# Dataset Card for \"dreambooth\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dreambooth\"\n\nMore Information needed" ]
f786730fd11197cc38b618e7dc1c74a8d8e7230f
# Dataset Card for "adj_extension" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
loremipsum3658/adj_extension
[ "region:us" ]
2023-09-28T16:02:18+00:00
{"dataset_info": {"features": [{"name": "data", "dtype": "string"}, {"name": "titulo", "dtype": "string"}, {"name": "andamento", "dtype": "string"}, {"name": "nup", "dtype": "null"}, {"name": "classificacao_andamento", "sequence": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 71124, "num_examples": 135}], "download_size": 23610, "dataset_size": 71124}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-28T16:03:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for "adj_extension" More Information needed
[ "# Dataset Card for \"adj_extension\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"adj_extension\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"adj_extension\"\n\nMore Information needed" ]
a0eb3201a20c063da0fdc1b23ea9fac2e4379f75
# Dataset Card for "694df328" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/694df328
[ "region:us" ]
2023-09-28T16:05:55+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 162, "num_examples": 10}], "download_size": 1318, "dataset_size": 162}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-28T16:05:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for "694df328" More Information needed
[ "# Dataset Card for \"694df328\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"694df328\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"694df328\"\n\nMore Information needed" ]
fe1eb517d1f3a6fdbb41b892169ad11468c34e45
# D3FEND: A knowledge graph of cybersecurity countermeasures ### Overview D3FEND encodes a countermeasure knowledge base in the form of a knowledge graph. It meticulously organizes key concepts and relations in the cybersecurity countermeasure domain, linking each to pertinent references in the cybersecurity literature. ### Use-cases Researchers and cybersecurity enthusiasts can leverage D3FEND to: - Develop sophisticated graph-based models. - Fine-tune large language models, focusing on cybersecurity knowledge graph completion. - Explore the complexities and nuances of defensive techniques, mappings to MITRE ATT&CK, weaknesses (CWEs), and cybersecurity taxonomies. - Gain insight into ontology development and modeling in the cybersecurity domain. ### Dataset construction and pre-processing ### Source: - [Dataset Repository - 0.13.0-BETA-1](https://github.com/d3fend/d3fend-ontology/tree/release/0.13.0-BETA-1) - [Commit Details](https://github.com/d3fend/d3fend-ontology/commit/3dcc495879bb62cee5c4109e9b784dd4a2de3c9d) - [CWE Extension](https://github.com/d3fend/d3fend-ontology/tree/release/0.13.0-BETA-1/extensions/cwe) #### Building and Verification: 1. **Construction**: The ontology, denoted as `d3fend-full.owl`, was built from the beta version of the D3FEND ontology referenced above using documented README in d3fend-ontology. This includes the CWE extensions. 2. **Import and Reasoning**: Imported into Protege version 5.6.1, utilizing the Pellet reasoner plugin for logical reasoning and verification. 3. **Coherence Check**: Utilized the Debug Ontology plugin in Protege to ensure the ontology's coherence and consistency. #### Exporting, Transformation, and Compression: Note: The following steps were performed using Apache Jena's command line tools. (https://jena.apache.org/documentation/tools/) 1. **Exporting Inferred Axioms**: Post-verification, I exported inferred axioms along with asserted axioms and annotations. [Detailed Process](https://www.michaeldebellis.com/post/export-inferred-axioms) 2. **Filtering**: The materialized ontology was filtered using `d3fend.rq` to retain relevant triples. 3. **Format Transformation**: Subsequently transformed to Turtle and N-Triples formats for diverse usability. Note: I export in Turtle first because it is easier to read and verify. Then I convert to N-Triples. ```shell arq --query=d3fend.rq --data=d3fend.owl --results=turtle > d3fend.ttl riot --output=nt d3fend.ttl > d3fend.nt ``` 4. **Compression**: Compressed the resulting ontology files using gzip. ## Features The D3FEND dataset is composed of triples representing the relationships between different cybersecurity countermeasures. Each triple is a representation of a statement about a cybersecurity concept or a relationship between concepts. The dataset includes the following features: ### 1. **Subject** (`string`) The subject of a triple is the entity that the statement is about. In this dataset, the subject represents a cybersecurity concept or entity, such as a specific countermeasure or ATT&CK technique. ### 2. **Predicate** (`string`) The predicate of a triple represents the property or characteristic of the subject, or the nature of the relationship between the subject and the object. For instance, it might represent a specific type of relationship like "may-be-associated-with" or "has a reference." ### 3. **Object** (`string`) The object of a triple is the entity that is related to the subject by the predicate. 
It can be another cybersecurity concept, such as an ATT&CK technique, or a literal value representing a property of the subject, such as a name or a description. ### Usage First make sure you have the requirements installed: ```python pip install datasets pip install rdflib ``` You can load the dataset using the Hugging Face Datasets library with the following Python code: ```python from datasets import load_dataset dataset = load_dataset('wikipunk/d3fend', split='train') ``` #### Note on Format: The subject, predicate, and object are stored in N3 notation, a verbose serialization for RDF. This allows users to unambiguously parse each component using `rdflib.util.from_n3` from the RDFLib Python library. For example: ```python from rdflib.util import from_n3 subject_node = from_n3(dataset[0]['subject']) predicate_node = from_n3(dataset[0]['predicate']) object_node = from_n3(dataset[0]['object']) ``` Once loaded, each example in the dataset will be a dictionary with `subject`, `predicate`, and `object` keys corresponding to the features described above. ### Example Here is an example of a triple in the dataset: - Subject: `"<http://d3fend.mitre.org/ontologies/d3fend.owl#T1550.002>"` - Predicate: `"<http://d3fend.mitre.org/ontologies/d3fend.owl#may-be-associated-with>"` - Object: `"<http://d3fend.mitre.org/ontologies/d3fend.owl#T1218.014>"` This triple represents the statement that the ATT&CK technique identified by `T1550.002` may be associated with the ATT&CK technique identified by `T1218.014`. ### Acknowledgements This ontology is developed by MITRE Corporation and is licensed under the MIT license. I would like to thank the authors for their work which has opened my eyes to a new world of cybersecurity modeling. If you are a cybersecurity expert please consider [contributing to D3FEND](https://d3fend.mitre.org/contribute/). [D3FEND Resources](https://d3fend.mitre.org/resources/) ### Citation ```bibtex @techreport{kaloroumakis2021d3fend, title={Toward a Knowledge Graph of Cybersecurity Countermeasures}, author={Kaloroumakis, Peter E. and Smith, Michael J.}, institution={The MITRE Corporation}, year={2021}, url={https://d3fend.mitre.org/resources/D3FEND.pdf} } ```
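As a practical addendum to the usage notes above, the sketch below materialises the triples back into an `rdflib.Graph`, which then supports SPARQL queries over the countermeasure knowledge graph. This is a hedged example rather than part of the official tooling; loading all ~230k triples into memory may take a little while.

```python
from datasets import load_dataset
from rdflib import Graph
from rdflib.util import from_n3

dataset = load_dataset("wikipunk/d3fend", split="train")

# Rebuild an in-memory RDF graph from the (subject, predicate, object) columns.
graph = Graph()
for row in dataset:
    graph.add((from_n3(row["subject"]), from_n3(row["predicate"]), from_n3(row["object"])))

# Example SPARQL query: list a few labelled resources from the graph.
query = """
SELECT ?s ?label WHERE {
    ?s <http://www.w3.org/2000/01/rdf-schema#label> ?label .
} LIMIT 5
"""
for s, label in graph.query(query):
    print(s, label)
```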
wikipunk/d3fend
[ "task_categories:graph-ml", "annotations_creators:expert-generated", "size_categories:100K<n<1M", "language:en", "license:mit", "knowledge-graph", "rdf", "owl", "ontology", "cybersecurity", "region:us" ]
2023-09-28T16:31:03+00:00
{"annotations_creators": ["expert-generated"], "language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["graph-ml"], "pretty_name": "D3FEND", "tags": ["knowledge-graph", "rdf", "owl", "ontology", "cybersecurity"], "dataset_info": {"features": [{"name": "subject", "dtype": "string"}, {"name": "predicate", "dtype": "string"}, {"name": "object", "dtype": "string"}], "config_name": "default", "splits": [{"name": "train", "num_bytes": 46899451, "num_examples": 231842}], "dataset_size": 46899451}, "viewer": false}
2023-09-29T14:08:51+00:00
[]
[ "en" ]
TAGS #task_categories-graph-ml #annotations_creators-expert-generated #size_categories-100K<n<1M #language-English #license-mit #knowledge-graph #rdf #owl #ontology #cybersecurity #region-us
# D3FEND: A knowledge graph of cybersecurity countermeasures ### Overview D3FEND encodes a countermeasure knowledge base in the form of a knowledge graph. It meticulously organizes key concepts and relations in the cybersecurity countermeasure domain, linking each to pertinent references in the cybersecurity literature. ### Use-cases Researchers and cybersecurity enthusiasts can leverage D3FEND to: - Develop sophisticated graph-based models. - Fine-tune large language models, focusing on cybersecurity knowledge graph completion. - Explore the complexities and nuances of defensive techniques, mappings to MITRE ATT&CK, weaknesses (CWEs), and cybersecurity taxonomies. - Gain insight into ontology development and modeling in the cybersecurity domain. ### Dataset construction and pre-processing ### Source: - Dataset Repository - 0.13.0-BETA-1 - Commit Details - CWE Extension #### Building and Verification: 1. Construction: The ontology, denoted as 'URL', was built from the beta version of the D3FEND ontology referenced above using documented README in d3fend-ontology. This includes the CWE extensions. 2. Import and Reasoning: Imported into Protege version 5.6.1, utilizing the Pellet reasoner plugin for logical reasoning and verification. 3. Coherence Check: Utilized the Debug Ontology plugin in Protege to ensure the ontology's coherence and consistency. #### Exporting, Transformation, and Compression: Note: The following steps were performed using Apache Jena's command line tools. (URL 1. Exporting Inferred Axioms: Post-verification, I exported inferred axioms along with asserted axioms and annotations. Detailed Process 2. Filtering: The materialized ontology was filtered using 'URL' to retain relevant triples. 3. Format Transformation: Subsequently transformed to Turtle and N-Triples formats for diverse usability. Note: I export in Turtle first because it is easier to read and verify. Then I convert to N-Triples. 4. Compression: Compressed the resulting ontology files using gzip. ## Features The D3FEND dataset is composed of triples representing the relationships between different cybersecurity countermeasures. Each triple is a representation of a statement about a cybersecurity concept or a relationship between concepts. The dataset includes the following features: ### 1. Subject ('string') The subject of a triple is the entity that the statement is about. In this dataset, the subject represents a cybersecurity concept or entity, such as a specific countermeasure or ATT&CK technique. ### 2. Predicate ('string') The predicate of a triple represents the property or characteristic of the subject, or the nature of the relationship between the subject and the object. For instance, it might represent a specific type of relationship like "may-be-associated-with" or "has a reference." ### 3. Object ('string') The object of a triple is the entity that is related to the subject by the predicate. It can be another cybersecurity concept, such as an ATT&CK technique, or a literal value representing a property of the subject, such as a name or a description. ### Usage First make sure you have the requirements installed: You can load the dataset using the Hugging Face Datasets library with the following Python code: #### Note on Format: The subject, predicate, and object are stored in N3 notation, a verbose serialization for RDF. This allows users to unambiguously parse each component using 'URL.from_n3' from the RDFLib Python library. 
For example: Once loaded, each example in the dataset will be a dictionary with 'subject', 'predicate', and 'object' keys corresponding to the features described above. ### Example Here is an example of a triple in the dataset: - Subject: '"<URL - Predicate: '"<URL - Object: '"<URL This triple represents the statement that the ATT&CK technique identified by 'T1550.002' may be associated with the ATT&CK technique identified by 'T1218.014'. ### Acknowledgements This ontology is developed by MITRE Corporation and is licensed under the MIT license. I would like to thank the authors for their work which has opened my eyes to a new world of cybersecurity modeling. If you are a cybersecurity expert please consider contributing to D3FEND. D3FEND Resources
[ "# D3FEND: A knowledge graph of cybersecurity countermeasures", "### Overview\nD3FEND encodes a countermeasure knowledge base in the form of a\nknowledge graph. It meticulously organizes key concepts and relations\nin the cybersecurity countermeasure domain, linking each to pertinent\nreferences in the cybersecurity literature.", "### Use-cases\nResearchers and cybersecurity enthusiasts can leverage D3FEND to:\n- Develop sophisticated graph-based models.\n- Fine-tune large language models, focusing on cybersecurity knowledge\n graph completion.\n- Explore the complexities and nuances of defensive techniques,\n mappings to MITRE ATT&CK, weaknesses (CWEs), and cybersecurity\n taxonomies.\n- Gain insight into ontology development and modeling in the\n cybersecurity domain.", "### Dataset construction and pre-processing", "### Source:\n- Dataset Repository - 0.13.0-BETA-1\n- Commit Details\n- CWE Extension", "#### Building and Verification:\n1. Construction: The ontology, denoted as 'URL', was\n built from the beta version of the D3FEND ontology referenced\n above using documented README in d3fend-ontology. This includes the\n CWE extensions. \n2. Import and Reasoning: Imported into Protege version 5.6.1,\n utilizing the Pellet reasoner plugin for logical reasoning and\n verification.\n3. Coherence Check: Utilized the Debug Ontology plugin in Protege\n to ensure the ontology's coherence and consistency.", "#### Exporting, Transformation, and Compression:\nNote: The following steps were performed using Apache Jena's command\nline tools. (URL\n1. Exporting Inferred Axioms: Post-verification, I exported\n inferred axioms along with asserted axioms and\n annotations. Detailed\n Process\n2. Filtering: The materialized ontology was filtered using\n 'URL' to retain relevant triples.\n3. Format Transformation: Subsequently transformed to Turtle and\n N-Triples formats for diverse usability. Note: I export in Turtle\n first because it is easier to read and verify. Then I convert to\n N-Triples.\n \n4. Compression: Compressed the resulting ontology files using\n gzip.", "## Features\nThe D3FEND dataset is composed of triples representing the\nrelationships between different cybersecurity countermeasures. Each\ntriple is a representation of a statement about a cybersecurity\nconcept or a relationship between concepts. The dataset includes the\nfollowing features:", "### 1. Subject ('string')\nThe subject of a triple is the entity that the statement is about. In\nthis dataset, the subject represents a cybersecurity concept or\nentity, such as a specific countermeasure or ATT&CK technique.", "### 2. Predicate ('string')\nThe predicate of a triple represents the property or characteristic of\nthe subject, or the nature of the relationship between the subject and\nthe object. For instance, it might represent a specific type of\nrelationship like \"may-be-associated-with\" or \"has a reference.\"", "### 3. Object ('string')\nThe object of a triple is the entity that is related to the subject by\nthe predicate. It can be another cybersecurity concept, such as an\nATT&CK technique, or a literal value representing a property of the\nsubject, such as a name or a description.", "### Usage\nFirst make sure you have the requirements installed:\n\n\n\nYou can load the dataset using the Hugging Face Datasets library with\nthe following Python code:", "#### Note on Format:\nThe subject, predicate, and object are stored in N3 notation, a\nverbose serialization for RDF. 
This allows users to unambiguously\nparse each component using 'URL.from_n3' from the RDFLib\nPython library. For example:\n\n\n\nOnce loaded, each example in the dataset will be a dictionary with\n'subject', 'predicate', and 'object' keys corresponding to the\nfeatures described above.", "### Example\n\nHere is an example of a triple in the dataset:\n- Subject: '\"<URL\n- Predicate: '\"<URL\n- Object: '\"<URL\n\nThis triple represents the statement that the ATT&CK technique\nidentified by 'T1550.002' may be associated with the ATT&CK technique\nidentified by 'T1218.014'.", "### Acknowledgements\nThis ontology is developed by MITRE Corporation and is licensed under\nthe MIT license. I would like to thank the authors for their work\nwhich has opened my eyes to a new world of cybersecurity modeling.\n\nIf you are a cybersecurity expert please consider contributing to\nD3FEND.\n\nD3FEND Resources" ]
[ "TAGS\n#task_categories-graph-ml #annotations_creators-expert-generated #size_categories-100K<n<1M #language-English #license-mit #knowledge-graph #rdf #owl #ontology #cybersecurity #region-us \n", "# D3FEND: A knowledge graph of cybersecurity countermeasures", "### Overview\nD3FEND encodes a countermeasure knowledge base in the form of a\nknowledge graph. It meticulously organizes key concepts and relations\nin the cybersecurity countermeasure domain, linking each to pertinent\nreferences in the cybersecurity literature.", "### Use-cases\nResearchers and cybersecurity enthusiasts can leverage D3FEND to:\n- Develop sophisticated graph-based models.\n- Fine-tune large language models, focusing on cybersecurity knowledge\n graph completion.\n- Explore the complexities and nuances of defensive techniques,\n mappings to MITRE ATT&CK, weaknesses (CWEs), and cybersecurity\n taxonomies.\n- Gain insight into ontology development and modeling in the\n cybersecurity domain.", "### Dataset construction and pre-processing", "### Source:\n- Dataset Repository - 0.13.0-BETA-1\n- Commit Details\n- CWE Extension", "#### Building and Verification:\n1. Construction: The ontology, denoted as 'URL', was\n built from the beta version of the D3FEND ontology referenced\n above using documented README in d3fend-ontology. This includes the\n CWE extensions. \n2. Import and Reasoning: Imported into Protege version 5.6.1,\n utilizing the Pellet reasoner plugin for logical reasoning and\n verification.\n3. Coherence Check: Utilized the Debug Ontology plugin in Protege\n to ensure the ontology's coherence and consistency.", "#### Exporting, Transformation, and Compression:\nNote: The following steps were performed using Apache Jena's command\nline tools. (URL\n1. Exporting Inferred Axioms: Post-verification, I exported\n inferred axioms along with asserted axioms and\n annotations. Detailed\n Process\n2. Filtering: The materialized ontology was filtered using\n 'URL' to retain relevant triples.\n3. Format Transformation: Subsequently transformed to Turtle and\n N-Triples formats for diverse usability. Note: I export in Turtle\n first because it is easier to read and verify. Then I convert to\n N-Triples.\n \n4. Compression: Compressed the resulting ontology files using\n gzip.", "## Features\nThe D3FEND dataset is composed of triples representing the\nrelationships between different cybersecurity countermeasures. Each\ntriple is a representation of a statement about a cybersecurity\nconcept or a relationship between concepts. The dataset includes the\nfollowing features:", "### 1. Subject ('string')\nThe subject of a triple is the entity that the statement is about. In\nthis dataset, the subject represents a cybersecurity concept or\nentity, such as a specific countermeasure or ATT&CK technique.", "### 2. Predicate ('string')\nThe predicate of a triple represents the property or characteristic of\nthe subject, or the nature of the relationship between the subject and\nthe object. For instance, it might represent a specific type of\nrelationship like \"may-be-associated-with\" or \"has a reference.\"", "### 3. Object ('string')\nThe object of a triple is the entity that is related to the subject by\nthe predicate. 
It can be another cybersecurity concept, such as an\nATT&CK technique, or a literal value representing a property of the\nsubject, such as a name or a description.", "### Usage\nFirst make sure you have the requirements installed:\n\n\n\nYou can load the dataset using the Hugging Face Datasets library with\nthe following Python code:", "#### Note on Format:\nThe subject, predicate, and object are stored in N3 notation, a\nverbose serialization for RDF. This allows users to unambiguously\nparse each component using 'URL.from_n3' from the RDFLib\nPython library. For example:\n\n\n\nOnce loaded, each example in the dataset will be a dictionary with\n'subject', 'predicate', and 'object' keys corresponding to the\nfeatures described above.", "### Example\n\nHere is an example of a triple in the dataset:\n- Subject: '\"<URL\n- Predicate: '\"<URL\n- Object: '\"<URL\n\nThis triple represents the statement that the ATT&CK technique\nidentified by 'T1550.002' may be associated with the ATT&CK technique\nidentified by 'T1218.014'.", "### Acknowledgements\nThis ontology is developed by MITRE Corporation and is licensed under\nthe MIT license. I would like to thank the authors for their work\nwhich has opened my eyes to a new world of cybersecurity modeling.\n\nIf you are a cybersecurity expert please consider contributing to\nD3FEND.\n\nD3FEND Resources" ]
[ 68, 17, 59, 112, 10, 25, 127, 166, 58, 55, 71, 68, 36, 108, 81, 73 ]
[ "passage: TAGS\n#task_categories-graph-ml #annotations_creators-expert-generated #size_categories-100K<n<1M #language-English #license-mit #knowledge-graph #rdf #owl #ontology #cybersecurity #region-us \n# D3FEND: A knowledge graph of cybersecurity countermeasures### Overview\nD3FEND encodes a countermeasure knowledge base in the form of a\nknowledge graph. It meticulously organizes key concepts and relations\nin the cybersecurity countermeasure domain, linking each to pertinent\nreferences in the cybersecurity literature.### Use-cases\nResearchers and cybersecurity enthusiasts can leverage D3FEND to:\n- Develop sophisticated graph-based models.\n- Fine-tune large language models, focusing on cybersecurity knowledge\n graph completion.\n- Explore the complexities and nuances of defensive techniques,\n mappings to MITRE ATT&CK, weaknesses (CWEs), and cybersecurity\n taxonomies.\n- Gain insight into ontology development and modeling in the\n cybersecurity domain.### Dataset construction and pre-processing### Source:\n- Dataset Repository - 0.13.0-BETA-1\n- Commit Details\n- CWE Extension#### Building and Verification:\n1. Construction: The ontology, denoted as 'URL', was\n built from the beta version of the D3FEND ontology referenced\n above using documented README in d3fend-ontology. This includes the\n CWE extensions. \n2. Import and Reasoning: Imported into Protege version 5.6.1,\n utilizing the Pellet reasoner plugin for logical reasoning and\n verification.\n3. Coherence Check: Utilized the Debug Ontology plugin in Protege\n to ensure the ontology's coherence and consistency.", "passage: #### Exporting, Transformation, and Compression:\nNote: The following steps were performed using Apache Jena's command\nline tools. (URL\n1. Exporting Inferred Axioms: Post-verification, I exported\n inferred axioms along with asserted axioms and\n annotations. Detailed\n Process\n2. Filtering: The materialized ontology was filtered using\n 'URL' to retain relevant triples.\n3. Format Transformation: Subsequently transformed to Turtle and\n N-Triples formats for diverse usability. Note: I export in Turtle\n first because it is easier to read and verify. Then I convert to\n N-Triples.\n \n4. Compression: Compressed the resulting ontology files using\n gzip.## Features\nThe D3FEND dataset is composed of triples representing the\nrelationships between different cybersecurity countermeasures. Each\ntriple is a representation of a statement about a cybersecurity\nconcept or a relationship between concepts. The dataset includes the\nfollowing features:### 1. Subject ('string')\nThe subject of a triple is the entity that the statement is about. In\nthis dataset, the subject represents a cybersecurity concept or\nentity, such as a specific countermeasure or ATT&CK technique.### 2. Predicate ('string')\nThe predicate of a triple represents the property or characteristic of\nthe subject, or the nature of the relationship between the subject and\nthe object. For instance, it might represent a specific type of\nrelationship like \"may-be-associated-with\" or \"has a reference.\"### 3. Object ('string')\nThe object of a triple is the entity that is related to the subject by\nthe predicate. 
It can be another cybersecurity concept, such as an\nATT&CK technique, or a literal value representing a property of the\nsubject, such as a name or a description.### Usage\nFirst make sure you have the requirements installed:\n\n\n\nYou can load the dataset using the Hugging Face Datasets library with\nthe following Python code:#### Note on Format:\nThe subject, predicate, and object are stored in N3 notation, a\nverbose serialization for RDF. This allows users to unambiguously\nparse each component using 'URL.from_n3' from the RDFLib\nPython library. For example:\n\n\n\nOnce loaded, each example in the dataset will be a dictionary with\n'subject', 'predicate', and 'object' keys corresponding to the\nfeatures described above.### Example\n\nHere is an example of a triple in the dataset:\n- Subject: '\"<URL\n- Predicate: '\"<URL\n- Object: '\"<URL\n\nThis triple represents the statement that the ATT&CK technique\nidentified by 'T1550.002' may be associated with the ATT&CK technique\nidentified by 'T1218.014'." ]
1d6ff041966912d4a8c4289a70734d68530fd1a8
# Dataset Card for "3af02cc5" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/3af02cc5
[ "region:us" ]
2023-09-28T16:37:52+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 164, "num_examples": 10}], "download_size": 1315, "dataset_size": 164}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-28T16:37:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "3af02cc5" More Information needed
[ "# Dataset Card for \"3af02cc5\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"3af02cc5\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"3af02cc5\"\n\nMore Information needed" ]
3f6ff8048e30092b553648679aa1a56664286aab
# Dataset Card for "Semantic-Search-V1-14K" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Weni/Semantic-Search-V1-14K
[ "region:us" ]
2023-09-28T16:56:40+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "produto", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 821874, "num_examples": 14037}], "download_size": 421707, "dataset_size": 821874}}
2023-09-28T17:33:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Semantic-Search-V1-14K" More Information needed
[ "# Dataset Card for \"Semantic-Search-V1-14K\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Semantic-Search-V1-14K\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Semantic-Search-V1-14K\"\n\nMore Information needed" ]
306a4e35d491b63ecf20f0747723414dba01a57b
# Dataset Card for UniProtKB/Swiss-Prot ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
zgcarvalho/swiss-prot-test
[ "size_categories:100k<n<1M", "license:cc-by-4.0", "biology", "protein", "region:us" ]
2023-09-28T16:58:05+00:00
{"license": "cc-by-4.0", "size_categories": "100k<n<1M", "pretty_name": "UniProtKB/Swiss-Prot", "tags": ["biology", "protein"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "accession", "dtype": "string"}, {"name": "sequence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 171338188.2167982, "num_examples": 456125}, {"name": "test", "num_bytes": 42834828.78320182, "num_examples": 114032}], "download_size": 0, "dataset_size": 214173017.0}}
2023-09-28T17:19:57+00:00
[]
[]
TAGS #size_categories-100k<n<1M #license-cc-by-4.0 #biology #protein #region-us
# Dataset Card for UniProtKB/Swiss-Prot ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for UniProtKB/Swiss-Prot", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#size_categories-100k<n<1M #license-cc-by-4.0 #biology #protein #region-us \n", "# Dataset Card for UniProtKB/Swiss-Prot", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 33, 16, 24, 6, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#size_categories-100k<n<1M #license-cc-by-4.0 #biology #protein #region-us \n# Dataset Card for UniProtKB/Swiss-Prot## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
1192f87c89078271aca4b5ee963c9ae6653af2c2
This is a remote sensing image dataset for Military Aircraft Recognition that includes 3,842 images, 20 types, and 22,341 instances annotated with horizontal bounding boxes and oriented bounding boxes.
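The card above mentions oriented bounding boxes but does not spell out the annotation format, so the snippet below only illustrates the generic geometry: converting a (center, width, height, angle) oriented box into its four corner points. The parameterisation is an assumption for illustration; check the dataset's annotation files for the actual convention before relying on it.

```python
import math

def obb_to_corners(cx, cy, w, h, angle_rad):
    """Return the 4 corner points of an oriented bounding box.

    Assumes a (center_x, center_y, width, height, rotation) convention,
    which may differ from the dataset's actual annotation format.
    """
    dx, dy = w / 2.0, h / 2.0
    cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
    return [
        (cx + ox * cos_a - oy * sin_a, cy + ox * sin_a + oy * cos_a)
        for ox, oy in [(-dx, -dy), (dx, -dy), (dx, dy), (-dx, dy)]
    ]

print(obb_to_corners(100.0, 50.0, 40.0, 20.0, math.radians(30)))
```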
Alex5666/Military-Aircraft-Recognition-dataset
[ "task_categories:image-classification", "task_categories:image-segmentation", "task_categories:image-to-text", "task_categories:image-to-image", "task_categories:object-detection", "task_categories:depth-estimation", "size_categories:1M<n<10M", "license:apache-2.0", "legal", "region:us" ]
2023-09-28T17:14:30+00:00
{"license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["image-classification", "image-segmentation", "image-to-text", "image-to-image", "object-detection", "depth-estimation"], "tags": ["legal"]}
2023-09-28T17:17:32+00:00
[]
[]
TAGS #task_categories-image-classification #task_categories-image-segmentation #task_categories-image-to-text #task_categories-image-to-image #task_categories-object-detection #task_categories-depth-estimation #size_categories-1M<n<10M #license-apache-2.0 #legal #region-us
This is a remote sensing image dataset for Military Aircraft Recognition that includes 3,842 images, 20 types, and 22,341 instances annotated with horizontal bounding boxes and oriented bounding boxes.
[]
[ "TAGS\n#task_categories-image-classification #task_categories-image-segmentation #task_categories-image-to-text #task_categories-image-to-image #task_categories-object-detection #task_categories-depth-estimation #size_categories-1M<n<10M #license-apache-2.0 #legal #region-us \n" ]
[ 97 ]
[ "passage: TAGS\n#task_categories-image-classification #task_categories-image-segmentation #task_categories-image-to-text #task_categories-image-to-image #task_categories-object-detection #task_categories-depth-estimation #size_categories-1M<n<10M #license-apache-2.0 #legal #region-us \n" ]
4bad426956953d81b1eeaf0136fa611e4b730282
# Dataset Card for "Zeroshot_Train-20K_bias_tweet-format" This dataset is a train dataset for the Zeroshot models. It has 20.000 data in a prompt format exclusively for train with class 'bias' in Brazilian Portuguese. Prompt: ``` "Classifique o tweet entre 'classe1', 'classe2', 'classe3', 'classe4', 'bias' \\n\\nTweet: frase \\n\\nLabel: 'other' ``` The dataset was divided as follows: <br> ``` - 6,000 data: prompt with class option without target class (bias) - 7,000 data: prompt with class option + target class included as an option. target class is not correct - 7,000 data: prompt with class option + target class. target class is correct ``` ## How to load and use this dataset: ``` from datasets import load_dataset dataset = load_dataset("Weni/Zeroshot_Train-20K_bias_tweet-format") dataset ``` [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Weni/Zeroshot_Train-20K_bias_tweet-format
[ "task_categories:zero-shot-classification", "size_categories:10K<n<100K", "language:pt", "region:us" ]
2023-09-28T17:27:59+00:00
{"language": ["pt"], "size_categories": ["10K<n<100K"], "task_categories": ["zero-shot-classification"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4338493, "num_examples": 20000}], "download_size": 1744022, "dataset_size": 4338493}}
2023-09-28T17:41:12+00:00
[]
[ "pt" ]
TAGS #task_categories-zero-shot-classification #size_categories-10K<n<100K #language-Portuguese #region-us
# Dataset Card for "Zeroshot_Train-20K_bias_tweet-format" This dataset is a train dataset for the Zeroshot models. It has 20.000 data in a prompt format exclusively for train with class 'bias' in Brazilian Portuguese. Prompt: The dataset was divided as follows: <br> ## How to load and use this dataset: More Information needed
[ "# Dataset Card for \"Zeroshot_Train-20K_bias_tweet-format\"\nThis dataset is a train dataset for the Zeroshot models. \nIt has 20.000 data in a prompt format exclusively for train with class 'bias' in Brazilian Portuguese.\n\nPrompt:\n\n\nThe dataset was divided as follows: <br>", "## How to load and use this dataset:\n\n\nMore Information needed" ]
[ "TAGS\n#task_categories-zero-shot-classification #size_categories-10K<n<100K #language-Portuguese #region-us \n", "# Dataset Card for \"Zeroshot_Train-20K_bias_tweet-format\"\nThis dataset is a train dataset for the Zeroshot models. \nIt has 20.000 data in a prompt format exclusively for train with class 'bias' in Brazilian Portuguese.\n\nPrompt:\n\n\nThe dataset was divided as follows: <br>", "## How to load and use this dataset:\n\n\nMore Information needed" ]
[ 37, 78, 13 ]
[ "passage: TAGS\n#task_categories-zero-shot-classification #size_categories-10K<n<100K #language-Portuguese #region-us \n# Dataset Card for \"Zeroshot_Train-20K_bias_tweet-format\"\nThis dataset is a train dataset for the Zeroshot models. \nIt has 20.000 data in a prompt format exclusively for train with class 'bias' in Brazilian Portuguese.\n\nPrompt:\n\n\nThe dataset was divided as follows: <br>## How to load and use this dataset:\n\n\nMore Information needed" ]
7149775b0f5a1c3d5e25e0019709306faa086a29
# Dataset Card for "Zeroshot_Train-20K_nenhuma_tweet-format" This dataset is a train dataset for the Zeroshot models. It has 20.000 data in a prompt format exclusively for train with class 'nenhuma' in Brazilian Portuguese. Prompt: ``` "Classifique o tweet entre 'classe1', 'classe2', 'classe3', 'classe4', 'nenhuma' \\n\\nTweet: frase \\n\\nLabel: 'other' ``` The dataset was divided as follows: <br> ``` - 6,000 data: prompt with class option without target class (nenhuma) - 7,000 data: prompt with class option + target class included as an option. target class is not correct - 7,000 data: prompt with class option + target class. target class is correct ``` ## How to load and use this dataset: ``` from datasets import load_dataset dataset = load_dataset("Weni/Zeroshot_Train-20K_nenhuma_tweet-format") dataset ``` [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Weni/Zeroshot_Train-20K_nenhuma_tweet-format
[ "task_categories:zero-shot-classification", "size_categories:10K<n<100K", "language:pt", "region:us" ]
2023-09-28T17:42:51+00:00
{"language": ["pt"], "size_categories": ["10K<n<100K"], "task_categories": ["zero-shot-classification"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4411602, "num_examples": 20000}], "download_size": 1748719, "dataset_size": 4411602}}
2023-09-28T17:44:36+00:00
[]
[ "pt" ]
TAGS #task_categories-zero-shot-classification #size_categories-10K<n<100K #language-Portuguese #region-us
# Dataset Card for "Zeroshot_Train-20K_nenhuma_tweet-format" This dataset is a train dataset for the Zeroshot models. It has 20.000 data in a prompt format exclusively for train with class 'nenhuma' in Brazilian Portuguese. Prompt: The dataset was divided as follows: <br> ## How to load and use this dataset: More Information needed
[ "# Dataset Card for \"Zeroshot_Train-20K_nenhuma_tweet-format\"\n\nThis dataset is a train dataset for the Zeroshot models. \nIt has 20.000 data in a prompt format exclusively for train with class 'nenhuma' in Brazilian Portuguese.\n\nPrompt:\n\n\nThe dataset was divided as follows: <br>", "## How to load and use this dataset:\n\n\nMore Information needed" ]
[ "TAGS\n#task_categories-zero-shot-classification #size_categories-10K<n<100K #language-Portuguese #region-us \n", "# Dataset Card for \"Zeroshot_Train-20K_nenhuma_tweet-format\"\n\nThis dataset is a train dataset for the Zeroshot models. \nIt has 20.000 data in a prompt format exclusively for train with class 'nenhuma' in Brazilian Portuguese.\n\nPrompt:\n\n\nThe dataset was divided as follows: <br>", "## How to load and use this dataset:\n\n\nMore Information needed" ]
[ 37, 78, 13 ]
[ "passage: TAGS\n#task_categories-zero-shot-classification #size_categories-10K<n<100K #language-Portuguese #region-us \n# Dataset Card for \"Zeroshot_Train-20K_nenhuma_tweet-format\"\n\nThis dataset is a train dataset for the Zeroshot models. \nIt has 20.000 data in a prompt format exclusively for train with class 'nenhuma' in Brazilian Portuguese.\n\nPrompt:\n\n\nThe dataset was divided as follows: <br>## How to load and use this dataset:\n\n\nMore Information needed" ]
23fb4fc91dbc13be9d59813ac28de05bfbdbe29e
# Dataset Card for "code_instructions_7_5k_alpaca_spanish" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Rodr16020/code_instructions_7_5k_alpaca_spanish
[ "region:us" ]
2023-09-28T18:05:41+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction_text", "dtype": "string"}, {"name": "llama2_chat_inst", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15796815, "num_examples": 7500}], "download_size": 7459672, "dataset_size": 15796815}}
2023-10-30T20:53:47+00:00
[]
[]
TAGS #region-us
# Dataset Card for "code_instructions_7_5k_alpaca_spanish" More Information needed
[ "# Dataset Card for \"code_instructions_7_5k_alpaca_spanish\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"code_instructions_7_5k_alpaca_spanish\"\n\nMore Information needed" ]
[ 6, 25 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"code_instructions_7_5k_alpaca_spanish\"\n\nMore Information needed" ]
a7033c1993a5789156a272c278cd235c25b0ab2d
# INCA [Texts of unpublished judgments](https://echanges.dila.gouv.fr/OPENDATA/INCA/) (not published in the Bulletin) distributed by the Court of Cassation's competition fund since 1989. In accordance with the CNIL recommendation of 29 November 2001, personal data concerning individuals (parties and witnesses) is pseudonymised.
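A minimal loading sketch, assuming the `id` and `text` columns declared in this repository's configuration (the card itself does not document a schema):

```
from datasets import load_dataset

# Stream the corpus to avoid materialising the full ~1 GB download at once.
inca = load_dataset("Nicolas-BZRD/INCA_opendata", split="train", streaming=True)

# Each record is expected to pair a judgment identifier with its pseudonymised full text.
for record in inca.take(2):
    print(record["id"], "-", record["text"][:200], "...")
```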
Nicolas-BZRD/INCA_opendata
[ "size_categories:100K<n<1M", "language:fr", "license:odc-by", "legal", "region:us" ]
2023-09-28T18:17:07+00:00
{"language": ["fr"], "license": "odc-by", "size_categories": ["100K<n<1M"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2816739990, "num_examples": 373751}], "download_size": 1125426154, "dataset_size": 2816739990}, "tags": ["legal"]}
2023-09-29T08:39:59+00:00
[]
[ "fr" ]
TAGS #size_categories-100K<n<1M #language-French #license-odc-by #legal #region-us
# INCA Texts of unpublished judgments (not published in the Bulletin) distributed by the Court of Cassation's competition fund since 1989. In accordance with the CNIL recommendation of 29 November 2001, personal data concerning individuals (parties and witnesses) is pseudonymised.
[ "# INCA\n\nTexts of unpublished judgments (not published in the Bulletin) distributed by the Court of Cassation's competition fund since 1989.\nIn accordance with the CNIL recommendation of 29 November 2001, personal data concerning individuals (parties and witnesses) is pseudonymised." ]
[ "TAGS\n#size_categories-100K<n<1M #language-French #license-odc-by #legal #region-us \n", "# INCA\n\nTexts of unpublished judgments (not published in the Bulletin) distributed by the Court of Cassation's competition fund since 1989.\nIn accordance with the CNIL recommendation of 29 November 2001, personal data concerning individuals (parties and witnesses) is pseudonymised." ]
[ 34, 63 ]
[ "passage: TAGS\n#size_categories-100K<n<1M #language-French #license-odc-by #legal #region-us \n# INCA\n\nTexts of unpublished judgments (not published in the Bulletin) distributed by the Court of Cassation's competition fund since 1989.\nIn accordance with the CNIL recommendation of 29 November 2001, personal data concerning individuals (parties and witnesses) is pseudonymised." ]
6ad82a7e6f18318bb406372abff999935d401c3b
# Code documentation dataset
This dataset aims to leverage language models (LMs) to automatically generate documentation for undocumented Python code. The dataset consists of pairs of code and its documentation.

The content of the dataset is created from the CodeSearchNet dataset.
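As an illustrative sketch (not part of the original card), the pairs can be loaded and inspected as follows, assuming the `docstring` and `function` columns declared in the repository metadata:

```
from datasets import load_dataset

# Load the train/validation/test splits of code/documentation pairs.
data = load_dataset("juraj-juraj/doc_gen")

# Assumption: `function` holds the Python source and `docstring` the target documentation.
example = data["train"][0]
print("FUNCTION:\n", example["function"])
print("DOCSTRING:\n", example["docstring"])
```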
juraj-juraj/doc_gen
[ "task_categories:text-generation", "language:en", "license:mit", "region:us" ]
2023-09-28T18:51:32+00:00
{"language": ["en"], "license": "mit", "task_categories": ["text-generation"], "pretty_name": "py_code_doc", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "docstring", "dtype": "string"}, {"name": "function", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 270465917, "num_examples": 313366}, {"name": "validation", "num_bytes": 763140, "num_examples": 1000}, {"name": "test", "num_bytes": 878385, "num_examples": 1000}], "download_size": 107450380, "dataset_size": 272107442}}
2023-11-27T18:34:33+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #language-English #license-mit #region-us
# Code documentation dataset
This dataset aims to leverage language models (LMs) to automatically generate documentation for undocumented Python code. The dataset consists of pairs of code and its documentation.

The content of the dataset is created from the CodeSearchNet dataset.
[ "# Code documentation dataset\nThis dataset aims leverage usage of lm to automatically generate documenation to undocumented python code. Dataset consists of pairs code and its documenation\n\nContent of dataset is created from CodeSearchNet dataset." ]
[ "TAGS\n#task_categories-text-generation #language-English #license-mit #region-us \n", "# Code documentation dataset\nThis dataset aims leverage usage of lm to automatically generate documenation to undocumented python code. Dataset consists of pairs code and its documenation\n\nContent of dataset is created from CodeSearchNet dataset." ]
[ 26, 57 ]
[ "passage: TAGS\n#task_categories-text-generation #language-English #license-mit #region-us \n# Code documentation dataset\nThis dataset aims leverage usage of lm to automatically generate documenation to undocumented python code. Dataset consists of pairs code and its documenation\n\nContent of dataset is created from CodeSearchNet dataset." ]
7dc5836275a496d0f10eb24430264d33a550119d
# Bangumi Image Base of Kobayashi-san Chi No Maidragon This is the image base of bangumi Kobayashi-san Chi no Maidragon, we detected 33 characters, 3524 images in total. The full dataset is [here](all.zip). **Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability). Here is the characters' preview: | # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 | |:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------| | 0 | 497 | [Download](0/dataset.zip) | ![preview 1](0/preview_1.png) | ![preview 2](0/preview_2.png) | ![preview 3](0/preview_3.png) | ![preview 4](0/preview_4.png) | ![preview 5](0/preview_5.png) | ![preview 6](0/preview_6.png) | ![preview 7](0/preview_7.png) | ![preview 8](0/preview_8.png) | | 1 | 31 | [Download](1/dataset.zip) | ![preview 1](1/preview_1.png) | ![preview 2](1/preview_2.png) | ![preview 3](1/preview_3.png) | ![preview 4](1/preview_4.png) | ![preview 5](1/preview_5.png) | ![preview 6](1/preview_6.png) | ![preview 7](1/preview_7.png) | ![preview 8](1/preview_8.png) | | 2 | 53 | [Download](2/dataset.zip) | ![preview 1](2/preview_1.png) | ![preview 2](2/preview_2.png) | ![preview 3](2/preview_3.png) | ![preview 4](2/preview_4.png) | ![preview 5](2/preview_5.png) | ![preview 6](2/preview_6.png) | ![preview 7](2/preview_7.png) | ![preview 8](2/preview_8.png) | | 3 | 29 | [Download](3/dataset.zip) | ![preview 1](3/preview_1.png) | ![preview 2](3/preview_2.png) | ![preview 3](3/preview_3.png) | ![preview 4](3/preview_4.png) | ![preview 5](3/preview_5.png) | ![preview 6](3/preview_6.png) | ![preview 7](3/preview_7.png) | ![preview 8](3/preview_8.png) | | 4 | 13 | [Download](4/dataset.zip) | ![preview 1](4/preview_1.png) | ![preview 2](4/preview_2.png) | ![preview 3](4/preview_3.png) | ![preview 4](4/preview_4.png) | ![preview 5](4/preview_5.png) | ![preview 6](4/preview_6.png) | ![preview 7](4/preview_7.png) | ![preview 8](4/preview_8.png) | | 5 | 561 | [Download](5/dataset.zip) | ![preview 1](5/preview_1.png) | ![preview 2](5/preview_2.png) | ![preview 3](5/preview_3.png) | ![preview 4](5/preview_4.png) | ![preview 5](5/preview_5.png) | ![preview 6](5/preview_6.png) | ![preview 7](5/preview_7.png) | ![preview 8](5/preview_8.png) | | 6 | 13 | [Download](6/dataset.zip) | ![preview 1](6/preview_1.png) | ![preview 2](6/preview_2.png) | ![preview 3](6/preview_3.png) | ![preview 4](6/preview_4.png) | ![preview 5](6/preview_5.png) | ![preview 6](6/preview_6.png) | ![preview 7](6/preview_7.png) | ![preview 8](6/preview_8.png) | | 7 | 9 | [Download](7/dataset.zip) | ![preview 1](7/preview_1.png) | ![preview 2](7/preview_2.png) | ![preview 3](7/preview_3.png) | ![preview 4](7/preview_4.png) | ![preview 5](7/preview_5.png) | ![preview 6](7/preview_6.png) | ![preview 7](7/preview_7.png) | ![preview 8](7/preview_8.png) | | 8 | 18 | [Download](8/dataset.zip) | ![preview 1](8/preview_1.png) | ![preview 2](8/preview_2.png) | ![preview 3](8/preview_3.png) | ![preview 4](8/preview_4.png) | ![preview 5](8/preview_5.png) | ![preview 
6](8/preview_6.png) | ![preview 7](8/preview_7.png) | ![preview 8](8/preview_8.png) | | 9 | 170 | [Download](9/dataset.zip) | ![preview 1](9/preview_1.png) | ![preview 2](9/preview_2.png) | ![preview 3](9/preview_3.png) | ![preview 4](9/preview_4.png) | ![preview 5](9/preview_5.png) | ![preview 6](9/preview_6.png) | ![preview 7](9/preview_7.png) | ![preview 8](9/preview_8.png) | | 10 | 375 | [Download](10/dataset.zip) | ![preview 1](10/preview_1.png) | ![preview 2](10/preview_2.png) | ![preview 3](10/preview_3.png) | ![preview 4](10/preview_4.png) | ![preview 5](10/preview_5.png) | ![preview 6](10/preview_6.png) | ![preview 7](10/preview_7.png) | ![preview 8](10/preview_8.png) | | 11 | 133 | [Download](11/dataset.zip) | ![preview 1](11/preview_1.png) | ![preview 2](11/preview_2.png) | ![preview 3](11/preview_3.png) | ![preview 4](11/preview_4.png) | ![preview 5](11/preview_5.png) | ![preview 6](11/preview_6.png) | ![preview 7](11/preview_7.png) | ![preview 8](11/preview_8.png) | | 12 | 57 | [Download](12/dataset.zip) | ![preview 1](12/preview_1.png) | ![preview 2](12/preview_2.png) | ![preview 3](12/preview_3.png) | ![preview 4](12/preview_4.png) | ![preview 5](12/preview_5.png) | ![preview 6](12/preview_6.png) | ![preview 7](12/preview_7.png) | ![preview 8](12/preview_8.png) | | 13 | 150 | [Download](13/dataset.zip) | ![preview 1](13/preview_1.png) | ![preview 2](13/preview_2.png) | ![preview 3](13/preview_3.png) | ![preview 4](13/preview_4.png) | ![preview 5](13/preview_5.png) | ![preview 6](13/preview_6.png) | ![preview 7](13/preview_7.png) | ![preview 8](13/preview_8.png) | | 14 | 46 | [Download](14/dataset.zip) | ![preview 1](14/preview_1.png) | ![preview 2](14/preview_2.png) | ![preview 3](14/preview_3.png) | ![preview 4](14/preview_4.png) | ![preview 5](14/preview_5.png) | ![preview 6](14/preview_6.png) | ![preview 7](14/preview_7.png) | ![preview 8](14/preview_8.png) | | 15 | 134 | [Download](15/dataset.zip) | ![preview 1](15/preview_1.png) | ![preview 2](15/preview_2.png) | ![preview 3](15/preview_3.png) | ![preview 4](15/preview_4.png) | ![preview 5](15/preview_5.png) | ![preview 6](15/preview_6.png) | ![preview 7](15/preview_7.png) | ![preview 8](15/preview_8.png) | | 16 | 137 | [Download](16/dataset.zip) | ![preview 1](16/preview_1.png) | ![preview 2](16/preview_2.png) | ![preview 3](16/preview_3.png) | ![preview 4](16/preview_4.png) | ![preview 5](16/preview_5.png) | ![preview 6](16/preview_6.png) | ![preview 7](16/preview_7.png) | ![preview 8](16/preview_8.png) | | 17 | 68 | [Download](17/dataset.zip) | ![preview 1](17/preview_1.png) | ![preview 2](17/preview_2.png) | ![preview 3](17/preview_3.png) | ![preview 4](17/preview_4.png) | ![preview 5](17/preview_5.png) | ![preview 6](17/preview_6.png) | ![preview 7](17/preview_7.png) | ![preview 8](17/preview_8.png) | | 18 | 71 | [Download](18/dataset.zip) | ![preview 1](18/preview_1.png) | ![preview 2](18/preview_2.png) | ![preview 3](18/preview_3.png) | ![preview 4](18/preview_4.png) | ![preview 5](18/preview_5.png) | ![preview 6](18/preview_6.png) | ![preview 7](18/preview_7.png) | ![preview 8](18/preview_8.png) | | 19 | 20 | [Download](19/dataset.zip) | ![preview 1](19/preview_1.png) | ![preview 2](19/preview_2.png) | ![preview 3](19/preview_3.png) | ![preview 4](19/preview_4.png) | ![preview 5](19/preview_5.png) | ![preview 6](19/preview_6.png) | ![preview 7](19/preview_7.png) | ![preview 8](19/preview_8.png) | | 20 | 12 | [Download](20/dataset.zip) | ![preview 1](20/preview_1.png) | ![preview 2](20/preview_2.png) | ![preview 
3](20/preview_3.png) | ![preview 4](20/preview_4.png) | ![preview 5](20/preview_5.png) | ![preview 6](20/preview_6.png) | ![preview 7](20/preview_7.png) | ![preview 8](20/preview_8.png) | | 21 | 11 | [Download](21/dataset.zip) | ![preview 1](21/preview_1.png) | ![preview 2](21/preview_2.png) | ![preview 3](21/preview_3.png) | ![preview 4](21/preview_4.png) | ![preview 5](21/preview_5.png) | ![preview 6](21/preview_6.png) | ![preview 7](21/preview_7.png) | ![preview 8](21/preview_8.png) | | 22 | 12 | [Download](22/dataset.zip) | ![preview 1](22/preview_1.png) | ![preview 2](22/preview_2.png) | ![preview 3](22/preview_3.png) | ![preview 4](22/preview_4.png) | ![preview 5](22/preview_5.png) | ![preview 6](22/preview_6.png) | ![preview 7](22/preview_7.png) | ![preview 8](22/preview_8.png) | | 23 | 15 | [Download](23/dataset.zip) | ![preview 1](23/preview_1.png) | ![preview 2](23/preview_2.png) | ![preview 3](23/preview_3.png) | ![preview 4](23/preview_4.png) | ![preview 5](23/preview_5.png) | ![preview 6](23/preview_6.png) | ![preview 7](23/preview_7.png) | ![preview 8](23/preview_8.png) | | 24 | 11 | [Download](24/dataset.zip) | ![preview 1](24/preview_1.png) | ![preview 2](24/preview_2.png) | ![preview 3](24/preview_3.png) | ![preview 4](24/preview_4.png) | ![preview 5](24/preview_5.png) | ![preview 6](24/preview_6.png) | ![preview 7](24/preview_7.png) | ![preview 8](24/preview_8.png) | | 25 | 11 | [Download](25/dataset.zip) | ![preview 1](25/preview_1.png) | ![preview 2](25/preview_2.png) | ![preview 3](25/preview_3.png) | ![preview 4](25/preview_4.png) | ![preview 5](25/preview_5.png) | ![preview 6](25/preview_6.png) | ![preview 7](25/preview_7.png) | ![preview 8](25/preview_8.png) | | 26 | 171 | [Download](26/dataset.zip) | ![preview 1](26/preview_1.png) | ![preview 2](26/preview_2.png) | ![preview 3](26/preview_3.png) | ![preview 4](26/preview_4.png) | ![preview 5](26/preview_5.png) | ![preview 6](26/preview_6.png) | ![preview 7](26/preview_7.png) | ![preview 8](26/preview_8.png) | | 27 | 14 | [Download](27/dataset.zip) | ![preview 1](27/preview_1.png) | ![preview 2](27/preview_2.png) | ![preview 3](27/preview_3.png) | ![preview 4](27/preview_4.png) | ![preview 5](27/preview_5.png) | ![preview 6](27/preview_6.png) | ![preview 7](27/preview_7.png) | ![preview 8](27/preview_8.png) | | 28 | 167 | [Download](28/dataset.zip) | ![preview 1](28/preview_1.png) | ![preview 2](28/preview_2.png) | ![preview 3](28/preview_3.png) | ![preview 4](28/preview_4.png) | ![preview 5](28/preview_5.png) | ![preview 6](28/preview_6.png) | ![preview 7](28/preview_7.png) | ![preview 8](28/preview_8.png) | | 29 | 64 | [Download](29/dataset.zip) | ![preview 1](29/preview_1.png) | ![preview 2](29/preview_2.png) | ![preview 3](29/preview_3.png) | ![preview 4](29/preview_4.png) | ![preview 5](29/preview_5.png) | ![preview 6](29/preview_6.png) | ![preview 7](29/preview_7.png) | ![preview 8](29/preview_8.png) | | 30 | 7 | [Download](30/dataset.zip) | ![preview 1](30/preview_1.png) | ![preview 2](30/preview_2.png) | ![preview 3](30/preview_3.png) | ![preview 4](30/preview_4.png) | ![preview 5](30/preview_5.png) | ![preview 6](30/preview_6.png) | ![preview 7](30/preview_7.png) | N/A | | 31 | 11 | [Download](31/dataset.zip) | ![preview 1](31/preview_1.png) | ![preview 2](31/preview_2.png) | ![preview 3](31/preview_3.png) | ![preview 4](31/preview_4.png) | ![preview 5](31/preview_5.png) | ![preview 6](31/preview_6.png) | ![preview 7](31/preview_7.png) | ![preview 8](31/preview_8.png) | | noise | 433 | 
[Download](-1/dataset.zip) | ![preview 1](-1/preview_1.png) | ![preview 2](-1/preview_2.png) | ![preview 3](-1/preview_3.png) | ![preview 4](-1/preview_4.png) | ![preview 5](-1/preview_5.png) | ![preview 6](-1/preview_6.png) | ![preview 7](-1/preview_7.png) | ![preview 8](-1/preview_8.png) |
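A hedged sketch of fetching one of the per-character archives linked in the table above, assuming the zip files are stored at the relative paths shown (e.g. `5/dataset.zip`):

```
from huggingface_hub import hf_hub_download

# Illustrative only: download the archive for character cluster 5 (561 images per the table).
zip_path = hf_hub_download(
    repo_id="BangumiBase/kobayashisanchinomaidragon",
    filename="5/dataset.zip",
    repo_type="dataset",
)
print(zip_path)
```

Manual filtering of the extracted images is still advisable, given the roughly 1% noise rate noted above.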
BangumiBase/kobayashisanchinomaidragon
[ "size_categories:1K<n<10K", "license:mit", "art", "region:us" ]
2023-09-28T19:20:25+00:00
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
2023-09-29T12:15:05+00:00
[]
[]
TAGS #size_categories-1K<n<10K #license-mit #art #region-us
Bangumi Image Base of Kobayashi-san Chi No Maidragon
====================================================

This is the image base of the bangumi Kobayashi-san Chi no Maidragon; we detected 33 characters and 3524 images in total. The full dataset is here.

Please note that these image bases are not guaranteed to be 100% cleaned; they may actually contain noisy samples.

If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).

Here is the characters' preview:
[]
[ "TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n" ]
[ 25 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n" ]
0e0e4f88b99cf30f5ec2432c1c38c9908a08e5e8
# Dataset Card for UniRef50 ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
zgcarvalho/uniref50-test
[ "size_categories:10M<n<100M", "license:cc-by-4.0", "biology", "protein", "region:us" ]
2023-09-28T19:55:09+00:00
{"license": "cc-by-4.0", "size_categories": "10M<n<100M", "pretty_name": "UniRef50", "tags": ["biology", "protein"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15468741441.32825, "num_examples": 49719601}, {"name": "test", "num_bytes": 3867185593.6717486, "num_examples": 12429901}], "download_size": 18625264941, "dataset_size": 19335927035.0}}
2023-09-28T23:47:52+00:00
[]
[]
TAGS #size_categories-10M<n<100M #license-cc-by-4.0 #biology #protein #region-us
# Dataset Card for UniRef50 ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for UniRef50", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#size_categories-10M<n<100M #license-cc-by-4.0 #biology #protein #region-us \n", "# Dataset Card for UniRef50", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 33, 8, 24, 6, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#size_categories-10M<n<100M #license-cc-by-4.0 #biology #protein #region-us \n# Dataset Card for UniRef50## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
94dec6a357e8098493fd37cc213a4ed188103f4a
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset is built on the iNaturalist 2021 dataset and is used for the Incremental Generalized Category Discovery task. For more information about the task, please checkout [this paper](https://arxiv.org/abs/2304.14310). ### Supported Tasks and Leaderboards [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The initial data are collected by the iNaturalist community. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
tennant/iNatIGCD
[ "arxiv:2304.14310", "region:us" ]
2023-09-28T20:00:30+00:00
{}
2023-09-28T20:53:34+00:00
[ "2304.14310" ]
[]
TAGS #arxiv-2304.14310 #region-us
# Dataset Card for Dataset Name ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary This dataset is built on the iNaturalist 2021 dataset and is used for the Incremental Generalized Category Discovery task. For more information about the task, please checkout this paper. ### Supported Tasks and Leaderboards ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization The initial data are collected by the iNaturalist community. #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Dataset Name", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThis dataset is built on the iNaturalist 2021 dataset and is used for the Incremental Generalized Category Discovery task. \nFor more information about the task, please checkout this paper.", "### Supported Tasks and Leaderboards", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe initial data are collected by the iNaturalist community.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#arxiv-2304.14310 #region-us \n", "# Dataset Card for Dataset Name", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThis dataset is built on the iNaturalist 2021 dataset and is used for the Incremental Generalized Category Discovery task. \nFor more information about the task, please checkout this paper.", "### Supported Tasks and Leaderboards", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe initial data are collected by the iNaturalist community.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 14, 8, 24, 47, 10, 6, 6, 5, 5, 5, 7, 4, 24, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#arxiv-2304.14310 #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset is built on the iNaturalist 2021 dataset and is used for the Incremental Generalized Category Discovery task. \nFor more information about the task, please checkout this paper.### Supported Tasks and Leaderboards## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization\n\nThe initial data are collected by the iNaturalist community.#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
ce7f0fa8206b580030d5b3712f388f504560d2c9
# DIARC-LLM-Parser-Embodied-NLU-Styled-4K

This dataset contains ~4k utterances together with their semantic parses as interpretable by the DIARC cognitive robotic architecture.

The parses are meant to capture the speech-theoretic aspects of NL and to parse the intent, referents, and descriptors in the utterance.

This dataset is one in a set of datasets. For this particular one, we programmatically built 127 utterances and semantics that are groundable in a robotic architecture (DIARC).
These 127 utterances were then expanded into ~4k style variations across four dimensions:

1. Directness/Indirectness
2. Formality
3. Familiarity (whether it was uttered by a native speaker or a second-language speaker)
4. Word choice
vsarathy/DIARC-embodied-nlu-styled-4k
[ "language:en", "license:mit", "region:us" ]
2023-09-28T20:03:36+00:00
{"language": ["en"], "license": "mit", "pretty_name": "DIARC-embodied-nlu-styled-4k "}
2023-09-30T00:02:53+00:00
[]
[ "en" ]
TAGS #language-English #license-mit #region-us
# DIARC-LLM-Parser-Embodied-NLU-Styled-4K

This dataset contains ~4k utterances together with their semantic parses as interpretable by the DIARC cognitive robotic architecture.

The parses are meant to capture the speech-theoretic aspects of NL and to parse the intent, referents, and descriptors in the utterance.

This dataset is one in a set of datasets. For this particular one, we programmatically built 127 utterances and semantics that are groundable in a robotic architecture (DIARC).
These 127 utterances were then expanded into ~4k style variations across four dimensions:

1. Directness/Indirectness
2. Formality
3. Familiarity (whether it was uttered by a native speaker or a second-language speaker)
4. Word choice
[ "# DIARC-LLM-Parser-Embodied-NLU-Styled-4K\n\nThis dataset contains about ~4k utterances together with their semantic parses as interpretable by the DIARC cognitive robotic architecture.\n\nThe parses are meant to capture the speech-theoretic aspects of NL and parse the intent, referents, and descriptors in the utterance. \n\nThis dataset is one in a set of datasets. For this particular one, we programmatically built 127 utterances and semantics that are groundable in a robotic architecture (DIARC)/\nThese 127 utterances were then expanded into ~4k style variations across four dimensions\n\n1. Directness/Indirectness\n2. Formality\n3. Familiarity (whether it was uttered by a native speaker or a second-language speaker)\n4. Word choice" ]
[ "TAGS\n#language-English #license-mit #region-us \n", "# DIARC-LLM-Parser-Embodied-NLU-Styled-4K\n\nThis dataset contains about ~4k utterances together with their semantic parses as interpretable by the DIARC cognitive robotic architecture.\n\nThe parses are meant to capture the speech-theoretic aspects of NL and parse the intent, referents, and descriptors in the utterance. \n\nThis dataset is one in a set of datasets. For this particular one, we programmatically built 127 utterances and semantics that are groundable in a robotic architecture (DIARC)/\nThese 127 utterances were then expanded into ~4k style variations across four dimensions\n\n1. Directness/Indirectness\n2. Formality\n3. Familiarity (whether it was uttered by a native speaker or a second-language speaker)\n4. Word choice" ]
[ 15, 193 ]
[ "passage: TAGS\n#language-English #license-mit #region-us \n# DIARC-LLM-Parser-Embodied-NLU-Styled-4K\n\nThis dataset contains about ~4k utterances together with their semantic parses as interpretable by the DIARC cognitive robotic architecture.\n\nThe parses are meant to capture the speech-theoretic aspects of NL and parse the intent, referents, and descriptors in the utterance. \n\nThis dataset is one in a set of datasets. For this particular one, we programmatically built 127 utterances and semantics that are groundable in a robotic architecture (DIARC)/\nThese 127 utterances were then expanded into ~4k style variations across four dimensions\n\n1. Directness/Indirectness\n2. Formality\n3. Familiarity (whether it was uttered by a native speaker or a second-language speaker)\n4. Word choice" ]
57af95f205acc0de72a60f0d76ed1c13c9749aa2
# Dataset Card for "MetalDam_NoBright_Augmented_Cropped" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ironchanchellor/MetalDam_NoBright_Augmented_Cropped
[ "region:us" ]
2023-09-28T20:05:51+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 309419533.104, "num_examples": 1088}, {"name": "validation", "num_bytes": 78805194.0, "num_examples": 272}], "download_size": 390268940, "dataset_size": 388224727.104}}
2023-10-02T13:41:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "MetalDam_NoBright_Augmented_Cropped" More Information needed
[ "# Dataset Card for \"MetalDam_NoBright_Augmented_Cropped\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"MetalDam_NoBright_Augmented_Cropped\"\n\nMore Information needed" ]
[ 6, 26 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"MetalDam_NoBright_Augmented_Cropped\"\n\nMore Information needed" ]
6bace9f0e01e3fcbe30ffc509878614f248cc1d5
# Dataset Card for "llama-7b__model__one_million_instructions__emb__sample" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jxm/llama-7b__model__one_million_instructions__emb__sample
[ "region:us" ]
2023-09-28T20:18:21+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"name": "length", "dtype": "int64"}, {"name": "embedder_input_ids", "sequence": "int64"}, {"name": "embedder_attention_mask", "sequence": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "frozen_embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 1325271843, "num_examples": 10000}], "download_size": 870130332, "dataset_size": 1325271843}}
2023-09-28T20:19:05+00:00
[]
[]
TAGS #region-us
# Dataset Card for "llama-7b__model__one_million_instructions__emb__sample" More Information needed
[ "# Dataset Card for \"llama-7b__model__one_million_instructions__emb__sample\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"llama-7b__model__one_million_instructions__emb__sample\"\n\nMore Information needed" ]
[ 6, 30 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"llama-7b__model__one_million_instructions__emb__sample\"\n\nMore Information needed" ]
eb1ea65de11ce73d2960df55be0b6f7643694267
# Dataset Card for "SD-CLIP-alignment-composition" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Doub7e/SD-CLIP-alignment-composition
[ "region:us" ]
2023-09-28T20:23:16+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "clip_pred", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 405174703.0, "num_examples": 900}], "download_size": 405155460, "dataset_size": 405174703.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-28T20:56:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "SD-CLIP-alignment-composition" More Information needed
[ "# Dataset Card for \"SD-CLIP-alignment-composition\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"SD-CLIP-alignment-composition\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"SD-CLIP-alignment-composition\"\n\nMore Information needed" ]
693b41bae035fd6b79669563429bb5b7f41844d9
# Dataset Card for "dataset_complete3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zooxsmartufpb/dataset_complete3
[ "region:us" ]
2023-09-28T20:28:07+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 81060969, "num_examples": 46099}], "download_size": 8042824, "dataset_size": 81060969}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-28T20:28:09+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dataset_complete3" More Information needed
[ "# Dataset Card for \"dataset_complete3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dataset_complete3\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dataset_complete3\"\n\nMore Information needed" ]
81a2e69081be6796ecea92c47983e46ec27e51b5
# LEGI (CODES, LAWS AND REGULATIONS) [The full consolidated text of national legislation and regulations.](https://echanges.dila.gouv.fr/OPENDATA/LEGI/)<br> It consists essentially of : - official codes - laws - decree-laws - ordinances - decrees - a selection of decrees Consolidation of texts involves rewriting an article of a text (or code) to incorporate the change made. Amended or repealed versions are included in the document collection in the same way as current versions.
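A hedged usage sketch: the `id` and `text` columns come from this repository's configuration, and the keyword filter below is purely illustrative:

```
from itertools import islice

from datasets import load_dataset

# Stream the consolidated texts rather than materialising the full dump in memory.
legi = load_dataset("Nicolas-BZRD/LEGI_opendata", split="train", streaming=True)

# Illustrative filter: keep entries whose consolidated text mentions "code civil".
civil = legi.filter(lambda rec: "code civil" in rec["text"].lower())

for rec in islice(civil, 3):
    print(rec["id"], rec["text"][:120])
```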
Nicolas-BZRD/LEGI_opendata
[ "size_categories:1M<n<10M", "language:fr", "license:odc-by", "legal", "region:us" ]
2023-09-28T21:49:10+00:00
{"language": ["fr"], "license": "odc-by", "size_categories": ["1M<n<10M"], "pretty_name": "Codes, Lois et R\u00e9glements Consolid\u00e9s", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4054244489, "num_examples": 2373798}], "download_size": 1112659274, "dataset_size": 4054244489}, "tags": ["legal"]}
2023-09-29T09:10:54+00:00
[]
[ "fr" ]
TAGS #size_categories-1M<n<10M #language-French #license-odc-by #legal #region-us
# LEGI (CODES, LAWS AND REGULATIONS) The full consolidated text of national legislation and regulations.<br> It consists essentially of : - official codes - laws - decree-laws - ordinances - decrees - a selection of decrees Consolidation of texts involves rewriting an article of a text (or code) to incorporate the change made. Amended or repealed versions are included in the document collection in the same way as current versions.
[ "# LEGI (CODES, LAWS AND REGULATIONS)\n\nThe full consolidated text of national legislation and regulations.<br>\nIt consists essentially of : \n- official codes\n- laws\n- decree-laws\n- ordinances\n- decrees\n- a selection of decrees\n\nConsolidation of texts involves rewriting an article of a text (or code) to incorporate the change made. Amended or repealed versions are included in the document collection in the same way as current versions." ]
[ "TAGS\n#size_categories-1M<n<10M #language-French #license-odc-by #legal #region-us \n", "# LEGI (CODES, LAWS AND REGULATIONS)\n\nThe full consolidated text of national legislation and regulations.<br>\nIt consists essentially of : \n- official codes\n- laws\n- decree-laws\n- ordinances\n- decrees\n- a selection of decrees\n\nConsolidation of texts involves rewriting an article of a text (or code) to incorporate the change made. Amended or repealed versions are included in the document collection in the same way as current versions." ]
[ 34, 111 ]
[ "passage: TAGS\n#size_categories-1M<n<10M #language-French #license-odc-by #legal #region-us \n# LEGI (CODES, LAWS AND REGULATIONS)\n\nThe full consolidated text of national legislation and regulations.<br>\nIt consists essentially of : \n- official codes\n- laws\n- decree-laws\n- ordinances\n- decrees\n- a selection of decrees\n\nConsolidation of texts involves rewriting an article of a text (or code) to incorporate the change made. Amended or repealed versions are included in the document collection in the same way as current versions." ]
c500518503577a333b05bc49cd8c1b6d66d5c5ad
# Dataset Card for "80bca589" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/80bca589
[ "region:us" ]
2023-09-28T21:53:51+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 242, "num_examples": 10}], "download_size": 1409, "dataset_size": 242}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-28T21:53:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for "80bca589" More Information needed
[ "# Dataset Card for \"80bca589\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"80bca589\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"80bca589\"\n\nMore Information needed" ]
3fca922ac4858e51ba17f701aebaaacec99beebc
# Dataset Card for "dda30fff" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/dda30fff
[ "region:us" ]
2023-09-28T21:55:07+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 164, "num_examples": 10}], "download_size": 1316, "dataset_size": 164}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-28T21:55:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dda30fff" More Information needed
[ "# Dataset Card for \"dda30fff\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dda30fff\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dda30fff\"\n\nMore Information needed" ]
e0595ccc44d568bd353cc69eb6bf8181adc79afd
# Dataset Card for "bfc3e463" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/bfc3e463
[ "region:us" ]
2023-09-28T21:55:09+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 164, "num_examples": 10}], "download_size": 1316, "dataset_size": 164}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-28T21:55:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bfc3e463" More Information needed
[ "# Dataset Card for \"bfc3e463\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bfc3e463\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"bfc3e463\"\n\nMore Information needed" ]
ec249f7ba68b9e59457f3994da0c16c0237aeb72
# Dataset Card for "autotree_automl_electricity_dim7_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yzhuang/autotree_automl_electricity_dim7_sd0
[ "region:us" ]
2023-09-28T22:56:24+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": "float32"}, {"name": "input_y", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 1400412, "num_examples": 26931}, {"name": "validation", "num_bytes": 600236, "num_examples": 11543}], "download_size": 1231734, "dataset_size": 2000648}}
2023-09-29T21:58:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "autotree_automl_electricity_dim7_sd0" More Information needed
[ "# Dataset Card for \"autotree_automl_electricity_dim7_sd0\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"autotree_automl_electricity_dim7_sd0\"\n\nMore Information needed" ]
[ 6, 25 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_electricity_dim7_sd0\"\n\nMore Information needed" ]
c7cdb2beeff8999405a53a76e71481b2b0e0056d
# Dataset Card for "humaneval_x_llvm_wasm" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
JeremiahZ/humaneval_x_llvm_wasm
[ "region:us" ]
2023-09-28T23:04:31+00:00
{"dataset_info": {"features": [{"name": "task_id", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "declaration", "dtype": "string"}, {"name": "canonical_solution", "dtype": "string"}, {"name": "test", "dtype": "string"}, {"name": "example_test", "dtype": "string"}, {"name": "llvm_ir", "dtype": "string"}, {"name": "wat", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 4945639, "num_examples": 161}], "download_size": 1096385, "dataset_size": 4945639}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
2023-09-28T23:04:36+00:00
[]
[]
TAGS #region-us
# Dataset Card for "humaneval_x_llvm_wasm" More Information needed
[ "# Dataset Card for \"humaneval_x_llvm_wasm\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"humaneval_x_llvm_wasm\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"humaneval_x_llvm_wasm\"\n\nMore Information needed" ]
e1582f4a05a193719991f20c6d557447976e7488
# Dataset Card for "yahoo_answers_topics" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jdabello/yahoo_answers_topics
[ "region:us" ]
2023-09-29T00:10:17+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "topic", "dtype": "string"}, {"name": "question_title", "dtype": "string"}, {"name": "question_content", "dtype": "string"}, {"name": "best_answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 778905695, "num_examples": 1400000}], "download_size": 511657090, "dataset_size": 778905695}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-29T00:11:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for "yahoo_answers_topics" More Information needed
[ "# Dataset Card for \"yahoo_answers_topics\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"yahoo_answers_topics\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"yahoo_answers_topics\"\n\nMore Information needed" ]
f622fd434af011127d9565d9c8a3393eab9611f7
# Dataset Card for "llama2d-zoo-compass" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
llama2d/llama2d-zoo-compass
[ "region:us" ]
2023-09-29T00:35:19+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "float32"}, {"name": "coords", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "float32"}, {"name": "attention_mask", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 24160000, "num_examples": 10000}], "download_size": 0, "dataset_size": 24160000}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-05T23:26:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "llama2d-zoo-compass" More Information needed
[ "# Dataset Card for \"llama2d-zoo-compass\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"llama2d-zoo-compass\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"llama2d-zoo-compass\"\n\nMore Information needed" ]
68593ba439f6def1c41afd3078dc5aa2557a7950
# Dataset Card for "44203dc9" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-muse256-muse512-wuerst-sdv15/44203dc9
[ "region:us" ]
2023-09-29T00:42:20+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 192, "num_examples": 10}], "download_size": 1374, "dataset_size": 192}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-29T00:42:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "44203dc9" More Information needed
[ "# Dataset Card for \"44203dc9\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"44203dc9\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"44203dc9\"\n\nMore Information needed" ]
c3abb1a17dbd82b4266a81c8311bda36f6f4af3c
# Dataset Card for "b13fe8b2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-muse256-muse512-wuerst-sdv15/b13fe8b2
[ "region:us" ]
2023-09-29T00:45:07+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 208, "num_examples": 10}], "download_size": 1369, "dataset_size": 208}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-29T00:45:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "b13fe8b2" More Information needed
[ "# Dataset Card for \"b13fe8b2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"b13fe8b2\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"b13fe8b2\"\n\nMore Information needed" ]
6028d3806cef633db6aa362d3c06ef5fbcca731c
# Dataset Card for "0b3e4624" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-muse256-muse512-wuerst-sdv15/0b3e4624
[ "region:us" ]
2023-09-29T00:49:39+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 224, "num_examples": 10}], "download_size": 1395, "dataset_size": 224}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-29T00:49:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for "0b3e4624" More Information needed
[ "# Dataset Card for \"0b3e4624\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"0b3e4624\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"0b3e4624\"\n\nMore Information needed" ]
c46a092e8ba1a8ac5ffaff32774998a6ee082033
# Dataset Card for "ac20e7b9" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-muse256-muse512-wuerst-sdv15/ac20e7b9
[ "region:us" ]
2023-09-29T00:52:45+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 201, "num_examples": 10}], "download_size": 1382, "dataset_size": 201}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-29T00:52:45+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ac20e7b9" More Information needed
[ "# Dataset Card for \"ac20e7b9\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ac20e7b9\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ac20e7b9\"\n\nMore Information needed" ]
3b1edec33916d41c3aa14773c71d152ea789d511
# Dataset Card for "97e5914c" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-muse256-muse512-wuerst-sdv15/97e5914c
[ "region:us" ]
2023-09-29T00:57:52+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 222, "num_examples": 10}], "download_size": 1364, "dataset_size": 222}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-29T00:57:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "97e5914c" More Information needed
[ "# Dataset Card for \"97e5914c\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"97e5914c\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"97e5914c\"\n\nMore Information needed" ]
69ef6cc335442b282f29bc379bc309c976fa7f6c
# Dataset Card for "fd9df6ed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-muse256-muse512-wuerst-sdv15/fd9df6ed
[ "region:us" ]
2023-09-29T01:00:24+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 215, "num_examples": 10}], "download_size": 1393, "dataset_size": 215}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-29T01:00:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "fd9df6ed" More Information needed
[ "# Dataset Card for \"fd9df6ed\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"fd9df6ed\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"fd9df6ed\"\n\nMore Information needed" ]
02b4b21513cb7e110b6a3d56ced888a3fffa3066
# Bangumi Image Base of Thunderbolt Fantasy This is the image base of bangumi Thunderbolt Fantasy, we detected 21 characters, 1926 images in total. The full dataset is [here](all.zip). **Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability). Here is the characters' preview: | # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 | |:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------| | 0 | 151 | [Download](0/dataset.zip) | ![preview 1](0/preview_1.png) | ![preview 2](0/preview_2.png) | ![preview 3](0/preview_3.png) | ![preview 4](0/preview_4.png) | ![preview 5](0/preview_5.png) | ![preview 6](0/preview_6.png) | ![preview 7](0/preview_7.png) | ![preview 8](0/preview_8.png) | | 1 | 66 | [Download](1/dataset.zip) | ![preview 1](1/preview_1.png) | ![preview 2](1/preview_2.png) | ![preview 3](1/preview_3.png) | ![preview 4](1/preview_4.png) | ![preview 5](1/preview_5.png) | ![preview 6](1/preview_6.png) | ![preview 7](1/preview_7.png) | ![preview 8](1/preview_8.png) | | 2 | 140 | [Download](2/dataset.zip) | ![preview 1](2/preview_1.png) | ![preview 2](2/preview_2.png) | ![preview 3](2/preview_3.png) | ![preview 4](2/preview_4.png) | ![preview 5](2/preview_5.png) | ![preview 6](2/preview_6.png) | ![preview 7](2/preview_7.png) | ![preview 8](2/preview_8.png) | | 3 | 29 | [Download](3/dataset.zip) | ![preview 1](3/preview_1.png) | ![preview 2](3/preview_2.png) | ![preview 3](3/preview_3.png) | ![preview 4](3/preview_4.png) | ![preview 5](3/preview_5.png) | ![preview 6](3/preview_6.png) | ![preview 7](3/preview_7.png) | ![preview 8](3/preview_8.png) | | 4 | 37 | [Download](4/dataset.zip) | ![preview 1](4/preview_1.png) | ![preview 2](4/preview_2.png) | ![preview 3](4/preview_3.png) | ![preview 4](4/preview_4.png) | ![preview 5](4/preview_5.png) | ![preview 6](4/preview_6.png) | ![preview 7](4/preview_7.png) | ![preview 8](4/preview_8.png) | | 5 | 240 | [Download](5/dataset.zip) | ![preview 1](5/preview_1.png) | ![preview 2](5/preview_2.png) | ![preview 3](5/preview_3.png) | ![preview 4](5/preview_4.png) | ![preview 5](5/preview_5.png) | ![preview 6](5/preview_6.png) | ![preview 7](5/preview_7.png) | ![preview 8](5/preview_8.png) | | 6 | 181 | [Download](6/dataset.zip) | ![preview 1](6/preview_1.png) | ![preview 2](6/preview_2.png) | ![preview 3](6/preview_3.png) | ![preview 4](6/preview_4.png) | ![preview 5](6/preview_5.png) | ![preview 6](6/preview_6.png) | ![preview 7](6/preview_7.png) | ![preview 8](6/preview_8.png) | | 7 | 171 | [Download](7/dataset.zip) | ![preview 1](7/preview_1.png) | ![preview 2](7/preview_2.png) | ![preview 3](7/preview_3.png) | ![preview 4](7/preview_4.png) | ![preview 5](7/preview_5.png) | ![preview 6](7/preview_6.png) | ![preview 7](7/preview_7.png) | ![preview 8](7/preview_8.png) | | 8 | 99 | [Download](8/dataset.zip) | ![preview 1](8/preview_1.png) | ![preview 2](8/preview_2.png) | ![preview 3](8/preview_3.png) | ![preview 4](8/preview_4.png) | ![preview 5](8/preview_5.png) | ![preview 6](8/preview_6.png) 
| ![preview 7](8/preview_7.png) | ![preview 8](8/preview_8.png) | | 9 | 274 | [Download](9/dataset.zip) | ![preview 1](9/preview_1.png) | ![preview 2](9/preview_2.png) | ![preview 3](9/preview_3.png) | ![preview 4](9/preview_4.png) | ![preview 5](9/preview_5.png) | ![preview 6](9/preview_6.png) | ![preview 7](9/preview_7.png) | ![preview 8](9/preview_8.png) | | 10 | 30 | [Download](10/dataset.zip) | ![preview 1](10/preview_1.png) | ![preview 2](10/preview_2.png) | ![preview 3](10/preview_3.png) | ![preview 4](10/preview_4.png) | ![preview 5](10/preview_5.png) | ![preview 6](10/preview_6.png) | ![preview 7](10/preview_7.png) | ![preview 8](10/preview_8.png) | | 11 | 23 | [Download](11/dataset.zip) | ![preview 1](11/preview_1.png) | ![preview 2](11/preview_2.png) | ![preview 3](11/preview_3.png) | ![preview 4](11/preview_4.png) | ![preview 5](11/preview_5.png) | ![preview 6](11/preview_6.png) | ![preview 7](11/preview_7.png) | ![preview 8](11/preview_8.png) | | 12 | 22 | [Download](12/dataset.zip) | ![preview 1](12/preview_1.png) | ![preview 2](12/preview_2.png) | ![preview 3](12/preview_3.png) | ![preview 4](12/preview_4.png) | ![preview 5](12/preview_5.png) | ![preview 6](12/preview_6.png) | ![preview 7](12/preview_7.png) | ![preview 8](12/preview_8.png) | | 13 | 36 | [Download](13/dataset.zip) | ![preview 1](13/preview_1.png) | ![preview 2](13/preview_2.png) | ![preview 3](13/preview_3.png) | ![preview 4](13/preview_4.png) | ![preview 5](13/preview_5.png) | ![preview 6](13/preview_6.png) | ![preview 7](13/preview_7.png) | ![preview 8](13/preview_8.png) | | 14 | 42 | [Download](14/dataset.zip) | ![preview 1](14/preview_1.png) | ![preview 2](14/preview_2.png) | ![preview 3](14/preview_3.png) | ![preview 4](14/preview_4.png) | ![preview 5](14/preview_5.png) | ![preview 6](14/preview_6.png) | ![preview 7](14/preview_7.png) | ![preview 8](14/preview_8.png) | | 15 | 37 | [Download](15/dataset.zip) | ![preview 1](15/preview_1.png) | ![preview 2](15/preview_2.png) | ![preview 3](15/preview_3.png) | ![preview 4](15/preview_4.png) | ![preview 5](15/preview_5.png) | ![preview 6](15/preview_6.png) | ![preview 7](15/preview_7.png) | ![preview 8](15/preview_8.png) | | 16 | 178 | [Download](16/dataset.zip) | ![preview 1](16/preview_1.png) | ![preview 2](16/preview_2.png) | ![preview 3](16/preview_3.png) | ![preview 4](16/preview_4.png) | ![preview 5](16/preview_5.png) | ![preview 6](16/preview_6.png) | ![preview 7](16/preview_7.png) | ![preview 8](16/preview_8.png) | | 17 | 39 | [Download](17/dataset.zip) | ![preview 1](17/preview_1.png) | ![preview 2](17/preview_2.png) | ![preview 3](17/preview_3.png) | ![preview 4](17/preview_4.png) | ![preview 5](17/preview_5.png) | ![preview 6](17/preview_6.png) | ![preview 7](17/preview_7.png) | ![preview 8](17/preview_8.png) | | 18 | 13 | [Download](18/dataset.zip) | ![preview 1](18/preview_1.png) | ![preview 2](18/preview_2.png) | ![preview 3](18/preview_3.png) | ![preview 4](18/preview_4.png) | ![preview 5](18/preview_5.png) | ![preview 6](18/preview_6.png) | ![preview 7](18/preview_7.png) | ![preview 8](18/preview_8.png) | | 19 | 18 | [Download](19/dataset.zip) | ![preview 1](19/preview_1.png) | ![preview 2](19/preview_2.png) | ![preview 3](19/preview_3.png) | ![preview 4](19/preview_4.png) | ![preview 5](19/preview_5.png) | ![preview 6](19/preview_6.png) | ![preview 7](19/preview_7.png) | ![preview 8](19/preview_8.png) | | noise | 100 | [Download](-1/dataset.zip) | ![preview 1](-1/preview_1.png) | ![preview 2](-1/preview_2.png) | ![preview 3](-1/preview_3.png) 
| ![preview 4](-1/preview_4.png) | ![preview 5](-1/preview_5.png) | ![preview 6](-1/preview_6.png) | ![preview 7](-1/preview_7.png) | ![preview 8](-1/preview_8.png) |
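A minimal sketch of how such an image base could be fetched and pre-filtered, assuming the `all.zip` archive unpacks into the per-character folders listed above (with the `-1` folder holding detected noise); the file extensions and exact archive layout are assumptions, not guaranteed by the card:

```python
# Sketch only: folder layout and file extensions are assumed from the table above.
from huggingface_hub import hf_hub_download
from pathlib import Path
import zipfile

# Download the full archive referenced by the card ("all.zip").
archive = hf_hub_download(
    repo_id="BangumiBase/thunderboltfantasy",
    filename="all.zip",
    repo_type="dataset",
)

out_dir = Path("thunderbolt_fantasy")
with zipfile.ZipFile(archive) as zf:
    zf.extractall(out_dir)

# Skip the "-1" cluster (marked as noise above); the remaining folders may
# still contain ~1% noisy samples, so a manual or embedding-based pass helps.
images = [
    p for p in out_dir.rglob("*")
    if p.suffix.lower() in {".png", ".jpg", ".jpeg"} and p.parent.name != "-1"
]
print(f"kept {len(images)} candidate training images")
```

The per-character `dataset.zip` files listed in the Download column can be fetched the same way when only a single character is needed.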
BangumiBase/thunderboltfantasy
[ "size_categories:1K<n<10K", "license:mit", "art", "region:us" ]
2023-09-29T01:02:26+00:00
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
2023-09-29T12:26:36+00:00
[]
[]
TAGS #size_categories-1K<n<10K #license-mit #art #region-us
Bangumi Image Base of Thunderbolt Fantasy
=========================================


This is the image base of bangumi Thunderbolt Fantasy; we detected 21 characters and 1926 images in total. The full dataset is here.


Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noise. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).


Here is the characters' preview:
[]
[ "TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n" ]
[ 25 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n" ]
3623269096a9abf0e3eea3ff6319afaca3bed13d
hello world
tttfff/test
[ "region:us" ]
2023-09-29T01:07:46+00:00
{}
2023-09-29T01:23:29+00:00
[]
[]
TAGS #region-us
hello world
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
1a75fee952ff8f45b931c41cb7ae38f2f773f7cb
This dataset is a work in progress; it only stores the JSONL dataset for training in the cloud. It is not complete.
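Since the repository is described as a holding place for JSONL files, a hedged sketch of loading such files with the `datasets` JSON loader is shown below; the file names are placeholders, not the repo's actual contents:

```python
# Sketch only: the JSONL file names are placeholders.
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files={"train": "train.jsonl", "validation": "dev.jsonl"},
)
print(ds["train"][0])
```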
abdiharyadi/indoamrbart-dataset
[ "region:us" ]
2023-09-29T01:21:18+00:00
{}
2023-10-17T10:37:44+00:00
[]
[]
TAGS #region-us
This dataset is a work in progress; it only stores the JSONL dataset for training in the cloud. It is not complete.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
7b96045cf292cacf6bd916491aa4beb7b9c0f832
# Dataset Card for "test1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tttfff/test1
[ "region:us" ]
2023-09-29T01:52:46+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "package_name", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "star", "dtype": "int64"}, {"name": "version_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1508, "num_examples": 5}, {"name": "test", "num_bytes": 956, "num_examples": 5}], "download_size": 9451, "dataset_size": 2464}}
2023-09-29T01:53:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "test1" More Information needed
[ "# Dataset Card for \"test1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"test1\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"test1\"\n\nMore Information needed" ]
8155f1b276181fa29ef6f385788a755c3a25acdd
# Bengaluru Semantic Occupancy Dataset

<img src="https://adityang.github.io/AdityaNG/BengaluruDrivingDataset/index_files/BDD_Iterator_Demo-2023-08-30_08.25.17.gif" >

## Dataset Summary

We gathered a dataset spanning 114 minutes and 165K frames in Bengaluru, India. Our dataset consists of video data from a calibrated camera sensor with a resolution of 1920×1080 recorded at a framerate of 30 Hz. We utilize a Depth Dataset Generation pipeline that only uses videos as input to produce high-resolution disparity maps.

- Dataset Iterator: https://github.com/AdityaNG/bdd_dataset_iterator
- Project Page: https://adityang.github.io/AdityaNG/BengaluruDrivingDataset/
- Dataset Download: https://huggingface.co/datasets/AdityaNG/BengaluruSemanticOccupancyDataset

## Paper

[Bengaluru Driving Dataset: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios](https://arxiv.org/abs/2307.10934)

## Citation

```bibtex
@misc{analgund2023octran,
  title={Bengaluru Driving Dataset: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios},
  author={Ganesh, Aditya N and Pobbathi Badrinath, Dhruval and
          Kumar, Harshith Mohan and S, Priya and Narayan, Surabhi},
  year={2023},
  howpublished={Spotlight Presentation at the Transformers for Vision Workshop, CVPR},
  url={https://sites.google.com/view/t4v-cvpr23/papers#h.enx3bt45p649},
  note={Transformers for Vision Workshop, CVPR 2023}
}
```
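A rough sketch of iterating over one of the recorded drives with OpenCV is given below; the video file name is a placeholder, and the official loading utilities live in the linked Dataset Iterator repository:

```python
# Sketch only: the video file name is a placeholder; see the bdd_dataset_iterator
# repository for the maintained loading code.
import cv2

cap = cv2.VideoCapture("bengaluru_drive_0001.mp4")  # assumed 1920x1080 @ 30 Hz
fps = cap.get(cv2.CAP_PROP_FPS)

n_frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    n_frames += 1  # per-frame processing (e.g. disparity estimation) would go here
cap.release()

print(f"read {n_frames} frames at {fps:.0f} Hz")
```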
AdityaNG/BengaluruSemanticOccupancyDataset
[ "license:mit", "video", "driving", "Bengaluru", "disparity maps", "depth dataset", "arxiv:2307.10934", "region:us" ]
2023-09-29T03:14:08+00:00
{"license": "mit", "tags": ["video", "driving", "Bengaluru", "disparity maps", "depth dataset"], "homepage": "https://adityang.github.io/AdityaNG/BengaluruDrivingDataset/"}
2024-01-08T14:56:29+00:00
[ "2307.10934" ]
[]
TAGS #license-mit #video #driving #Bengaluru #disparity maps #depth dataset #arxiv-2307.10934 #region-us
# Bengaluru Semantic Occupancy Dataset <img src="URL > ## Dataset Summary We gathered a dataset spanning 114 minutes and 165K frames in Bengaluru, India. Our dataset consists of video data from a calibrated camera sensor with a resolution of 1920×1080 recorded at a framerate of 30 Hz. We utilize a Depth Dataset Generation pipeline that only uses videos as input to produce high-resolution disparity maps. - Dataset Iterator: URL - Project Page: URL - Dataset Download: URL ## Paper Bengaluru Driving Dataset: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios '''bibtex @misc{analgund2023octran, title={Bengaluru Driving Dataset: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios}, author={Ganesh, Aditya N and Pobbathi Badrinath, Dhruval and Kumar, Harshith Mohan and S, Priya and Narayan, Surabhi }, year={2023}, howpublished={Spotlight Presentation at the Transformers for Vision Workshop, CVPR}, url={URL note={Transformers for Vision Workshop, CVPR 2023} }
[ "# Bengaluru Semantic Occupancy Dataset\n\n<img src=\"URL >", "## Dataset Summary\n\nWe gathered a dataset spanning 114 minutes and 165K frames in Bengaluru, India. Our dataset consists of video data from a calibrated camera sensor with a resolution of 1920×1080 recorded at a framerate of 30 Hz. We utilize a Depth Dataset Generation pipeline that only uses videos as input to produce high-resolution disparity maps.\n\n- Dataset Iterator: URL\n- Project Page: URL\n- Dataset Download: URL", "## Paper\n\nBengaluru Driving Dataset: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios\n\n'''bibtex\n@misc{analgund2023octran,\n title={Bengaluru Driving Dataset: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios},\n author={Ganesh, Aditya N and Pobbathi Badrinath, Dhruval and\n Kumar, Harshith Mohan and S, Priya and Narayan, Surabhi\n },\n year={2023},\n howpublished={Spotlight Presentation at the Transformers for Vision Workshop, CVPR},\n url={URL\n note={Transformers for Vision Workshop, CVPR 2023}\n}" ]
[ "TAGS\n#license-mit #video #driving #Bengaluru #disparity maps #depth dataset #arxiv-2307.10934 #region-us \n", "# Bengaluru Semantic Occupancy Dataset\n\n<img src=\"URL >", "## Dataset Summary\n\nWe gathered a dataset spanning 114 minutes and 165K frames in Bengaluru, India. Our dataset consists of video data from a calibrated camera sensor with a resolution of 1920×1080 recorded at a framerate of 30 Hz. We utilize a Depth Dataset Generation pipeline that only uses videos as input to produce high-resolution disparity maps.\n\n- Dataset Iterator: URL\n- Project Page: URL\n- Dataset Download: URL", "## Paper\n\nBengaluru Driving Dataset: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios\n\n'''bibtex\n@misc{analgund2023octran,\n title={Bengaluru Driving Dataset: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios},\n author={Ganesh, Aditya N and Pobbathi Badrinath, Dhruval and\n Kumar, Harshith Mohan and S, Priya and Narayan, Surabhi\n },\n year={2023},\n howpublished={Spotlight Presentation at the Transformers for Vision Workshop, CVPR},\n url={URL\n note={Transformers for Vision Workshop, CVPR 2023}\n}" ]
[ 38, 19, 106, 168 ]
[ "passage: TAGS\n#license-mit #video #driving #Bengaluru #disparity maps #depth dataset #arxiv-2307.10934 #region-us \n# Bengaluru Semantic Occupancy Dataset\n\n<img src=\"URL >## Dataset Summary\n\nWe gathered a dataset spanning 114 minutes and 165K frames in Bengaluru, India. Our dataset consists of video data from a calibrated camera sensor with a resolution of 1920×1080 recorded at a framerate of 30 Hz. We utilize a Depth Dataset Generation pipeline that only uses videos as input to produce high-resolution disparity maps.\n\n- Dataset Iterator: URL\n- Project Page: URL\n- Dataset Download: URL## Paper\n\nBengaluru Driving Dataset: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios\n\n'''bibtex\n@misc{analgund2023octran,\n title={Bengaluru Driving Dataset: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios},\n author={Ganesh, Aditya N and Pobbathi Badrinath, Dhruval and\n Kumar, Harshith Mohan and S, Priya and Narayan, Surabhi\n },\n year={2023},\n howpublished={Spotlight Presentation at the Transformers for Vision Workshop, CVPR},\n url={URL\n note={Transformers for Vision Workshop, CVPR 2023}\n}" ]
d64232cdab68a2147f59a97e2fc817e868b1de30
A dataset combining the OpenPlatypus logic dataset with the megacode-best code dataset. The mix is roughly 20% general logic and 80% code.
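A hedged sketch of how a 20/80 mix like this can be reproduced with `datasets.interleave_datasets`; the input files are placeholders and are assumed to have already been mapped to a shared schema (e.g. a single `text` column):

```python
# Sketch only: input files are placeholders, already converted to a shared schema.
from datasets import load_dataset, interleave_datasets

logic_ds = load_dataset("json", data_files="open_platypus_formatted.jsonl", split="train")
code_ds = load_dataset("json", data_files="megacode_best_formatted.jsonl", split="train")

mixed = interleave_datasets(
    [logic_ds, code_ds],
    probabilities=[0.2, 0.8],   # ~20% general logic, ~80% code
    seed=42,
    stopping_strategy="all_exhausted",
)
print(mixed)
```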
sw882882/megacodeLogic-92k
[ "region:us" ]
2023-09-29T03:34:21+00:00
{}
2023-09-29T04:25:20+00:00
[]
[]
TAGS #region-us
A dataset combining the OpenPlatypus logic dataset with the megacode-best code dataset. The mix is roughly 20% general logic and 80% code.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
a5d23a5732a4e94023ecec6814c078c311f8b499
[ { "id": 1, "question": "How can I program a VEX robot to follow a line?", "Question_Category": "Programming", "Answer_Category": "Instructional", "Format_Category": "Text", "answer": "To program a VEX robot to follow a line, you will need a line follower sensor. Once the sensor is installed, you can use programming environments such as VEXcode to write a program that reads the sensor data to control the robot's movement along the line." }, { "id": 2, "question": "What sensors are available for VEX Robotics platforms?", "Question_Category": "Hardware", "Answer_Category": "Informational", "Format_Category": "List", "answer": "Various sensors are available for VEX Robotics platforms, including but not limited to: Ultrasonic Sensors, Gyro Sensors, Potentiometers, Bumper Switches, Limit Switches, Optical Sensors, and Temperature Sensors." }, { "id": 3, "question": "How do I troubleshoot connection issues with a VEX EDR robot?", "Question_Category": "Troubleshooting", "Answer_Category": "Instructional", "Format_Category": "Text", "answer": "Troubleshooting connection issues with a VEX EDR robot typically involves checking the connections between the robot and the controller, ensuring the batteries are fully charged, and verifying that the VEXnet keys are properly seated. You may also want to check for any software updates or refer to the VEX EDR troubleshooting guide for further assistance." }, ... { "id": 100, "question": "Where can I find resources for preparing for VEX Robotics Competitions?", "Question_Category": "Resources", "Answer_Category": "Informational", "Format_Category": "Web Link", "answer": "Resources for preparing for VEX Robotics Competitions can be found on the official VEX Robotics website, the VEX forum, and the REC Foundation website. Additionally, many teams and organizations share resources and tutorials on their websites and on platforms like YouTube." } ]
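A small sketch of turning records with this shape into prompt/completion pairs for fine-tuning; the file path is a placeholder and the code assumes the full file is valid JSON with the fields shown above:

```python
# Sketch only: path is a placeholder; field names are taken from the snippet above.
import json

with open("vex_faq.json", encoding="utf-8") as f:
    records = json.load(f)

pairs = [
    {
        "prompt": r["question"],
        "completion": r["answer"],
        "category": r.get("Question_Category", "Unknown"),
    }
    for r in records
]
print(pairs[0])
```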
TheVarunKaushik/VEX
[ "region:us" ]
2023-09-29T03:38:12+00:00
{}
2023-09-29T20:18:40+00:00
[]
[]
TAGS #region-us
[ { "id": 1, "question": "How can I program a VEX robot to follow a line?", "Question_Category": "Programming", "Answer_Category": "Instructional", "Format_Category": "Text", "answer": "To program a VEX robot to follow a line, you will need a line follower sensor. Once the sensor is installed, you can use programming environments such as VEXcode to write a program that reads the sensor data to control the robot's movement along the line." }, { "id": 2, "question": "What sensors are available for VEX Robotics platforms?", "Question_Category": "Hardware", "Answer_Category": "Informational", "Format_Category": "List", "answer": "Various sensors are available for VEX Robotics platforms, including but not limited to: Ultrasonic Sensors, Gyro Sensors, Potentiometers, Bumper Switches, Limit Switches, Optical Sensors, and Temperature Sensors." }, { "id": 3, "question": "How do I troubleshoot connection issues with a VEX EDR robot?", "Question_Category": "Troubleshooting", "Answer_Category": "Instructional", "Format_Category": "Text", "answer": "Troubleshooting connection issues with a VEX EDR robot typically involves checking the connections between the robot and the controller, ensuring the batteries are fully charged, and verifying that the VEXnet keys are properly seated. You may also want to check for any software updates or refer to the VEX EDR troubleshooting guide for further assistance." }, ... { "id": 100, "question": "Where can I find resources for preparing for VEX Robotics Competitions?", "Question_Category": "Resources", "Answer_Category": "Informational", "Format_Category": "Web Link", "answer": "Resources for preparing for VEX Robotics Competitions can be found on the official VEX Robotics website, the VEX forum, and the REC Foundation website. Additionally, many teams and organizations share resources and tutorials on their websites and on platforms like YouTube." } ]
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
beebad156687f29c9d9f9d1b24eb903e338f7521
# Dataset Card for "code-dictation" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Andyrasika/code-dictation
[ "region:us" ]
2023-09-29T03:40:45+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 8708.8, "num_examples": 40}, {"name": "test", "num_bytes": 2177.2, "num_examples": 10}], "download_size": 8160, "dataset_size": 10886.0}}
2023-09-29T03:41:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "code-dictation" More Information needed
[ "# Dataset Card for \"code-dictation\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"code-dictation\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"code-dictation\"\n\nMore Information needed" ]
b3e02462f4d1be3ab2f77cfde72a3266c6cbea3f
# Dataset Card for "distillation_code_sample_2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jitx/distillation_code_sample_2
[ "region:us" ]
2023-09-29T04:05:23+00:00
{"dataset_info": {"features": [{"name": "santacoder_prompts", "dtype": "string"}, {"name": "fim_inputs", "dtype": "string"}, {"name": "label_middles", "dtype": "string"}, {"name": "santacoder_outputs", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 59821, "num_examples": 18}], "download_size": 43459, "dataset_size": 59821}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-29T04:05:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for "distillation_code_sample_2" More Information needed
[ "# Dataset Card for \"distillation_code_sample_2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"distillation_code_sample_2\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"distillation_code_sample_2\"\n\nMore Information needed" ]
9b0f6f4205e16e912d1802abdfac83d64d4b3879
"vex_robotics_faq": [ { "question": "What is VEX Robotics?", "answer": "VEX Robotics is a platform for learning about and building robots. It offers educational resources and a range of robotic kits for individuals and teams to learn about engineering, programming, and problem-solving.", "format_category": "Introduction" }, { "question": "How can I get started with VEX Robotics?", "answer": "To get started with VEX Robotics, you can purchase a robot kit from the VEX Robotics website or a retailer. It's also advisable to access educational resources, join a local robotics club or online community, and participate in VEX Robotics competitions to enhance your learning experience.", "format_category": "Getting Started" }, { "question": "Where can I participate in VEX Robotics competitions?", "answer": "VEX Robotics competitions are held at local, regional, national, and international levels. You can find information about upcoming competitions on the VEX Robotics website or through local robotics clubs and educational institutions.", "format_category": "Competitions" }, { "question": "How do I program my VEX robot?", "answer": "VEX robots can be programmed using the VEXcode software, which is available for download on the VEX Robotics website. There are also many tutorials and community forums available to help you get started with programming your VEX robot.", "format_category": "Programming" } ]
TheVarunKaushik/VexRobot
[ "language:en", "code", "region:us" ]
2023-09-29T04:05:48+00:00
{"language": ["en"], "pretty_name": "Vex Language", "tags": ["code"]}
2023-09-29T20:16:40+00:00
[]
[ "en" ]
TAGS #language-English #code #region-us
"vex_robotics_faq": [ { "question": "What is VEX Robotics?", "answer": "VEX Robotics is a platform for learning about and building robots. It offers educational resources and a range of robotic kits for individuals and teams to learn about engineering, programming, and problem-solving.", "format_category": "Introduction" }, { "question": "How can I get started with VEX Robotics?", "answer": "To get started with VEX Robotics, you can purchase a robot kit from the VEX Robotics website or a retailer. It's also advisable to access educational resources, join a local robotics club or online community, and participate in VEX Robotics competitions to enhance your learning experience.", "format_category": "Getting Started" }, { "question": "Where can I participate in VEX Robotics competitions?", "answer": "VEX Robotics competitions are held at local, regional, national, and international levels. You can find information about upcoming competitions on the VEX Robotics website or through local robotics clubs and educational institutions.", "format_category": "Competitions" }, { "question": "How do I program my VEX robot?", "answer": "VEX robots can be programmed using the VEXcode software, which is available for download on the VEX Robotics website. There are also many tutorials and community forums available to help you get started with programming your VEX robot.", "format_category": "Programming" } ]
[]
[ "TAGS\n#language-English #code #region-us \n" ]
[ 12 ]
[ "passage: TAGS\n#language-English #code #region-us \n" ]
2befbbf8f9cdb13a2014f240bac38907dd1a5cc6
# Dataset of 凜雪鴉 This is the dataset of 凜雪鴉, containing 147 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 147 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 253 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 281 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 147 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 147 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 147 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 253 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 253 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 243 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 281 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 281 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
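If only one of the listed variants is needed, it can be fetched directly by its file name from the table; this is a sketch only, and the internal layout of the zip (how images and tag files are paired) is an assumption:

```python
# Sketch only: the variant file name comes from the table above; contents layout is assumed.
from huggingface_hub import hf_hub_download
import zipfile

archive = hf_hub_download(
    repo_id="CyberHarem/lin_xue_ya_thunderboltfantasy",
    filename="dataset-stage3-p512-640.zip",
    repo_type="dataset",
)
with zipfile.ZipFile(archive) as zf:
    zf.extractall("lin_xue_ya_stage3_p512_640")
```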
CyberHarem/lin_xue_ya_thunderboltfantasy
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-29T04:26:48+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-29T04:30:12+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of 凜雪鴉 ============== This is the dataset of 凜雪鴉, containing 147 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
034ac96d53b554fa63aacb8130fa524cf9cd3a8a
# Dataset of 殤不患 This is the dataset of 殤不患, containing 261 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 261 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 517 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 521 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 261 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 261 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 261 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 517 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 517 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 455 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 521 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 521 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/shang_bu_huan_thunderboltfantasy
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-29T04:54:58+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-29T05:01:56+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of 殤不患 ============== This is the dataset of 殤不患, containing 261 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
e1caf873673b38fa34f84ac6b054c3ae5de07655
# Dataset Card for "summaries-de-v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tucan-ai/summaries-de-v1
[ "region:us" ]
2023-09-29T04:55:08+00:00
{"dataset_info": {"features": [{"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 93014092.0, "num_examples": 8060}, {"name": "test", "num_bytes": 23253523.0, "num_examples": 2015}], "download_size": 68440450, "dataset_size": 116267615.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2023-10-18T13:33:42+00:00
[]
[]
TAGS #region-us
# Dataset Card for "summaries-de-v1" More Information needed
[ "# Dataset Card for \"summaries-de-v1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"summaries-de-v1\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"summaries-de-v1\"\n\nMore Information needed" ]
05ac5404e63ca10cbb68f491516e190840d800ab
# Dataset of 浪巫謠 This is the dataset of 浪巫謠, containing 176 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 176 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 324 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 373 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 176 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 176 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 176 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 324 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 324 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 299 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 373 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 373 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/lang_wu_yao_thunderboltfantasy
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-29T05:19:47+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-29T05:23:24+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of 浪巫謠 ============== This is the dataset of 浪巫謠, containing 176 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
001ce4fb9e732e103a5b92b3e14c1fe4fc43ff0a
# AutoTrain Dataset for project: sgugit-model-v4 ## Dataset Description This dataset has been automatically processed by AutoTrain for project sgugit-model-v4. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "target": 40, "text": "\u0433\u0434\u0435 \u043c\u043e\u0436\u043d\u043e \u043d\u0430\u0439\u0442\u0438 \u043e\u0431\u0440\u0430\u0437\u0435\u0446 \u0432\u0441\u0442\u0443\u043f\u0438\u0442\u0435\u043b\u044c\u043d\u044b\u0439 \u0438\u0441\u043f\u044b\u0442\u0430\u043d\u0438\u0435 \u043f\u0440\u043e\u0432\u0435\u0441\u0442\u0438 \u043f\u0440\u043e\u0448\u043b\u044b\u0439 \u0433\u043e\u0434 \u0447\u0442\u043e\u0431\u044b \u0431\u044b\u0442\u044c \u0433\u043e\u0442\u043e\u0432\u044b\u0439 \u043a \u043e\u043d\u0438 \u0438 \u0434\u043e\u0441\u0442\u0438\u0433\u043d\u0443\u0442\u044c \u0443\u0441\u043f\u0435\u0445" }, { "target": 28, "text": "\u043a\u0430\u043a\u043e\u0439 \u0448\u0430\u0433 \u043d\u0435\u043e\u0431\u0445\u043e\u0434\u0438\u043c\u043e \u043f\u0440\u0435\u0434\u043f\u0440\u0438\u043d\u044f\u0442\u044c \u0447\u0442\u043e\u0431\u044b \u043e\u0441\u0443\u0449\u0435\u0441\u0442\u0432\u0438\u0442\u044c \u0441\u043c\u0435\u043d\u0430 \u0441\u0442\u0430\u0442\u0443\u0441 \u0441 \u043f\u043b\u0430\u0442\u043d\u044b\u0439 \u043e\u0431\u0443\u0447\u0435\u043d\u0438\u0435 \u043d\u0430 \u0431\u044e\u0434\u0436\u0435\u0442\u043d\u044b\u0439" } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "target": "ClassLabel(names=['\u0410\u043a\u0430\u0434\u0435\u043c\u0438\u0447\u0435\u0441\u043a\u0438\u0439 \u043e\u0442\u043f\u0443\u0441\u043a', '\u0411\u044e\u0434\u0436\u0435\u0442\u043d\u044b\u0435 \u043c\u0435\u0441\u0442\u0430', '\u0412\u043e\u0441\u0441\u0442\u0430\u043d\u043e\u0432\u043b\u0435\u043d\u0438\u0435 \u043f\u043e\u0441\u043b\u0435 \u043e\u0442\u0447\u0438\u0441\u043b\u0435\u043d\u0438\u044f', '\u0413\u0440\u0430\u0444\u0438\u043a \u0437\u0430\u043d\u044f\u0442\u0438\u0439', '\u0413\u0440\u0430\u0444\u0438\u043a \u0440\u0430\u0431\u043e\u0442\u044b \u043e\u0441\u043d\u043e\u0432\u043d\u044b\u0445 \u043f\u043e\u0434\u0440\u0430\u0437\u0434\u0435\u043b\u0435\u043d\u0438\u0439 \u0421\u0413\u0423\u0413\u0438\u0422', '\u0414\u0430\u0442\u044b \u0441\u0442\u0438\u043f\u0435\u043d\u0434\u0438\u0438', '\u0414\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u044b \u0434\u043b\u044f \u0437\u0430\u0441\u0435\u043b\u0435\u043d\u0438\u044f \u0432 \u043e\u0431\u0449\u0435\u0436\u0438\u0442\u0438\u0435', '\u0414\u043e\u0441\u0442\u0430\u0442\u043e\u0447\u043d\u043e \u043b\u0438 \u0415\u0413\u042d', '\u0418\u0437\u043c\u0435\u043d\u0435\u043d\u0438\u0435 \u0440\u0430\u0441\u043f\u0438\u0441\u0430\u043d\u0438\u044f \u0441\u0442\u0443\u0434\u0435\u043d\u0442\u0430\u043c\u0438', '\u0418\u043d\u0441\u0442\u0438\u0442\u0443\u0442\u044b \u0421\u0413\u0423\u0413\u0438\u0422', '\u0418\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u044f \u043e \u043f\u0440\u0435\u043f\u043e\u0434\u0430\u0432\u0430\u0442\u0435\u043b\u044f\u0445', '\u041a\u0430\u043a \u0432\u043e\u0441\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u0442\u044c \u0434\u043e\u0441\u0442\u0443\u043f \u043a \u042d\u0418\u041e\u0421?', '\u041a\u0430\u043a \u043d\u0430\u0439\u0442\u0438 \u0430\u0443\u0434\u0438\u0442\u043e\u0440\u0438\u044e', '\u041a\u0430\u043a \u043d\u0430\u0439\u0442\u0438 \u0434\u0435\u043a\u0430\u043d\u0430\u0442', '\u041a\u0430\u043a 
\u043e\u0442\u0447\u0438\u0441\u043b\u0438\u0442\u044c\u0441\u044f?', '\u041a\u043e\u043b\u0438\u0447\u0435\u0441\u0442\u0432\u043e \u043f\u0435\u0440\u0435\u0441\u0434\u0430\u0447', '\u041a\u043e\u043b\u0438\u0447\u0435\u0441\u0442\u0432\u043e \u043f\u0440\u0435\u0434\u043c\u0435\u0442\u043e\u0432, \u0432\u043e\u0437\u043c\u043e\u0436\u043d\u044b\u0445 \u0434\u043b\u044f \u043f\u0435\u0440\u0435\u0441\u0434\u0430\u0447\u0438', '\u041c\u0430\u0442\u0435\u0440\u0438\u0430\u043b\u044c\u043d\u0430\u044f \u043f\u043e\u043c\u043e\u0449\u044c', '\u041c\u0430\u0442\u0435\u0440\u0438\u0430\u043b\u044c\u043d\u0430\u044f \u043f\u043e\u043c\u043e\u0449\u044c \u0434\u043b\u044f \u0438\u043d\u043e\u0441\u0442\u0440\u0430\u043d\u0446\u0435\u0432', '\u041d\u0430\u043f\u0440\u0430\u0432\u043b\u0435\u043d\u0438\u044f \u0434\u043b\u044f \u043c\u0430\u0433\u0438\u0441\u0442\u0440\u0430\u0442\u0443\u0440\u044b', '\u041e\u0431\u043d\u043e\u0432\u043b\u0435\u043d\u0438\u0435 \u043e\u0446\u0435\u043d\u043e\u043a \u0432 \u0437\u0430\u0447\u0451\u0442\u043a\u0435', '\u041e\u0431\u0449\u0435\u0436\u0438\u0442\u0438\u0435 \u0434\u043b\u044f \u0438\u043d\u043e\u0433\u043e\u0440\u043e\u0434\u043d\u0438\u0445 \u0441\u0442\u0443\u0434\u0435\u043d\u0442\u043e\u0432', '\u041e\u043d\u043b\u0430\u0439\u043d \u0441\u0434\u0430\u0447\u0430 \u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u043e\u0432 \u0438 \u044d\u043a\u0437\u0430\u043c\u0435\u043d\u043e\u0432', '\u041e\u0442\u0441\u0440\u043e\u0447\u043a\u0430 \u043c\u0430\u0433\u0438\u0441\u0442\u0440\u0430\u043d\u0442\u0430\u043c', '\u041e\u0442\u0447\u0438\u0441\u043b\u0435\u043d\u0438\u0435 \u043f\u043e \u0437\u0430\u0434\u043e\u043b\u0436\u0435\u043d\u043d\u043e\u0441\u0442\u0438', '\u041f\u0435\u0440\u0435\u0432\u043e\u0434 \u043c\u0435\u0436\u0434\u0443 \u0433\u0440\u0443\u043f\u043f\u0430\u043c\u0438 \u043a\u0443\u0440\u0441\u0430', '\u041f\u0435\u0440\u0435\u0432\u043e\u0434 \u043d\u0430 \u0434\u0440\u0443\u0433\u043e\u0435 \u043d\u0430\u043f\u0440\u0430\u0432\u043b\u0435\u043d\u0438\u0435', '\u041f\u0435\u0440\u0435\u0445\u043e\u0434 \u0438\u0437 \u0434\u0440\u0443\u0433\u043e\u0433\u043e \u0412\u0423\u0417\u0430', '\u041f\u0435\u0440\u0435\u0445\u043e\u0434 \u043d\u0430 \u0431\u044e\u0434\u0436\u0435\u0442', '\u041f\u043e\u043b\u0443\u0447\u0435\u043d\u0438\u0435 \u0441\u043f\u0440\u0430\u0432\u043a\u0438 \u043e\u0431 \u0443\u0447\u0451\u0431\u0435', '\u041f\u043e\u043b\u0443\u0447\u0435\u043d\u0438\u0435 \u0447\u0438\u0442\u0430\u0442\u0435\u043b\u044c\u0441\u043a\u043e\u0433\u043e \u0431\u0438\u043b\u0435\u0442\u0430', '\u041f\u043e\u0441\u043b\u0435\u0434\u043e\u0432\u0430\u0442\u0435\u043b\u044c\u043d\u043e\u0441\u0442\u044c \u0437\u0430\u0441\u0435\u043b\u0435\u043d\u0438\u044f \u0432 \u043e\u0431\u0449\u0435\u0436\u0438\u0442\u0438\u0435', '\u041f\u043e\u0442\u0435\u0440\u044f \u043f\u0440\u043e\u043f\u0443\u0441\u043a\u0430 \u0432 \u043e\u0431\u0449\u0435\u0436\u0438\u0442\u0438\u0435', '\u041f\u043e\u0442\u0435\u0440\u044f \u0441\u0442\u0443\u0434\u0435\u043d\u0447\u0435\u0441\u043a\u043e\u0433\u043e \u0431\u0438\u043b\u0435\u0442\u0430', '\u041f\u043e\u0442\u0435\u0440\u044f \u0447\u0438\u0442\u0430\u0442\u0435\u043b\u044c\u0441\u043a\u043e\u0433\u043e \u0431\u0438\u043b\u0435\u0442\u0430', '\u041f\u0440\u0430\u0432\u0438\u043b\u0430 \u043f\u0440\u043e\u0436\u0438\u0432\u0430\u043d\u0438\u044f \u0432 \u043e\u0431\u0449\u0435\u0436\u0438\u0442\u0438\u0438', 
'\u041f\u0440\u043e\u0434\u043e\u043b\u0436\u0438\u0442\u0435\u043b\u044c\u043d\u043e\u0441\u0442\u044c \u043f\u0440\u0430\u043a\u0442\u0438\u043a\u0438', '\u041f\u0440\u043e\u043f\u0443\u0441\u043a\u043d\u0430\u044f \u0441\u0438\u0441\u0442\u0435\u043c\u0430 \u043e\u0431\u0449\u0435\u0436\u0438\u0442\u0438\u044f', '\u041f\u0440\u043e\u0445\u043e\u0434\u043d\u044b\u0435 \u0431\u0430\u043b\u043b\u044b \u0438 \u0437\u0430\u0447\u0438\u0441\u043b\u0435\u043d\u0438\u0435', '\u041f\u0440\u043e\u0445\u043e\u0436\u0434\u0435\u043d\u0438\u0435 \u043f\u0440\u0430\u043a\u0442\u0438\u043a\u0438', '\u041f\u0440\u043e\u0448\u043b\u043e\u0433\u043e\u0434\u043d\u0438\u0435 \u044d\u043a\u0437\u0430\u043c\u0435\u043d\u044b', '\u0420\u0435\u0437\u0443\u043b\u044c\u0442\u0430\u0442\u044b \u044d\u043a\u0437\u0430\u043c\u0435\u043d\u043e\u0432 \u0438\u0437 \u0434\u0440\u0443\u0433\u043e\u0433\u043e \u0443\u043d\u0438\u0432\u0435\u0440\u0441\u0438\u0442\u0435\u0442\u0430', '\u0421\u043e\u0441\u0442\u0430\u0432 \u043f\u0440\u0430\u043a\u0442\u0438\u043a\u0438', '\u0421\u0440\u043e\u043a\u0438 \u043f\u0440\u0438\u0451\u043c\u043d\u043e\u0439 \u043a\u0430\u043c\u043f\u0430\u043d\u0438\u0438', '\u0421\u0442\u0438\u043f\u0435\u043d\u0434\u0438\u0430\u043b\u044c\u043d\u0430\u044f \u043a\u0430\u0440\u0442\u0430', '\u0421\u0442\u0438\u043f\u0435\u043d\u0434\u0438\u044f', '\u0421\u0442\u0438\u043f\u0435\u043d\u0434\u0438\u044f \u0432 \u043b\u0435\u0442\u043d\u0438\u0439 \u043f\u0435\u0440\u0438\u043e\u0434', '\u0421\u0442\u0438\u043f\u0435\u043d\u0434\u0438\u044f \u0434\u043b\u044f \u0443\u0447\u0430\u0449\u0438\u0445\u0441\u044f \u043d\u0430 \u043f\u043b\u0430\u0442\u043d\u043e\u0439 \u043e\u0441\u043d\u043e\u0432\u0435', '\u0421\u0442\u043e\u0438\u043c\u043e\u0441\u0442\u044c \u043f\u0440\u043e\u0436\u0438\u0432\u0430\u043d\u0438\u044f \u0432 \u043e\u0431\u0449\u0435\u0436\u0438\u0442\u0438\u0438', '\u0421\u0443\u043c\u043c\u0430 \u0441\u0442\u0438\u043f\u0435\u043d\u0434\u0438\u0438', '\u0422\u0440\u0430\u043d\u0441\u043f\u043e\u0440\u0442\u043d\u0430\u044f \u043a\u0430\u0440\u0442\u0430', '\u0422\u0440\u0435\u0431\u043e\u0432\u0430\u043d\u0438\u044f \u043a \u043a\u0440\u0430\u0441\u043d\u043e\u043c\u0443 \u0434\u0438\u043f\u043b\u043e\u043c\u0443', '\u0423\u0441\u043b\u043e\u0432\u0438\u044f \u043e\u0442\u0447\u0438\u0441\u043b\u0435\u043d\u0438\u044f', '\u0423\u0442\u0435\u0440\u044f \u0434\u0438\u043f\u043b\u043e\u043c\u0430', '\u0423\u0447\u0435\u0431\u043d\u044b\u0439 \u043f\u043b\u0430\u043d', '\u0423\u0447\u0435\u0431\u043d\u044b\u0439 \u043f\u0440\u043e\u0446\u0435\u0441\u0441', '\u0427\u0442\u043e \u0442\u0430\u043a\u043e\u0435 \u042d\u0418\u041e\u0421?', '\u042d\u043b\u0435\u043a\u0442\u0440\u043e\u043d\u043d\u044b\u0435 \u0438\u0441\u0442\u043e\u0447\u043d\u0438\u043a\u0438 \u0433\u0430\u0437\u0435\u0442 \u0438 \u0436\u0443\u0440\u043d\u0430\u043b\u043e\u0432'], id=None)", "text": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 3993 | | valid | 1006 |
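A brief sketch of loading the splits and decoding the `ClassLabel` targets back into the human-readable category names listed above; it assumes the repository follows the usual AutoTrain train/valid file layout:

```python
# Sketch only: assumes the standard AutoTrain split layout for this repo.
from datasets import load_dataset

ds = load_dataset("GRPUI/autotrain-data-sgugit-model-v4")

target_feature = ds["train"].features["target"]
example = ds["train"][0]
print(example["text"])
print(target_feature.int2str(example["target"]))  # human-readable class name
```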
GRPUI/autotrain-data-sgugit-model-v4
[ "task_categories:text-classification", "region:us" ]
2023-09-29T05:29:03+00:00
{"task_categories": ["text-classification"]}
2023-10-02T08:06:43+00:00
[]
[]
TAGS #task_categories-text-classification #region-us
AutoTrain Dataset for project: sgugit-model-v4 ============================================== Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project sgugit-model-v4. ### Languages The BCP-47 code for the dataset's language is unk. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#task_categories-text-classification #region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ 17, 27, 17, 23, 27 ]
[ "passage: TAGS\n#task_categories-text-classification #region-us \n### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from this dataset looks as follows:### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
ba0800325bb778199ea27edd8df10d377073131c
# Dataset Card for "test-books" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Andalf/test-books
[ "region:us" ]
2023-09-29T05:30:44+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 1950648.0, "num_examples": 238}, {"name": "test", "num_bytes": 221292.0, "num_examples": 27}], "download_size": 1101698, "dataset_size": 2171940.0}}
2023-09-29T05:30:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for "test-books" More Information needed
[ "# Dataset Card for \"test-books\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"test-books\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"test-books\"\n\nMore Information needed" ]