Dataset columns (name, type, and minimum/maximum length per value):

| Column | Type | Min length | Max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | list | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | list | 0 | 25 |
| languages | list | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | list | 0 | 352 |
| processed_texts | list | 1 | 353 |
| tokens_length | list | 1 | 353 |
| input_texts | list | 1 | 40 |
a975f98e5679cbad44d95e384bd2d6bb363f6199
# Dataset Card for Evaluation run of TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_TheBloke__chronos-wizardlm-uc-scot-st-13B-GPTQ_public", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-11-07T17:01:57.084059](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__chronos-wizardlm-uc-scot-st-13B-GPTQ_public/blob/main/results_2023-11-07T17-01-57.084059.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval): ```python { "all": { "em": 0.008284395973154363, "em_stderr": 0.0009282472025612514, "f1": 0.0820406879194631, "f1_stderr": 0.0018086518070639704, "acc": 0.40702937397863653, "acc_stderr": 0.009614901402107493 }, "harness|drop|3": { "em": 0.008284395973154363, "em_stderr": 0.0009282472025612514, "f1": 0.0820406879194631, "f1_stderr": 0.0018086518070639704 }, "harness|gsm8k|5": { "acc": 0.06899166034874905, "acc_stderr": 0.006980995834838566 }, "harness|winogrande|5": { "acc": 0.745067087608524, "acc_stderr": 0.012248806969376422 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_TheBloke__chronos-wizardlm-uc-scot-st-13B-GPTQ
[ "region:us" ]
2023-11-05T09:19:28+00:00
{"pretty_name": "Evaluation run of TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ", "dataset_summary": "Dataset automatically created during the evaluation run of model [TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__chronos-wizardlm-uc-scot-st-13B-GPTQ_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-07T17:01:57.084059](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__chronos-wizardlm-uc-scot-st-13B-GPTQ_public/blob/main/results_2023-11-07T17-01-57.084059.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.008284395973154363,\n \"em_stderr\": 0.0009282472025612514,\n \"f1\": 0.0820406879194631,\n \"f1_stderr\": 0.0018086518070639704,\n \"acc\": 0.40702937397863653,\n \"acc_stderr\": 0.009614901402107493\n },\n \"harness|drop|3\": {\n \"em\": 0.008284395973154363,\n \"em_stderr\": 0.0009282472025612514,\n \"f1\": 0.0820406879194631,\n \"f1_stderr\": 0.0018086518070639704\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.06899166034874905,\n \"acc_stderr\": 0.006980995834838566\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.745067087608524,\n \"acc_stderr\": 0.012248806969376422\n }\n}\n```", "repo_url": "https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_05T09_19_09.913548", "path": ["**/details_harness|drop|3_2023-11-05T09-19-09.913548.parquet"]}, {"split": "2023_11_07T17_01_57.084059", "path": ["**/details_harness|drop|3_2023-11-07T17-01-57.084059.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-07T17-01-57.084059.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_05T09_19_09.913548", "path": ["**/details_harness|gsm8k|5_2023-11-05T09-19-09.913548.parquet"]}, {"split": "2023_11_07T17_01_57.084059", "path": ["**/details_harness|gsm8k|5_2023-11-07T17-01-57.084059.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-07T17-01-57.084059.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_05T09_19_09.913548", "path": ["**/details_harness|winogrande|5_2023-11-05T09-19-09.913548.parquet"]}, {"split": "2023_11_07T17_01_57.084059", "path": 
["**/details_harness|winogrande|5_2023-11-07T17-01-57.084059.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-07T17-01-57.084059.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_05T09_19_09.913548", "path": ["results_2023-11-05T09-19-09.913548.parquet"]}, {"split": "2023_11_07T17_01_57.084059", "path": ["results_2023-11-07T17-01-57.084059.parquet"]}, {"split": "latest", "path": ["results_2023-11-07T17-01-57.084059.parquet"]}]}]}
2023-11-07T17:02:23+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ on the Open LLM Leaderboard. The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-07T17:01:57.084059 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-07T17:01:57.084059(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-07T17:01:57.084059(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 32, 31, 181, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-07T17:01:57.084059(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
5e6f49ef0acc2b55797aa44cb85a3e10ca107e49
# Dataset Card for "eurlexsum_ita_cleaned_32768_299" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gianma/eurlexsum_ita_cleaned_32768_299
[ "region:us" ]
2023-11-05T09:20:04+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "is_camera", "dtype": "bool"}, {"name": "reference", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "tokenized_len_total", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 27759232, "num_examples": 1097}, {"name": "validation", "num_bytes": 1700544, "num_examples": 63}, {"name": "test", "num_bytes": 1779936, "num_examples": 63}], "download_size": 12565212, "dataset_size": 31239712}}
2023-11-05T11:56:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for "eurlexsum_ita_cleaned_32768_299" More Information needed
[ "# Dataset Card for \"eurlexsum_ita_cleaned_32768_299\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"eurlexsum_ita_cleaned_32768_299\"\n\nMore Information needed" ]
[ 6, 25 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"eurlexsum_ita_cleaned_32768_299\"\n\nMore Information needed" ]
de0b66036741e785696af316807072db9febd036
# Dataset Card for FILE-CS <!-- Provide a quick summary of the dataset. --> ## Dataset Summary FILE-CS is the first dataset for the file-level code summarization task. FILE-CS contains 98,236 <code file, summary> pairs and is split into 78,588/9,824/9,824 examples for training/development/testing. ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> ### Data Instances A data point consists of a code file along with its documentation. Each data point also contains metadata on the file, such as the repository it was extracted from. ```python { 'index': '3526', 'file_contents': 'from http import HTTPStatus [...] return True', 'file_docstring_tokens': 'Support for functionality to download files' } ``` ### Data Fields * index: Attribute number * file_contents: All code in the file * file_docstring_tokens: File documentation ### Data Splits Three splits are available: * train * test * valid ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed]
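As a usage illustration (not part of the original card), here is a minimal loading sketch with the Hugging Face `datasets` library. It assumes the repository id `huangyx353/FILE-CS`, the default configuration, data files in a format the library can auto-detect, and the field names listed above.

```python
from datasets import load_dataset

# Sketch: load FILE-CS from the Hub (assumes the default configuration
# exposes the train/valid/test splits described in the card).
dataset = load_dataset("huangyx353/FILE-CS")

train = dataset["train"]
# Field names taken from the card: index, file_contents, file_docstring_tokens.
print(train.column_names)
print(train[0]["file_docstring_tokens"])  # the file-level summary of the first example
```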
huangyx353/FILE-CS
[ "region:us" ]
2023-11-05T09:27:28+00:00
{}
2023-11-10T16:22:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for FILE-CS ## Dataset Summary FILE-CS is the first dataset for the file-level code summarization task. FILE-CS contains 98,236 <code file, summary> pairs and is split into 78,588/9,824/9,824 examples for training/development/testing. ## Dataset Structure ### Data Instances A data point consists of a code file along with its documentation. Each data point also contains metadata on the file, such as the repository it was extracted from. ### Data Fields * index: Attribute number * file_contents: All code in the file * file_docstring_tokens: File documentation ### Data Splits Three splits are available: * train * test * valid [optional] BibTeX:
[ "# Dataset Card for FILE-CS", "## Dataset Summary\n\nFILE-CS is the first dataset for file-level code summarization task. FILE-CS obtains 98,236 <code file, summary> pairs and is split into 78,588/9,824/9,824 examples for training/development/testing.", "## Dataset Structure", "### Data Instances\nA data point consists of a file-level code along with its documentation. Each data point also contains meta data on the file, such as the repository it was extracted from.", "### Data Fields\n\n* index: Attribute number\n* file_contents: All codes in the file\n* file_docstring_tokens: File documentation", "### Data Splits\n\nThree splits are available:\n\n* train\n* test\n* valid\n\n\n\n\n[optional]\n\n\n\nBibTeX:" ]
[ "TAGS\n#region-us \n", "# Dataset Card for FILE-CS", "## Dataset Summary\n\nFILE-CS is the first dataset for file-level code summarization task. FILE-CS obtains 98,236 <code file, summary> pairs and is split into 78,588/9,824/9,824 examples for training/development/testing.", "## Dataset Structure", "### Data Instances\nA data point consists of a file-level code along with its documentation. Each data point also contains meta data on the file, such as the repository it was extracted from.", "### Data Fields\n\n* index: Attribute number\n* file_contents: All codes in the file\n* file_docstring_tokens: File documentation", "### Data Splits\n\nThree splits are available:\n\n* train\n* test\n* valid\n\n\n\n\n[optional]\n\n\n\nBibTeX:" ]
[ 6, 9, 66, 6, 46, 34, 26 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for FILE-CS## Dataset Summary\n\nFILE-CS is the first dataset for file-level code summarization task. FILE-CS obtains 98,236 <code file, summary> pairs and is split into 78,588/9,824/9,824 examples for training/development/testing.## Dataset Structure### Data Instances\nA data point consists of a file-level code along with its documentation. Each data point also contains meta data on the file, such as the repository it was extracted from.### Data Fields\n\n* index: Attribute number\n* file_contents: All codes in the file\n* file_docstring_tokens: File documentation### Data Splits\n\nThree splits are available:\n\n* train\n* test\n* valid\n\n\n\n\n[optional]\n\n\n\nBibTeX:" ]
58cafa0c3b73c44a88994d9c99043df9fc214076
# Dataset Card for "must-c-en-de-wait3-01" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
maxolotl/must-c-en-de-wait3-01
[ "region:us" ]
2023-11-05T10:12:57+00:00
{"dataset_info": {"features": [{"name": "current_source", "dtype": "string"}, {"name": "current_target", "dtype": "string"}, {"name": "target_token", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 806093772, "num_examples": 4513829}, {"name": "test", "num_bytes": 9925067, "num_examples": 57041}, {"name": "validation", "num_bytes": 4994760, "num_examples": 26843}], "download_size": 161231985, "dataset_size": 821013599}}
2023-11-05T10:13:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "must-c-en-de-wait3-01" More Information needed
[ "# Dataset Card for \"must-c-en-de-wait3-01\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"must-c-en-de-wait3-01\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-de-wait3-01\"\n\nMore Information needed" ]
a940e8035fbeaf52bb8580f401875596f2b8480a
# Dataset Card for "must-c-en-de-wait4-01" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
maxolotl/must-c-en-de-wait4-01
[ "region:us" ]
2023-11-05T10:13:36+00:00
{"dataset_info": {"features": [{"name": "current_source", "dtype": "string"}, {"name": "current_target", "dtype": "string"}, {"name": "target_token", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 826970789, "num_examples": 4513829}, {"name": "test", "num_bytes": 10182976, "num_examples": 57041}, {"name": "validation", "num_bytes": 5115344, "num_examples": 26843}], "download_size": 160313894, "dataset_size": 842269109}}
2023-11-05T10:13:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for "must-c-en-de-wait4-01" More Information needed
[ "# Dataset Card for \"must-c-en-de-wait4-01\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"must-c-en-de-wait4-01\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-de-wait4-01\"\n\nMore Information needed" ]
0acb7700fc484a28106729beca234b11ed447087
# Dataset Card for "must-c-en-de-wait5-01" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
maxolotl/must-c-en-de-wait5-01
[ "region:us" ]
2023-11-05T10:14:09+00:00
{"dataset_info": {"features": [{"name": "current_source", "dtype": "string"}, {"name": "current_target", "dtype": "string"}, {"name": "target_token", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 846818255, "num_examples": 4513829}, {"name": "test", "num_bytes": 10426751, "num_examples": 57041}, {"name": "validation", "num_bytes": 5229724, "num_examples": 26843}], "download_size": 159077466, "dataset_size": 862474730}}
2023-11-05T10:14:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for "must-c-en-de-wait5-01" More Information needed
[ "# Dataset Card for \"must-c-en-de-wait5-01\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"must-c-en-de-wait5-01\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-de-wait5-01\"\n\nMore Information needed" ]
20af4d4a9a6c736000367bf8b7385c0492e80688
# Dataset Card for "amazon_tts_encodec_v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lca0503/amazon_tts_encodec_v2
[ "region:us" ]
2023-11-05T10:14:33+00:00
{"dataset_info": {"features": [{"name": "file_id", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "src_encodec_0", "sequence": "int64"}, {"name": "src_encodec_1", "sequence": "int64"}, {"name": "src_encodec_2", "sequence": "int64"}, {"name": "src_encodec_3", "sequence": "int64"}, {"name": "src_encodec_4", "sequence": "int64"}, {"name": "src_encodec_5", "sequence": "int64"}, {"name": "src_encodec_6", "sequence": "int64"}, {"name": "src_encodec_7", "sequence": "int64"}, {"name": "tgt_encodec_0", "sequence": "int64"}, {"name": "tgt_encodec_1", "sequence": "int64"}, {"name": "tgt_encodec_2", "sequence": "int64"}, {"name": "tgt_encodec_3", "sequence": "int64"}, {"name": "tgt_encodec_4", "sequence": "int64"}, {"name": "tgt_encodec_5", "sequence": "int64"}, {"name": "tgt_encodec_6", "sequence": "int64"}, {"name": "tgt_encodec_7", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 6057049080, "num_examples": 171430}, {"name": "validation", "num_bytes": 351534634, "num_examples": 10000}, {"name": "test", "num_bytes": 353020020, "num_examples": 10000}], "download_size": 506178649, "dataset_size": 6761603734}}
2023-11-14T00:41:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "amazon_tts_encodec_v2" More Information needed
[ "# Dataset Card for \"amazon_tts_encodec_v2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"amazon_tts_encodec_v2\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"amazon_tts_encodec_v2\"\n\nMore Information needed" ]
e48f6016b72833177d28bbd20ed2c47c8e6e4983
# Dataset Card for "must-c-en-de-wait7-01" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
maxolotl/must-c-en-de-wait7-01
[ "region:us" ]
2023-11-05T10:14:40+00:00
{"dataset_info": {"features": [{"name": "current_source", "dtype": "string"}, {"name": "current_target", "dtype": "string"}, {"name": "target_token", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 883194948, "num_examples": 4513829}, {"name": "test", "num_bytes": 10870283, "num_examples": 57041}, {"name": "validation", "num_bytes": 5438711, "num_examples": 26843}], "download_size": 156235216, "dataset_size": 899503942}}
2023-11-05T10:15:02+00:00
[]
[]
TAGS #region-us
# Dataset Card for "must-c-en-de-wait7-01" More Information needed
[ "# Dataset Card for \"must-c-en-de-wait7-01\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"must-c-en-de-wait7-01\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-de-wait7-01\"\n\nMore Information needed" ]
72a5dee76702722c45daa580bd4a4805c193ca6b
# Dataset Card for "must-c-en-de-wait9-01" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
maxolotl/must-c-en-de-wait9-01
[ "region:us" ]
2023-11-05T10:15:15+00:00
{"dataset_info": {"features": [{"name": "current_source", "dtype": "string"}, {"name": "current_target", "dtype": "string"}, {"name": "target_token", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 915123929, "num_examples": 4513829}, {"name": "test", "num_bytes": 11255234, "num_examples": 57041}, {"name": "validation", "num_bytes": 5621779, "num_examples": 26843}], "download_size": 153197691, "dataset_size": 932000942}}
2023-11-05T10:15:37+00:00
[]
[]
TAGS #region-us
# Dataset Card for "must-c-en-de-wait9-01" More Information needed
[ "# Dataset Card for \"must-c-en-de-wait9-01\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"must-c-en-de-wait9-01\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-de-wait9-01\"\n\nMore Information needed" ]
00643730927e2b9d6ac394c4e47234d1eb901704
# Dataset Card for "must-c-en-es-wait4-01" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
maxolotl/must-c-en-es-wait4-01
[ "region:us" ]
2023-11-05T10:22:55+00:00
{"dataset_info": {"features": [{"name": "current_source", "dtype": "string"}, {"name": "current_target", "dtype": "string"}, {"name": "target_token", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1018568678, "num_examples": 5239386}, {"name": "test", "num_bytes": 10221278, "num_examples": 57187}, {"name": "validation", "num_bytes": 5552288, "num_examples": 27549}], "download_size": 183161579, "dataset_size": 1034342244}}
2023-11-05T10:23:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for "must-c-en-es-wait4-01" More Information needed
[ "# Dataset Card for \"must-c-en-es-wait4-01\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"must-c-en-es-wait4-01\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-es-wait4-01\"\n\nMore Information needed" ]
c7966f26268e1320902fbe04458b338e3c0e94ca
# Dataset Card for "must-c-en-es-wait5-01" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
maxolotl/must-c-en-es-wait5-01
[ "region:us" ]
2023-11-05T10:23:43+00:00
{"dataset_info": {"features": [{"name": "current_source", "dtype": "string"}, {"name": "current_target", "dtype": "string"}, {"name": "target_token", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1041109165, "num_examples": 5239386}, {"name": "test", "num_bytes": 10469241, "num_examples": 57187}, {"name": "validation", "num_bytes": 5668756, "num_examples": 27549}], "download_size": 181666335, "dataset_size": 1057247162}}
2023-11-05T10:24:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "must-c-en-es-wait5-01" More Information needed
[ "# Dataset Card for \"must-c-en-es-wait5-01\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"must-c-en-es-wait5-01\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-es-wait5-01\"\n\nMore Information needed" ]
bc8f7dcbd2edc852854094070459abd3f9d982e8
# Dataset Card for "must-c-en-es-wait7-01" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
maxolotl/must-c-en-es-wait7-01
[ "region:us" ]
2023-11-05T10:24:36+00:00
{"dataset_info": {"features": [{"name": "current_source", "dtype": "string"}, {"name": "current_target", "dtype": "string"}, {"name": "target_token", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1082486943, "num_examples": 5239386}, {"name": "test", "num_bytes": 10924140, "num_examples": 57187}, {"name": "validation", "num_bytes": 5882303, "num_examples": 27549}], "download_size": 178341886, "dataset_size": 1099293386}}
2023-11-05T10:25:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for "must-c-en-es-wait7-01" More Information needed
[ "# Dataset Card for \"must-c-en-es-wait7-01\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"must-c-en-es-wait7-01\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-es-wait7-01\"\n\nMore Information needed" ]
d9ee7060a78c91b131ab0f207da555ad9507bc31
# Dataset Card for "must-c-en-es-wait9-01" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
maxolotl/must-c-en-es-wait9-01
[ "region:us" ]
2023-11-05T10:25:18+00:00
{"dataset_info": {"features": [{"name": "current_source", "dtype": "string"}, {"name": "current_target", "dtype": "string"}, {"name": "target_token", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1118986767, "num_examples": 5239386}, {"name": "test", "num_bytes": 11323095, "num_examples": 57187}, {"name": "validation", "num_bytes": 6070911, "num_examples": 27549}], "download_size": 174902595, "dataset_size": 1136380773}}
2023-11-05T10:25:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for "must-c-en-es-wait9-01" More Information needed
[ "# Dataset Card for \"must-c-en-es-wait9-01\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"must-c-en-es-wait9-01\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-es-wait9-01\"\n\nMore Information needed" ]
5d433c9f94fd017974f651a8da42fd7c623dbb66
# Dataset Card for "meal_type" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Thefoodprocessor/meal_type
[ "region:us" ]
2023-11-05T10:34:28+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "recipe", "dtype": "string"}, {"name": "meal_type_title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 107900952, "num_examples": 74465}], "download_size": 54288492, "dataset_size": 107900952}}
2023-11-05T10:34:37+00:00
[]
[]
TAGS #region-us
# Dataset Card for "meal_type" More Information needed
[ "# Dataset Card for \"meal_type\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"meal_type\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"meal_type\"\n\nMore Information needed" ]
9c05eb64f5abafd6e883b9c4ee042019c675635c
# Dialogues-Data In this dataset you will find conversations between users and an AI assistant; the users have agreed to share their conversations. ## What can you use the dataset for? You can use this dataset however you wish, as long as you do not break the rules outlined in the last section! Example uses: 1. Train a neural network using OPEN SOURCE CODE 2. Use it in your own FREE dataset 3. Study the dialogues independently or in educational institutions 4. Compile statistics ## Rules If you use this dialogue dataset, you agree to: 1. Not use the dialogues for malicious purposes 2. Not use the dataset in proprietary products, for example, for training PAID neural networks
ehristoforu/dialogues
[ "task_categories:text-generation", "task_categories:conversational", "task_categories:text2text-generation", "size_categories:1K<n<10K", "language:en", "license:mit", "dialogues_data", "region:us" ]
2023-11-05T10:39:03+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation", "conversational", "text2text-generation"], "pretty_name": "dialogues", "tags": ["dialogues_data"]}
2024-02-09T12:12:23+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-conversational #task_categories-text2text-generation #size_categories-1K<n<10K #language-English #license-mit #dialogues_data #region-us
# Dialogues-Data In this dataset you will find conversations between users and an AI assistant; the users have agreed to share their conversations. ## What can you use the dataset for? You can use this dataset however you wish, as long as you do not break the rules outlined in the last section! Example uses: 1. Train a neural network using OPEN SOURCE CODE 2. Use it in your own FREE dataset 3. Study the dialogues independently or in educational institutions 4. Compile statistics ## Rules If you use this dialogue dataset, you agree to: 1. Not use the dialogues for malicious purposes 2. Not use the dataset in proprietary products, for example, for training PAID neural networks
[ "# Dialogues-Data\n\nIn this dataset you will find conversations between users with an AI assistant who have agreed to share their conversations.", "## What can you use the dataset for?\nYou can use this dataset as you wish without breaking the rules outlined in the last section!\nExamples use:\n1. Train a neural network using OPEN SOURCE CODE\n2. Use for your FREE dataset\n3. For studying dialogues independently or in educational institutions\n4. To compile statistics", "## Rules\nIf you use this dataset with dialogs, you agree to:\n1. Do not use dialogues for evil purposes\n2. Do not use the dataset in proprietary products, for example, for training PAID neural networks" ]
[ "TAGS\n#task_categories-text-generation #task_categories-conversational #task_categories-text2text-generation #size_categories-1K<n<10K #language-English #license-mit #dialogues_data #region-us \n", "# Dialogues-Data\n\nIn this dataset you will find conversations between users with an AI assistant who have agreed to share their conversations.", "## What can you use the dataset for?\nYou can use this dataset as you wish without breaking the rules outlined in the last section!\nExamples use:\n1. Train a neural network using OPEN SOURCE CODE\n2. Use for your FREE dataset\n3. For studying dialogues independently or in educational institutions\n4. To compile statistics", "## Rules\nIf you use this dataset with dialogs, you agree to:\n1. Do not use dialogues for evil purposes\n2. Do not use the dataset in proprietary products, for example, for training PAID neural networks" ]
[ 67, 29, 75, 51 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-conversational #task_categories-text2text-generation #size_categories-1K<n<10K #language-English #license-mit #dialogues_data #region-us \n# Dialogues-Data\n\nIn this dataset you will find conversations between users with an AI assistant who have agreed to share their conversations.## What can you use the dataset for?\nYou can use this dataset as you wish without breaking the rules outlined in the last section!\nExamples use:\n1. Train a neural network using OPEN SOURCE CODE\n2. Use for your FREE dataset\n3. For studying dialogues independently or in educational institutions\n4. To compile statistics## Rules\nIf you use this dataset with dialogs, you agree to:\n1. Do not use dialogues for evil purposes\n2. Do not use the dataset in proprietary products, for example, for training PAID neural networks" ]
1ae48732e4ed3c08287c25e2c40fced1725ea26e
# Usage Restrictions This dataset, and any derivatives generated from it, may be used for research purposes only; commercial use and any other use that could harm society are prohibited. This dataset does not represent the position, interests, or views of any party, and is unrelated to claims of any kind by any group. This project assumes no liability for any damage or dispute arising from the use of this dataset.
HackPig520/obs
[ "task_categories:token-classification", "size_categories:1K<n<10K", "language:zh", "language:en", "license:wtfpl", "not-for-all-audiences", "region:us" ]
2023-11-05T10:41:40+00:00
{"language": ["zh", "en"], "license": "wtfpl", "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"], "pretty_name": "OBS", "tags": ["not-for-all-audiences"]}
2023-11-05T10:52:06+00:00
[]
[ "zh", "en" ]
TAGS #task_categories-token-classification #size_categories-1K<n<10K #language-Chinese #language-English #license-wtfpl #not-for-all-audiences #region-us
# Usage Restrictions This dataset, and any derivatives generated from it, may be used for research purposes only; commercial use and any other use that could harm society are prohibited. This dataset does not represent the position, interests, or views of any party, and is unrelated to claims of any kind by any group. This project assumes no liability for any damage or dispute arising from the use of this dataset.
[ "# 使用限制\n\n仅允许将此数据集及使用此数据集生成的衍生物用于研究目的,不得用于商业,以及其他会对社会带来危害的用途。 本数据集不代表任何一方的立场、利益或想法,无关任何团体的任何类型的主张。因使用本数据集带来的任何损害、纠纷,本项目不承担任何责任。" ]
[ "TAGS\n#task_categories-token-classification #size_categories-1K<n<10K #language-Chinese #language-English #license-wtfpl #not-for-all-audiences #region-us \n", "# 使用限制\n\n仅允许将此数据集及使用此数据集生成的衍生物用于研究目的,不得用于商业,以及其他会对社会带来危害的用途。 本数据集不代表任何一方的立场、利益或想法,无关任何团体的任何类型的主张。因使用本数据集带来的任何损害、纠纷,本项目不承担任何责任。" ]
[ 56, 80 ]
[ "passage: TAGS\n#task_categories-token-classification #size_categories-1K<n<10K #language-Chinese #language-English #license-wtfpl #not-for-all-audiences #region-us \n# 使用限制\n\n仅允许将此数据集及使用此数据集生成的衍生物用于研究目的,不得用于商业,以及其他会对社会带来危害的用途。 本数据集不代表任何一方的立场、利益或想法,无关任何团体的任何类型的主张。因使用本数据集带来的任何损害、纠纷,本项目不承担任何责任。" ]
b53b794a703b76d46e9ded7ba0fb6c1ce0f96738
# Dataset Card for Evaluation run of TheBloke/EverythingLM-13B-16K-GPTQ ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/TheBloke/EverythingLM-13B-16K-GPTQ - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [TheBloke/EverythingLM-13B-16K-GPTQ](https://huggingface.co/TheBloke/EverythingLM-13B-16K-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_TheBloke__EverythingLM-13B-16K-GPTQ_public", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-11-07T12:26:38.184269](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__EverythingLM-13B-16K-GPTQ_public/blob/main/results_2023-11-07T12-26-38.184269.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval): ```python { "all": { "em": 0.002307046979865772, "em_stderr": 0.0004913221265094551, "f1": 0.05827705536912766, "f1_stderr": 0.0013555316279792778, "acc": 0.3836625531886884, "acc_stderr": 0.009461679390099264 }, "harness|drop|3": { "em": 0.002307046979865772, "em_stderr": 0.0004913221265094551, "f1": 0.05827705536912766, "f1_stderr": 0.0013555316279792778 }, "harness|gsm8k|5": { "acc": 0.053828658074298714, "acc_stderr": 0.0062163286402381465 }, "harness|winogrande|5": { "acc": 0.7134964483030781, "acc_stderr": 0.012707030139960381 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_TheBloke__EverythingLM-13B-16K-GPTQ
[ "region:us" ]
2023-11-05T10:46:07+00:00
{"pretty_name": "Evaluation run of TheBloke/EverythingLM-13B-16K-GPTQ", "dataset_summary": "Dataset automatically created during the evaluation run of model [TheBloke/EverythingLM-13B-16K-GPTQ](https://huggingface.co/TheBloke/EverythingLM-13B-16K-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__EverythingLM-13B-16K-GPTQ_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-07T12:26:38.184269](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__EverythingLM-13B-16K-GPTQ_public/blob/main/results_2023-11-07T12-26-38.184269.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.002307046979865772,\n \"em_stderr\": 0.0004913221265094551,\n \"f1\": 0.05827705536912766,\n \"f1_stderr\": 0.0013555316279792778,\n \"acc\": 0.3836625531886884,\n \"acc_stderr\": 0.009461679390099264\n },\n \"harness|drop|3\": {\n \"em\": 0.002307046979865772,\n \"em_stderr\": 0.0004913221265094551,\n \"f1\": 0.05827705536912766,\n \"f1_stderr\": 0.0013555316279792778\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.053828658074298714,\n \"acc_stderr\": 0.0062163286402381465\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7134964483030781,\n \"acc_stderr\": 0.012707030139960381\n }\n}\n```", "repo_url": "https://huggingface.co/TheBloke/EverythingLM-13B-16K-GPTQ", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_05T10_45_48.960213", "path": ["**/details_harness|drop|3_2023-11-05T10-45-48.960213.parquet"]}, {"split": "2023_11_07T12_26_38.184269", "path": ["**/details_harness|drop|3_2023-11-07T12-26-38.184269.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-07T12-26-38.184269.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_05T10_45_48.960213", "path": ["**/details_harness|gsm8k|5_2023-11-05T10-45-48.960213.parquet"]}, {"split": "2023_11_07T12_26_38.184269", "path": ["**/details_harness|gsm8k|5_2023-11-07T12-26-38.184269.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-07T12-26-38.184269.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_05T10_45_48.960213", "path": ["**/details_harness|winogrande|5_2023-11-05T10-45-48.960213.parquet"]}, {"split": "2023_11_07T12_26_38.184269", "path": ["**/details_harness|winogrande|5_2023-11-07T12-26-38.184269.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2023-11-07T12-26-38.184269.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_05T10_45_48.960213", "path": ["results_2023-11-05T10-45-48.960213.parquet"]}, {"split": "2023_11_07T12_26_38.184269", "path": ["results_2023-11-07T12-26-38.184269.parquet"]}, {"split": "latest", "path": ["results_2023-11-07T12-26-38.184269.parquet"]}]}]}
2023-11-07T12:27:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of TheBloke/EverythingLM-13B-16K-GPTQ ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model TheBloke/EverythingLM-13B-16K-GPTQ on the Open LLM Leaderboard. The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-07T12:26:38.184269 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of TheBloke/EverythingLM-13B-16K-GPTQ", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/EverythingLM-13B-16K-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-07T12:26:38.184269(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of TheBloke/EverythingLM-13B-16K-GPTQ", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/EverythingLM-13B-16K-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-07T12:26:38.184269(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 24, 31, 173, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TheBloke/EverythingLM-13B-16K-GPTQ## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/EverythingLM-13B-16K-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-07T12:26:38.184269(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
df10846857548769c1f4512ec0ccee77bf718f11
# Labyrinth Dataset Labyrinth is a code dataset that combines three existing datasets without modifying the data itself, only adapting the structure/format to streamline fine-tuning of [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on code. ## Dataset Sources Labyrinth is composed of code examples and instructions from the following three datasets: 1. [CodeAlpaca](https://github.com/sahil280114/codealpaca/blob/master/data/code_alpaca_20k.json) by [Sahil Chaudhary](https://huggingface.co/sahil2801). 2. [Codegen-instruct](https://github.com/teknium1/GPTeacher/blob/main/Codegen/codegen-instruct.json) by [Teknium](https://huggingface.co/teknium). 3. [llama-2-instruct-121k-code](https://huggingface.co/datasets/emre/llama-2-instruct-121k-code) by [Davut Emre TASAR](https://huggingface.co/emre).
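As an illustration (not part of the original card), here is a minimal sketch of pulling the combined dataset from the Hub and inspecting its structure. The repository id comes from this page; split and column names are not specified in the card, so the snippet only prints whatever the default configuration exposes, and it assumes the data files are in a format the `datasets` library can auto-detect.

```python
from datasets import load_dataset

# Sketch: load the combined Labyrinth dataset (repository id taken from this page).
labyrinth = load_dataset("pnkvalavala/Labyrinth")

# Inspect which splits and columns the default configuration provides,
# since the card itself does not list them.
print(labyrinth)
first_split = next(iter(labyrinth))
print(labyrinth[first_split].column_names)
```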
pnkvalavala/Labyrinth
[ "size_categories:100K<n<1M", "language:en", "license:mit", "code", "region:us" ]
2023-11-05T10:47:05+00:00
{"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "tags": ["code"]}
2023-11-05T10:49:15+00:00
[]
[ "en" ]
TAGS #size_categories-100K<n<1M #language-English #license-mit #code #region-us
# Labyrinth Dataset Labyrinth is a code dataset that combines three existing datasets without modifying the data itself but adapting the structure/format to streamline fine-tuning for Zephyr on code. ## Dataset Sources Labyrinth is composed of code examples and instructions from the following three datasets: 1. CodeAlpaca by Sahil Chaudhary. 2. Codegen-instruct by Teknium. 3. llama-2-instruct-121k-code by Davut Emre TASAR.
[ "# Labyrinth Dataset\n\nLabyrinth is a code dataset that combines three existing datasets without modifying the data itself but adapting the structure/format to streamline fine-tuning for Zephyr on code.", "## Dataset Sources\n\nLabyrinth is composed of code examples and instructions from the following three datasets:\n\n1. CodeAlpaca by Sahil Chaudhary.\n2. Codegen-instruct by Teknium.\n3. llama-2-instruct-121k-code by Davut Emre TASAR." ]
[ "TAGS\n#size_categories-100K<n<1M #language-English #license-mit #code #region-us \n", "# Labyrinth Dataset\n\nLabyrinth is a code dataset that combines three existing datasets without modifying the data itself but adapting the structure/format to streamline fine-tuning for Zephyr on code.", "## Dataset Sources\n\nLabyrinth is composed of code examples and instructions from the following three datasets:\n\n1. CodeAlpaca by Sahil Chaudhary.\n2. Codegen-instruct by Teknium.\n3. llama-2-instruct-121k-code by Davut Emre TASAR." ]
[ 29, 51, 67 ]
[ "passage: TAGS\n#size_categories-100K<n<1M #language-English #license-mit #code #region-us \n# Labyrinth Dataset\n\nLabyrinth is a code dataset that combines three existing datasets without modifying the data itself but adapting the structure/format to streamline fine-tuning for Zephyr on code.## Dataset Sources\n\nLabyrinth is composed of code examples and instructions from the following three datasets:\n\n1. CodeAlpaca by Sahil Chaudhary.\n2. Codegen-instruct by Teknium.\n3. llama-2-instruct-121k-code by Davut Emre TASAR." ]
54b04b188bc637a7625d1c3e7dbe3e2179b25483
# Dataset Card for "2022-president-candidates" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
brainer/2022-korea-politician-face
[ "region:us" ]
2023-11-05T10:48:44+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "ahn", "1": "heo", "2": "jundory", "3": "kim", "4": "lee", "5": "sim", "6": "yoon"}}}}], "splits": [{"name": "train", "num_bytes": 510125656.32, "num_examples": 3296}], "download_size": 458747655, "dataset_size": 510125656.32}}
2023-11-05T10:54:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "2022-president-candidates" More Information needed
[ "# Dataset Card for \"2022-president-candidates\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"2022-president-candidates\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"2022-president-candidates\"\n\nMore Information needed" ]
711ad12491e85e6b36772c8f363e0712603c95db
# Dataset Card for "contracts_v3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
paul-w-qs/contracts_v3
[ "region:us" ]
2023-11-05T10:49:30+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "N_ROWS", "dtype": "int64"}, {"name": "N_COLS", "dtype": "int64"}, {"name": "FONT_SIZE", "dtype": "int64"}, {"name": "FONT_NAME", "dtype": "string"}, {"name": "BORDER_THICKNESS", "dtype": "int64"}, {"name": "NOISED", "dtype": "bool"}, {"name": "LABEL_NOISE", "dtype": "bool"}, {"name": "JSON_LABEL", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 851851220.726, "num_examples": 10002}], "download_size": 788219279, "dataset_size": 851851220.726}}
2023-11-05T10:56:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "contracts_v3" More Information needed
[ "# Dataset Card for \"contracts_v3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"contracts_v3\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"contracts_v3\"\n\nMore Information needed" ]
836773f68990d4660333a4d2b0356809fd24cc06
This is the registry of models which can be used both in PyTorch (standalone) or in the GelGenie QuPath Extension. More details TBC
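A minimal sketch of pulling one model file from this registry for standalone use (the filename below is purely hypothetical for illustration; the actual file names in the repository may differ):

```python
from huggingface_hub import hf_hub_download

# Download a checkpoint from the registry; "example_model.pth" is a
# hypothetical filename used only to illustrate the call.
checkpoint_path = hf_hub_download(
    repo_id="mattaq/GelGenie-Model-Zoo",
    filename="example_model.pth",
)
print(checkpoint_path)
```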
mattaq/GelGenie-Model-Zoo
[ "license:apache-2.0", "region:us" ]
2023-11-05T11:17:28+00:00
{"license": "apache-2.0", "pretty_name": "GelGenie Model Zoo Registry."}
2023-12-27T15:03:50+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
This is the registry of models which can be used both in PyTorch (standalone) or in the GelGenie QuPath Extension. More details TBC
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
3b5051218ee1fb05dfac7de516bdf68729948218
# Dataset Card for Evaluation run of TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ](https://huggingface.co/TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_TheBloke__WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ_public", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-11-08T02:57:56.626250](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ_public/blob/main/results_2023-11-08T02-57-56.626250.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.15992030201342283, "em_stderr": 0.0037536320326496562, "f1": 0.2571140939597322, "f1_stderr": 0.0038666311684885475, "acc": 0.36986595642701264, "acc_stderr": 0.009605690477693173 }, "harness|drop|3": { "em": 0.15992030201342283, "em_stderr": 0.0037536320326496562, "f1": 0.2571140939597322, "f1_stderr": 0.0038666311684885475 }, "harness|gsm8k|5": { "acc": 0.05307050796057619, "acc_stderr": 0.006174868858638364 }, "harness|winogrande|5": { "acc": 0.6866614048934491, "acc_stderr": 0.013036512096747983 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_TheBloke__WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ
[ "region:us" ]
2023-11-05T11:28:54+00:00
{"pretty_name": "Evaluation run of TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ", "dataset_summary": "Dataset automatically created during the evaluation run of model [TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ](https://huggingface.co/TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-08T02:57:56.626250](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ_public/blob/main/results_2023-11-08T02-57-56.626250.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.15992030201342283,\n \"em_stderr\": 0.0037536320326496562,\n \"f1\": 0.2571140939597322,\n \"f1_stderr\": 0.0038666311684885475,\n \"acc\": 0.36986595642701264,\n \"acc_stderr\": 0.009605690477693173\n },\n \"harness|drop|3\": {\n \"em\": 0.15992030201342283,\n \"em_stderr\": 0.0037536320326496562,\n \"f1\": 0.2571140939597322,\n \"f1_stderr\": 0.0038666311684885475\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.05307050796057619,\n \"acc_stderr\": 0.006174868858638364\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6866614048934491,\n \"acc_stderr\": 0.013036512096747983\n }\n}\n```", "repo_url": "https://huggingface.co/TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_05T11_28_36.402381", "path": ["**/details_harness|drop|3_2023-11-05T11-28-36.402381.parquet"]}, {"split": "2023_11_08T02_57_56.626250", "path": ["**/details_harness|drop|3_2023-11-08T02-57-56.626250.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-08T02-57-56.626250.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_05T11_28_36.402381", "path": ["**/details_harness|gsm8k|5_2023-11-05T11-28-36.402381.parquet"]}, {"split": "2023_11_08T02_57_56.626250", "path": ["**/details_harness|gsm8k|5_2023-11-08T02-57-56.626250.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-08T02-57-56.626250.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_05T11_28_36.402381", "path": ["**/details_harness|winogrande|5_2023-11-05T11-28-36.402381.parquet"]}, {"split": 
"2023_11_08T02_57_56.626250", "path": ["**/details_harness|winogrande|5_2023-11-08T02-57-56.626250.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-08T02-57-56.626250.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_05T11_28_36.402381", "path": ["results_2023-11-05T11-28-36.402381.parquet"]}, {"split": "2023_11_08T02_57_56.626250", "path": ["results_2023-11-08T02-57-56.626250.parquet"]}, {"split": "latest", "path": ["results_2023-11-08T02-57-56.626250.parquet"]}]}]}
2023-11-08T02:58:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ on the Open LLM Leaderboard. The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-08T02:57:56.626250(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-08T02:57:56.626250(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-08T02:57:56.626250(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 36, 31, 185, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-08T02:57:56.626250(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
322675e00ac9764f6507e096c9fa8837c8f45353
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
speechGenius/swahili
[ "region:us" ]
2023-11-05T12:13:56+00:00
{}
2023-11-05T12:40:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 34, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
226cbf6f4c3bd07a37627f0792bedd296ba4efde
# Dataset Card for "detoxify-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mariammaher550/detoxify-dataset
[ "region:us" ]
2023-11-05T12:57:39+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15784576, "num_examples": 113758}], "download_size": 0, "dataset_size": 15784576}}
2023-11-05T12:59:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "detoxify-dataset" More Information needed
[ "# Dataset Card for \"detoxify-dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"detoxify-dataset\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"detoxify-dataset\"\n\nMore Information needed" ]
0be2ca84dc2193c8ec634a1aaa97bc466bc6bed3
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
St4n/dataset_2
[ "region:us" ]
2023-11-05T13:12:48+00:00
{}
2023-11-05T14:18:42+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 34, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
599d3cb3e8d8e5310dcc022ed08450cb7185a824
# Dataset Card for "contracts_v4" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
paul-w-qs/contracts_v4
[ "region:us" ]
2023-11-05T13:24:08+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "N_ROWS", "dtype": "int64"}, {"name": "N_COLS", "dtype": "int64"}, {"name": "FONT_SIZE", "dtype": "int64"}, {"name": "FONT_NAME", "dtype": "string"}, {"name": "BORDER_THICKNESS", "dtype": "int64"}, {"name": "NOISED", "dtype": "bool"}, {"name": "LABEL_NOISE", "dtype": "bool"}, {"name": "JSON_LABEL", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 804621271.1, "num_examples": 10002}], "download_size": 782850836, "dataset_size": 804621271.1}}
2023-11-05T13:30:13+00:00
[]
[]
TAGS #region-us
# Dataset Card for "contracts_v4" More Information needed
[ "# Dataset Card for \"contracts_v4\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"contracts_v4\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"contracts_v4\"\n\nMore Information needed" ]
58e8517a7d47f359fe881a6c7265829305f8d0f3
All instances from the `allegro/klej-dyk` (train, val, test) translated to English with Google Translate API. Columns: - `source` - text instance in Polish. - `target` - text instance in English.
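A minimal sketch for reading the parallel columns (the `train` split name is an assumption; the card does not state how the translated instances are split):

```python
from datasets import load_dataset

# Load the English-translated klej-dyk instances (split name assumed).
dyk_en = load_dataset("allegro/klej-dyk-en", split="train")

# Each row pairs the original Polish text with its machine translation.
row = dyk_en[0]
print("Polish :", row["source"])
print("English:", row["target"])
```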
allegro/klej-dyk-en
[ "task_categories:text-classification", "size_categories:n<1K", "language:pl", "language:en", "license:apache-2.0", "region:us" ]
2023-11-05T13:32:40+00:00
{"language": ["pl", "en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "pretty_name": "DYK translated to Englis"}
2023-11-05T13:53:53+00:00
[]
[ "pl", "en" ]
TAGS #task_categories-text-classification #size_categories-n<1K #language-Polish #language-English #license-apache-2.0 #region-us
All instances from the 'allegro/klej-dyk' (train, val, test) translated to English with Google Translate API. Columns: - 'source' - text instance in Polish. - 'target' - text instance in English.
[]
[ "TAGS\n#task_categories-text-classification #size_categories-n<1K #language-Polish #language-English #license-apache-2.0 #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-n<1K #language-Polish #language-English #license-apache-2.0 #region-us \n" ]
f098176116844ff37b444862c4da50af867e7cfe
# Dataset Card for "attemp" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
iashchak/attemp
[ "region:us" ]
2023-11-05T13:34:00+00:00
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 55451824, "num_examples": 28364}], "download_size": 27606753, "dataset_size": 55451824}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-05T13:34:07+00:00
[]
[]
TAGS #region-us
# Dataset Card for "attemp" More Information needed
[ "# Dataset Card for \"attemp\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"attemp\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"attemp\"\n\nMore Information needed" ]
1c5fdd09813ce9d22544a8728819ad05470c7927
# Dataset Card for "Bible-ACF-portuguese" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
luvres/Bible-ACF-portuguese
[ "region:us" ]
2023-11-05T13:34:34+00:00
{"dataset_info": {"features": [{"name": "book", "dtype": "string"}, {"name": "chapter", "dtype": "string"}, {"name": "verse", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "testament", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5514873, "num_examples": 29631}], "download_size": 2483576, "dataset_size": 5514873}}
2023-11-05T14:55:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Bible-ACF-portuguese" More Information needed
[ "# Dataset Card for \"Bible-ACF-portuguese\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Bible-ACF-portuguese\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Bible-ACF-portuguese\"\n\nMore Information needed" ]
f586625ab8ef2bc03693aebdfe3d20334af522ef
# Dataset Card for "tamilReview-ds-mini" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
thatbrowngirl/tamilReview-ds-mini
[ "region:us" ]
2023-11-05T13:41:17+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "review", "sequence": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 973458.45725, "num_examples": 3473}, {"name": "validation", "num_bytes": 108193.1945, "num_examples": 386}], "download_size": 0, "dataset_size": 1081651.65175}}
2023-11-05T15:01:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "tamilReview-ds-mini" More Information needed
[ "# Dataset Card for \"tamilReview-ds-mini\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"tamilReview-ds-mini\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"tamilReview-ds-mini\"\n\nMore Information needed" ]
e973dea471c69950d46b81eb52776cacfa656151
# Dataset Card for Evaluation run of TheBloke/orca_mini_13B-GPTQ ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/TheBloke/orca_mini_13B-GPTQ - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [TheBloke/orca_mini_13B-GPTQ](https://huggingface.co/TheBloke/orca_mini_13B-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_TheBloke__orca_mini_13B-GPTQ_public", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-11-07T10:33:18.298818](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__orca_mini_13B-GPTQ_public/blob/main/results_2023-11-07T10-33-18.298818.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.04047818791946309, "em_stderr": 0.002018262301743542, "f1": 0.11770658557046992, "f1_stderr": 0.002544480345951201, "acc": 0.3192425320418652, "acc_stderr": 0.007133502794987517 }, "harness|drop|3": { "em": 0.04047818791946309, "em_stderr": 0.002018262301743542, "f1": 0.11770658557046992, "f1_stderr": 0.002544480345951201 }, "harness|gsm8k|5": { "acc": 0.000758150113722517, "acc_stderr": 0.0007581501137225239 }, "harness|winogrande|5": { "acc": 0.6377269139700079, "acc_stderr": 0.01350885547625251 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_TheBloke__orca_mini_13B-GPTQ
[ "region:us" ]
2023-11-05T13:43:49+00:00
{"pretty_name": "Evaluation run of TheBloke/orca_mini_13B-GPTQ", "dataset_summary": "Dataset automatically created during the evaluation run of model [TheBloke/orca_mini_13B-GPTQ](https://huggingface.co/TheBloke/orca_mini_13B-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__orca_mini_13B-GPTQ_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-07T10:33:18.298818](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__orca_mini_13B-GPTQ_public/blob/main/results_2023-11-07T10-33-18.298818.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.04047818791946309,\n \"em_stderr\": 0.002018262301743542,\n \"f1\": 0.11770658557046992,\n \"f1_stderr\": 0.002544480345951201,\n \"acc\": 0.3192425320418652,\n \"acc_stderr\": 0.007133502794987517\n },\n \"harness|drop|3\": {\n \"em\": 0.04047818791946309,\n \"em_stderr\": 0.002018262301743542,\n \"f1\": 0.11770658557046992,\n \"f1_stderr\": 0.002544480345951201\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.000758150113722517,\n \"acc_stderr\": 0.0007581501137225239\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6377269139700079,\n \"acc_stderr\": 0.01350885547625251\n }\n}\n```", "repo_url": "https://huggingface.co/TheBloke/orca_mini_13B-GPTQ", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_05T13_43_32.201116", "path": ["**/details_harness|drop|3_2023-11-05T13-43-32.201116.parquet"]}, {"split": "2023_11_07T10_33_18.298818", "path": ["**/details_harness|drop|3_2023-11-07T10-33-18.298818.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-07T10-33-18.298818.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_05T13_43_32.201116", "path": ["**/details_harness|gsm8k|5_2023-11-05T13-43-32.201116.parquet"]}, {"split": "2023_11_07T10_33_18.298818", "path": ["**/details_harness|gsm8k|5_2023-11-07T10-33-18.298818.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-07T10-33-18.298818.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_05T13_43_32.201116", "path": ["**/details_harness|winogrande|5_2023-11-05T13-43-32.201116.parquet"]}, {"split": "2023_11_07T10_33_18.298818", "path": ["**/details_harness|winogrande|5_2023-11-07T10-33-18.298818.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2023-11-07T10-33-18.298818.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_05T13_43_32.201116", "path": ["results_2023-11-05T13-43-32.201116.parquet"]}, {"split": "2023_11_07T10_33_18.298818", "path": ["results_2023-11-07T10-33-18.298818.parquet"]}, {"split": "latest", "path": ["results_2023-11-07T10-33-18.298818.parquet"]}]}]}
2023-11-07T10:33:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of TheBloke/orca_mini_13B-GPTQ ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model TheBloke/orca_mini_13B-GPTQ on the Open LLM Leaderboard. The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-07T10:33:18.298818(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of TheBloke/orca_mini_13B-GPTQ", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/orca_mini_13B-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-07T10:33:18.298818(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of TheBloke/orca_mini_13B-GPTQ", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/orca_mini_13B-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-07T10:33:18.298818(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 24, 31, 173, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TheBloke/orca_mini_13B-GPTQ## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/orca_mini_13B-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-07T10:33:18.298818(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
a4d9fbca7c9d51c90e59b3dd08460351b2d112cd
# Dataset Card for "nmsqa_full" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yuhsinchan/nmsqa_full
[ "region:us" ]
2023-11-05T13:47:39+00:00
{"dataset_info": {"features": [{"name": "context_code", "sequence": "int16"}, {"name": "context_cnt", "sequence": "int16"}, {"name": "question_code", "sequence": "int16"}, {"name": "question_cnt", "sequence": "int16"}, {"name": "start_idx", "dtype": "int64"}, {"name": "end_idx", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 506838688, "num_examples": 87075}, {"name": "dev", "num_bytes": 62621788, "num_examples": 10493}], "download_size": 152779086, "dataset_size": 569460476}}
2023-11-05T13:49:07+00:00
[]
[]
TAGS #region-us
# Dataset Card for "nmsqa_full" More Information needed
[ "# Dataset Card for \"nmsqa_full\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"nmsqa_full\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"nmsqa_full\"\n\nMore Information needed" ]
a537b0c0acc011e30979e06fe47a925393329cdc
All instances from the `laugustyniak/abusive-clauses-pl` (train, val, test) translated to English with Google Translate API. Columns: - `source` - text instance in Polish. - `target` - text instance in English.
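A minimal loading sketch (the split name `train` is an assumption; the `source`/`target` columns follow the description above):

```python
from datasets import load_dataset

# Load the translated corpus; the split name is an assumption.
pairs = load_dataset("allegro/abusive-clauses-pl-en", split="train")

# Each row pairs the Polish source text with its English translation.
for row in pairs.select(range(3)):
    print(row["source"], "->", row["target"])
```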
allegro/abusive-clauses-pl-en
[ "task_categories:text-classification", "size_categories:n<1K", "language:pl", "language:en", "license:apache-2.0", "region:us" ]
2023-11-05T13:55:20+00:00
{"language": ["pl", "en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "pretty_name": "PAC translated to English"}
2023-11-05T13:57:05+00:00
[]
[ "pl", "en" ]
TAGS #task_categories-text-classification #size_categories-n<1K #language-Polish #language-English #license-apache-2.0 #region-us
All instances from the 'laugustyniak/abusive-clauses-pl' (train, val, test) translated to English with Google Translate API. Columns: - 'source' - text instance in Polish. - 'target' - text instance in English.
[]
[ "TAGS\n#task_categories-text-classification #size_categories-n<1K #language-Polish #language-English #license-apache-2.0 #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-n<1K #language-Polish #language-English #license-apache-2.0 #region-us \n" ]
9baf61a02e48769ba773f28e6f3da583821a6b97
All instances from the `clarin-pl/cst-wikinews` (train, val, test) translated to English with Google Translate API. Columns: - `source` - text instance in Polish. - `target` - text instance in English.
allegro/cst-wikinews-en
[ "task_categories:text-classification", "size_categories:n<1K", "language:pl", "language:en", "license:apache-2.0", "region:us" ]
2023-11-05T13:58:16+00:00
{"language": ["pl", "en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "pretty_name": "Cst-Wikinews translated to English"}
2023-11-05T14:07:26+00:00
[]
[ "pl", "en" ]
TAGS #task_categories-text-classification #size_categories-n<1K #language-Polish #language-English #license-apache-2.0 #region-us
All instances from the 'clarin-pl/cst-wikinews' (train, val, test) translated to English with Google Translate API. Columns: - 'source' - text instance in Polish. - 'target' - text instance in English.
[]
[ "TAGS\n#task_categories-text-classification #size_categories-n<1K #language-Polish #language-English #license-apache-2.0 #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-n<1K #language-Polish #language-English #license-apache-2.0 #region-us \n" ]
ef730af5978701f75ec6ee05787936d72e951f62
All instances from the `clarin-pl/polemo2-official` (train, val, test) translated to English with Google Translate API. Columns: - `source` - text instance in Polish. - `target` - text instance in English.
allegro/polemo2-official-en
[ "task_categories:text-classification", "size_categories:n<1K", "language:pl", "language:en", "license:apache-2.0", "region:us" ]
2023-11-05T13:58:44+00:00
{"language": ["pl", "en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "pretty_name": "Polemo-2 translated to English"}
2023-11-05T15:39:09+00:00
[]
[ "pl", "en" ]
TAGS #task_categories-text-classification #size_categories-n<1K #language-Polish #language-English #license-apache-2.0 #region-us
All instances from the 'clarin-pl/polemo2-official' (train, val, test) translated to English with Google Translate API. Columns: - 'source' - text instance in Polish. - 'target' - text instance in English.
[]
[ "TAGS\n#task_categories-text-classification #size_categories-n<1K #language-Polish #language-English #license-apache-2.0 #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-n<1K #language-Polish #language-English #license-apache-2.0 #region-us \n" ]
76893fc80ab3ee1d1209a8c34c810f341d76628d
All instances from the `allegro/klej-cdsc-e` (train, val, test) translated to English with Google Translate API. Columns: - `source` - text instance in Polish. - `target` - text instance in English.
allegro/klej-cdsc-e-en
[ "task_categories:text-classification", "size_categories:n<1K", "language:pl", "language:en", "license:apache-2.0", "region:us" ]
2023-11-05T13:59:20+00:00
{"language": ["pl", "en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "pretty_name": "CDSC-E translated to English"}
2023-11-05T15:37:00+00:00
[]
[ "pl", "en" ]
TAGS #task_categories-text-classification #size_categories-n<1K #language-Polish #language-English #license-apache-2.0 #region-us
All instances from the 'allegro/klej-cdsc-e' (train, val, test) translated to English with Google Translate API. Columns: - 'source' - text instance in Polish. - 'target' - text instance in English.
[]
[ "TAGS\n#task_categories-text-classification #size_categories-n<1K #language-Polish #language-English #license-apache-2.0 #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-n<1K #language-Polish #language-English #license-apache-2.0 #region-us \n" ]
f0fd22b76f74acabc2be1d71172f9cf254ed56b9
All instances from the `allegro/klej-nkjp-ner` (train, val, test) translated to English with Google Translate API. Columns: - `source` - text instance in Polish. - `target` - text instance in English.
allegro/klej-nkjp-ner-en
[ "task_categories:text-classification", "size_categories:n<1K", "language:en", "language:pl", "license:apache-2.0", "region:us" ]
2023-11-05T13:59:40+00:00
{"language": ["en", "pl"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "pretty_name": "NKPJ-NER translated to English"}
2023-11-05T16:46:24+00:00
[]
[ "en", "pl" ]
TAGS #task_categories-text-classification #size_categories-n<1K #language-English #language-Polish #license-apache-2.0 #region-us
All instances from the 'allegro/klej-nkjp-ner' (train, val, test) translated to English with Google Translate API. Columns: - 'source' - text instance in Polish. - 'target' - text instance in English.
[]
[ "TAGS\n#task_categories-text-classification #size_categories-n<1K #language-English #language-Polish #license-apache-2.0 #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-n<1K #language-English #language-Polish #license-apache-2.0 #region-us \n" ]
2bef9ee92b22de2db17c1ae9d2ec1b532afde287
# Dataset Card for Evaluation run of TheBloke/medalpaca-13B-GPTQ-4bit ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/TheBloke/medalpaca-13B-GPTQ-4bit - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [TheBloke/medalpaca-13B-GPTQ-4bit](https://huggingface.co/TheBloke/medalpaca-13B-GPTQ-4bit) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_TheBloke__medalpaca-13B-GPTQ-4bit_public", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-11-07T11:22:05.804023](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__medalpaca-13B-GPTQ-4bit_public/blob/main/results_2023-11-07T11-22-05.804023.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.06973573825503356, "em_stderr": 0.0026083779557512714, "f1": 0.12751992449664398, "f1_stderr": 0.0028759868015646797, "acc": 0.26558800315706393, "acc_stderr": 0.00701257132031976 }, "harness|drop|3": { "em": 0.06973573825503356, "em_stderr": 0.0026083779557512714, "f1": 0.12751992449664398, "f1_stderr": 0.0028759868015646797 }, "harness|gsm8k|5": { "acc": 0.0, "acc_stderr": 0.0 }, "harness|winogrande|5": { "acc": 0.5311760063141279, "acc_stderr": 0.01402514264063952 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_TheBloke__medalpaca-13B-GPTQ-4bit
[ "region:us" ]
2023-11-05T14:02:43+00:00
{"pretty_name": "Evaluation run of TheBloke/medalpaca-13B-GPTQ-4bit", "dataset_summary": "Dataset automatically created during the evaluation run of model [TheBloke/medalpaca-13B-GPTQ-4bit](https://huggingface.co/TheBloke/medalpaca-13B-GPTQ-4bit) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__medalpaca-13B-GPTQ-4bit_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-07T11:22:05.804023](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__medalpaca-13B-GPTQ-4bit_public/blob/main/results_2023-11-07T11-22-05.804023.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.06973573825503356,\n \"em_stderr\": 0.0026083779557512714,\n \"f1\": 0.12751992449664398,\n \"f1_stderr\": 0.0028759868015646797,\n \"acc\": 0.26558800315706393,\n \"acc_stderr\": 0.00701257132031976\n },\n \"harness|drop|3\": {\n \"em\": 0.06973573825503356,\n \"em_stderr\": 0.0026083779557512714,\n \"f1\": 0.12751992449664398,\n \"f1_stderr\": 0.0028759868015646797\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5311760063141279,\n \"acc_stderr\": 0.01402514264063952\n }\n}\n```", "repo_url": "https://huggingface.co/TheBloke/medalpaca-13B-GPTQ-4bit", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_05T14_02_24.762310", "path": ["**/details_harness|drop|3_2023-11-05T14-02-24.762310.parquet"]}, {"split": "2023_11_07T11_22_05.804023", "path": ["**/details_harness|drop|3_2023-11-07T11-22-05.804023.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-07T11-22-05.804023.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_05T14_02_24.762310", "path": ["**/details_harness|gsm8k|5_2023-11-05T14-02-24.762310.parquet"]}, {"split": "2023_11_07T11_22_05.804023", "path": ["**/details_harness|gsm8k|5_2023-11-07T11-22-05.804023.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-07T11-22-05.804023.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_05T14_02_24.762310", "path": ["**/details_harness|winogrande|5_2023-11-05T14-02-24.762310.parquet"]}, {"split": "2023_11_07T11_22_05.804023", "path": ["**/details_harness|winogrande|5_2023-11-07T11-22-05.804023.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2023-11-07T11-22-05.804023.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_05T14_02_24.762310", "path": ["results_2023-11-05T14-02-24.762310.parquet"]}, {"split": "2023_11_07T11_22_05.804023", "path": ["results_2023-11-07T11-22-05.804023.parquet"]}, {"split": "latest", "path": ["results_2023-11-07T11-22-05.804023.parquet"]}]}]}
2023-11-07T11:22:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of TheBloke/medalpaca-13B-GPTQ-4bit ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model TheBloke/medalpaca-13B-GPTQ-4bit on the Open LLM Leaderboard. The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-11-07T11:22:05.804023(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of TheBloke/medalpaca-13B-GPTQ-4bit", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/medalpaca-13B-GPTQ-4bit on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-07T11:22:05.804023(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of TheBloke/medalpaca-13B-GPTQ-4bit", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/medalpaca-13B-GPTQ-4bit on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-11-07T11:22:05.804023(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 24, 31, 173, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TheBloke/medalpaca-13B-GPTQ-4bit## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/medalpaca-13B-GPTQ-4bit on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-07T11:22:05.804023(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
11e94d01cb2c316525819bc9fcf7ac0a8454eb94
# Dataset Card for "blackboard_treebank_prompt" This dataset made from [blackboard treebank](https://bitbucket.org/kaamanita/blackboard-treebank). The dataset want to create Thai sentence by structure. The original dataset used own tags but we use Universal Dependencies tags, so we convert those tags into Universal Dependencies tags. [See blackboard treebank tags to Universal Dependencies tags](https://github.com/PyThaiNLP/pythainlp/blob/dev/pythainlp/tag/blackboard.py#L56C5-L56C17) Source code for create dataset: [https://github.com/PyThaiNLP/support-aya-datasets/blob/main/pos/blackboard_treebank_prompt.ipynb](https://github.com/PyThaiNLP/support-aya-datasets/blob/main/pos/blackboard_treebank_prompt.ipynb) ## Template ``` Inputs: จงสร้างประโยคตามโครงสร้าง {pos}: Targets: Thai sentence ``` pos: [All tag](https://universaldependencies.org/u/pos/) See more: [blackboard treebank](https://bitbucket.org/kaamanita/blackboard-treebank).
pythainlp/blackboard_treebank_prompt
[ "task_categories:text2text-generation", "task_categories:text-generation", "size_categories:10K<n<100K", "language:th", "license:cc-by-3.0", "region:us" ]
2023-11-05T14:18:23+00:00
{"language": ["th"], "license": "cc-by-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["text2text-generation", "text-generation"], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 26964824, "num_examples": 130454}], "download_size": 5903386, "dataset_size": 26964824}}
2023-11-05T14:44:09+00:00
[]
[ "th" ]
TAGS #task_categories-text2text-generation #task_categories-text-generation #size_categories-10K<n<100K #language-Thai #license-cc-by-3.0 #region-us
# Dataset Card for "blackboard_treebank_prompt" This dataset made from blackboard treebank. The dataset want to create Thai sentence by structure. The original dataset used own tags but we use Universal Dependencies tags, so we convert those tags into Universal Dependencies tags. See blackboard treebank tags to Universal Dependencies tags Source code for create dataset: URL ## Template pos: All tag See more: blackboard treebank.
[ "# Dataset Card for \"blackboard_treebank_prompt\"\n\nThis dataset made from blackboard treebank. The dataset want to create Thai sentence by structure.\n\nThe original dataset used own tags but we use Universal Dependencies tags, so we convert those tags into Universal Dependencies tags. See blackboard treebank tags to Universal Dependencies tags\n\n\nSource code for create dataset: URL", "## Template\n\npos: All tag\n\nSee more: blackboard treebank." ]
[ "TAGS\n#task_categories-text2text-generation #task_categories-text-generation #size_categories-10K<n<100K #language-Thai #license-cc-by-3.0 #region-us \n", "# Dataset Card for \"blackboard_treebank_prompt\"\n\nThis dataset made from blackboard treebank. The dataset want to create Thai sentence by structure.\n\nThe original dataset used own tags but we use Universal Dependencies tags, so we convert those tags into Universal Dependencies tags. See blackboard treebank tags to Universal Dependencies tags\n\n\nSource code for create dataset: URL", "## Template\n\npos: All tag\n\nSee more: blackboard treebank." ]
[ 56, 85, 14 ]
[ "passage: TAGS\n#task_categories-text2text-generation #task_categories-text-generation #size_categories-10K<n<100K #language-Thai #license-cc-by-3.0 #region-us \n# Dataset Card for \"blackboard_treebank_prompt\"\n\nThis dataset made from blackboard treebank. The dataset want to create Thai sentence by structure.\n\nThe original dataset used own tags but we use Universal Dependencies tags, so we convert those tags into Universal Dependencies tags. See blackboard treebank tags to Universal Dependencies tags\n\n\nSource code for create dataset: URL## Template\n\npos: All tag\n\nSee more: blackboard treebank." ]
74e567d2948324e156d35aa19d13e38d1e7bcbc2
# Dataset Card for "coastal" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
peldrak/coastal
[ "region:us" ]
2023-11-05T14:33:20+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 443045595.392, "num_examples": 1296}, {"name": "test", "num_bytes": 148303065.0, "num_examples": 370}], "download_size": 482329769, "dataset_size": 591348660.392}}
2023-11-05T15:22:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "coastal" More Information needed
[ "# Dataset Card for \"coastal\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"coastal\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"coastal\"\n\nMore Information needed" ]
082009cecb9d37353ce8f9d17855f68d1cb598e9
# Dataset Card for "ranking_options_processes" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Santp98/ranking_options_processes
[ "region:us" ]
2023-11-05T14:37:45+00:00
{"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "process_id", "dtype": "string"}, {"name": "description", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5619635, "num_examples": 23323}], "download_size": 3091438, "dataset_size": 5619635}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-05T14:37:48+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ranking_options_processes" More Information needed
[ "# Dataset Card for \"ranking_options_processes\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ranking_options_processes\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ranking_options_processes\"\n\nMore Information needed" ]
c27a97ee5e0e128aab92543c73865561f8486e37
This dataset primarily comprises sliced piano compositions from two games under miHoYo, namely "Genshin Impact" and "Honkai: Star Rail". These piano slices have been transformed into ABC musical notation. The annotated information includes structural details corresponding to the musical style of the in-game scenes. This dataset serves not only as an outcome of game music extraction but also as essential training material for research in the field of music generation, particularly focused on miHoYo game music. Researchers can delve into the analysis of musical features, such as notes and melodic structures, using this resource, offering substantive data support for the training and enhancement of music generation algorithms.

## Usage
```python
from datasets import load_dataset

genshin_data = load_dataset("MuGeminorum/hoyo_music", split="train")
for item in genshin_data:
    print(item)
```

## Maintenance
```bash
git clone [email protected]:datasets/MuGeminorum/hoyo_music
```

## Mirror
<https://www.modelscope.cn/datasets/MuGeminorum/hoyo_music>

## Reference
[1] <https://musescore.org><br>
[2] <https://huggingface.co/datasets/sander-wood/irishman><br>
[3] <https://genshin-impact.fandom.com/wiki/Genshin_Impact_Wiki><br>
[4] <https://honkai-star-rail.fandom.com/wiki/Honkai:_Star_Rail_Wiki>
MuGeminorum/hoyo_music
[ "task_categories:text-generation", "task_categories:text2text-generation", "task_categories:text-classification", "size_categories:n<1K", "language:en", "language:zh", "license:cc-by-sa-4.0", "art", "music", "region:us" ]
2023-11-05T15:07:57+00:00
{"language": ["en", "zh"], "license": "cc-by-sa-4.0", "size_categories": ["n<1K"], "task_categories": ["text-generation", "text2text-generation", "text-classification"], "pretty_name": "Dataset of mihoyo game songs in abc notation", "tags": ["art", "music"]}
2024-01-19T12:55:33+00:00
[]
[ "en", "zh" ]
TAGS #task_categories-text-generation #task_categories-text2text-generation #task_categories-text-classification #size_categories-n<1K #language-English #language-Chinese #license-cc-by-sa-4.0 #art #music #region-us
This dataset primarily comprises sliced piano compositions from two games under miHoYo, namely "Genshin Impact" and "Honkai: Star Rail". These piano slices have been transformed into ABC musical notation. The annotated information includes structural details corresponding to the musical style of the in-game scenes. This dataset serves not only as an outcome of game music extraction but also as essential training material for research in the field of music generation, particularly focused on mihoyo game music. Researchers can delve into the analysis of musical features, such as notes and melodic structures, using this resource, offering substantive data support for the training and enhancement of music generation algorithms. ## Usage ## Maintainence ## Mirror <URL ## Reference [1] <URL><br> [2] <URL [3] <URL [4] <URL
[ "## Usage", "## Maintainence", "## Mirror\n<URL", "## Reference\n[1] <URL><br>\n[2] <URL\n[3] <URL\n[4] <URL" ]
[ "TAGS\n#task_categories-text-generation #task_categories-text2text-generation #task_categories-text-classification #size_categories-n<1K #language-English #language-Chinese #license-cc-by-sa-4.0 #art #music #region-us \n", "## Usage", "## Maintainence", "## Mirror\n<URL", "## Reference\n[1] <URL><br>\n[2] <URL\n[3] <URL\n[4] <URL" ]
[ 75, 3, 4, 4, 17 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-text2text-generation #task_categories-text-classification #size_categories-n<1K #language-English #language-Chinese #license-cc-by-sa-4.0 #art #music #region-us \n## Usage## Maintainence## Mirror\n<URL## Reference\n[1] <URL><br>\n[2] <URL\n[3] <URL\n[4] <URL" ]
d8a3dba5c812e7df95e3f4f0083e90a16cfe2ba3
# Dataset Card for "lexicon-kh" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
seanghay/lexicon-kh
[ "region:us" ]
2023-11-05T15:29:28+00:00
{"dataset_info": {"features": [{"name": "t_id", "dtype": "int64"}, {"name": "t_cat_id", "dtype": "int64"}, {"name": "t_main_kh", "dtype": "string"}, {"name": "t_main_en", "dtype": "string"}, {"name": "t_main_fr", "dtype": "string"}, {"name": "t_isimg", "dtype": "int64"}, {"name": "t_desc", "dtype": "string"}, {"name": "t_lookup", "dtype": "float64"}, {"name": "t_active", "dtype": "int64"}, {"name": "created_at", "dtype": "string"}, {"name": "updated_at", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2456264, "num_examples": 3766}], "download_size": 825805, "dataset_size": 2456264}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-05T15:29:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for "lexicon-kh" More Information needed
[ "# Dataset Card for \"lexicon-kh\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"lexicon-kh\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"lexicon-kh\"\n\nMore Information needed" ]
b4d2777046efc120f4a89130a8afcc449db25dbc
# Lilium Albanicum Eng-Alb ![Lilium Albanicum Dataset of QA Translation Pairs curated for LLM finetuning.](https://huggingface.co/datasets/noxneural/lilium_albanicum_eng_alb/resolve/main/lilium_albanicum.png) **Task Categories**: - Translation - Question-Answering - Conversational **Languages**: English (en), Albanian (sq) **Size Categories**: 100K < n < 1M --- # Dataset Card for "Lilium Albanicum" ## Dataset Summary The Lilium Albanicum dataset is a comprehensive English-Albanian and Albanian-English parallel corpus. The dataset includes original translations and extended synthetic Q&A pairs, which are designed to support and optimize LLM translation tasks. The synthetic pairs are generated to mimic realistic conversational scenarios, aiding in the development of more effective translation models. ## Dataset Attribution ### Translation Process: The dataset comprises expert-generated translations, ensuring high-quality language pairs. The Q&A pairs are machine-generated, followed by rigorous human review and refinement to guarantee natural and coherent translations. ## Supported Tasks and Leaderboards This dataset is primarily tailored for translation, question-answering, and conversational tasks, aiming to improve bilingual models' performance with a focus on contextual understanding. ## Languages The dataset includes bilingual data in English (en) and Albanian (sq). ## Dataset Structure ### Data Instances A typical data instance includes a text pair in English and Albanian, reflecting a conversational exchange or a Q&A format suited for translation tasks. ### Data Fields - albanian: The corresponding Albanian translation of the text. - english: The English version of the text. - question: The question part of the conversational or Q&A context. - response: The response part of the conversational or Q&A context. - swapped: An integer (int64) indicating whether the roles in the conversation have been swapped. - system_prompt: A string containing system prompts or instructions related to the text entry. ### Data Splits The dataset is structured into appropriate splits for training, validation, and testing to facilitate effective machine learning practices. ## Dataset Creation ### Curation Rationale The creation of Lilium Albanicum aims to fill the gap in high-quality, conversational-context-focused datasets for English-Albanian translation tasks, thereby enhancing the capabilities of translation models. ### Source Data The source data originates from a well-established Albanian-English parallel corpus, enriched with synthetic yet realistic Q&A scenarios. ## Dataset Use ### Use Cases The dataset can be employed for various NLP tasks such as bilingual translation, conversational understanding, and question-answering systems development, both in academic research and practical applications. ### Usage Caveats The synthetic nature of some parts of the dataset may not encompass all nuances of natural language. Users should consider complementing it with naturally occurring text data for tasks requiring high levels of linguistic subtlety. ### Getting Started The dataset is accessible through the Hugging Face datasets library, with support for streaming to handle large datasets efficiently. 
________________________________________ **Dataset contributors**: - Marlind Maksuti (contact: [email protected]) - StochastX team **Acknowledgments**: Special thanks to the creators of the original Albanian-English parallel corpus MaCoCu-sq-en 1.0 and to all contributors who participated in the generation and refinement of the Q&A pairs. **License**: This work is licensed under the MIT license.
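A minimal streaming sketch for getting started (the split name is an assumption; column names follow the Data Fields section above):

```python
from datasets import load_dataset

# Stream the corpus instead of downloading it fully.
stream = load_dataset("noxneural/lilium_albanicum_eng_alb", split="train", streaming=True)

for i, row in enumerate(stream):
    print(row["english"], "|", row["albanian"])
    if i == 2:
        break
```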
noxneural/lilium_albanicum_eng_alb
[ "task_categories:translation", "task_categories:question-answering", "task_categories:conversational", "size_categories:100K<n<1M", "language:en", "language:sq", "region:us" ]
2023-11-05T16:02:00+00:00
{"language": ["en", "sq"], "size_categories": ["100K<n<1M"], "task_categories": ["translation", "question-answering", "conversational"], "pretty_name": "Lilium Albanicum Eng-Alb"}
2023-11-05T17:00:26+00:00
[]
[ "en", "sq" ]
TAGS #task_categories-translation #task_categories-question-answering #task_categories-conversational #size_categories-100K<n<1M #language-English #language-Albanian #region-us
# Lilium Albanicum Eng-Alb !Lilium Albanicum Dataset of QA Translation Pairs curated for LLM finetuning. Task Categories: - Translation - Question-Answering - Conversational Languages: English (en), Albanian (sq) Size Categories: 100K < n < 1M --- # Dataset Card for "Lilium Albanicum" ## Dataset Summary The Lilium Albanicum dataset is a comprehensive English-Albanian and Albanian-English parallel corpus. The dataset includes original translations and extended synthetic Q&A pairs, which are designed to support and optimize LLM translation tasks. The synthetic pairs are generated to mimic realistic conversational scenarios, aiding in the development of more effective translation models. ## Dataset Attribution ### Translation Process: The dataset comprises expert-generated translations, ensuring high-quality language pairs. The Q&A pairs are machine-generated, followed by rigorous human review and refinement to guarantee natural and coherent translations. ## Supported Tasks and Leaderboards This dataset is primarily tailored for translation, question-answering, and conversational tasks, aiming to improve bilingual models' performance with a focus on contextual understanding. ## Languages The dataset includes bilingual data in English (en) and Albanian (sq). ## Dataset Structure ### Data Instances A typical data instance includes a text pair in English and Albanian, reflecting a conversational exchange or a Q&A format suited for translation tasks. ### Data Fields - albanian: The corresponding Albanian translation of the text. - english: The English version of the text. - question: The question part of the conversational or Q&A context. - response: The response part of the conversational or Q&A context. - swapped: An integer (int64) indicating whether the roles in the conversation have been swapped. - system_prompt: A string containing system prompts or instructions related to the text entry. ### Data Splits The dataset is structured into appropriate splits for training, validation, and testing to facilitate effective machine learning practices. ## Dataset Creation ### Curation Rationale The creation of Lilium Albanicum aims to fill the gap in high-quality, conversational-context-focused datasets for English-Albanian translation tasks, thereby enhancing the capabilities of translation models. ### Source Data The source data originates from a well-established Albanian-English parallel corpus, enriched with synthetic yet realistic Q&A scenarios. ## Dataset Use ### Use Cases The dataset can be employed for various NLP tasks such as bilingual translation, conversational understanding, and question-answering systems development, both in academic research and practical applications. ### Usage Caveats The synthetic nature of some parts of the dataset may not encompass all nuances of natural language. Users should consider complementing it with naturally occurring text data for tasks requiring high levels of linguistic subtlety. ### Getting Started The dataset is accessible through the Hugging Face datasets library, with support for streaming to handle large datasets efficiently. ________________________________________ Dataset contributors: - Marlind Maksuti (contact: marlind.maksuti@URL) - StochastX team Acknowledgments: Special thanks to the creators of the original Albanian-English parallel corpus MaCoCu-sq-en 1.0 and to all contributors who participated in the generation and refinement of the Q&A pairs. License: This work is licensed under the MIT license.
[ "# Lilium Albanicum Eng-Alb\n\n\n!Lilium Albanicum Dataset of QA Translation Pairs curated for LLM finetuning.\n\nTask Categories:\n- Translation\n- Question-Answering\n- Conversational\n\nLanguages: English (en), Albanian (sq)\n\nSize Categories: 100K < n < 1M\n\n---", "# Dataset Card for \"Lilium Albanicum\"", "## Dataset Summary\n\nThe Lilium Albanicum dataset is a comprehensive English-Albanian and Albanian-English parallel corpus. The dataset includes original translations and extended synthetic Q&A pairs, which are designed to support and optimize LLM translation tasks. The synthetic pairs are generated to mimic realistic conversational scenarios, aiding in the development of more effective translation models.", "## Dataset Attribution", "### Translation Process:\n\nThe dataset comprises expert-generated translations, ensuring high-quality language pairs. The Q&A pairs are machine-generated, followed by rigorous human review and refinement to guarantee natural and coherent translations.", "## Supported Tasks and Leaderboards\n\nThis dataset is primarily tailored for translation, question-answering, and conversational tasks, aiming to improve bilingual models' performance with a focus on contextual understanding.", "## Languages\n\nThe dataset includes bilingual data in English (en) and Albanian (sq).", "## Dataset Structure", "### Data Instances\n\nA typical data instance includes a text pair in English and Albanian, reflecting a conversational exchange or a Q&A format suited for translation tasks.", "### Data Fields\n\n- albanian: The corresponding Albanian translation of the text.\n- english: The English version of the text.\n- question: The question part of the conversational or Q&A context.\n- response: The response part of the conversational or Q&A context.\n- swapped: An integer (int64) indicating whether the roles in the conversation have been swapped.\n- system_prompt: A string containing system prompts or instructions related to the text entry.", "### Data Splits\n\nThe dataset is structured into appropriate splits for training, validation, and testing to facilitate effective machine learning practices.", "## Dataset Creation", "### Curation Rationale\n\nThe creation of Lilium Albanicum aims to fill the gap in high-quality, conversational-context-focused datasets for English-Albanian translation tasks, thereby enhancing the capabilities of translation models.", "### Source Data\n\nThe source data originates from a well-established Albanian-English parallel corpus, enriched with synthetic yet realistic Q&A scenarios.", "## Dataset Use", "### Use Cases\n\nThe dataset can be employed for various NLP tasks such as bilingual translation, conversational understanding, and question-answering systems development, both in academic research and practical applications.", "### Usage Caveats\n\nThe synthetic nature of some parts of the dataset may not encompass all nuances of natural language. 
Users should consider complementing it with naturally occurring text data for tasks requiring high levels of linguistic subtlety.", "### Getting Started\n\nThe dataset is accessible through the Hugging Face datasets library, with support for streaming to handle large datasets efficiently.\n\n________________________________________\n\nDataset contributors:\n\n- Marlind Maksuti (contact: marlind.maksuti@URL)\n- StochastX team\n\nAcknowledgments:\n\nSpecial thanks to the creators of the original Albanian-English parallel corpus MaCoCu-sq-en 1.0 and to all contributors who participated in the generation and refinement of the Q&A pairs.\n\nLicense:\n\nThis work is licensed under the MIT license." ]
[ "TAGS\n#task_categories-translation #task_categories-question-answering #task_categories-conversational #size_categories-100K<n<1M #language-English #language-Albanian #region-us \n", "# Lilium Albanicum Eng-Alb\n\n\n!Lilium Albanicum Dataset of QA Translation Pairs curated for LLM finetuning.\n\nTask Categories:\n- Translation\n- Question-Answering\n- Conversational\n\nLanguages: English (en), Albanian (sq)\n\nSize Categories: 100K < n < 1M\n\n---", "# Dataset Card for \"Lilium Albanicum\"", "## Dataset Summary\n\nThe Lilium Albanicum dataset is a comprehensive English-Albanian and Albanian-English parallel corpus. The dataset includes original translations and extended synthetic Q&A pairs, which are designed to support and optimize LLM translation tasks. The synthetic pairs are generated to mimic realistic conversational scenarios, aiding in the development of more effective translation models.", "## Dataset Attribution", "### Translation Process:\n\nThe dataset comprises expert-generated translations, ensuring high-quality language pairs. The Q&A pairs are machine-generated, followed by rigorous human review and refinement to guarantee natural and coherent translations.", "## Supported Tasks and Leaderboards\n\nThis dataset is primarily tailored for translation, question-answering, and conversational tasks, aiming to improve bilingual models' performance with a focus on contextual understanding.", "## Languages\n\nThe dataset includes bilingual data in English (en) and Albanian (sq).", "## Dataset Structure", "### Data Instances\n\nA typical data instance includes a text pair in English and Albanian, reflecting a conversational exchange or a Q&A format suited for translation tasks.", "### Data Fields\n\n- albanian: The corresponding Albanian translation of the text.\n- english: The English version of the text.\n- question: The question part of the conversational or Q&A context.\n- response: The response part of the conversational or Q&A context.\n- swapped: An integer (int64) indicating whether the roles in the conversation have been swapped.\n- system_prompt: A string containing system prompts or instructions related to the text entry.", "### Data Splits\n\nThe dataset is structured into appropriate splits for training, validation, and testing to facilitate effective machine learning practices.", "## Dataset Creation", "### Curation Rationale\n\nThe creation of Lilium Albanicum aims to fill the gap in high-quality, conversational-context-focused datasets for English-Albanian translation tasks, thereby enhancing the capabilities of translation models.", "### Source Data\n\nThe source data originates from a well-established Albanian-English parallel corpus, enriched with synthetic yet realistic Q&A scenarios.", "## Dataset Use", "### Use Cases\n\nThe dataset can be employed for various NLP tasks such as bilingual translation, conversational understanding, and question-answering systems development, both in academic research and practical applications.", "### Usage Caveats\n\nThe synthetic nature of some parts of the dataset may not encompass all nuances of natural language. 
Users should consider complementing it with naturally occurring text data for tasks requiring high levels of linguistic subtlety.", "### Getting Started\n\nThe dataset is accessible through the Hugging Face datasets library, with support for streaming to handle large datasets efficiently.\n\n________________________________________\n\nDataset contributors:\n\n- Marlind Maksuti (contact: marlind.maksuti@URL)\n- StochastX team\n\nAcknowledgments:\n\nSpecial thanks to the creators of the original Albanian-English parallel corpus MaCoCu-sq-en 1.0 and to all contributors who participated in the generation and refinement of the Q&A pairs.\n\nLicense:\n\nThis work is licensed under the MIT license." ]
[ 58, 70, 11, 90, 4, 57, 50, 23, 6, 39, 107, 32, 5, 57, 39, 4, 45, 59, 130 ]
[ "passage: TAGS\n#task_categories-translation #task_categories-question-answering #task_categories-conversational #size_categories-100K<n<1M #language-English #language-Albanian #region-us \n# Lilium Albanicum Eng-Alb\n\n\n!Lilium Albanicum Dataset of QA Translation Pairs curated for LLM finetuning.\n\nTask Categories:\n- Translation\n- Question-Answering\n- Conversational\n\nLanguages: English (en), Albanian (sq)\n\nSize Categories: 100K < n < 1M\n\n---# Dataset Card for \"Lilium Albanicum\"## Dataset Summary\n\nThe Lilium Albanicum dataset is a comprehensive English-Albanian and Albanian-English parallel corpus. The dataset includes original translations and extended synthetic Q&A pairs, which are designed to support and optimize LLM translation tasks. The synthetic pairs are generated to mimic realistic conversational scenarios, aiding in the development of more effective translation models.## Dataset Attribution### Translation Process:\n\nThe dataset comprises expert-generated translations, ensuring high-quality language pairs. The Q&A pairs are machine-generated, followed by rigorous human review and refinement to guarantee natural and coherent translations.## Supported Tasks and Leaderboards\n\nThis dataset is primarily tailored for translation, question-answering, and conversational tasks, aiming to improve bilingual models' performance with a focus on contextual understanding.## Languages\n\nThe dataset includes bilingual data in English (en) and Albanian (sq).## Dataset Structure### Data Instances\n\nA typical data instance includes a text pair in English and Albanian, reflecting a conversational exchange or a Q&A format suited for translation tasks." ]
2e88336da24d6d430d64c02c0695c60e0b7e51e6
# Dataset Card for "code-dpo-classification" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DLI-Lab/code-dpo-classification
[ "region:us" ]
2023-11-05T16:04:25+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "eval", "path": "data/eval-*"}]}], "dataset_info": {"features": [{"name": "description", "dtype": "string"}, {"name": "index", "dtype": "int64"}, {"name": "invaluabe_feedback", "dtype": "string"}, {"name": "wrong_code", "dtype": "string"}, {"name": "valuabe_feedback", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 27040658, "num_examples": 17140}, {"name": "eval", "num_bytes": 2998200, "num_examples": 1904}], "download_size": 9705149, "dataset_size": 30038858}}
2023-11-05T16:10:32+00:00
[]
[]
TAGS #region-us
# Dataset Card for "code-dpo-classification" More Information needed
[ "# Dataset Card for \"code-dpo-classification\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"code-dpo-classification\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"code-dpo-classification\"\n\nMore Information needed" ]
d97ae2c2e84ba6dec64af06af4bf625674436692
# Dataset Card for "coastal2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
peldrak/coastal2
[ "region:us" ]
2023-11-05T16:23:02+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1098506769.894, "num_examples": 6594}, {"name": "test", "num_bytes": 173113819.0, "num_examples": 827}], "download_size": 1414219519, "dataset_size": 1271620588.894}}
2023-11-05T17:19:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for "coastal2" More Information needed
[ "# Dataset Card for \"coastal2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"coastal2\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"coastal2\"\n\nMore Information needed" ]
39a1138db8e927dc8d6b9043305c9467aa2e7bce
# Dataset Card for "wikilingua_data-cstnews_results" rouge={'rouge1': 0.19556460453535948, 'rouge2': 0.05415685751189013, 'rougeL': 0.12269113071402012, 'rougeLsum': 0.12269113071402012} Bert={'precision': 0.6433123989588486, 'recall': 0.7274074976785885, 'f1': 0.6818849594112707} moverscore: 0.5582209741427033
arthurmluz/wikilingua_data-cstnews_results
[ "region:us" ]
2023-11-05T16:49:36+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 28819014, "num_examples": 8165}], "download_size": 17413682, "dataset_size": 28819014}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]}
2023-11-13T19:16:42+00:00
[]
[]
TAGS #region-us
# Dataset Card for "wikilingua_data-cstnews_results" rouge={'rouge1': 0.19556460453535948, 'rouge2': 0.05415685751189013, 'rougeL': 0.12269113071402012, 'rougeLsum': 0.12269113071402012} Bert={'precision': 0.6433123989588486, 'recall': 0.7274074976785885, 'f1': 0.6818849594112707} moverscore: 0.5582209741427033
[ "# Dataset Card for \"wikilingua_data-cstnews_results\"\n\nrouge={'rouge1': 0.19556460453535948, 'rouge2': 0.05415685751189013, 'rougeL': 0.12269113071402012, 'rougeLsum': 0.12269113071402012}\n\nBert={'precision': 0.6433123989588486, 'recall': 0.7274074976785885, 'f1': 0.6818849594112707}\n\nmoverscore: 0.5582209741427033" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"wikilingua_data-cstnews_results\"\n\nrouge={'rouge1': 0.19556460453535948, 'rouge2': 0.05415685751189013, 'rougeL': 0.12269113071402012, 'rougeLsum': 0.12269113071402012}\n\nBert={'precision': 0.6433123989588486, 'recall': 0.7274074976785885, 'f1': 0.6818849594112707}\n\nmoverscore: 0.5582209741427033" ]
[ 6, 138 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"wikilingua_data-cstnews_results\"\n\nrouge={'rouge1': 0.19556460453535948, 'rouge2': 0.05415685751189013, 'rougeL': 0.12269113071402012, 'rougeLsum': 0.12269113071402012}\n\nBert={'precision': 0.6433123989588486, 'recall': 0.7274074976785885, 'f1': 0.6818849594112707}\n\nmoverscore: 0.5582209741427033" ]
eae8b25133ce0287b59e3097245b3f3b67d82ce4
Dataset using the bert-cased tokenizer, cutoff at 512 tokens. Original dataset: https://huggingface.co/datasets/wikipedia Variant: 20220301.en
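A rough sketch of how a pre-tokenized split like this might be produced; the exact checkpoint name (`bert-base-cased`) and the mapping options are assumptions rather than the script actually used:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumption: "bert-cased" refers to the standard bert-base-cased checkpoint
tok = AutoTokenizer.from_pretrained("bert-base-cased")
wiki = load_dataset("wikipedia", "20220301.en", split="train")

def tokenize(batch):
    # Truncate every article to at most 512 tokens
    return tok(batch["text"], truncation=True, max_length=512)

tokenized = wiki.map(tokenize, batched=True, remove_columns=wiki.column_names)
```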
gmongaras/wikipedia_BERT_512
[ "region:us" ]
2023-11-05T17:02:22+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 19918538280, "num_examples": 6458670}], "download_size": 4218892705, "dataset_size": 19918538280}}
2023-11-05T17:23:35+00:00
[]
[]
TAGS #region-us
Dataset using the bert-cased tokenizer, cutoff at 512 tokens. Original dataset: URL Variant: URL
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
515e7f3852f2b99f8d710b25a842dc45191d0aa1
# Dataset Card for "arxiv_maybe_about_new_datasets" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
davanstrien/arxiv_maybe_about_new_datasets
[ "region:us" ]
2023-11-05T17:30:04+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "submitter", "dtype": "string"}, {"name": "authors", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "comments", "dtype": "string"}, {"name": "journal-ref", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "report-no", "dtype": "string"}, {"name": "categories", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "versions", "list": [{"name": "version", "dtype": "string"}, {"name": "created", "dtype": "string"}]}, {"name": "update_date", "dtype": "timestamp[s]"}, {"name": "authors_parsed", "sequence": {"sequence": "string"}}, {"name": "predictions", "dtype": "string"}, {"name": "probabilities", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 746174024, "num_examples": 450403}], "download_size": 409173494, "dataset_size": 746174024}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-05T17:30:42+00:00
[]
[]
TAGS #region-us
# Dataset Card for "arxiv_maybe_about_new_datasets" More Information needed
[ "# Dataset Card for \"arxiv_maybe_about_new_datasets\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"arxiv_maybe_about_new_datasets\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"arxiv_maybe_about_new_datasets\"\n\nMore Information needed" ]
45b9e5b78a9afaa65b11f0053f4da735e23fac08
# Dataset Card for Titanic Data Training and testing data for Titanic passengers' survival. ## Dataset Details ### Dataset Description Train: - Dimensions --> 891x12 - Column names --> "PassengerId", "Survived", "Pclass", "Name", "Sex", "Age", "SibSp", "Parch", "Ticket", "Fare", "Cabin", and "Embarked" Test: - Dimensions --> 418x11 - Column names --> "PassengerId", "Pclass", "Name", "Sex", "Age", "SibSp", "Parch", "Ticket", "Fare", "Cabin", and "Embarked" ### Dataset Sources Kaggle Titanic dataset https://www.kaggle.com/competitions/titanic ## Uses Raw datasets used in an introduction to DVC and Amazon's S3 buckets. ## Dataset Structure # Column definitions: - "PassengerId" --> key for each passenger (int64) - "Survived" --> binary variable indicating survival (int64) - "Pclass" --> first, second, or third class (int64) - "Name" --> passenger name; maiden name in parentheses for married women (object) - "Sex" --> male or female (object) - "Age" --> passenger age (float64) - "SibSp" --> number of siblings or spouses aboard (int64) - "Parch" --> number of parents or children aboard (int64) - "Ticket" --> ticket identifier (object) - "Fare" --> ticket fare paid (float64) - "Cabin" --> cabin identifier (object) - "Embarked" --> port of embarkation: C, Q, or S (object) Categorical columns: "Name", "Sex", "Ticket", "Cabin", "Embarked" Continuous columns: "PassengerId", "Pclass", "SibSp", "Parch", "Age", "Fare" # Quick Facts: Train: - PassengerID, Survived, Pclass, Name, Sex, SibSp, Parch, Ticket, and Fare have no NA values - Age not documented for 177 passengers (19.8653% NA) - Cabin not documented for 687 passengers (77.1044% NA) - Embarked not documented for 2 passengers (0.2245% NA) Test: - PassengerID, Pclass, Name, Sex, SibSp, Parch, Ticket, and Embarked have no NA values - Age not documented for 86 passengers (20.5742% NA) - Fare not documented for 1 passenger (0.2392% NA) - Cabin not documented for 387 passengers (78.2297% NA) # Summary Statistics: Train: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65119c3f02dbe541c92539d4/AJLNDr1mDXEiTLn_JAH0h.png) Test: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65119c3f02dbe541c92539d4/PEnS25wxm6ymjgsI3QKtv.png) ## Dataset Card Author Maria Murphy
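A minimal sketch for verifying the missing-value percentages in the Quick Facts section, assuming the standard Kaggle files are saved locally as `train.csv` and `test.csv`:

```python
import pandas as pd

# Assumes the Kaggle Titanic CSVs are saved locally as train.csv and test.csv
for name in ["train.csv", "test.csv"]:
    df = pd.read_csv(name)
    na_pct = df.isna().mean().mul(100).round(4)  # percent of NA values per column
    print(name)
    print(na_pct[na_pct > 0])  # e.g. Age, Cabin, Embarked for the train split
```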
mariakmurphy55/titanicdata
[ "size_categories:1K<n<10K", "language:en", "region:us" ]
2023-11-05T17:32:10+00:00
{"language": ["en"], "size_categories": ["1K<n<10K"], "pretty_name": "titanic data"}
2023-11-11T00:14:14+00:00
[]
[ "en" ]
TAGS #size_categories-1K<n<10K #language-English #region-us
# Dataset Card for Titanic Data Training and testing data for Titanic passengers' survival. ## Dataset Details ### Dataset Description Train: - Dimensions --> 891x12 - Column names --> "PassengerId", "Survived", "Pclass", "Name", "Sex", "Age", "SibSp", "Parch", "Ticket", "Fare", "Cabin", and "Embarked" Test: - Dimensions --> 418x11 - Column names --> "PassengerId", "Pclass", "Name", "Sex", "Age", "SibSp", "Parch", "Ticket", "Fare", "Cabin", and "Embarked" ### Dataset Sources Kaggle Titanic dataset URL ## Uses Raw datasets being used in introduction to DVC and Amazon's S3 buckets. ## Dataset Structure # Column definitions: - "PassengerId" --> key for each passenger (int64) - "Survived" --> binary variable indicating survival (int64) - "Pclass" --> first, second, or third class (int64) - "Name" --> passenger name; maiden name in parentheses for married women (object) - "Sex" --> male or female (object) - "Age" --> passenger age (float64) - "SibSp" --> unknown meaning (int64) - "Parch" --> unknown meaning (int64) - "Ticket" --> ticket identifier (object) - "Fare" --> float variable (float64) - "Cabin" --> cabin identifier (object) - "Embarked" --> C, Q, or S (object) Categorical columns: "Name", "Sex", "Ticket", "Cabin", "Embarked" Continuous columns: "PassengerId", "Pclass", "SibSp", "Parch", "Age", "Fare" # Quick Facts: Train: - PassengerID, Survived, Pclass, Name, Sex, SibSp, Parch, Ticket, and Fare have no NA values - Age not documented for 177 passengers (19.8653% NA) - Cabin not documented for 687 passengers (77.1044% NA) - Embarked not documented for 2 passengers (0.2245% NA) Test: - PassengerID, Pclass, Name, Sex, SibSp, Parch, Ticket, and Embarked have no NA values - Age not documented for 86 passengers (20.5742% NA) - Fare not documented for 1 passenger (0.2392% NA) - Cabin not documented for 387 passengers (78.2297% NA) # Summary Statistics: Train: !image/png Test: !image/png ## Dataset Card Author Maria Murphy
[ "# Dataset Card for Titanic Data\n\nTraining and testing data for Titanic passengers' survival.", "## Dataset Details", "### Dataset Description\n\nTrain: \n- Dimensions --> 891x12\n- Column names --> \"PassengerId\", \"Survived\", \"Pclass\", \"Name\", \"Sex\", \"Age\", \"SibSp\", \"Parch\", \"Ticket\", \"Fare\", \"Cabin\", and \"Embarked\"\n\nTest:\n- Dimensions --> 418x11\n- Column names --> \"PassengerId\", \"Pclass\", \"Name\", \"Sex\", \"Age\", \"SibSp\", \"Parch\", \"Ticket\", \"Fare\", \"Cabin\", and \"Embarked\"", "### Dataset Sources\n\nKaggle Titanic dataset\nURL", "## Uses\n\nRaw datasets being used in introduction to DVC and Amazon's S3 buckets.", "## Dataset Structure", "# Column definitions:\n- \"PassengerId\" --> key for each passenger (int64)\n- \"Survived\" --> binary variable indicating survival (int64)\n- \"Pclass\" --> first, second, or third class (int64)\n- \"Name\" --> passenger name; maiden name in parentheses for married women (object)\n- \"Sex\" --> male or female (object)\n- \"Age\" --> passenger age (float64)\n- \"SibSp\" --> unknown meaning (int64)\n- \"Parch\" --> unknown meaning (int64)\n- \"Ticket\" --> ticket identifier (object)\n- \"Fare\" --> float variable (float64)\n- \"Cabin\" --> cabin identifier (object)\n- \"Embarked\" --> C, Q, or S (object)\n\nCategorical columns: \"Name\", \"Sex\", \"Ticket\", \"Cabin\", \"Embarked\"\n\nContinuous columns: \"PassengerId\", \"Pclass\", \"SibSp\", \"Parch\", \"Age\", \"Fare\"", "# Quick Facts:\nTrain:\n- PassengerID, Survived, Pclass, Name, Sex, SibSp, Parch, Ticket, and Fare have no NA values\n- Age not documented for 177 passengers (19.8653% NA)\n- Cabin not documented for 687 passengers (77.1044% NA)\n- Embarked not documented for 2 passengers (0.2245% NA)\n\nTest:\n- PassengerID, Pclass, Name, Sex, SibSp, Parch, Ticket, and Embarked have no NA values\n- Age not documented for 86 passengers (20.5742% NA)\n- Fare not documented for 1 passenger (0.2392% NA)\n- Cabin not documented for 387 passengers (78.2297% NA)", "# Summary Statistics:\nTrain:\n!image/png\n\nTest:\n!image/png", "## Dataset Card Author\n\nMaria Murphy" ]
[ "TAGS\n#size_categories-1K<n<10K #language-English #region-us \n", "# Dataset Card for Titanic Data\n\nTraining and testing data for Titanic passengers' survival.", "## Dataset Details", "### Dataset Description\n\nTrain: \n- Dimensions --> 891x12\n- Column names --> \"PassengerId\", \"Survived\", \"Pclass\", \"Name\", \"Sex\", \"Age\", \"SibSp\", \"Parch\", \"Ticket\", \"Fare\", \"Cabin\", and \"Embarked\"\n\nTest:\n- Dimensions --> 418x11\n- Column names --> \"PassengerId\", \"Pclass\", \"Name\", \"Sex\", \"Age\", \"SibSp\", \"Parch\", \"Ticket\", \"Fare\", \"Cabin\", and \"Embarked\"", "### Dataset Sources\n\nKaggle Titanic dataset\nURL", "## Uses\n\nRaw datasets being used in introduction to DVC and Amazon's S3 buckets.", "## Dataset Structure", "# Column definitions:\n- \"PassengerId\" --> key for each passenger (int64)\n- \"Survived\" --> binary variable indicating survival (int64)\n- \"Pclass\" --> first, second, or third class (int64)\n- \"Name\" --> passenger name; maiden name in parentheses for married women (object)\n- \"Sex\" --> male or female (object)\n- \"Age\" --> passenger age (float64)\n- \"SibSp\" --> unknown meaning (int64)\n- \"Parch\" --> unknown meaning (int64)\n- \"Ticket\" --> ticket identifier (object)\n- \"Fare\" --> float variable (float64)\n- \"Cabin\" --> cabin identifier (object)\n- \"Embarked\" --> C, Q, or S (object)\n\nCategorical columns: \"Name\", \"Sex\", \"Ticket\", \"Cabin\", \"Embarked\"\n\nContinuous columns: \"PassengerId\", \"Pclass\", \"SibSp\", \"Parch\", \"Age\", \"Fare\"", "# Quick Facts:\nTrain:\n- PassengerID, Survived, Pclass, Name, Sex, SibSp, Parch, Ticket, and Fare have no NA values\n- Age not documented for 177 passengers (19.8653% NA)\n- Cabin not documented for 687 passengers (77.1044% NA)\n- Embarked not documented for 2 passengers (0.2245% NA)\n\nTest:\n- PassengerID, Pclass, Name, Sex, SibSp, Parch, Ticket, and Embarked have no NA values\n- Age not documented for 86 passengers (20.5742% NA)\n- Fare not documented for 1 passenger (0.2392% NA)\n- Cabin not documented for 387 passengers (78.2297% NA)", "# Summary Statistics:\nTrain:\n!image/png\n\nTest:\n!image/png", "## Dataset Card Author\n\nMaria Murphy" ]
[ 22, 18, 4, 134, 12, 25, 6, 244, 167, 18, 7 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #language-English #region-us \n# Dataset Card for Titanic Data\n\nTraining and testing data for Titanic passengers' survival.## Dataset Details### Dataset Description\n\nTrain: \n- Dimensions --> 891x12\n- Column names --> \"PassengerId\", \"Survived\", \"Pclass\", \"Name\", \"Sex\", \"Age\", \"SibSp\", \"Parch\", \"Ticket\", \"Fare\", \"Cabin\", and \"Embarked\"\n\nTest:\n- Dimensions --> 418x11\n- Column names --> \"PassengerId\", \"Pclass\", \"Name\", \"Sex\", \"Age\", \"SibSp\", \"Parch\", \"Ticket\", \"Fare\", \"Cabin\", and \"Embarked\"### Dataset Sources\n\nKaggle Titanic dataset\nURL## Uses\n\nRaw datasets being used in introduction to DVC and Amazon's S3 buckets.## Dataset Structure# Column definitions:\n- \"PassengerId\" --> key for each passenger (int64)\n- \"Survived\" --> binary variable indicating survival (int64)\n- \"Pclass\" --> first, second, or third class (int64)\n- \"Name\" --> passenger name; maiden name in parentheses for married women (object)\n- \"Sex\" --> male or female (object)\n- \"Age\" --> passenger age (float64)\n- \"SibSp\" --> unknown meaning (int64)\n- \"Parch\" --> unknown meaning (int64)\n- \"Ticket\" --> ticket identifier (object)\n- \"Fare\" --> float variable (float64)\n- \"Cabin\" --> cabin identifier (object)\n- \"Embarked\" --> C, Q, or S (object)\n\nCategorical columns: \"Name\", \"Sex\", \"Ticket\", \"Cabin\", \"Embarked\"\n\nContinuous columns: \"PassengerId\", \"Pclass\", \"SibSp\", \"Parch\", \"Age\", \"Fare\"" ]
b64217381ba32f3cadcb74092464a285a69aa146
# Dataset Card for "wikilingua_data-xlsumm_results" rouge={'rouge1': 0.17979439718031803, 'rouge2': 0.042268965036649904, 'rougeL': 0.1366722006594874, 'rougeLsum': 0.1366722006594874} Bert={'precision': 0.7008500793632816, 'recall': 0.6581864103939514, 'f1': 0.6780840858169224} mover = 0.5849261045272657
arthurmluz/wikilingua_data-xlsum_results
[ "region:us" ]
2023-11-05T18:01:03+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 21339283, "num_examples": 8165}], "download_size": 12453886, "dataset_size": 21339283}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]}
2023-11-13T19:40:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for "wikilingua_data-xlsumm_results" rouge={'rouge1': 0.17979439718031803, 'rouge2': 0.042268965036649904, 'rougeL': 0.1366722006594874, 'rougeLsum': 0.1366722006594874} Bert={'precision': 0.7008500793632816, 'recall': 0.6581864103939514, 'f1': 0.6780840858169224} mover = 0.5849261045272657
[ "# Dataset Card for \"wikilingua_data-xlsumm_results\"\n\n\nrouge={'rouge1': 0.17979439718031803, 'rouge2': 0.042268965036649904, 'rougeL': 0.1366722006594874, 'rougeLsum': 0.1366722006594874}\n\nBert={'precision': 0.7008500793632816, 'recall': 0.6581864103939514, 'f1': 0.6780840858169224}\n\nmover = 0.5849261045272657" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"wikilingua_data-xlsumm_results\"\n\n\nrouge={'rouge1': 0.17979439718031803, 'rouge2': 0.042268965036649904, 'rougeL': 0.1366722006594874, 'rougeLsum': 0.1366722006594874}\n\nBert={'precision': 0.7008500793632816, 'recall': 0.6581864103939514, 'f1': 0.6780840858169224}\n\nmover = 0.5849261045272657" ]
[ 6, 139 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"wikilingua_data-xlsumm_results\"\n\n\nrouge={'rouge1': 0.17979439718031803, 'rouge2': 0.042268965036649904, 'rougeL': 0.1366722006594874, 'rougeLsum': 0.1366722006594874}\n\nBert={'precision': 0.7008500793632816, 'recall': 0.6581864103939514, 'f1': 0.6780840858169224}\n\nmover = 0.5849261045272657" ]
6fb5cfe084200f2b22aa2de218b2614711d29b0e
ORIGINALLY COMPILED ON WIKIPEDIA. Cleaned and improved from original version on Wikipedia, by completing some column information that was available elsewhere in the Wikipedia article. This is a dataset of tabular data regarding treaties between the USA and Native American Tribes/Nations, to date, including many executive orders. Table columns include Year, Date, Treaty name, "Alternative Treaty name", Statutes, "Land cession reference (Royce Area)", Tribe(s). All of those listed include the USA (United States of America) as a party; however, the Tribal or Native American Nations in each may vary and may on occasion not be the same decision-making parties as those that currently go by the same tribal names. Note that a significant number of these treaties have been broken and violated by the United States. There are constant and ongoing legal battles nationally in the USA and internationally to attempt to rectify the violations by the United States. From Wikipedia: From 1778 to 1871, the United States government entered into more than 500 treaties with the Native American tribes;[24] all of these treaties have since been violated in some way or outright broken by the U.S. government,[25][26][27][28] with Native Americans and First Nations peoples still fighting for their treaty rights in federal courts and at the United Nations.[26][29] In addition to treaties, which are ratified by the U.S. Senate and signed by the U.S. President, there were also Acts of Congress and Executive Orders which dealt with land agreements. The U.S. military and representatives of a tribe, or sub-unit of a tribe, signed documents which were understood at the time to be treaties, rather than armistices, ceasefires and truces. The entries from 1784 to 1895 were initially created by information gathered by Charles C. Royce[30] and published in the U.S. Serial Set,[31] Number 4015, 56th Congress, 1st Session, in 1899. The purpose of the Schedule of Indian Land Cessions was to indicate the location of each cession by or reservation for the Indian Tribes. Royce's column headings are titled: "Date, Where or how concluded, Reference, Tribe, Description of cession or reservation, historical data and remarks, Designation of cession on map, Number, Location".[32] The Ratified Indian Treaties that were transferred from the U.S. State Department to the National Archives were recently conserved and imaged for the first time, and in 2020 made available online with additional context at the Indigenous Digital Archive's Treaties Explorer, or DigiTreaties.org.[33][34] References from the Wikipedia page: The Avalon Project : Documents in Law, History and Diplomacy Archived 2007-06-27 at the Wayback Machine "Treaty on maritime boundaries between the United Mexican States and the United States of America" (PDF). 1978-05-04. "IDA Treaties Explorer". Indigenous Digital Archive. Santa Fe, New Mexico: Museum of Indian Arts & Culture. Retrieved 2020-10-19. Vargo, Samuel (21 November 2014). "With more than ..500 treaties already broken, the government can do whatever it wants, it seems..." Daily Kos. Retrieved 9 October 2016. More than 500 treaties have been made between the government and Indian tribes and all were broken, nullified or amended. Toensing, Gale Courey (23 August 2013). "'Honor the Treaties': UN Human Rights Chief's Message". Indian Country Today Media Network. Archived from the original on 7 October 2016. Retrieved 9 October 2016. The U.S.
federal government entered into more than 500 treaties with Indian nations from 1778 to 1871; every one of them was "broken, changed or nullified when it served the government's interests," Helen Oliff wrote in "Treaties Made, Treaties Broken." Egan, Timothy (25 June 2000). "Mending a Trail of Broken Treaties". The New York Times. Retrieved 17 September 2016. DeLoria, Jr., Vine (2010). Behind the Trail of Broken Treaties: An Indian Declaration of Independence. University of Texas Press. ISBN 978-0-292-70754-2. Entire book is dedicated to examining these broken treaties. Wildenthal, Bryan H. (2003). Native American Sovereignty on Trial: A Handbook with Cases, Laws, and Documents. ABC-CLIO. p. 122. ISBN 1-57607-625-3. The field of Indian law rests mainly on the old treaties. Charles C. Royce U.S. Serial Set Page 648 US Serial Set, Number 4015, 56th Congress, 1st Session Hundreds of Native American Treaties Digitized for the First Time Smithsonian Magazine 2020 October 15 National Archives and Museum of Indian Arts & Culture Share New Online Education Tool Expanding Access to Treaties between the U.S. and Native Nations. Blog of the Archivist of the United States. 2020 October 13 Kappler, Charles J. (1904). "Indian Affairs Laws and Treaties - Acts of Forty-third Congress - First Session 1874 - Chapter 136". Archived from the original on February 28, 2001.
pseudolab/US_Native_American_Tribal_Treaties_Table_from_Wikipedia
[ "license:cc-by-sa-4.0", "region:us" ]
2023-11-05T18:22:23+00:00
{"license": "cc-by-sa-4.0"}
2023-11-05T22:13:25+00:00
[]
[]
TAGS #license-cc-by-sa-4.0 #region-us
ORIGINALLY COMPILED ON WIKIPEDIA. Cleaned and improved from original version on Wikipedia, by completing some column information that was available elsewhere in the Wikipedia article. This is a dataset of tabular data regarding treaties between the USA and Native American Tribes/Nations, to date, including many executive orders. Table columns include Year, Date, Treaty name, "Alternative Treaty name", Statutes, "Land cession reference (Royce Area)", Tribe(s). All of those listed include the USA (United States of America) as a party; however, the Tribal or Native American Nations in each may vary and may on occasion not be the same decision-making parties as those that currently go by the same tribal names. Note that a significant number of these treaties have been broken and violated by the United States. There are constant and ongoing legal battles nationally in the USA and internationally to attempt to rectify the violations by the United States. From Wikipedia: From 1778 to 1871, the United States government entered into more than 500 treaties with the Native American tribes;[24] all of these treaties have since been violated in some way or outright broken by the U.S. government,[25][26][27][28] with Native Americans and First Nations peoples still fighting for their treaty rights in federal courts and at the United Nations.[26][29] In addition to treaties, which are ratified by the U.S. Senate and signed by the U.S. President, there were also Acts of Congress and Executive Orders which dealt with land agreements. The U.S. military and representatives of a tribe, or sub-unit of a tribe, signed documents which were understood at the time to be treaties, rather than armistices, ceasefires and truces. The entries from 1784 to 1895 were initially created by information gathered by Charles C. Royce[30] and published in the U.S. Serial Set,[31] Number 4015, 56th Congress, 1st Session, in 1899. The purpose of the Schedule of Indian Land Cessions was to indicate the location of each cession by or reservation for the Indian Tribes. Royce's column headings are titled: "Date, Where or how concluded, Reference, Tribe, Description of cession or reservation, historical data and remarks, Designation of cession on map, Number, Location".[32] The Ratified Indian Treaties that were transferred from the U.S. State Department to the National Archives were recently conserved and imaged for the first time, and in 2020 made available online with additional context at the Indigenous Digital Archive's Treaties Explorer, or URL.[33][34] References from the Wikipedia page: The Avalon Project : Documents in Law, History and Diplomacy Archived 2007-06-27 at the Wayback Machine "Treaty on maritime boundaries between the United Mexican States and the United States of America" (PDF). 1978-05-04. "IDA Treaties Explorer". Indigenous Digital Archive. Santa Fe, New Mexico: Museum of Indian Arts & Culture. Retrieved 2020-10-19. Vargo, Samuel (21 November 2014). "With more than ..500 treaties already broken, the government can do whatever it wants, it seems..." Daily Kos. Retrieved 9 October 2016. More than 500 treaties have been made between the government and Indian tribes and all were broken, nullified or amended. Toensing, Gale Courey (23 August 2013). "'Honor the Treaties': UN Human Rights Chief's Message". Indian Country Today Media Network. Archived from the original on 7 October 2016. Retrieved 9 October 2016. The U.S.
federal government entered into more than 500 treaties with Indian nations from 1778 to 1871; every one of them was "broken, changed or nullified when it served the government's interests," Helen Oliff wrote in "Treaties Made, Treaties Broken." Egan, Timothy (25 June 2000). "Mending a Trail of Broken Treaties". The New York Times. Retrieved 17 September 2016. DeLoria, Jr., Vine (2010). Behind the Trail of Broken Treaties: An Indian Declaration of Independence. University of Texas Press. ISBN 978-0-292-70754-2. Entire book is dedicated to examining these broken treaties. Wildenthal, Bryan H. (2003). Native American Sovereignty on Trial: A Handbook with Cases, Laws, and Documents. ABC-CLIO. p. 122. ISBN 1-57607-625-3. The field of Indian law rests mainly on the old treaties. Charles C. Royce U.S. Serial Set Page 648 US Serial Set, Number 4015, 56th Congress, 1st Session Hundreds of Native American Treaties Digitized for the First Time Smithsonian Magazine 2020 October 15 National Archives and Museum of Indian Arts & Culture Share New Online Education Tool Expanding Access to Treaties between the U.S. and Native Nations. Blog of the Archivist of the United States. 2020 October 13 Kappler, Charles J. (1904). "Indian Affairs Laws and Treaties - Acts of Forty-third Congress - First Session 1874 - Chapter 136". Archived from the original on February 28, 2001.
[]
[ "TAGS\n#license-cc-by-sa-4.0 #region-us \n" ]
[ 17 ]
[ "passage: TAGS\n#license-cc-by-sa-4.0 #region-us \n" ]
9ef74eb405fe73b22ec468b5902127ca91102012
# Dataset Card for "hestenettet" Subset of Gigaword. https://huggingface.co/datasets/DDSC/partial-danish-gigaword-no-twitter
mhenrichsen/hestenettet
[ "region:us" ]
2023-11-05T18:41:00+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "doc_id", "dtype": "string"}, {"name": "LICENSE", "dtype": "string"}, {"name": "uri", "dtype": "string"}, {"name": "date_built", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1227838360, "num_examples": 14498}], "download_size": 747772002, "dataset_size": 1227838360}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-05T18:56:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hestenettet" Subset of Gigaword. URL
[ "# Dataset Card for \"hestenettet\"\nSubset of Gigaword.\nURL" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hestenettet\"\nSubset of Gigaword.\nURL" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"hestenettet\"\nSubset of Gigaword.\nURL" ]
26698cd3660c0c3d992d38cb173612d93c7e83ef
# Dataset Card for "github_classification_no_empty_readme" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Ujan/github_classification_no_empty_readme
[ "region:us" ]
2023-11-05T18:50:54+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "names", "dtype": "string"}, {"name": "readmes", "dtype": "string"}, {"name": "topics", "dtype": "string"}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51299344.74701966, "num_examples": 10334}, {"name": "validation", "num_bytes": 6413659.126490169, "num_examples": 1292}, {"name": "test", "num_bytes": 6413659.126490169, "num_examples": 1292}], "download_size": 29121376, "dataset_size": 64126663.0}}
2023-11-05T18:51:37+00:00
[]
[]
TAGS #region-us
# Dataset Card for "github_classification_no_empty_readme" More Information needed
[ "# Dataset Card for \"github_classification_no_empty_readme\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"github_classification_no_empty_readme\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"github_classification_no_empty_readme\"\n\nMore Information needed" ]
756f9bbc84856abc48238c8a41fa2396c593384f
# Dataset Card for "sdu_es_train_topics_LDA" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tomashs/sdu_es_train_topics_LDA
[ "region:us" ]
2023-11-05T19:06:23+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "short_form", "dtype": "string"}, {"name": "long_form", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text_prep", "dtype": "string"}, {"name": "topic_vector", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 26198446, "num_examples": 22424}], "download_size": 4852552, "dataset_size": 26198446}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-05T19:06:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for "sdu_es_train_topics_LDA" More Information needed
[ "# Dataset Card for \"sdu_es_train_topics_LDA\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"sdu_es_train_topics_LDA\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"sdu_es_train_topics_LDA\"\n\nMore Information needed" ]
78cc096f23874088da5cb65ca2ea60d9a67000fc
# Dataset Card for "sdu_es_dev_topics_LDA" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tomashs/sdu_es_dev_topics_LDA
[ "region:us" ]
2023-11-05T19:06:27+00:00
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "acronym", "dtype": "string"}, {"name": "ID", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "text_prep", "dtype": "string"}, {"name": "topic_vector", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 948444, "num_examples": 818}], "download_size": 342169, "dataset_size": 948444}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-05T19:06:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for "sdu_es_dev_topics_LDA" More Information needed
[ "# Dataset Card for \"sdu_es_dev_topics_LDA\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"sdu_es_dev_topics_LDA\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"sdu_es_dev_topics_LDA\"\n\nMore Information needed" ]
be9d68d22b81887b98e33bdc0de515c445cdf2f8
# Dataset Card for "autodiagram" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Alchemy5/autodiagram
[ "region:us" ]
2023-11-05T19:16:13+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "images", "dtype": "image"}, {"name": "tex", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 63253979.0, "num_examples": 8000}, {"name": "validation", "num_bytes": 15682075.0, "num_examples": 2000}], "download_size": 62362302, "dataset_size": 78936054.0}}
2023-11-29T22:44:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "autodiagram" More Information needed
[ "# Dataset Card for \"autodiagram\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"autodiagram\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"autodiagram\"\n\nMore Information needed" ]
59141945cb3657f5478c4cb399a2734e1eec804e
### Format for Google Colaboratory

```
from pathlib import Path

# Clone the dataset repository and run its Colab formatting script
path = Path("rubiks_cube_segmentation")
!git clone https://huggingface.co/datasets/seandavidreed/rubiks_cube_segmentation $path
!python3 rubiks_cube_segmentation/format_colab.py $path
```
seandavidreed/rubiks_cube_segmentation
[ "license:apache-2.0", "region:us" ]
2023-11-05T19:55:56+00:00
{"license": "apache-2.0"}
2023-11-06T05:46:06+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
### Format for Google Colaboratory
[ "### Format for Google Colaboratory" ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "### Format for Google Colaboratory" ]
[ 14, 8 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n### Format for Google Colaboratory" ]
e1bfa5c144fdca6d80ef8b6024d40809f7f0d71a
# Dataset Card for "genaidata3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Bsbell21/genaidata3
[ "region:us" ]
2023-11-05T19:59:53+00:00
{"dataset_info": {"features": [{"name": "item", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "ad", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 890, "num_examples": 5}], "download_size": 3305, "dataset_size": 890}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-05T19:59:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for "genaidata3" More Information needed
[ "# Dataset Card for \"genaidata3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"genaidata3\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"genaidata3\"\n\nMore Information needed" ]
1c0a55e01d318ab0e9ffb40cf723c8bd5b403d50
# Dataset Card for "FLD_gen" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cestwc/FLD_gen
[ "region:us" ]
2023-11-05T20:30:05+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "hypothesis", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "hypothesis_formula", "dtype": "string"}, {"name": "context_formula", "dtype": "string"}, {"name": "proofs", "sequence": "string"}, {"name": "proof_label", "dtype": "string"}, {"name": "proofs_formula", "sequence": "string"}, {"name": "world_assump_label", "dtype": "string"}, {"name": "original_tree_depth", "dtype": "int64"}, {"name": "depth", "dtype": "int64"}, {"name": "num_formula_distractors", "dtype": "int64"}, {"name": "num_translation_distractors", "dtype": "int64"}, {"name": "num_all_distractors", "dtype": "int64"}, {"name": "negative_hypothesis", "dtype": "string"}, {"name": "negative_hypothesis_formula", "dtype": "string"}, {"name": "negative_original_tree_depth", "dtype": "int64"}, {"name": "negative_proofs", "sequence": "string"}, {"name": "negative_proof_label", "dtype": "string"}, {"name": "negative_world_assump_label", "dtype": "string"}, {"name": "prompt_serial", "dtype": "string"}, {"name": "proof_serial", "dtype": "string"}, {"name": "version", "dtype": "string"}, {"name": "premise", "dtype": "string"}, {"name": "assumptions", "sequence": "string"}, {"name": "paraphrased_premises", "sequence": "string"}, {"name": "paraphrased_premise", "dtype": "string"}, {"name": "assumption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 154414314, "num_examples": 36401}, {"name": "validation", "num_bytes": 25351138, "num_examples": 6004}, {"name": "test", "num_bytes": 25945020, "num_examples": 6160}], "download_size": 45117566, "dataset_size": 205710472}}
2023-11-05T20:30:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "FLD_gen" More Information needed
[ "# Dataset Card for \"FLD_gen\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"FLD_gen\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"FLD_gen\"\n\nMore Information needed" ]
f4f54de8cfd3f6349963e174b5997f4aeb061fc0
# Dataset Card for "Calc-X" This dataset is a concatenation of all arithmetical reasoning datasets of [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) that can be used without data leakages for training, validation and testing of models for arithmetical reasoning. Find more details in the following resources: - [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers - [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF - [**Calc-X and Calcformers paper (EMNLP 2023)**](https://arxiv.org/abs/2305.15017) - [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x) ## How was this dataset created Below is the code that was used to generate this dataset. ```python calcx_ds_names = ["gsm8k", "ape210k", "aqua_rat", "math_qa", "svamp", "asdiv_a", "mawps"] all_ds = { ds_name: datasets.load_dataset(f"MU-NLPC/calc-{ds_name}") for ds_name in calcx_ds_names } common_cols = ["id", "question", "chain", "result"] calcx = datasets.DatasetDict({ split: datasets.concatenate_datasets([ (all_ds[ds_name][split] .select_columns(common_cols) .add_column("source_ds", [ds_name] * len(all_ds[ds_name][split])) ) for ds_name in calcx_ds_names if split in all_ds[ds_name] ]) for split in ["train", "validation", "test"] }) calcx["train"] = calcx["train"].shuffle(seed=0) ``` ## Cite If you use this version of the dataset in research, please cite the [original GSM8K paper](https://arxiv.org/abs/2110.14168), and [Calc-X collection](https://arxiv.org/abs/2305.15017) as follows: ```bibtex @inproceedings{kadlcik-etal-2023-soft, title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems", author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek", booktitle = "Proceedings of the The 2023 Conference on Empirical Methods in Natural Language Processing: Main track", month = dec, year = "2023", address = "Singapore, Singapore", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2305.15017", } ```
MU-NLPC/Calc-X
[ "arxiv:2305.15017", "arxiv:2110.14168", "region:us" ]
2023-11-05T21:27:34+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "chain", "dtype": "string"}, {"name": "result", "dtype": "string"}, {"name": "source_ds", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2783755, "num_examples": 6096}, {"name": "train", "num_bytes": 156087951, "num_examples": 319169}, {"name": "validation", "num_bytes": 1425660, "num_examples": 3277}], "download_size": 72905795, "dataset_size": 160297366}}
2024-01-22T16:28:14+00:00
[ "2305.15017", "2110.14168" ]
[]
TAGS #arxiv-2305.15017 #arxiv-2110.14168 #region-us
# Dataset Card for "Calc-X" This dataset is a concatenation of all arithmetical reasoning datasets of Calc-X collection that can be used without data leakages for training, validation and testing of models for arithmetical reasoning. Find more details in the following resources: - Calc-X collection - datasets for training Calcformers - Calcformers collection - calculator-using models we trained and published on HF - Calc-X and Calcformers paper (EMNLP 2023) - Calc-X and Calcformers repo ## How was this dataset created Below is the code that was used to generate this dataset. ## Cite If you use this version of the dataset in research, please cite the original GSM8K paper, and Calc-X collection as follows:
[ "# Dataset Card for \"Calc-X\"\n\nThis dataset is a concatenation of all arithmetical reasoning datasets of Calc-X collection\nthat can be used without data leakages for training, validation and testing of models for arithmetical reasoning.\n\nFind more details in the following resources:\n\n- Calc-X collection - datasets for training Calcformers\n- Calcformers collection - calculator-using models we trained and published on HF\n- Calc-X and Calcformers paper (EMNLP 2023)\n- Calc-X and Calcformers repo", "## How was this dataset created\n\nBelow is the code that was used to generate this dataset.", "## Cite\n\nIf you use this version of the dataset in research, please cite the original GSM8K paper, and Calc-X collection as follows:" ]
[ "TAGS\n#arxiv-2305.15017 #arxiv-2110.14168 #region-us \n", "# Dataset Card for \"Calc-X\"\n\nThis dataset is a concatenation of all arithmetical reasoning datasets of Calc-X collection\nthat can be used without data leakages for training, validation and testing of models for arithmetical reasoning.\n\nFind more details in the following resources:\n\n- Calc-X collection - datasets for training Calcformers\n- Calcformers collection - calculator-using models we trained and published on HF\n- Calc-X and Calcformers paper (EMNLP 2023)\n- Calc-X and Calcformers repo", "## How was this dataset created\n\nBelow is the code that was used to generate this dataset.", "## Cite\n\nIf you use this version of the dataset in research, please cite the original GSM8K paper, and Calc-X collection as follows:" ]
[ 24, 135, 21, 34 ]
[ "passage: TAGS\n#arxiv-2305.15017 #arxiv-2110.14168 #region-us \n# Dataset Card for \"Calc-X\"\n\nThis dataset is a concatenation of all arithmetical reasoning datasets of Calc-X collection\nthat can be used without data leakages for training, validation and testing of models for arithmetical reasoning.\n\nFind more details in the following resources:\n\n- Calc-X collection - datasets for training Calcformers\n- Calcformers collection - calculator-using models we trained and published on HF\n- Calc-X and Calcformers paper (EMNLP 2023)\n- Calc-X and Calcformers repo## How was this dataset created\n\nBelow is the code that was used to generate this dataset.## Cite\n\nIf you use this version of the dataset in research, please cite the original GSM8K paper, and Calc-X collection as follows:" ]
cac39521d4e98ab22c8527bdb9c9a38c8d768dec
Native American Treaty Table from Wikipedia, 2023, cleaned and joined.
Solshine/Native_American_Treaty_Table_Formatted_Autotrain
[ "license:mit", "region:us" ]
2023-11-05T22:25:18+00:00
{"license": "mit"}
2023-11-30T23:52:53+00:00
[]
[]
TAGS #license-mit #region-us
Native American Treaty Table from Wikipedia, 2023, cleaned and joined.
[]
[ "TAGS\n#license-mit #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#license-mit #region-us \n" ]
db4f51d5fa8f030b239925f700e3d96118975814
# Dataset Card for "mental-health-chat-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mpingale/mental-health-chat-dataset
[ "region:us" ]
2023-11-05T22:36:04+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "questionID", "dtype": "int64"}, {"name": "questionTitle", "dtype": "string"}, {"name": "questionText", "dtype": "string"}, {"name": "questionLink", "dtype": "string"}, {"name": "topic", "dtype": "string"}, {"name": "therapistInfo", "dtype": "string"}, {"name": "therapistURL", "dtype": "string"}, {"name": "answerText", "dtype": "string"}, {"name": "upvotes", "dtype": "int64"}, {"name": "views", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7642829, "num_examples": 2775}], "download_size": 3638427, "dataset_size": 7642829}}
2023-11-05T22:36:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "mental-health-chat-dataset" More Information needed
[ "# Dataset Card for \"mental-health-chat-dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"mental-health-chat-dataset\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"mental-health-chat-dataset\"\n\nMore Information needed" ]
dfa257f119ad311019c106cb01858bca7656d33d
# Dataset Card for "wikilingua_data-xlsumm_cstnews_results" rouge={'rouge1': 0.22730958909234303, 'rouge2': 0.05480148947185013, 'rougeL': 0.1484336497540636, 'rougeLsum': 0.1484336497540636} Bert={'precision': 0.6786886892651607, 'recall': 0.7067214733716248, 'f1': 0.6914363930397652} mover = 0.5873519688127872
arthurmluz/wikilingua_data-xlsum_cstnews_results
[ "region:us" ]
2023-11-05T23:33:53+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 23663943, "num_examples": 8165}], "download_size": 14055635, "dataset_size": 23663943}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]}
2023-11-13T19:42:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for "wikilingua_data-xlsumm_cstnews_results" rouge={'rouge1': 0.22730958909234303, 'rouge2': 0.05480148947185013, 'rougeL': 0.1484336497540636, 'rougeLsum': 0.1484336497540636} Bert={'precision': 0.6786886892651607, 'recall': 0.7067214733716248, 'f1': 0.6914363930397652} mover = 0.5873519688127872
[ "# Dataset Card for \"wikilingua_data-xlsumm_cstnews_results\"\n\nrouge={'rouge1': 0.22730958909234303, 'rouge2': 0.05480148947185013, 'rougeL': 0.1484336497540636, 'rougeLsum': 0.1484336497540636}\n\nBert={'precision': 0.6786886892651607, 'recall': 0.7067214733716248, 'f1': 0.6914363930397652}\n\nmover = 0.5873519688127872" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"wikilingua_data-xlsumm_cstnews_results\"\n\nrouge={'rouge1': 0.22730958909234303, 'rouge2': 0.05480148947185013, 'rougeL': 0.1484336497540636, 'rougeLsum': 0.1484336497540636}\n\nBert={'precision': 0.6786886892651607, 'recall': 0.7067214733716248, 'f1': 0.6914363930397652}\n\nmover = 0.5873519688127872" ]
[ 6, 140 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"wikilingua_data-xlsumm_cstnews_results\"\n\nrouge={'rouge1': 0.22730958909234303, 'rouge2': 0.05480148947185013, 'rougeL': 0.1484336497540636, 'rougeLsum': 0.1484336497540636}\n\nBert={'precision': 0.6786886892651607, 'recall': 0.7067214733716248, 'f1': 0.6914363930397652}\n\nmover = 0.5873519688127872" ]
d871bcbc7412ecf8fea4b8c683fe88653419a2b0
# AutoTrain Dataset for project: rice_diagnosis

## Dataset Description

This dataset has been automatically processed by AutoTrain for project rice_diagnosis. Originally from Kaggle, it contains close-up pictures of rice leaves labeled with the disease whose symptoms they show.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "image": "<3081x897 RGB PIL image>",
    "target": 0
  },
  {
    "image": "<3081x897 RGB PIL image>",
    "target": 0
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "image": "Image(decode=True, id=None)",
  "target": "ClassLabel(names=['Bacterial leaf blight', 'Brown spot', 'Leaf smut'], id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
| ---------- | ----------- |
| train      | 96          |
| valid      | 24          |
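A minimal loading sketch, assuming the repository can be read directly with the `datasets` library and that `target` is the ClassLabel described above:

```python
from datasets import load_dataset

# Assumption: the AutoTrain-processed repo loads with its default config and a "train" split
ds = load_dataset("Solshine/Rice_Diagnosis_Leaf_Images_FromKaggle", split="train")

label_names = ds.features["target"].names  # ['Bacterial leaf blight', 'Brown spot', 'Leaf smut']
sample = ds[0]
print(label_names[sample["target"]], sample["image"].size)
```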
Solshine/Rice_Diagnosis_Leaf_Images_FromKaggle
[ "task_categories:image-classification", "region:us" ]
2023-11-06T00:40:11+00:00
{"task_categories": ["image-classification"]}
2023-11-12T03:48:13+00:00
[]
[]
TAGS #task_categories-image-classification #region-us
AutoTrain Dataset for project: rice\_diagnosis ============================================== Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project rice\_diagnosis. Originally from Kaggle, this shows rice leaves (leaf) up close pictures labeled with the disease of which they show symptoms. ### Languages The BCP-47 code for the dataset's language is unk. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#task_categories-image-classification #region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ 17, 27, 17, 23, 27 ]
[ "passage: TAGS\n#task_categories-image-classification #region-us \n### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from this dataset looks as follows:### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
637699d358923c260a8c16a9aa2611698fe65dc6
# Dataset Card for "ha-en_RL-grow2_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pranjali97/ha-en_RL-grow2_train
[ "region:us" ]
2023-11-06T01:10:57+00:00
{"dataset_info": {"features": [{"name": "src", "dtype": "string"}, {"name": "ref", "dtype": "string"}, {"name": "mt", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 22489313, "num_examples": 49090}], "download_size": 3841732, "dataset_size": 22489313}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-06T01:11:00+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ha-en_RL-grow2_train" More Information needed
[ "# Dataset Card for \"ha-en_RL-grow2_train\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ha-en_RL-grow2_train\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ha-en_RL-grow2_train\"\n\nMore Information needed" ]
fc1a81123737dd43c563f9aa5e76b35799d0cd2b
# Dataset Card for "ha-en_RL-grow2_valid" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pranjali97/ha-en_RL-grow2_valid
[ "region:us" ]
2023-11-06T01:11:05+00:00
{"dataset_info": {"features": [{"name": "src", "dtype": "string"}, {"name": "ref", "dtype": "string"}, {"name": "mt", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 2573608, "num_examples": 5565}], "download_size": 442593, "dataset_size": 2573608}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-06T01:11:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ha-en_RL-grow2_valid" More Information needed
[ "# Dataset Card for \"ha-en_RL-grow2_valid\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ha-en_RL-grow2_valid\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ha-en_RL-grow2_valid\"\n\nMore Information needed" ]
7da00bf2987bd3512a3a17bfe760fecce530f890
# Dataset Card for "dataset_line_connect" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ademax/dataset_line_connect
[ "region:us" ]
2023-11-06T01:46:18+00:00
{"dataset_info": {"features": [{"name": "lineA", "dtype": "string"}, {"name": "lineB", "dtype": "string"}, {"name": "is_join", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1143553535, "num_examples": 10001530}], "download_size": 412153174, "dataset_size": 1143553535}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-06T01:47:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dataset_line_connect" More Information needed
[ "# Dataset Card for \"dataset_line_connect\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dataset_line_connect\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dataset_line_connect\"\n\nMore Information needed" ]
691cb018bb12ad9a5be60de7d56d4494439d2520
# Dataset Card for "mbt_0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
okigan/mbt_0
[ "region:us" ]
2023-11-06T01:47:42+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "objects", "struct": [{"name": "bbox", "sequence": {"sequence": "int64"}}, {"name": "categories", "sequence": {"class_label": {"names": {"0": "mtb"}}}}]}], "splits": [{"name": "train", "num_bytes": 18453888.0, "num_examples": 97}], "download_size": 18392269, "dataset_size": 18453888.0}}
2023-11-06T13:57:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for "mbt_0" More Information needed
[ "# Dataset Card for \"mbt_0\"\n\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"mbt_0\"\n\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"mbt_0\"\n\n\nMore Information needed" ]
1f31d0cbd52d0b78e518b308990ccd0b8fc53596
# Cloud Matrix Data ## Description This dataset contains information related to cybersecurity techniques, as cataloged by the MITRE ATT&CK framework(v14). The data includes details such as unique identifiers, names, descriptions, URLs to more information, associated tactics, detection methods, applicable platforms, and data sources for detection. It also specifies whether a technique is a sub-technique of another and lists defenses that the technique may bypass. ## Structure - **Rows**: 130 - **Columns**: 11 The columns are as follows: 1. `ID`: Unique identifier for the technique (e.g., T1189, T1566.002) 2. `Name`: Name of the technique (e.g., Drive-by Compromise, Phishing: Spearphishing Link) 3. `Description`: Brief description of the technique 4. `URL`: Link to more information about the technique 5. `Tactics`: Category of tactics the technique falls under 6. `Detection`: How the technique might be detected 7. `Platforms`: Operating systems or platforms the technique applies to 8. `Data Sources`: Sources of data for detection 9. `Is Sub-Technique`: Whether the entry is a sub-technique (True/False) 10. `Sub-Technique Of`: If the entry is a sub-technique, the parent technique's ID 11. `Defenses Bypassed`: Defenses the technique is known to bypass ## Usage This dataset can be used by cybersecurity professionals and researchers to analyze and categorize different types of cybersecurity threats and their characteristics. It can also assist in developing defensive strategies by providing detection methods and noting applicable platforms. ## Additional Information - The dataset is likely derived from the MITRE ATT&CK framework, as indicated by the URL structure and content. - The data may need to be updated periodically to reflect the latest information from the MITRE ATT&CK database.
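A small exploratory sketch of the kind of filtering described above; the file name `mitre_cit_v14.csv` and the CSV export format are assumptions, since only the column layout is documented here:

```python
import pandas as pd

# Assumption: the 130x11 table is stored as a CSV named mitre_cit_v14.csv with the columns listed above
df = pd.read_csv("mitre_cit_v14.csv")

# Example: list techniques associated with the "Initial Access" tactic and how they can be detected
initial_access = df[df["Tactics"].str.contains("Initial Access", na=False)]
print(initial_access[["ID", "Name", "Platforms", "Detection"]])
```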
voyagar/mitre_cit_v14
[ "language:en", "license:unlicense", "region:us" ]
2023-11-06T01:51:16+00:00
{"language": ["en"], "license": "unlicense"}
2023-11-06T01:59:00+00:00
[]
[ "en" ]
TAGS #language-English #license-unlicense #region-us
# Cloud Matrix Data ## Description This dataset contains information related to cybersecurity techniques, as cataloged by the MITRE ATT&CK framework(v14). The data includes details such as unique identifiers, names, descriptions, URLs to more information, associated tactics, detection methods, applicable platforms, and data sources for detection. It also specifies whether a technique is a sub-technique of another and lists defenses that the technique may bypass. ## Structure - Rows: 130 - Columns: 11 The columns are as follows: 1. 'ID': Unique identifier for the technique (e.g., T1189, T1566.002) 2. 'Name': Name of the technique (e.g., Drive-by Compromise, Phishing: Spearphishing Link) 3. 'Description': Brief description of the technique 4. 'URL': Link to more information about the technique 5. 'Tactics': Category of tactics the technique falls under 6. 'Detection': How the technique might be detected 7. 'Platforms': Operating systems or platforms the technique applies to 8. 'Data Sources': Sources of data for detection 9. 'Is Sub-Technique': Whether the entry is a sub-technique (True/False) 10. 'Sub-Technique Of': If the entry is a sub-technique, the parent technique's ID 11. 'Defenses Bypassed': Defenses the technique is known to bypass ## Usage This dataset can be used by cybersecurity professionals and researchers to analyze and categorize different types of cybersecurity threats and their characteristics. It can also assist in developing defensive strategies by providing detection methods and noting applicable platforms. ## Additional Information - The dataset is likely derived from the MITRE ATT&CK framework, as indicated by the URL structure and content. - The data may need to be updated periodically to reflect the latest information from the MITRE ATT&CK database.
[ "# Cloud Matrix Data", "## Description\n\nThis dataset contains information related to cybersecurity techniques, as cataloged by the MITRE ATT&CK framework(v14). \nThe data includes details such as unique identifiers, names, descriptions, URLs to more information, associated tactics, detection methods, applicable platforms, and data sources for detection. It also specifies whether a technique is a sub-technique of another and lists defenses that the technique may bypass.", "## Structure\n\n- Rows: 130\n- Columns: 11\n\nThe columns are as follows:\n\n1. 'ID': Unique identifier for the technique (e.g., T1189, T1566.002)\n2. 'Name': Name of the technique (e.g., Drive-by Compromise, Phishing: Spearphishing Link)\n3. 'Description': Brief description of the technique\n4. 'URL': Link to more information about the technique\n5. 'Tactics': Category of tactics the technique falls under\n6. 'Detection': How the technique might be detected\n7. 'Platforms': Operating systems or platforms the technique applies to\n8. 'Data Sources': Sources of data for detection\n9. 'Is Sub-Technique': Whether the entry is a sub-technique (True/False)\n10. 'Sub-Technique Of': If the entry is a sub-technique, the parent technique's ID\n11. 'Defenses Bypassed': Defenses the technique is known to bypass", "## Usage\n\nThis dataset can be used by cybersecurity professionals and researchers to analyze and categorize different types of cybersecurity threats and their characteristics. It can also assist in developing defensive strategies by providing detection methods and noting applicable platforms.", "## Additional Information\n\n- The dataset is likely derived from the MITRE ATT&CK framework, as indicated by the URL structure and content.\n- The data may need to be updated periodically to reflect the latest information from the MITRE ATT&CK database." ]
[ "TAGS\n#language-English #license-unlicense #region-us \n", "# Cloud Matrix Data", "## Description\n\nThis dataset contains information related to cybersecurity techniques, as cataloged by the MITRE ATT&CK framework(v14). \nThe data includes details such as unique identifiers, names, descriptions, URLs to more information, associated tactics, detection methods, applicable platforms, and data sources for detection. It also specifies whether a technique is a sub-technique of another and lists defenses that the technique may bypass.", "## Structure\n\n- Rows: 130\n- Columns: 11\n\nThe columns are as follows:\n\n1. 'ID': Unique identifier for the technique (e.g., T1189, T1566.002)\n2. 'Name': Name of the technique (e.g., Drive-by Compromise, Phishing: Spearphishing Link)\n3. 'Description': Brief description of the technique\n4. 'URL': Link to more information about the technique\n5. 'Tactics': Category of tactics the technique falls under\n6. 'Detection': How the technique might be detected\n7. 'Platforms': Operating systems or platforms the technique applies to\n8. 'Data Sources': Sources of data for detection\n9. 'Is Sub-Technique': Whether the entry is a sub-technique (True/False)\n10. 'Sub-Technique Of': If the entry is a sub-technique, the parent technique's ID\n11. 'Defenses Bypassed': Defenses the technique is known to bypass", "## Usage\n\nThis dataset can be used by cybersecurity professionals and researchers to analyze and categorize different types of cybersecurity threats and their characteristics. It can also assist in developing defensive strategies by providing detection methods and noting applicable platforms.", "## Additional Information\n\n- The dataset is likely derived from the MITRE ATT&CK framework, as indicated by the URL structure and content.\n- The data may need to be updated periodically to reflect the latest information from the MITRE ATT&CK database." ]
[ 17, 4, 98, 233, 58, 58 ]
[ "passage: TAGS\n#language-English #license-unlicense #region-us \n# Cloud Matrix Data## Description\n\nThis dataset contains information related to cybersecurity techniques, as cataloged by the MITRE ATT&CK framework(v14). \nThe data includes details such as unique identifiers, names, descriptions, URLs to more information, associated tactics, detection methods, applicable platforms, and data sources for detection. It also specifies whether a technique is a sub-technique of another and lists defenses that the technique may bypass.## Structure\n\n- Rows: 130\n- Columns: 11\n\nThe columns are as follows:\n\n1. 'ID': Unique identifier for the technique (e.g., T1189, T1566.002)\n2. 'Name': Name of the technique (e.g., Drive-by Compromise, Phishing: Spearphishing Link)\n3. 'Description': Brief description of the technique\n4. 'URL': Link to more information about the technique\n5. 'Tactics': Category of tactics the technique falls under\n6. 'Detection': How the technique might be detected\n7. 'Platforms': Operating systems or platforms the technique applies to\n8. 'Data Sources': Sources of data for detection\n9. 'Is Sub-Technique': Whether the entry is a sub-technique (True/False)\n10. 'Sub-Technique Of': If the entry is a sub-technique, the parent technique's ID\n11. 'Defenses Bypassed': Defenses the technique is known to bypass## Usage\n\nThis dataset can be used by cybersecurity professionals and researchers to analyze and categorize different types of cybersecurity threats and their characteristics. It can also assist in developing defensive strategies by providing detection methods and noting applicable platforms.## Additional Information\n\n- The dataset is likely derived from the MITRE ATT&CK framework, as indicated by the URL structure and content.\n- The data may need to be updated periodically to reflect the latest information from the MITRE ATT&CK database." ]
4459fe3e63907fbd69c3aeef12545ee3b321f3fa
# Dataset Card for "cowteats" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lakshmikarpolam/cowteats
[ "region:us" ]
2023-11-06T02:11:01+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 12526407.0, "num_examples": 450}], "download_size": 12525446, "dataset_size": 12526407.0}}
2023-11-06T02:17:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cowteats" More Information needed
[ "# Dataset Card for \"cowteats\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cowteats\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"cowteats\"\n\nMore Information needed" ]
40189ab3ec38cfed7e0177d24e027153620c6572
# Video and Key Frame Data ## Description This repository contains video data, extracted key frames, and associated metadata for a collection of scenarios, each corresponding to a different behavior in a simulation environment. ## Directory Structure - `video/`: This directory holds the original MP4 video files organized by scenario and behavior. We provide around 40 MP4 files for each scenario and behavior pair, with different routes, speeds, and surrounding environments. - `key_frames/`: Here, five key frames extracted from each video are stored. They are organized into folders mirroring the structure of the `video/` directory. - `scenario_descriptions.csv`: This file provides textual descriptions of each scene in the videos. - `video_statistics.csv`: This file contains statistics extracted from the videos, including details like velocity, acceleration, and collision status for each frame of the corresponding MP4. ## Usage The videos can be used to analyze the behavior in each scenario. The key frames provide quick snapshots of the scenarios at different time intervals, which can be used for further analysis or for generating thumbnails. ## Scripts - `extract_frames.py`: A Python script used to extract key frames from the videos.
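For readers who want to reproduce the key-frame extraction step, the following is a minimal sketch of what a script like `extract_frames.py` might do; it is not the repository's actual script, and the example paths mirror the directory layout described above but are hypothetical.

```python
import os
import cv2  # OpenCV

def extract_key_frames(video_path: str, out_dir: str, n_frames: int = 5) -> None:
    """Save n_frames evenly spaced frames from an MP4 as PNG images."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Evenly spaced frame indices across the clip.
    indices = [int(i * (total - 1) / (n_frames - 1)) for i in range(n_frames)]
    for rank, idx in enumerate(indices):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(os.path.join(out_dir, f"frame_{rank}.png"), frame)
    cap.release()

# Hypothetical paths following the layout described in the card.
extract_key_frames("video/scenario_01/behavior_03/route_00.mp4",
                   "key_frames/scenario_01/behavior_03/route_00")
```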
AI-Secure/ChatScene-v1
[ "task_categories:text-to-image", "task_categories:text-to-video", "size_categories:n<1K", "language:en", "license:cc", "region:us" ]
2023-11-06T02:44:51+00:00
{"language": ["en"], "license": "cc", "size_categories": ["n<1K"], "task_categories": ["text-to-image", "text-to-video"]}
2023-11-06T03:49:47+00:00
[]
[ "en" ]
TAGS #task_categories-text-to-image #task_categories-text-to-video #size_categories-n<1K #language-English #license-cc #region-us
# Video and Key Frame Data ## Description This repository contains video data, extracted key frames, and associated metadata for a collection of scenarios each corresponding to different behaviors in a simulation environment. ## Directory Structure - 'video/': This directory holds the original MP4 video files organized by scenario and behavior. We provide around 40 mp4 for the same scenario and behavior pair with different routes, speeds, surrounding environments. - 'key_frames/': Here, five key frames extracted from each video are stored. They are organized into folders mirroring the structure of the 'video/' directory. - 'scenario_descriptions.csv': This file provides word descriptions of each scene in video. - 'video_statistics.csv': This file contains statistics extracted from the videos, including details like velocity, acceleration, collision situation for each frame on the corresponding mp4. ## Usage The videos can be used to analyze the behavior in each scenario. The key frames provide quick snapshots of the scenarios at different time intervals, which can be used for further analysis or for generating thumbnails. ## Scripts - 'extract_frames.py': A Python script used to extract key frames from the videos.
[ "# Video and Key Frame Data", "## Description\n\nThis repository contains video data, extracted key frames, and associated metadata for a collection of scenarios each corresponding to different behaviors in a simulation environment.", "## Directory Structure\n\n- 'video/': This directory holds the original MP4 video files organized by scenario and behavior. We provide around 40 mp4 for the same scenario and behavior pair with different routes, speeds, surrounding environments.\n- 'key_frames/': Here, five key frames extracted from each video are stored. They are organized into folders mirroring the structure of the 'video/' directory.\n- 'scenario_descriptions.csv': This file provides word descriptions of each scene in video.\n- 'video_statistics.csv': This file contains statistics extracted from the videos, including details like velocity, acceleration, collision situation for each frame on the corresponding mp4.", "## Usage\n\nThe videos can be used to analyze the behavior in each scenario. The key frames provide quick snapshots of the scenarios at different time intervals, which can be used for further analysis or for generating thumbnails.", "## Scripts\n\n- 'extract_frames.py': A Python script used to extract key frames from the videos." ]
[ "TAGS\n#task_categories-text-to-image #task_categories-text-to-video #size_categories-n<1K #language-English #license-cc #region-us \n", "# Video and Key Frame Data", "## Description\n\nThis repository contains video data, extracted key frames, and associated metadata for a collection of scenarios each corresponding to different behaviors in a simulation environment.", "## Directory Structure\n\n- 'video/': This directory holds the original MP4 video files organized by scenario and behavior. We provide around 40 mp4 for the same scenario and behavior pair with different routes, speeds, surrounding environments.\n- 'key_frames/': Here, five key frames extracted from each video are stored. They are organized into folders mirroring the structure of the 'video/' directory.\n- 'scenario_descriptions.csv': This file provides word descriptions of each scene in video.\n- 'video_statistics.csv': This file contains statistics extracted from the videos, including details like velocity, acceleration, collision situation for each frame on the corresponding mp4.", "## Usage\n\nThe videos can be used to analyze the behavior in each scenario. The key frames provide quick snapshots of the scenarios at different time intervals, which can be used for further analysis or for generating thumbnails.", "## Scripts\n\n- 'extract_frames.py': A Python script used to extract key frames from the videos." ]
[ 49, 7, 40, 170, 53, 27 ]
[ "passage: TAGS\n#task_categories-text-to-image #task_categories-text-to-video #size_categories-n<1K #language-English #license-cc #region-us \n# Video and Key Frame Data## Description\n\nThis repository contains video data, extracted key frames, and associated metadata for a collection of scenarios each corresponding to different behaviors in a simulation environment.## Directory Structure\n\n- 'video/': This directory holds the original MP4 video files organized by scenario and behavior. We provide around 40 mp4 for the same scenario and behavior pair with different routes, speeds, surrounding environments.\n- 'key_frames/': Here, five key frames extracted from each video are stored. They are organized into folders mirroring the structure of the 'video/' directory.\n- 'scenario_descriptions.csv': This file provides word descriptions of each scene in video.\n- 'video_statistics.csv': This file contains statistics extracted from the videos, including details like velocity, acceleration, collision situation for each frame on the corresponding mp4.## Usage\n\nThe videos can be used to analyze the behavior in each scenario. The key frames provide quick snapshots of the scenarios at different time intervals, which can be used for further analysis or for generating thumbnails.## Scripts\n\n- 'extract_frames.py': A Python script used to extract key frames from the videos." ]
f8c36568a4981146d690f35ba448847108f097ea
Azerbaijani Sentiment Classification Dataset with ~160K reviews. Dataset contains 3 columns: Content, Score, Upvotes
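A minimal loading sketch for this dataset, assuming the data is exposed as a single `train` split (the split name is an assumption):

```python
from datasets import load_dataset

# Split name assumed; check the repository's configuration if this fails.
reviews = load_dataset("hajili/azerbaijani_review_sentiment_classification", split="train")

print(reviews.column_names)  # expected per the card: Content, Score, Upvotes
print(reviews[0])
```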
hajili/azerbaijani_review_sentiment_classification
[ "task_categories:text-classification", "size_categories:100K<n<1M", "language:az", "license:mit", "doi:10.57967/hf/1363", "region:us" ]
2023-11-06T02:52:46+00:00
{"language": ["az"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"]}
2023-11-06T03:03:43+00:00
[]
[ "az" ]
TAGS #task_categories-text-classification #size_categories-100K<n<1M #language-Azerbaijani #license-mit #doi-10.57967/hf/1363 #region-us
Azerbaijani Sentiment Classification Dataset with ~160K reviews. Dataset contains 3 columns: Content, Score, Upvotes
[]
[ "TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-Azerbaijani #license-mit #doi-10.57967/hf/1363 #region-us \n" ]
[ 53 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-Azerbaijani #license-mit #doi-10.57967/hf/1363 #region-us \n" ]
27e25e85a9482c0994a870f98ee013171dd5abfa
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. 
## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
reckitt-anugrahakbarp/SNS_caption_checker
[ "region:us" ]
2023-11-06T03:00:26+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.csv"}]}]}
2023-11-29T10:28:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 8, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
5319f27743a3f7f686c243a85fef33f4a0441a53
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. 
## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
reckitt-anugrahakbarp/SNS_audio_translation
[ "region:us" ]
2023-11-06T03:00:35+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.csv"}]}]}
2023-11-29T10:29:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 8, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
4c704daa0f05260c6911df01e2b0fa03d4c5f007
# Data source: Clore,John, Cios,Krzysztof, DeShazo,Jon, and Strack,Beata. (2014). Diabetes 130-US hospitals for years 1999-2008. UCI Machine Learning Repository. https://doi.org/10.24432/C5230J. # Basic data preprocessing was based on this [notebook](https://github.com/csinva/imodels-data/blob/master/notebooks_fetch_data/00_get_datasets_custom.ipynb). # To load raw train and test sets from datasets import load_dataset train_set = load_dataset(dataset_name, data_files="train.csv") test_set = load_dataset(dataset_name, data_files="test.csv") # To load preprocessed train set from datasets import load_dataset preprocessed_train_set = load_dataset(dataset_name, data_files="preprocessed_train_set.csv")
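A self-contained version of the snippets above, filling in the repository id shown on this card; the file names come from the card itself.

```python
from datasets import load_dataset

dataset_name = "Bena345/diabetes-readmission"

# Raw train and test sets.
train_set = load_dataset(dataset_name, data_files="train.csv")
test_set = load_dataset(dataset_name, data_files="test.csv")

# Preprocessed train set.
preprocessed_train_set = load_dataset(dataset_name, data_files="preprocessed_train_set.csv")

print(train_set)
```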
Bena345/diabetes-readmission
[ "language:en", "license:mit", "medical", "region:us" ]
2023-11-06T03:10:05+00:00
{"language": ["en"], "license": "mit", "pretty_name": "Diabetes Readmissions", "tags": ["medical"]}
2023-11-19T06:28:56+00:00
[]
[ "en" ]
TAGS #language-English #license-mit #medical #region-us
# Data source: Clore,John, Cios,Krzysztof, DeShazo,Jon, and Strack,Beata. (2014). Diabetes 130-US hospitals for years 1999-2008. UCI Machine Learning Repository. URL # Basic data preprocessing was based on this notebook. # To load raw train and test sets from datasets import load_dataset train_set = load_dataset(dataset_name, data_files="URL") test_set = load_dataset(dataset_name, data_files="URL") # To load preprocessed train set from datasets import load_dataset preprocessed_train_set = load_dataset(dataset_name, data_files="preprocessed_train_set.csv")
[ "# Data source:\n Clore,John, Cios,Krzysztof, DeShazo,Jon, and Strack,Beata. (2014).\n Diabetes 130-US hospitals for years 1999-2008. UCI Machine Learning\n Repository. URL", "# Basic data preprocessing was based on this notebook.", "# To load raw train and test sets\n\n from datasets import load_dataset\n \n train_set = load_dataset(dataset_name, data_files=\"URL\")\n \n test_set = load_dataset(dataset_name, data_files=\"URL\")", "# To load preprocessed train set\n\n from datasets import load_dataset\n \n preprocessed_train_set = load_dataset(dataset_name, data_files=\"preprocessed_train_set.csv\")" ]
[ "TAGS\n#language-English #license-mit #medical #region-us \n", "# Data source:\n Clore,John, Cios,Krzysztof, DeShazo,Jon, and Strack,Beata. (2014).\n Diabetes 130-US hospitals for years 1999-2008. UCI Machine Learning\n Repository. URL", "# Basic data preprocessing was based on this notebook.", "# To load raw train and test sets\n\n from datasets import load_dataset\n \n train_set = load_dataset(dataset_name, data_files=\"URL\")\n \n test_set = load_dataset(dataset_name, data_files=\"URL\")", "# To load preprocessed train set\n\n from datasets import load_dataset\n \n preprocessed_train_set = load_dataset(dataset_name, data_files=\"preprocessed_train_set.csv\")" ]
[ 18, 53, 12, 60, 53 ]
[ "passage: TAGS\n#language-English #license-mit #medical #region-us \n# Data source:\n Clore,John, Cios,Krzysztof, DeShazo,Jon, and Strack,Beata. (2014).\n Diabetes 130-US hospitals for years 1999-2008. UCI Machine Learning\n Repository. URL# Basic data preprocessing was based on this notebook.# To load raw train and test sets\n\n from datasets import load_dataset\n \n train_set = load_dataset(dataset_name, data_files=\"URL\")\n \n test_set = load_dataset(dataset_name, data_files=\"URL\")# To load preprocessed train set\n\n from datasets import load_dataset\n \n preprocessed_train_set = load_dataset(dataset_name, data_files=\"preprocessed_train_set.csv\")" ]
214ba4d00eb0ecdb2f15a48c4de0d400960a95d1
# Dataset Card for "3399e196" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/3399e196
[ "region:us" ]
2023-11-06T03:21:50+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 158, "num_examples": 10}], "download_size": 1308, "dataset_size": 158}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-06T03:21:51+00:00
[]
[]
TAGS #region-us
# Dataset Card for "3399e196" More Information needed
[ "# Dataset Card for \"3399e196\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"3399e196\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"3399e196\"\n\nMore Information needed" ]
92ea0b99f7f1663a3ac9950c256f8cae0a64a351
# Dataset Card for "2e566d50" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/2e566d50
[ "region:us" ]
2023-11-06T03:21:52+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 158, "num_examples": 10}], "download_size": 1308, "dataset_size": 158}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-06T03:21:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "2e566d50" More Information needed
[ "# Dataset Card for \"2e566d50\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"2e566d50\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"2e566d50\"\n\nMore Information needed" ]
c967c9f28642f4cd69df23a8d9a159c1dd93f4af
# Purpose and Features World's largest open source privacy dataset. The purpose of the dataset is to train models to remove personally identifiable information (PII) from text, especially in the context of AI assistants and LLMs. The example texts have **54 PII classes** (types of sensitive data), targeting **229 discussion subjects / use cases** split across business, education, psychology and legal fields, and 5 interactions styles (e.g. casual conversation, formal document, emails etc...). Key facts: - Size: 13.6m text tokens in ~209k examples with 649k PII tokens (see [summary.json](summary.json)) - 4 languages, more to come! - English - French - German - Italian - Synthetic data generated using proprietary algorithms - No privacy violations! - Human-in-the-loop validated high quality dataset # Getting started Option 1: Python ```terminal pip install datasets ``` ```python from datasets import load_dataset dataset = load_dataset("ai4privacy/pii-masking-200k") ``` # Token distribution across PII classes We have taken steps to balance the token distribution across PII classes covered by the dataset. This graph shows the distribution of observations across the different PII classes in this release: ![Token distribution across PII classes](pii_class_count_histogram.png) There is 1 class that is still overrepresented in the dataset: firstname. We will further improve the balance with future dataset releases. This is the token distribution excluding the FIRSTNAME class: ![Token distribution across PII classes excluding `FIRSTNAME`](pii_class_count_histogram_without_FIRSTNAME.png) # Compatible Machine Learning Tasks: - Tokenclassification. Check out a HuggingFace's [guide on token classification](https://huggingface.co/docs/transformers/tasks/token_classification). 
- [ALBERT](https://huggingface.co/docs/transformers/model_doc/albert), [BERT](https://huggingface.co/docs/transformers/model_doc/bert), [BigBird](https://huggingface.co/docs/transformers/model_doc/big_bird), [BioGpt](https://huggingface.co/docs/transformers/model_doc/biogpt), [BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom), [BROS](https://huggingface.co/docs/transformers/model_doc/bros), [CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert), [CANINE](https://huggingface.co/docs/transformers/model_doc/canine), [ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert), [Data2VecText](https://huggingface.co/docs/transformers/model_doc/data2vec-text), [DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta), [DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2), [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert), [ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra), [ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie), [ErnieM](https://huggingface.co/docs/transformers/model_doc/ernie_m), [ESM](https://huggingface.co/docs/transformers/model_doc/esm), [Falcon](https://huggingface.co/docs/transformers/model_doc/falcon), [FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert), [FNet](https://huggingface.co/docs/transformers/model_doc/fnet), [Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel), [GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3), [OpenAI GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2), [GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode), [GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo), [GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox), [I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert), [LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm), [LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2), [LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3), [LiLT](https://huggingface.co/docs/transformers/model_doc/lilt), [Longformer](https://huggingface.co/docs/transformers/model_doc/longformer), [LUKE](https://huggingface.co/docs/transformers/model_doc/luke), [MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm), [MEGA](https://huggingface.co/docs/transformers/model_doc/mega), [Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert), [MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert), [MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet), [MPT](https://huggingface.co/docs/transformers/model_doc/mpt), [MRA](https://huggingface.co/docs/transformers/model_doc/mra), [Nezha](https://huggingface.co/docs/transformers/model_doc/nezha), [Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer), [QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert), [RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert), [RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta), [RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/model_doc/roberta-prelayernorm), [RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert), [RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer), [SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert), 
[XLM](https://huggingface.co/docs/transformers/model_doc/xlm), [XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta), [XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl), [XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet), [X-MOD](https://huggingface.co/docs/transformers/model_doc/xmod), [YOSO](https://huggingface.co/docs/transformers/model_doc/yoso) - Text Generation: Mapping the unmasked_text to to the masked_text or privacy_mask attributes. Check out HuggingFace's [guide to fine-tunning](https://huggingface.co/docs/transformers/v4.15.0/training) - [T5 Family](https://huggingface.co/docs/transformers/model_doc/t5), [Llama2](https://huggingface.co/docs/transformers/main/model_doc/llama2) # Information regarding the rows: - Each row represents a json object with a natural language text that includes placeholders for PII (and could plausibly be written by a human to an AI assistant). - Sample row: - "masked_text" contains a PII free natural text - "Product officially launching in [COUNTY_1]. Estimate profit of [CURRENCYSYMBOL_1][AMOUNT_1]. Expenses by [ACCOUNTNAME_1].", - "unmasked_text" shows a natural sentence containing PII - "Product officially launching in Washington County. Estimate profit of $488293.16. Expenses by Checking Account." - "privacy_mask" indicates the mapping between the privacy token instances and the string within the natural text.* - "{'[COUNTY_1]': 'Washington County', '[CURRENCYSYMBOL_1]': '$', '[AMOUNT_1]': '488293.16', '[ACCOUNTNAME_1]': 'Checking Account'}" - "span_labels" is an array of arrays formatted in the following way [start, end, pii token instance].* - "[[0, 32, 'O'], [32, 49, 'COUNTY_1'], [49, 70, 'O'], [70, 71, 'CURRENCYSYMBOL_1'], [71, 80, 'AMOUNT_1'], [80, 94, 'O'], [94, 110, 'ACCOUNTNAME_1'], [110, 111, 'O']]", - "bio_labels" follows the common place notation for "beginning", "inside" and "outside" of where each private tokens starts.[original paper](https://arxiv.org/abs/cmp-lg/9505040) -["O", "O", "O", "O", "B-COUNTY", "I-COUNTY", "O", "O", "O", "O", "B-CURRENCYSYMBOL", "O", "O", "I-AMOUNT", "I-AMOUNT", "I-AMOUNT", "I-AMOUNT", "O", "O", "O", "B-ACCOUNTNAME", "I-ACCOUNTNAME", "O"], - "tokenised_text" breaks down the unmasked sentence into tokens using Bert Family tokeniser to help fine-tune large language models. - ["product", "officially", "launching", "in", "washington", "county", ".", "estimate", "profit", "of", "$", "48", "##8", "##29", "##3", ".", "16", ".", "expenses", "by", "checking", "account", "."] *note for the nested objects, we store them as string to maximise compability between various software. # About Us: At Ai4Privacy, we are commited to building the global seatbelt of the 21st century for Artificial Intelligence to help fight against potential risks of personal information being integrated into data pipelines. 
Newsletter & updates: [www.Ai4Privacy.com](www.Ai4Privacy.com) - Looking for ML engineers, developers, beta-testers, human in the loop validators (all languages) - Integrations with already existing open source solutions - Ask us a question on discord: [https://discord.gg/kxSbJrUQZF](https://discord.gg/kxSbJrUQZF) # Roadmap and Future Development - Carbon Neutral - Benchmarking - Better multilingual and especially localisation - Extended integrations - Continuously increase the training set - Further optimisation to the model to reduce size and increase generalisability - Next released major update is planned for the 14th of December 2023 (subscribe to newsletter for updates) # Use Cases and Applications **Chatbots**: Incorporating a PII masking model into chatbot systems can ensure the privacy and security of user conversations by automatically redacting sensitive information such as names, addresses, phone numbers, and email addresses. **Customer Support Systems**: When interacting with customers through support tickets or live chats, masking PII can help protect sensitive customer data, enabling support agents to handle inquiries without the risk of exposing personal information. **Email Filtering**: Email providers can utilize a PII masking model to automatically detect and redact PII from incoming and outgoing emails, reducing the chances of accidental disclosure of sensitive information. **Data Anonymization**: Organizations dealing with large datasets containing PII, such as medical or financial records, can leverage a PII masking model to anonymize the data before sharing it for research, analysis, or collaboration purposes. **Social Media Platforms**: Integrating PII masking capabilities into social media platforms can help users protect their personal information from unauthorized access, ensuring a safer online environment. **Content Moderation**: PII masking can assist content moderation systems in automatically detecting and blurring or redacting sensitive information in user-generated content, preventing the accidental sharing of personal details. **Online Forms**: Web applications that collect user data through online forms, such as registration forms or surveys, can employ a PII masking model to anonymize or mask the collected information in real-time, enhancing privacy and data protection. **Collaborative Document Editing**: Collaboration platforms and document editing tools can use a PII masking model to automatically mask or redact sensitive information when multiple users are working on shared documents. **Research and Data Sharing**: Researchers and institutions can leverage a PII masking model to ensure privacy and confidentiality when sharing datasets for collaboration, analysis, or publication purposes, reducing the risk of data breaches or identity theft. **Content Generation**: Content generation systems, such as article generators or language models, can benefit from PII masking to automatically mask or generate fictional PII when creating sample texts or examples, safeguarding the privacy of individuals. (...and whatever else your creative mind can think of) # Support and Maintenance AI4Privacy is a project affiliated with [AISuisse SA](https://www.aisuisse.com/).
ai4privacy/pii-masking-200k
[ "task_categories:conversational", "task_categories:text-classification", "task_categories:token-classification", "task_categories:table-question-answering", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:summarization", "task_categories:feature-extraction", "task_categories:text-generation", "task_categories:text2text-generation", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "language:fr", "language:de", "language:it", "legal", "business", "psychology", "privacy", "doi:10.57967/hf/1532", "region:us" ]
2023-11-06T03:34:07+00:00
{"language": ["en", "fr", "de", "it"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["conversational", "text-classification", "token-classification", "table-question-answering", "question-answering", "zero-shot-classification", "summarization", "feature-extraction", "text-generation", "text2text-generation"], "pretty_name": "Ai4Privacy PII200k Dataset", "tags": ["legal", "business", "psychology", "privacy"], "configs": [{"config_name": "default", "data_files": "*.jsonl"}]}
2024-02-13T13:12:18+00:00
[]
[ "en", "fr", "de", "it" ]
TAGS #task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-English #language-French #language-German #language-Italian #legal #business #psychology #privacy #doi-10.57967/hf/1532 #region-us
# Purpose and Features World's largest open source privacy dataset. The purpose of the dataset is to train models to remove personally identifiable information (PII) from text, especially in the context of AI assistants and LLMs. The example texts have 54 PII classes (types of sensitive data), targeting 229 discussion subjects / use cases split across business, education, psychology and legal fields, and 5 interactions styles (e.g. casual conversation, formal document, emails etc...). Key facts: - Size: 13.6m text tokens in ~209k examples with 649k PII tokens (see URL) - 4 languages, more to come! - English - French - German - Italian - Synthetic data generated using proprietary algorithms - No privacy violations! - Human-in-the-loop validated high quality dataset # Getting started Option 1: Python # Token distribution across PII classes We have taken steps to balance the token distribution across PII classes covered by the dataset. This graph shows the distribution of observations across the different PII classes in this release: !Token distribution across PII classes There is 1 class that is still overrepresented in the dataset: firstname. We will further improve the balance with future dataset releases. This is the token distribution excluding the FIRSTNAME class: !Token distribution across PII classes excluding 'FIRSTNAME' # Compatible Machine Learning Tasks: - Tokenclassification. Check out a HuggingFace's guide on token classification. - ALBERT, BERT, BigBird, BioGpt, BLOOM, BROS, CamemBERT, CANINE, ConvBERT, Data2VecText, DeBERTa, DeBERTa-v2, DistilBERT, ELECTRA, ERNIE, ErnieM, ESM, Falcon, FlauBERT, FNet, Funnel Transformer, GPT-Sw3, OpenAI GPT-2, GPTBigCode, GPT Neo, GPT NeoX, I-BERT, LayoutLM, LayoutLMv2, LayoutLMv3, LiLT, Longformer, LUKE, MarkupLM, MEGA, Megatron-BERT, MobileBERT, MPNet, MPT, MRA, Nezha, Nyströmformer, QDQBert, RemBERT, RoBERTa, RoBERTa-PreLayerNorm, RoCBert, RoFormer, SqueezeBERT, XLM, XLM-RoBERTa, XLM-RoBERTa-XL, XLNet, X-MOD, YOSO - Text Generation: Mapping the unmasked_text to to the masked_text or privacy_mask attributes. Check out HuggingFace's guide to fine-tunning - T5 Family, Llama2 # Information regarding the rows: - Each row represents a json object with a natural language text that includes placeholders for PII (and could plausibly be written by a human to an AI assistant). - Sample row: - "masked_text" contains a PII free natural text - "Product officially launching in [COUNTY_1]. Estimate profit of [CURRENCYSYMBOL_1][AMOUNT_1]. Expenses by [ACCOUNTNAME_1].", - "unmasked_text" shows a natural sentence containing PII - "Product officially launching in Washington County. Estimate profit of $488293.16. Expenses by Checking Account." 
- "privacy_mask" indicates the mapping between the privacy token instances and the string within the natural text.* - "{'[COUNTY_1]': 'Washington County', '[CURRENCYSYMBOL_1]': '$', '[AMOUNT_1]': '488293.16', '[ACCOUNTNAME_1]': 'Checking Account'}" - "span_labels" is an array of arrays formatted in the following way [start, end, pii token instance].* - "[[0, 32, 'O'], [32, 49, 'COUNTY_1'], [49, 70, 'O'], [70, 71, 'CURRENCYSYMBOL_1'], [71, 80, 'AMOUNT_1'], [80, 94, 'O'], [94, 110, 'ACCOUNTNAME_1'], [110, 111, 'O']]", - "bio_labels" follows the common place notation for "beginning", "inside" and "outside" of where each private tokens starts.original paper -["O", "O", "O", "O", "B-COUNTY", "I-COUNTY", "O", "O", "O", "O", "B-CURRENCYSYMBOL", "O", "O", "I-AMOUNT", "I-AMOUNT", "I-AMOUNT", "I-AMOUNT", "O", "O", "O", "B-ACCOUNTNAME", "I-ACCOUNTNAME", "O"], - "tokenised_text" breaks down the unmasked sentence into tokens using Bert Family tokeniser to help fine-tune large language models. - ["product", "officially", "launching", "in", "washington", "county", ".", "estimate", "profit", "of", "$", "48", "##8", "##29", "##3", ".", "16", ".", "expenses", "by", "checking", "account", "."] *note for the nested objects, we store them as string to maximise compability between various software. # About Us: At Ai4Privacy, we are commited to building the global seatbelt of the 21st century for Artificial Intelligence to help fight against potential risks of personal information being integrated into data pipelines. Newsletter & updates: URL - Looking for ML engineers, developers, beta-testers, human in the loop validators (all languages) - Integrations with already existing open source solutions - Ask us a question on discord: URL # Roadmap and Future Development - Carbon Neutral - Benchmarking - Better multilingual and especially localisation - Extended integrations - Continuously increase the training set - Further optimisation to the model to reduce size and increase generalisability - Next released major update is planned for the 14th of December 2023 (subscribe to newsletter for updates) # Use Cases and Applications Chatbots: Incorporating a PII masking model into chatbot systems can ensure the privacy and security of user conversations by automatically redacting sensitive information such as names, addresses, phone numbers, and email addresses. Customer Support Systems: When interacting with customers through support tickets or live chats, masking PII can help protect sensitive customer data, enabling support agents to handle inquiries without the risk of exposing personal information. Email Filtering: Email providers can utilize a PII masking model to automatically detect and redact PII from incoming and outgoing emails, reducing the chances of accidental disclosure of sensitive information. Data Anonymization: Organizations dealing with large datasets containing PII, such as medical or financial records, can leverage a PII masking model to anonymize the data before sharing it for research, analysis, or collaboration purposes. Social Media Platforms: Integrating PII masking capabilities into social media platforms can help users protect their personal information from unauthorized access, ensuring a safer online environment. Content Moderation: PII masking can assist content moderation systems in automatically detecting and blurring or redacting sensitive information in user-generated content, preventing the accidental sharing of personal details. 
Online Forms: Web applications that collect user data through online forms, such as registration forms or surveys, can employ a PII masking model to anonymize or mask the collected information in real-time, enhancing privacy and data protection. Collaborative Document Editing: Collaboration platforms and document editing tools can use a PII masking model to automatically mask or redact sensitive information when multiple users are working on shared documents. Research and Data Sharing: Researchers and institutions can leverage a PII masking model to ensure privacy and confidentiality when sharing datasets for collaboration, analysis, or publication purposes, reducing the risk of data breaches or identity theft. Content Generation: Content generation systems, such as article generators or language models, can benefit from PII masking to automatically mask or generate fictional PII when creating sample texts or examples, safeguarding the privacy of individuals. (...and whatever else your creative mind can think of) # Support and Maintenance AI4Privacy is a project affiliated with AISuisse SA.
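The "Getting started" section above names a Python option but no snippet survives in this copy. Below is a minimal loading sketch with two explicit assumptions: the Hub repository id (`ai4privacy/pii-masking-200k` is a placeholder, substitute the actual Ai4Privacy id) and that `privacy_mask` / `span_labels` arrive as strings, as the note on nested objects states.

```python
from datasets import load_dataset
import ast

# Placeholder repository id -- replace with the actual Ai4Privacy dataset id on the Hub.
ds = load_dataset("ai4privacy/pii-masking-200k", split="train")

row = ds[0]
print(row["masked_text"])    # PII-free text with [CLASS_n] placeholders
print(row["unmasked_text"])  # the same text with synthetic PII values filled in

# The nested fields are stored as strings (see the note above), so parse them back
# into Python objects before using them.
privacy_mask = ast.literal_eval(row["privacy_mask"])
span_labels = ast.literal_eval(row["span_labels"])

# Token-classification view: align each token with its BIO tag.
# If these columns are also strings in your copy, parse them the same way first.
for token, tag in zip(row["tokenised_text"], row["bio_labels"]):
    print(token, tag)
```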
[ "# Purpose and Features\n\n\nWorld's largest open source privacy dataset. \n\nThe purpose of the dataset is to train models to remove personally identifiable information (PII) from text, especially in the context of AI assistants and LLMs. \n\n\nThe example texts have 54 PII classes (types of sensitive data), targeting 229 discussion subjects / use cases split across business, education, psychology and legal fields, and 5 interactions styles (e.g. casual conversation, formal document, emails etc...).\n\nKey facts:\n\n- Size: 13.6m text tokens in ~209k examples with 649k PII tokens (see URL)\n- 4 languages, more to come!\n - English\n - French\n - German\n - Italian\n- Synthetic data generated using proprietary algorithms\n - No privacy violations!\n- Human-in-the-loop validated high quality dataset", "# Getting started\n\n\nOption 1: Python", "# Token distribution across PII classes\n\nWe have taken steps to balance the token distribution across PII classes covered by the dataset.\nThis graph shows the distribution of observations across the different PII classes in this release:\n\n!Token distribution across PII classes\n\nThere is 1 class that is still overrepresented in the dataset: firstname.\nWe will further improve the balance with future dataset releases.\nThis is the token distribution excluding the FIRSTNAME class:\n\n!Token distribution across PII classes excluding 'FIRSTNAME'", "# Compatible Machine Learning Tasks:\n- Tokenclassification. Check out a HuggingFace's guide on token classification.\n - ALBERT, BERT, BigBird, BioGpt, BLOOM, BROS, CamemBERT, CANINE, ConvBERT, Data2VecText, DeBERTa, DeBERTa-v2, DistilBERT, ELECTRA, ERNIE, ErnieM, ESM, Falcon, FlauBERT, FNet, Funnel Transformer, GPT-Sw3, OpenAI GPT-2, GPTBigCode, GPT Neo, GPT NeoX, I-BERT, LayoutLM, LayoutLMv2, LayoutLMv3, LiLT, Longformer, LUKE, MarkupLM, MEGA, Megatron-BERT, MobileBERT, MPNet, MPT, MRA, Nezha, Nyströmformer, QDQBert, RemBERT, RoBERTa, RoBERTa-PreLayerNorm, RoCBert, RoFormer, SqueezeBERT, XLM, XLM-RoBERTa, XLM-RoBERTa-XL, XLNet, X-MOD, YOSO\n- Text Generation: Mapping the unmasked_text to to the masked_text or privacy_mask attributes. Check out HuggingFace's guide to fine-tunning\n - T5 Family, Llama2", "# Information regarding the rows:\n- Each row represents a json object with a natural language text that includes placeholders for PII (and could plausibly be written by a human to an AI assistant).\n- Sample row:\n - \"masked_text\" contains a PII free natural text\n - \"Product officially launching in [COUNTY_1]. Estimate profit of [CURRENCYSYMBOL_1][AMOUNT_1]. Expenses by [ACCOUNTNAME_1].\",\n - \"unmasked_text\" shows a natural sentence containing PII\n - \"Product officially launching in Washington County. Estimate profit of $488293.16. 
Expenses by Checking Account.\"\n - \"privacy_mask\" indicates the mapping between the privacy token instances and the string within the natural text.*\n - \"{'[COUNTY_1]': 'Washington County', '[CURRENCYSYMBOL_1]': '$', '[AMOUNT_1]': '488293.16', '[ACCOUNTNAME_1]': 'Checking Account'}\"\n - \"span_labels\" is an array of arrays formatted in the following way [start, end, pii token instance].*\n - \"[[0, 32, 'O'], [32, 49, 'COUNTY_1'], [49, 70, 'O'], [70, 71, 'CURRENCYSYMBOL_1'], [71, 80, 'AMOUNT_1'], [80, 94, 'O'], [94, 110, 'ACCOUNTNAME_1'], [110, 111, 'O']]\",\n - \"bio_labels\" follows the common place notation for \"beginning\", \"inside\" and \"outside\" of where each private tokens starts.original paper\n -[\"O\", \"O\", \"O\", \"O\", \"B-COUNTY\", \"I-COUNTY\", \"O\", \"O\", \"O\", \"O\", \"B-CURRENCYSYMBOL\", \"O\", \"O\", \"I-AMOUNT\", \"I-AMOUNT\", \"I-AMOUNT\", \"I-AMOUNT\", \"O\", \"O\", \"O\", \"B-ACCOUNTNAME\", \"I-ACCOUNTNAME\", \"O\"],\n - \"tokenised_text\" breaks down the unmasked sentence into tokens using Bert Family tokeniser to help fine-tune large language models.\n - [\"product\", \"officially\", \"launching\", \"in\", \"washington\", \"county\", \".\", \"estimate\", \"profit\", \"of\", \"$\", \"48\", \"##8\", \"##29\", \"##3\", \".\", \"16\", \".\", \"expenses\", \"by\", \"checking\", \"account\", \".\"]\n\n*note for the nested objects, we store them as string to maximise compability between various software.", "# About Us:\n\nAt Ai4Privacy, we are commited to building the global seatbelt of the 21st century for Artificial Intelligence to help fight against potential risks of personal information being integrated into data pipelines.\n\nNewsletter & updates: URL\n- Looking for ML engineers, developers, beta-testers, human in the loop validators (all languages)\n- Integrations with already existing open source solutions\n- Ask us a question on discord: URL", "# Roadmap and Future Development\n\n- Carbon Neutral\n- Benchmarking\n- Better multilingual and especially localisation\n- Extended integrations\n- Continuously increase the training set\n- Further optimisation to the model to reduce size and increase generalisability \n- Next released major update is planned for the 14th of December 2023 (subscribe to newsletter for updates)", "# Use Cases and Applications\n\nChatbots: Incorporating a PII masking model into chatbot systems can ensure the privacy and security of user conversations by automatically redacting sensitive information such as names, addresses, phone numbers, and email addresses.\n\nCustomer Support Systems: When interacting with customers through support tickets or live chats, masking PII can help protect sensitive customer data, enabling support agents to handle inquiries without the risk of exposing personal information.\n\nEmail Filtering: Email providers can utilize a PII masking model to automatically detect and redact PII from incoming and outgoing emails, reducing the chances of accidental disclosure of sensitive information.\n\nData Anonymization: Organizations dealing with large datasets containing PII, such as medical or financial records, can leverage a PII masking model to anonymize the data before sharing it for research, analysis, or collaboration purposes.\n\nSocial Media Platforms: Integrating PII masking capabilities into social media platforms can help users protect their personal information from unauthorized access, ensuring a safer online environment.\n\nContent Moderation: PII masking can assist content moderation systems in 
automatically detecting and blurring or redacting sensitive information in user-generated content, preventing the accidental sharing of personal details.\n\nOnline Forms: Web applications that collect user data through online forms, such as registration forms or surveys, can employ a PII masking model to anonymize or mask the collected information in real-time, enhancing privacy and data protection.\n\nCollaborative Document Editing: Collaboration platforms and document editing tools can use a PII masking model to automatically mask or redact sensitive information when multiple users are working on shared documents.\n\nResearch and Data Sharing: Researchers and institutions can leverage a PII masking model to ensure privacy and confidentiality when sharing datasets for collaboration, analysis, or publication purposes, reducing the risk of data breaches or identity theft.\n\nContent Generation: Content generation systems, such as article generators or language models, can benefit from PII masking to automatically mask or generate fictional PII when creating sample texts or examples, safeguarding the privacy of individuals.\n\n(...and whatever else your creative mind can think of)", "# Support and Maintenance\n\nAI4Privacy is a project affiliated with AISuisse SA." ]
[ "TAGS\n#task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-English #language-French #language-German #language-Italian #legal #business #psychology #privacy #doi-10.57967/hf/1532 #region-us \n", "# Purpose and Features\n\n\nWorld's largest open source privacy dataset. \n\nThe purpose of the dataset is to train models to remove personally identifiable information (PII) from text, especially in the context of AI assistants and LLMs. \n\n\nThe example texts have 54 PII classes (types of sensitive data), targeting 229 discussion subjects / use cases split across business, education, psychology and legal fields, and 5 interactions styles (e.g. casual conversation, formal document, emails etc...).\n\nKey facts:\n\n- Size: 13.6m text tokens in ~209k examples with 649k PII tokens (see URL)\n- 4 languages, more to come!\n - English\n - French\n - German\n - Italian\n- Synthetic data generated using proprietary algorithms\n - No privacy violations!\n- Human-in-the-loop validated high quality dataset", "# Getting started\n\n\nOption 1: Python", "# Token distribution across PII classes\n\nWe have taken steps to balance the token distribution across PII classes covered by the dataset.\nThis graph shows the distribution of observations across the different PII classes in this release:\n\n!Token distribution across PII classes\n\nThere is 1 class that is still overrepresented in the dataset: firstname.\nWe will further improve the balance with future dataset releases.\nThis is the token distribution excluding the FIRSTNAME class:\n\n!Token distribution across PII classes excluding 'FIRSTNAME'", "# Compatible Machine Learning Tasks:\n- Tokenclassification. Check out a HuggingFace's guide on token classification.\n - ALBERT, BERT, BigBird, BioGpt, BLOOM, BROS, CamemBERT, CANINE, ConvBERT, Data2VecText, DeBERTa, DeBERTa-v2, DistilBERT, ELECTRA, ERNIE, ErnieM, ESM, Falcon, FlauBERT, FNet, Funnel Transformer, GPT-Sw3, OpenAI GPT-2, GPTBigCode, GPT Neo, GPT NeoX, I-BERT, LayoutLM, LayoutLMv2, LayoutLMv3, LiLT, Longformer, LUKE, MarkupLM, MEGA, Megatron-BERT, MobileBERT, MPNet, MPT, MRA, Nezha, Nyströmformer, QDQBert, RemBERT, RoBERTa, RoBERTa-PreLayerNorm, RoCBert, RoFormer, SqueezeBERT, XLM, XLM-RoBERTa, XLM-RoBERTa-XL, XLNet, X-MOD, YOSO\n- Text Generation: Mapping the unmasked_text to to the masked_text or privacy_mask attributes. Check out HuggingFace's guide to fine-tunning\n - T5 Family, Llama2", "# Information regarding the rows:\n- Each row represents a json object with a natural language text that includes placeholders for PII (and could plausibly be written by a human to an AI assistant).\n- Sample row:\n - \"masked_text\" contains a PII free natural text\n - \"Product officially launching in [COUNTY_1]. Estimate profit of [CURRENCYSYMBOL_1][AMOUNT_1]. Expenses by [ACCOUNTNAME_1].\",\n - \"unmasked_text\" shows a natural sentence containing PII\n - \"Product officially launching in Washington County. Estimate profit of $488293.16. 
Expenses by Checking Account.\"\n - \"privacy_mask\" indicates the mapping between the privacy token instances and the string within the natural text.*\n - \"{'[COUNTY_1]': 'Washington County', '[CURRENCYSYMBOL_1]': '$', '[AMOUNT_1]': '488293.16', '[ACCOUNTNAME_1]': 'Checking Account'}\"\n - \"span_labels\" is an array of arrays formatted in the following way [start, end, pii token instance].*\n - \"[[0, 32, 'O'], [32, 49, 'COUNTY_1'], [49, 70, 'O'], [70, 71, 'CURRENCYSYMBOL_1'], [71, 80, 'AMOUNT_1'], [80, 94, 'O'], [94, 110, 'ACCOUNTNAME_1'], [110, 111, 'O']]\",\n - \"bio_labels\" follows the common place notation for \"beginning\", \"inside\" and \"outside\" of where each private tokens starts.original paper\n -[\"O\", \"O\", \"O\", \"O\", \"B-COUNTY\", \"I-COUNTY\", \"O\", \"O\", \"O\", \"O\", \"B-CURRENCYSYMBOL\", \"O\", \"O\", \"I-AMOUNT\", \"I-AMOUNT\", \"I-AMOUNT\", \"I-AMOUNT\", \"O\", \"O\", \"O\", \"B-ACCOUNTNAME\", \"I-ACCOUNTNAME\", \"O\"],\n - \"tokenised_text\" breaks down the unmasked sentence into tokens using Bert Family tokeniser to help fine-tune large language models.\n - [\"product\", \"officially\", \"launching\", \"in\", \"washington\", \"county\", \".\", \"estimate\", \"profit\", \"of\", \"$\", \"48\", \"##8\", \"##29\", \"##3\", \".\", \"16\", \".\", \"expenses\", \"by\", \"checking\", \"account\", \".\"]\n\n*note for the nested objects, we store them as string to maximise compability between various software.", "# About Us:\n\nAt Ai4Privacy, we are commited to building the global seatbelt of the 21st century for Artificial Intelligence to help fight against potential risks of personal information being integrated into data pipelines.\n\nNewsletter & updates: URL\n- Looking for ML engineers, developers, beta-testers, human in the loop validators (all languages)\n- Integrations with already existing open source solutions\n- Ask us a question on discord: URL", "# Roadmap and Future Development\n\n- Carbon Neutral\n- Benchmarking\n- Better multilingual and especially localisation\n- Extended integrations\n- Continuously increase the training set\n- Further optimisation to the model to reduce size and increase generalisability \n- Next released major update is planned for the 14th of December 2023 (subscribe to newsletter for updates)", "# Use Cases and Applications\n\nChatbots: Incorporating a PII masking model into chatbot systems can ensure the privacy and security of user conversations by automatically redacting sensitive information such as names, addresses, phone numbers, and email addresses.\n\nCustomer Support Systems: When interacting with customers through support tickets or live chats, masking PII can help protect sensitive customer data, enabling support agents to handle inquiries without the risk of exposing personal information.\n\nEmail Filtering: Email providers can utilize a PII masking model to automatically detect and redact PII from incoming and outgoing emails, reducing the chances of accidental disclosure of sensitive information.\n\nData Anonymization: Organizations dealing with large datasets containing PII, such as medical or financial records, can leverage a PII masking model to anonymize the data before sharing it for research, analysis, or collaboration purposes.\n\nSocial Media Platforms: Integrating PII masking capabilities into social media platforms can help users protect their personal information from unauthorized access, ensuring a safer online environment.\n\nContent Moderation: PII masking can assist content moderation systems in 
automatically detecting and blurring or redacting sensitive information in user-generated content, preventing the accidental sharing of personal details.\n\nOnline Forms: Web applications that collect user data through online forms, such as registration forms or surveys, can employ a PII masking model to anonymize or mask the collected information in real-time, enhancing privacy and data protection.\n\nCollaborative Document Editing: Collaboration platforms and document editing tools can use a PII masking model to automatically mask or redact sensitive information when multiple users are working on shared documents.\n\nResearch and Data Sharing: Researchers and institutions can leverage a PII masking model to ensure privacy and confidentiality when sharing datasets for collaboration, analysis, or publication purposes, reducing the risk of data breaches or identity theft.\n\nContent Generation: Content generation systems, such as article generators or language models, can benefit from PII masking to automatically mask or generate fictional PII when creating sample texts or examples, safeguarding the privacy of individuals.\n\n(...and whatever else your creative mind can think of)", "# Support and Maintenance\n\nAI4Privacy is a project affiliated with AISuisse SA." ]
[ 193, 194, 6, 117, 321, 680, 99, 76, 495, 23 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-English #language-French #language-German #language-Italian #legal #business #psychology #privacy #doi-10.57967/hf/1532 #region-us \n# Purpose and Features\n\n\nWorld's largest open source privacy dataset. \n\nThe purpose of the dataset is to train models to remove personally identifiable information (PII) from text, especially in the context of AI assistants and LLMs. \n\n\nThe example texts have 54 PII classes (types of sensitive data), targeting 229 discussion subjects / use cases split across business, education, psychology and legal fields, and 5 interactions styles (e.g. casual conversation, formal document, emails etc...).\n\nKey facts:\n\n- Size: 13.6m text tokens in ~209k examples with 649k PII tokens (see URL)\n- 4 languages, more to come!\n - English\n - French\n - German\n - Italian\n- Synthetic data generated using proprietary algorithms\n - No privacy violations!\n- Human-in-the-loop validated high quality dataset# Getting started\n\n\nOption 1: Python", "passage: # Token distribution across PII classes\n\nWe have taken steps to balance the token distribution across PII classes covered by the dataset.\nThis graph shows the distribution of observations across the different PII classes in this release:\n\n!Token distribution across PII classes\n\nThere is 1 class that is still overrepresented in the dataset: firstname.\nWe will further improve the balance with future dataset releases.\nThis is the token distribution excluding the FIRSTNAME class:\n\n!Token distribution across PII classes excluding 'FIRSTNAME'# Compatible Machine Learning Tasks:\n- Tokenclassification. Check out a HuggingFace's guide on token classification.\n - ALBERT, BERT, BigBird, BioGpt, BLOOM, BROS, CamemBERT, CANINE, ConvBERT, Data2VecText, DeBERTa, DeBERTa-v2, DistilBERT, ELECTRA, ERNIE, ErnieM, ESM, Falcon, FlauBERT, FNet, Funnel Transformer, GPT-Sw3, OpenAI GPT-2, GPTBigCode, GPT Neo, GPT NeoX, I-BERT, LayoutLM, LayoutLMv2, LayoutLMv3, LiLT, Longformer, LUKE, MarkupLM, MEGA, Megatron-BERT, MobileBERT, MPNet, MPT, MRA, Nezha, Nyströmformer, QDQBert, RemBERT, RoBERTa, RoBERTa-PreLayerNorm, RoCBert, RoFormer, SqueezeBERT, XLM, XLM-RoBERTa, XLM-RoBERTa-XL, XLNet, X-MOD, YOSO\n- Text Generation: Mapping the unmasked_text to to the masked_text or privacy_mask attributes. Check out HuggingFace's guide to fine-tunning\n - T5 Family, Llama2", "passage: # Information regarding the rows:\n- Each row represents a json object with a natural language text that includes placeholders for PII (and could plausibly be written by a human to an AI assistant).\n- Sample row:\n - \"masked_text\" contains a PII free natural text\n - \"Product officially launching in [COUNTY_1]. Estimate profit of [CURRENCYSYMBOL_1][AMOUNT_1]. Expenses by [ACCOUNTNAME_1].\",\n - \"unmasked_text\" shows a natural sentence containing PII\n - \"Product officially launching in Washington County. Estimate profit of $488293.16. 
Expenses by Checking Account.\"\n - \"privacy_mask\" indicates the mapping between the privacy token instances and the string within the natural text.*\n - \"{'[COUNTY_1]': 'Washington County', '[CURRENCYSYMBOL_1]': '$', '[AMOUNT_1]': '488293.16', '[ACCOUNTNAME_1]': 'Checking Account'}\"\n - \"span_labels\" is an array of arrays formatted in the following way [start, end, pii token instance].*\n - \"[[0, 32, 'O'], [32, 49, 'COUNTY_1'], [49, 70, 'O'], [70, 71, 'CURRENCYSYMBOL_1'], [71, 80, 'AMOUNT_1'], [80, 94, 'O'], [94, 110, 'ACCOUNTNAME_1'], [110, 111, 'O']]\",\n - \"bio_labels\" follows the common place notation for \"beginning\", \"inside\" and \"outside\" of where each private tokens starts.original paper\n -[\"O\", \"O\", \"O\", \"O\", \"B-COUNTY\", \"I-COUNTY\", \"O\", \"O\", \"O\", \"O\", \"B-CURRENCYSYMBOL\", \"O\", \"O\", \"I-AMOUNT\", \"I-AMOUNT\", \"I-AMOUNT\", \"I-AMOUNT\", \"O\", \"O\", \"O\", \"B-ACCOUNTNAME\", \"I-ACCOUNTNAME\", \"O\"],\n - \"tokenised_text\" breaks down the unmasked sentence into tokens using Bert Family tokeniser to help fine-tune large language models.\n - [\"product\", \"officially\", \"launching\", \"in\", \"washington\", \"county\", \".\", \"estimate\", \"profit\", \"of\", \"$\", \"48\", \"##8\", \"##29\", \"##3\", \".\", \"16\", \".\", \"expenses\", \"by\", \"checking\", \"account\", \".\"]\n\n*note for the nested objects, we store them as string to maximise compability between various software.# About Us:\n\nAt Ai4Privacy, we are commited to building the global seatbelt of the 21st century for Artificial Intelligence to help fight against potential risks of personal information being integrated into data pipelines.\n\nNewsletter & updates: URL\n- Looking for ML engineers, developers, beta-testers, human in the loop validators (all languages)\n- Integrations with already existing open source solutions\n- Ask us a question on discord: URL# Roadmap and Future Development\n\n- Carbon Neutral\n- Benchmarking\n- Better multilingual and especially localisation\n- Extended integrations\n- Continuously increase the training set\n- Further optimisation to the model to reduce size and increase generalisability \n- Next released major update is planned for the 14th of December 2023 (subscribe to newsletter for updates)" ]
f759b25961ba634703c440a0e6bf76bd39429e16
# Dataset Card for "mywitch" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
beyonddata/mywitch
[ "region:us" ]
2023-11-06T03:41:37+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 498141.0, "num_examples": 11}], "download_size": 499720, "dataset_size": 498141.0}}
2023-11-06T03:41:42+00:00
[]
[]
TAGS #region-us
# Dataset Card for "mywitch" More Information needed
[ "# Dataset Card for \"mywitch\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"mywitch\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"mywitch\"\n\nMore Information needed" ]
e841588c7a8c7aa4f37f08403464fdb70a281222
HSSD: Habitat Synthetic Scenes Dataset ================================== The [Habitat Synthetic Scenes Dataset (HSSD)](https://3dlg-hcvc.github.io/hssd/) is a human-authored 3D scene dataset that more closely mirrors real scenes than prior datasets. Our dataset represents real interiors and contains a diverse set of 211 scenes and more than 18000 models of real-world objects. <img src="https://i.imgur.com/XEkLxNs.png" width=50%> This repository provides a Habitat consumption-ready compressed version of HSSD. See [this repository](https://huggingface.co/datasets/hssd/hssd-models) for corresponding uncompressed assets. ## Dataset Structure ``` ├── objects │ ├── */*.glb │ ├── */*.collider.glb │ ├── */*.filteredSupportSurface(.ply|.glb) │ ├── */*.object_config.json ├── stages │ ├── *.glb │ ├── *.stage_config.json ├── scenes │ ├── *.scene_instance.json ├── scenes_uncluttered │ ├── *.scene_instance.json ├── scene_filter_files │ ├── *.rec_filter.json └── hssd-hab.scene_dataset_config.json └── hssd-hab-uncluttered.scene_dataset_config.json ``` - `hssd-hab.scene_dataset_config.json`: This SceneDataset config file aggregates the assets and metadata necessary to fully describe the set of stages, objects, and scenes constituting the dataset. - `objects`: 3D models representing distinct objects that are used to compose scenes. Contains configuration files, render assets, collider assets, and Receptacle mesh assets. - `stages`: A stage in Habitat is the set of static mesh components which make up the backdrop of a scene (e.g. floor, walls, stairs, etc.). - `scenes`: A scene is a single 3D world composed of a static stage and a variable number of objects. ### Rearrange-ready assets: Supporting Habitat 3.0 embodied rearrangement tasks with updated colliders, adjusted and de-cluttered scene contents, receptacle meshes, and receptacle filter files. See [aihabitat.org/habitat3/](aihabitat.org/habitat3/) for more details. - `hssd-hab-uncluttered.scene_dataset_config.json`: This SceneDataset config file aggregates and adds the adjusted and uncluttered scenes for rearrangement tasks. - `scenes_uncluttered`: Contains the adjusted scene instance configuration files. - `scene_filter_files`: A scene filter file organizes available Receptacle instances in a scene into active and inactive groups based on simulation heuristics and manual edits. It is consumed by the RearrangeEpisodeGenerator to construct valid RearrangeEpisodeDatasets. ## Getting Started To load HSSD scenes into the Habitat simulator, you can start by installing [habitat-sim](https://github.com/facebookresearch/habitat-sim) using instructions specified [here](https://github.com/facebookresearch/habitat-sim#installation). Once installed, you can run the interactive Habitat viewer to load a scene: ``` habitat-viewer --dataset /path/to/hssd-hab/hssd-hab.scene_dataset_config.json -- 102344280 # or ./build/viewer if compiling from source ``` You can find more information about using the interactive viewer [here](https://github.com/facebookresearch/habitat-sim#testing:~:text=path/to/data/-,Interactive%20testing,-%3A%20Use%20the%20interactive). Habitat-Sim is typically used with [Habitat-Lab](https://github.com/facebookresearch/habitat-lab), a modular high-level library for end-to-end experiments in embodied AI. To define embodied AI tasks (e.g.
navigation, instruction following, question answering), train agents, and benchmark their performance using standard metrics, you can download habitat-lab using the instructions provided [here](https://github.com/facebookresearch/habitat-lab#installation). ## Changelog - `v0.2.5` (work in progress): **Rearrange-ready HSSD** - Note: this is a checkpoint. Known issues exist and continued polish is ongoing. - Adds Receptacle meshes describing support surfaces for small objects (e.g. table or shelf surfaces). - Adds collider meshes (.collider.glb) for assets with Receptacle meshes to support simulation. - Adds new scenes 'scenes_uncluttered' and new SceneDataset 'hssd-hab-uncluttered' containing adjusted and de-cluttered versions of the scenes for use in embodied rearrangement tasks. - Adds 'scene_filter_files' which sort Receptacles in each scene into active and inactive groups for RearrangeEpisode generation. - `v0.2.4`: - Recompresses several object GLBs to preserve PBR material status. - Adds CSV with object metadata and semantic lexicon files for Habitat. - Adds train/val scene splits file. - `v0.2.3`: First release.
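Beyond the interactive viewer command above, HSSD scenes can also be opened programmatically through the habitat-sim Python API. The snippet below is only a rough sketch: the paths are placeholders, the scene id is the one from the viewer example, and attribute names can shift between habitat-sim releases, so check it against the installed version.

```python
import habitat_sim

# Point the simulator at the HSSD SceneDataset and pick one scene instance.
sim_cfg = habitat_sim.SimulatorConfiguration()
sim_cfg.scene_dataset_config_file = "/path/to/hssd-hab/hssd-hab.scene_dataset_config.json"
sim_cfg.scene_id = "102344280"  # same scene as the habitat-viewer example above

# Attach one RGB camera so the simulator returns something to look at.
rgb_spec = habitat_sim.CameraSensorSpec()
rgb_spec.uuid = "color"
rgb_spec.sensor_type = habitat_sim.SensorType.COLOR
rgb_spec.resolution = [480, 640]

agent_cfg = habitat_sim.agent.AgentConfiguration()
agent_cfg.sensor_specifications = [rgb_spec]

sim = habitat_sim.Simulator(habitat_sim.Configuration(sim_cfg, [agent_cfg]))
obs = sim.get_sensor_observations()
print(obs["color"].shape)  # RGBA frame rendered from the loaded HSSD scene
sim.close()
```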
byeonghwikim/hssd-hab
[ "language:en", "license:cc-by-nc-4.0", "3D scenes", "Embodied AI", "region:us" ]
2023-11-06T03:44:55+00:00
{"language": ["en"], "license": "cc-by-nc-4.0", "pretty_name": "HSSD", "tags": ["3D scenes", "Embodied AI"], "extra_gated_heading": "Acknowledge license to accept the repository", "extra_gated_prompt": "You agree to use this dataset under the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/) terms", "viewer": false}
2023-11-06T03:44:56+00:00
[]
[ "en" ]
TAGS #language-English #license-cc-by-nc-4.0 #3D scenes #Embodied AI #region-us
HSSD: Habitat Synthetic Scenes Dataset ================================== The Habitat Synthetic Scenes Dataset (HSSD) is a human-authored 3D scene dataset that more closely mirrors real scenes than prior datasets. Our dataset represents real interiors and contains a diverse set of 211 scenes and more than 18000 models of real-world objects. <img src="https://i.URL width=50%> This repository provides a Habitat consumption-ready compressed version of HSSD. See this repository for corresponding uncompressed assets. ## Dataset Structure - 'hssd-hab.scene_dataset_config.json': This SceneDataset config file aggregates the assets and metadata necessary to fully describe the set of stages, objects, and scenes constituting the dataset. - 'objects': 3D models representing distinct objects that are used to compose scenes. Contains configuration files, render assets, collider assets, and Receptacle mesh assets. - 'stages': A stage in Habitat is the set of static mesh components which make up the backdrop of a scene (e.g. floor, walls, stairs, etc.). - 'scenes': A scene is a single 3D world composed of a static stage and a variable number of objects. ### Rearrange-ready assets: Supporting Habitat 3.0 embodied rearrangement tasks with updated colliders, adjusted and de-cluttered scene contents, receptacle meshes, and receptacle filter files. See URL for more details. - 'hssd-hab-uncluttered.scene_dataset_config.json': This SceneDataset config file aggregates adds the adjusted and uncluttered scenes for rearrangement tasks. - 'scenes_uncluttered': Contains the adjusted scene instance configuration files. - 'scene_filter_files': A scene filter file organizes available Receptacle instances in a scene into active and inactive groups based on simualtion heuristics and manual edits. It is consumed by the RearrangeEpisodeGenerator to construct valid RearrangeEpisodeDatasets. ## Getting Started To load HSSD scenes into the Habitat simulator, you can start by installing habitat-sim using instructions specified here. Once installed, you can run the interactive Habitat viewer to load a scene: You can find more information about using the interactive viewer here. Habitat-Sim is typically used with Habitat-Lab, a modular high-level library for end-to-end experiments in embodied AI. To define embodied AI tasks (e.g. navigation, instruction following, question answering), train agents, and benchmark their performance using standard metrics, you can download habitat-lab using the instructions provided here. ## Changelog - 'v0.2.5' (work in progress): Rearrange-ready HSSD - Note: this is a checkpoint. Known issues exist and continued polish is ongoing. - Adds Receptacle meshes describing support surfaces for small objects (e.g. table or shelf surfaces). - Adds collider meshes (.URL) for assets with Receptacle meshes to support simulation. - Adds new scenes 'scenes_uncluttered' and new SceneDataset 'hssd-hab-uncluttered' containing adjusted and de-cluttered versions of the scenes for use in embodied rearrangement tasks. - Adds 'scene_filter_files' which sort Receptacles in each scene into active and inactive groups for RearrangeEpisode generation. - 'v0.2.4': - Recompresses several object GLBs to preserve PBR material status. - Adds CSV with object metadata and semantic lexicon files for Habitat. - Adds train/val scene splits file. - 'v0.2.3': First release.
[ "## Dataset Structure\n\n\n\n- 'hssd-hab.scene_dataset_config.json': This SceneDataset config file aggregates the assets and metadata necessary to fully describe the set of stages, objects, and scenes constituting the dataset.\n- 'objects': 3D models representing distinct objects that are used to compose scenes. Contains configuration files, render assets, collider assets, and Receptacle mesh assets.\n- 'stages': A stage in Habitat is the set of static mesh components which make up the backdrop of a scene (e.g. floor, walls, stairs, etc.).\n- 'scenes': A scene is a single 3D world composed of a static stage and a variable number of objects.", "### Rearrange-ready assets:\nSupporting Habitat 3.0 embodied rearrangement tasks with updated colliders, adjusted and de-cluttered scene contents, receptacle meshes, and receptacle filter files. See URL for more details.\n- 'hssd-hab-uncluttered.scene_dataset_config.json': This SceneDataset config file aggregates adds the adjusted and uncluttered scenes for rearrangement tasks.\n- 'scenes_uncluttered': Contains the adjusted scene instance configuration files.\n- 'scene_filter_files': A scene filter file organizes available Receptacle instances in a scene into active and inactive groups based on simualtion heuristics and manual edits. It is consumed by the RearrangeEpisodeGenerator to construct valid RearrangeEpisodeDatasets.", "## Getting Started\n\nTo load HSSD scenes into the Habitat simulator, you can start by installing habitat-sim using instructions specified here.\n\nOnce installed, you can run the interactive Habitat viewer to load a scene:\n\n\n\nYou can find more information about using the interactive viewer here.\n\nHabitat-Sim is typically used with Habitat-Lab, a modular high-level library for end-to-end experiments in embodied AI.\nTo define embodied AI tasks (e.g. navigation, instruction following, question answering), train agents, and benchmark their performance using standard metrics, you can download habitat-lab using the instructions provided here.", "## Changelog\n - 'v0.2.5' (work in progress): Rearrange-ready HSSD\n - Note: this is a checkpoint. Known issues exist and continued polish is ongoing.\n - Adds Receptacle meshes describing support surfaces for small objects (e.g. table or shelf surfaces).\n - Adds collider meshes (.URL) for assets with Receptacle meshes to support simulation.\n - Adds new scenes 'scenes_uncluttered' and new SceneDataset 'hssd-hab-uncluttered' containing adjusted and de-cluttered versions of the scenes for use in embodied rearrangement tasks.\n - Adds 'scene_filter_files' which sort Receptacles in each scene into active and inactive groups for RearrangeEpisode generation.\n - 'v0.2.4': \n - Recompresses several object GLBs to preserve PBR material status. \n - Adds CSV with object metadata and semantic lexicon files for Habitat.\n - Adds train/val scene splits file.\n- 'v0.2.3': First release." ]
[ "TAGS\n#language-English #license-cc-by-nc-4.0 #3D scenes #Embodied AI #region-us \n", "## Dataset Structure\n\n\n\n- 'hssd-hab.scene_dataset_config.json': This SceneDataset config file aggregates the assets and metadata necessary to fully describe the set of stages, objects, and scenes constituting the dataset.\n- 'objects': 3D models representing distinct objects that are used to compose scenes. Contains configuration files, render assets, collider assets, and Receptacle mesh assets.\n- 'stages': A stage in Habitat is the set of static mesh components which make up the backdrop of a scene (e.g. floor, walls, stairs, etc.).\n- 'scenes': A scene is a single 3D world composed of a static stage and a variable number of objects.", "### Rearrange-ready assets:\nSupporting Habitat 3.0 embodied rearrangement tasks with updated colliders, adjusted and de-cluttered scene contents, receptacle meshes, and receptacle filter files. See URL for more details.\n- 'hssd-hab-uncluttered.scene_dataset_config.json': This SceneDataset config file aggregates adds the adjusted and uncluttered scenes for rearrangement tasks.\n- 'scenes_uncluttered': Contains the adjusted scene instance configuration files.\n- 'scene_filter_files': A scene filter file organizes available Receptacle instances in a scene into active and inactive groups based on simualtion heuristics and manual edits. It is consumed by the RearrangeEpisodeGenerator to construct valid RearrangeEpisodeDatasets.", "## Getting Started\n\nTo load HSSD scenes into the Habitat simulator, you can start by installing habitat-sim using instructions specified here.\n\nOnce installed, you can run the interactive Habitat viewer to load a scene:\n\n\n\nYou can find more information about using the interactive viewer here.\n\nHabitat-Sim is typically used with Habitat-Lab, a modular high-level library for end-to-end experiments in embodied AI.\nTo define embodied AI tasks (e.g. navigation, instruction following, question answering), train agents, and benchmark their performance using standard metrics, you can download habitat-lab using the instructions provided here.", "## Changelog\n - 'v0.2.5' (work in progress): Rearrange-ready HSSD\n - Note: this is a checkpoint. Known issues exist and continued polish is ongoing.\n - Adds Receptacle meshes describing support surfaces for small objects (e.g. table or shelf surfaces).\n - Adds collider meshes (.URL) for assets with Receptacle meshes to support simulation.\n - Adds new scenes 'scenes_uncluttered' and new SceneDataset 'hssd-hab-uncluttered' containing adjusted and de-cluttered versions of the scenes for use in embodied rearrangement tasks.\n - Adds 'scene_filter_files' which sort Receptacles in each scene into active and inactive groups for RearrangeEpisode generation.\n - 'v0.2.4': \n - Recompresses several object GLBs to preserve PBR material status. \n - Adds CSV with object metadata and semantic lexicon files for Habitat.\n - Adds train/val scene splits file.\n- 'v0.2.3': First release." ]
[ 31, 181, 206, 145, 260 ]
[ "passage: TAGS\n#language-English #license-cc-by-nc-4.0 #3D scenes #Embodied AI #region-us \n## Dataset Structure\n\n\n\n- 'hssd-hab.scene_dataset_config.json': This SceneDataset config file aggregates the assets and metadata necessary to fully describe the set of stages, objects, and scenes constituting the dataset.\n- 'objects': 3D models representing distinct objects that are used to compose scenes. Contains configuration files, render assets, collider assets, and Receptacle mesh assets.\n- 'stages': A stage in Habitat is the set of static mesh components which make up the backdrop of a scene (e.g. floor, walls, stairs, etc.).\n- 'scenes': A scene is a single 3D world composed of a static stage and a variable number of objects.### Rearrange-ready assets:\nSupporting Habitat 3.0 embodied rearrangement tasks with updated colliders, adjusted and de-cluttered scene contents, receptacle meshes, and receptacle filter files. See URL for more details.\n- 'hssd-hab-uncluttered.scene_dataset_config.json': This SceneDataset config file aggregates adds the adjusted and uncluttered scenes for rearrangement tasks.\n- 'scenes_uncluttered': Contains the adjusted scene instance configuration files.\n- 'scene_filter_files': A scene filter file organizes available Receptacle instances in a scene into active and inactive groups based on simualtion heuristics and manual edits. It is consumed by the RearrangeEpisodeGenerator to construct valid RearrangeEpisodeDatasets." ]
64ad3fa689590d76084b2545fab457d9620c05e8
# Dataset Card for "mywitch1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
beyonddata/mywitch1
[ "region:us" ]
2023-11-06T03:45:10+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 498141.0, "num_examples": 11}], "download_size": 499720, "dataset_size": 498141.0}}
2023-11-06T03:45:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "mywitch1" More Information needed
[ "# Dataset Card for \"mywitch1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"mywitch1\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"mywitch1\"\n\nMore Information needed" ]
0ea385a3ac483f42806e88440edcee04a76a82ac
# Dataset Card for "paul_price" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Chunt0/paul_price
[ "region:us" ]
2023-11-06T03:51:55+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10352711.0, "num_examples": 41}], "download_size": 10344553, "dataset_size": 10352711.0}}
2023-11-06T03:51:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for "paul_price" More Information needed
[ "# Dataset Card for \"paul_price\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"paul_price\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"paul_price\"\n\nMore Information needed" ]
7e55b03008e32b91196c19ec33895d3878eaff1d
# Dataset Card for "movie-posters-genres-80k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
skvarre/movie-posters-genres-80k
[ "region:us" ]
2023-11-06T04:46:36+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "genres", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 4667452017.0, "num_examples": 79352}], "download_size": 4659054924, "dataset_size": 4667452017.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-06T05:01:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "movie-posters-genres-80k" More Information needed
[ "# Dataset Card for \"movie-posters-genres-80k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"movie-posters-genres-80k\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"movie-posters-genres-80k\"\n\nMore Information needed" ]
3eee9da15164933fdc82de879874b9f6f7446574
# Dataset Card for "drugs-composition-indonesian-donut" ## Generate Custom Data Please visit `https://huggingface.co/spaces/jonathanjordan21/donut-labelling` for the interface to generate custom data. The data format is .zip: images and labels are stored in separate .zip files. [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
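Since the card itself is sparse, here is a small loading sketch using the repository id and the `images`/`labels` columns listed in this record's metadata; it assumes the repository is publicly readable and uses the default `train` split.

```python
from datasets import load_dataset

# Repository id taken from this record; adjust if the files are reorganised.
ds = load_dataset("jonathanjordan21/drugs-composition-indonesian-donut", split="train")

example = ds[0]
image = example["images"]   # image feature (returned as a PIL image)
target = example["labels"]  # string target, e.g. for Donut-style document parsing

print(ds)            # 22 training examples according to the metadata below
print(target[:200])  # peek at the label text
```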
jonathanjordan21/drugs-composition-indonesian-donut
[ "region:us" ]
2023-11-06T04:54:01+00:00
{"dataset_info": {"features": [{"name": "images", "dtype": "image"}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13650178.0, "num_examples": 22}], "download_size": 13642464, "dataset_size": 13650178.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-06T10:18:13+00:00
[]
[]
TAGS #region-us
# Dataset Card for "drugs-composition-indonesian-donut" ## Generate Custom Data Please visit 'URL for the interface to generate custom data. The data format is (.zip). Images and Labels are stored in separated .zip files. [More Information needed](URL
[ "# Dataset Card for \"drugs-composition-indonesian-donut\"", "## Generate Custom Data\nPlease visit 'URL for the interface to generate custom data.\nThe data format is (.zip). Images and Labels are stored in separated .zip files.\n\n[More Information needed](URL" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"drugs-composition-indonesian-donut\"", "## Generate Custom Data\nPlease visit 'URL for the interface to generate custom data.\nThe data format is (.zip). Images and Labels are stored in separated .zip files.\n\n[More Information needed](URL" ]
[ 6, 19, 46 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"drugs-composition-indonesian-donut\"## Generate Custom Data\nPlease visit 'URL for the interface to generate custom data.\nThe data format is (.zip). Images and Labels are stored in separated .zip files.\n\n[More Information needed](URL" ]
b7a972ce7efbe8332bd81c6801a406680dd1d835
# Dataset Card for "wikilingua_data-xlsumm_cstnews_1024_results" rouge={'rouge1': 0.22743786739182595, 'rouge2': 0.05472373000167142, 'rougeL': 0.1476443201167342, 'rougeLsum': 0.1476443201167342} Bert={'precision': 0.6776001552278398, 'recall': 0.7083012139841011, 'f1': 0.691628398211982} mover = 0.5865215803814507
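The card reports corpus-level ROUGE, BERTScore, and MoverScore aggregates. As an illustration only (not necessarily the exact pipeline behind these numbers), the first two are commonly computed per example with the `evaluate` library; MoverScore typically comes from the separate `moverscore` package. Portuguese (`lang="pt"`) is assumed here from the wikilingua/cstnews naming.

```python
import evaluate

predictions = ["resumo gerado pelo modelo"]    # e.g. the gen_summary column
references = ["resumo de referência humano"]   # e.g. the summary column

rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=references))
# -> {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}

# BERTScore needs the bert-score package; it returns per-example precision/recall/f1
# lists plus a hashcode, matching the fields stored in this dataset.
bertscore = evaluate.load("bertscore")
print(bertscore.compute(predictions=predictions, references=references, lang="pt"))
```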
arthurmluz/wikilingua_data-xlsum_cstnews_1024_results
[ "region:us" ]
2023-11-06T05:09:04+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 23822378, "num_examples": 8165}], "download_size": 14156723, "dataset_size": 23822378}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]}
2023-11-13T19:46:18+00:00
[]
[]
TAGS #region-us
# Dataset Card for "wikilingua_data-xlsumm_cstnews_1024_results" rouge={'rouge1': 0.22743786739182595, 'rouge2': 0.05472373000167142, 'rougeL': 0.1476443201167342, 'rougeLsum': 0.1476443201167342} Bert={'precision': 0.6776001552278398, 'recall': 0.7083012139841011, 'f1': 0.691628398211982} mover = 0.5865215803814507
[ "# Dataset Card for \"wikilingua_data-xlsumm_cstnews_1024_results\"\n\nrouge={'rouge1': 0.22743786739182595, 'rouge2': 0.05472373000167142, 'rougeL': 0.1476443201167342, 'rougeLsum': 0.1476443201167342}\n\nBert={'precision': 0.6776001552278398, 'recall': 0.7083012139841011, 'f1': 0.691628398211982}\n\nmover = 0.5865215803814507" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"wikilingua_data-xlsumm_cstnews_1024_results\"\n\nrouge={'rouge1': 0.22743786739182595, 'rouge2': 0.05472373000167142, 'rougeL': 0.1476443201167342, 'rougeLsum': 0.1476443201167342}\n\nBert={'precision': 0.6776001552278398, 'recall': 0.7083012139841011, 'f1': 0.691628398211982}\n\nmover = 0.5865215803814507" ]
[ 6, 140 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"wikilingua_data-xlsumm_cstnews_1024_results\"\n\nrouge={'rouge1': 0.22743786739182595, 'rouge2': 0.05472373000167142, 'rougeL': 0.1476443201167342, 'rougeLsum': 0.1476443201167342}\n\nBert={'precision': 0.6776001552278398, 'recall': 0.7083012139841011, 'f1': 0.691628398211982}\n\nmover = 0.5865215803814507" ]
495fb5188b520f966583fce9f3f3b7c04c45f198
# Dataset Card for "FB-email-market" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dpaul93/FB-email-market
[ "region:us" ]
2023-11-06T05:19:45+00:00
{"dataset_info": {"features": [{"name": "product", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "marketing_email", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20754, "num_examples": 10}], "download_size": 27429, "dataset_size": 20754}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-06T05:22:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for "FB-email-market" More Information needed
[ "# Dataset Card for \"FB-email-market\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"FB-email-market\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"FB-email-market\"\n\nMore Information needed" ]
251f632ff0a08a91b0a58828a3a7d4ccfd10d003
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
BoburAmirov/example
[ "task_categories:automatic-speech-recognition", "language:uz", "region:us" ]
2023-11-06T05:20:46+00:00
{"language": ["uz"], "task_categories": ["automatic-speech-recognition"]}
2023-11-06T06:24:19+00:00
[]
[ "uz" ]
TAGS #task_categories-automatic-speech-recognition #language-Uzbek #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @github-username for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
[ "TAGS\n#task_categories-automatic-speech-recognition #language-Uzbek #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
[ 27, 10, 125, 24, 6, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 19 ]
[ "passage: TAGS\n#task_categories-automatic-speech-recognition #language-Uzbek #region-us \n# Dataset Card for [Dataset Name]## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @github-username for adding this dataset." ]
797974464d16d7f22671a20a43dd7651cabdea14
# Dataset Card for "hh-rlhf" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
JJhooww/hh-rlhf
[ "region:us" ]
2023-11-06T05:39:22+00:00
{"dataset_info": {"features": [{"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 331387735, "num_examples": 160798}], "download_size": 185570231, "dataset_size": 331387735}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-06T05:50:51+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hh-rlhf" More Information needed
[ "# Dataset Card for \"hh-rlhf\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hh-rlhf\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"hh-rlhf\"\n\nMore Information needed" ]
d8490fb7e9665334a053f478da2a6191606478f4
**Question – Answer Dataset**

The dataset contains 400 queries from two domains: Current Affairs and Creative Writing. It serves as a versatile resource for Natural Language Processing (NLP) tasks, including text classification, information retrieval, and model training.

**Data attributes:**

1. Query: The user-generated question. Data type: string.
2. Answer: The response provided by a team of writers and editors in markdown format, containing information related to the query.
3. Citations: Up to 4 credible sources referenced by the writers to support and validate the information in the answers.

**Use cases:**

1. Fine-tune ML models such as BERT, GPT-2, or RoBERTa for question-answering tasks.
2. Train a custom LLM from scratch for question-answering tasks.
3. Evaluate models for performance and accuracy.
4. Develop models for open-domain question-answering.
5. Create question-answering chatbots and virtual assistants.
6. Build models for answering questions about documents.

The "Question – Answer Dataset" is a valuable resource for a wide range of NLP tasks and applications, from enhancing LLMs to developing chatbots and assisting with document question-answering.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6548832700ca373b7d5ed342/31FrgX_fnYx4UhynNhxDB.png)
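A minimal sketch of how one might load and inspect this dataset with the Hugging Face `datasets` library. The split name and the column names (`Query`, `Answer`, `Citations`) are assumptions inferred from the attribute list above, not guarantees from the card; the repository's actual file layout may require different arguments.

```python
from datasets import load_dataset

# Assumption: the repo's data files are in a format `datasets` can auto-detect
# (CSV/JSON/Parquet) and expose the columns described in the card above.
ds = load_dataset("Softage-AI/LLM_Fine-tuning_Conversational_datasample1", split="train")

print(ds.column_names)            # expected (assumed): ["Query", "Answer", "Citations"]
example = ds[0]
print(example["Query"])           # a user question from Current Affairs or Creative Writing
print(example["Answer"][:200])    # markdown-formatted answer written by the editorial team
print(example["Citations"])       # up to 4 supporting sources
```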
Softage-AI/LLM_Fine-tuning_Conversational_datasample1
[ "license:mit", "region:us" ]
2023-11-06T05:45:31+00:00
{"license": "mit"}
2023-11-14T04:55:04+00:00
[]
[]
TAGS #license-mit #region-us
Question – Answer Dataset

The dataset contains 400 queries from two domains: Current Affairs and Creative Writing. It serves as a versatile resource for Natural Language Processing (NLP) tasks, including text classification, information retrieval, and model training.

Data attributes:

1. Query: The user-generated question. Data type: string.
2. Answer: The response provided by a team of writers and editors in markdown format, containing information related to the query.
3. Citations: Up to 4 credible sources referenced by the writers to support and validate the information in the answers.

Use cases:

1. Fine-tune ML models such as BERT, GPT-2, or RoBERTa for question-answering tasks.
2. Train a custom LLM from scratch for question-answering tasks.
3. Evaluate models for performance and accuracy.
4. Develop models for open-domain question-answering.
5. Create question-answering chatbots and virtual assistants.
6. Build models for answering questions about documents.

The "Question – Answer Dataset" is a valuable resource for a wide range of NLP tasks and applications, from enhancing LLMs to developing chatbots and assisting with document question-answering.

!image/png
[]
[ "TAGS\n#license-mit #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#license-mit #region-us \n" ]
93deca97ad566a56adc8a7fab9c453276aa25973
# Dataset Card for "movie_posters-genres-80k-torchvision-transforms" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
skvarre/movie_posters-genres-80k-torchvision-transforms
[ "region:us" ]
2023-11-06T05:46:39+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "genres", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 23423754096, "num_examples": 79352}], "download_size": 22029501853, "dataset_size": 23423754096}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-06T06:12:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for "movie_posters-genres-80k-torchvision-transforms" More Information needed
[ "# Dataset Card for \"movie_posters-genres-80k-torchvision-transforms\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"movie_posters-genres-80k-torchvision-transforms\"\n\nMore Information needed" ]
[ 6, 26 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"movie_posters-genres-80k-torchvision-transforms\"\n\nMore Information needed" ]
4eedbbfdd3b4ec8bea35df016e97b2c135fcac2d
# Dataset Card for "must-c-en-fr-wait03_22.21" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
maxolotl/must-c-en-fr-wait03_22.21
[ "region:us" ]
2023-11-06T06:21:51+00:00
{"dataset_info": {"features": [{"name": "current_source", "dtype": "string"}, {"name": "current_target", "dtype": "string"}, {"name": "target_token", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1071696759, "num_examples": 5530635}, {"name": "test", "num_bytes": 11897959, "num_examples": 64317}, {"name": "validation", "num_bytes": 5584999, "num_examples": 29172}], "download_size": 189892905, "dataset_size": 1089179717}}
2023-11-06T06:22:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for "must-c-en-fr-wait03_22.21" More Information needed
[ "# Dataset Card for \"must-c-en-fr-wait03_22.21\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"must-c-en-fr-wait03_22.21\"\n\nMore Information needed" ]
[ 6, 26 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-fr-wait03_22.21\"\n\nMore Information needed" ]
6a4450d57ac8509e5342552220e7b678209d8eb8
# Dataset Card for "must-c-en-fr-wait05_22.22" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
maxolotl/must-c-en-fr-wait05_22.22
[ "region:us" ]
2023-11-06T06:22:41+00:00
{"dataset_info": {"features": [{"name": "current_source", "dtype": "string"}, {"name": "current_target", "dtype": "string"}, {"name": "target_token", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1117394934, "num_examples": 5530635}, {"name": "test", "num_bytes": 12413160, "num_examples": 64317}, {"name": "validation", "num_bytes": 5823766, "num_examples": 29172}], "download_size": 186632709, "dataset_size": 1135631860}}
2023-11-06T06:23:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "must-c-en-fr-wait05_22.22" More Information needed
[ "# Dataset Card for \"must-c-en-fr-wait05_22.22\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"must-c-en-fr-wait05_22.22\"\n\nMore Information needed" ]
[ 6, 26 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-fr-wait05_22.22\"\n\nMore Information needed" ]
3b4e0de59944a2458e84663749a34de5c32a8ac7
# Dataset Card for "must-c-en-fr-wait07_22.23" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
maxolotl/must-c-en-fr-wait07_22.23
[ "region:us" ]
2023-11-06T06:23:29+00:00
{"dataset_info": {"features": [{"name": "current_source", "dtype": "string"}, {"name": "current_target", "dtype": "string"}, {"name": "target_token", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1157797754, "num_examples": 5530635}, {"name": "test", "num_bytes": 12864307, "num_examples": 64317}, {"name": "validation", "num_bytes": 6034981, "num_examples": 29172}], "download_size": 182901094, "dataset_size": 1176697042}}
2023-11-06T06:23:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "must-c-en-fr-wait07_22.23" More Information needed
[ "# Dataset Card for \"must-c-en-fr-wait07_22.23\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"must-c-en-fr-wait07_22.23\"\n\nMore Information needed" ]
[ 6, 26 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-fr-wait07_22.23\"\n\nMore Information needed" ]
91f1597f80c59f42ac868fb03dec9518c3008c3f
## 🇻🇳 Vietnamese OpenOrca is here 🐋

<img src="https://i.ibb.co/kgmJG96/orca-viet.png" alt="drawing" width="512"/>

Dive into the Vietnamese linguistic landscape with OpenOrca, a cutting-edge dataset crafted through a pioneering partnership between **Virtual Interactive** and **Alignment Lab AI**. Drawing inspiration and methodology from the renowned [Orca paper](https://arxiv.org/abs/2306.02707), we've expanded our horizons to distill knowledge from a more eclectic mix of leading LLMs including GPT-4, PaLM-2, and Claude. Our vision with this dataset is to fuel research and development that will catapult the performance of Vietnamese Language Models into uncharted territories. Join us on this exhilarating journey to redefine AI's linguistic prowess.

The main original source of tasks/questions is a translated version of *FLAN*, **vi-FLAN**. We further augmented **vi-FLAN** on better state-of-the-art LLMs.

## Citation
```
@misc{OpenOrcaViet,
    title = {OpenOrca-Viet: GPT Augmented FLAN Reasoning for Vietnamese},
    author = {Virtual Interactive and Alignment Lab AI},
    year = {2023},
    publisher = {HuggingFace},
    journal = {HuggingFace repository},
    howpublished = {\url{https://huggingface.co/vilm/OpenOrca-Viet}},
}
```
vilm/OpenOrca-Viet
[ "license:apache-2.0", "arxiv:2306.02707", "region:us" ]
2023-11-06T06:29:15+00:00
{"license": "apache-2.0"}
2023-11-06T17:48:37+00:00
[ "2306.02707" ]
[]
TAGS #license-apache-2.0 #arxiv-2306.02707 #region-us
## 🇻🇳 Vietnamese OpenOrca is here <img src="https://i.URL alt="drawing" width="512"/> Dive into the Vietnamese linguistic landscape with OpenOrca, a cutting-edge dataset crafted through a pioneering partnership between Virtual Interactive and Alignment Lab AI. Drawing inspiration and methodology from the renowned Orca paper, we've expanded our horizons to distill knowledge from a more eclectic mix of leading LLMs including GPT-4, PaLM-2, and Claude. Our vision with this dataset is to fuel research and development that will catapult the performance of Vietnamese Language Models into uncharted territories. Join us on this exhilarating journey to redefine AI's linguistic prowess. The main original source of tasks/questions is a translated version of *FLAN*, vi-FLAN. We further augmented vi-FLAN on better state-of-the-art LLMs.
[ "## 🇻🇳 Vietnamese OpenOrca is here \n<img src=\"https://i.URL alt=\"drawing\" width=\"512\"/>\n\nDive into the Vietnamese linguistic landscape with OpenOrca, a cutting-edge dataset crafted through a pioneering partnership between Virtual Interactive and Alignment Lab AI. Drawing inspiration and methodology from the renowned Orca paper, we've expanded our horizons to distill knowledge from a more eclectic mix of leading LLMs including GPT-4, PaLM-2, and Claude. Our vision with this dataset is to fuel research and development that will catapult the performance of Vietnamese Language Models into uncharted territories. Join us on this exhilarating journey to redefine AI's linguistic prowess.\n\nThe main original source of tasks/questions is a translated version of *FLAN*, vi-FLAN. We further augmented vi-FLAN on better state-of-the-art LLMs." ]
[ "TAGS\n#license-apache-2.0 #arxiv-2306.02707 #region-us \n", "## 🇻🇳 Vietnamese OpenOrca is here \n<img src=\"https://i.URL alt=\"drawing\" width=\"512\"/>\n\nDive into the Vietnamese linguistic landscape with OpenOrca, a cutting-edge dataset crafted through a pioneering partnership between Virtual Interactive and Alignment Lab AI. Drawing inspiration and methodology from the renowned Orca paper, we've expanded our horizons to distill knowledge from a more eclectic mix of leading LLMs including GPT-4, PaLM-2, and Claude. Our vision with this dataset is to fuel research and development that will catapult the performance of Vietnamese Language Models into uncharted territories. Join us on this exhilarating journey to redefine AI's linguistic prowess.\n\nThe main original source of tasks/questions is a translated version of *FLAN*, vi-FLAN. We further augmented vi-FLAN on better state-of-the-art LLMs." ]
[ 22, 231 ]
[ "passage: TAGS\n#license-apache-2.0 #arxiv-2306.02707 #region-us \n## 🇻🇳 Vietnamese OpenOrca is here \n<img src=\"https://i.URL alt=\"drawing\" width=\"512\"/>\n\nDive into the Vietnamese linguistic landscape with OpenOrca, a cutting-edge dataset crafted through a pioneering partnership between Virtual Interactive and Alignment Lab AI. Drawing inspiration and methodology from the renowned Orca paper, we've expanded our horizons to distill knowledge from a more eclectic mix of leading LLMs including GPT-4, PaLM-2, and Claude. Our vision with this dataset is to fuel research and development that will catapult the performance of Vietnamese Language Models into uncharted territories. Join us on this exhilarating journey to redefine AI's linguistic prowess.\n\nThe main original source of tasks/questions is a translated version of *FLAN*, vi-FLAN. We further augmented vi-FLAN on better state-of-the-art LLMs." ]
6568b59ea2bb31a379ea32439a8387b763d07000
# Dataset Card for "test_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
RustamovPY/test_dataset
[ "region:us" ]
2023-11-06T06:36:44+00:00
{"dataset_info": {"features": [{"name": "voice", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "speaker", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1257942.0, "num_examples": 3}], "download_size": 1227002, "dataset_size": 1257942.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-06T09:52:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for "test_dataset" More Information needed
[ "# Dataset Card for \"test_dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"test_dataset\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"test_dataset\"\n\nMore Information needed" ]
13ae2e2ae871763083f677c09bcd2b009141621b
# Dataset Card for "must-c-en-fr-wait09_22.40" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
maxolotl/must-c-en-fr-wait09_22.40
[ "region:us" ]
2023-11-06T06:40:52+00:00
{"dataset_info": {"features": [{"name": "current_source", "dtype": "string"}, {"name": "current_target", "dtype": "string"}, {"name": "target_token", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1193107774, "num_examples": 5530635}, {"name": "test", "num_bytes": 13254177, "num_examples": 64317}, {"name": "validation", "num_bytes": 6219976, "num_examples": 29172}], "download_size": 179167875, "dataset_size": 1212581927}}
2023-11-06T06:41:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for "must-c-en-fr-wait09_22.40" More Information needed
[ "# Dataset Card for \"must-c-en-fr-wait09_22.40\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"must-c-en-fr-wait09_22.40\"\n\nMore Information needed" ]
[ 6, 26 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-fr-wait09_22.40\"\n\nMore Information needed" ]
ce918609b4f5b77716de1e6dfd18a53cd1dde94f
# Dataset Card for "must-c-en-fr_22.41" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
maxolotl/must-c-en-fr_22.41
[ "region:us" ]
2023-11-06T06:41:28+00:00
{"dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "fr", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 61411559, "num_examples": 268645}, {"name": "test", "num_bytes": 697604, "num_examples": 3165}, {"name": "validation", "num_bytes": 321473, "num_examples": 1403}], "download_size": 37992225, "dataset_size": 62430636}}
2023-11-06T06:41:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for "must-c-en-fr_22.41" More Information needed
[ "# Dataset Card for \"must-c-en-fr_22.41\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"must-c-en-fr_22.41\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"must-c-en-fr_22.41\"\n\nMore Information needed" ]
f3a61102e2e569351e80ec2b9dca59792e5a0ef1
# Dataset Card for "vqa_v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Phando/vqa_v2
[ "region:us" ]
2023-11-06T06:44:04+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "question_type", "dtype": "string"}, {"name": "multiple_choice_answer", "dtype": "string"}, {"name": "answers", "list": [{"name": "answer", "dtype": "string"}, {"name": "answer_confidence", "dtype": "string"}, {"name": "answer_id", "dtype": "int64"}]}, {"name": "image_id", "dtype": "int64"}, {"name": "answer_type", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 67692137168.704, "num_examples": 443757}, {"name": "validation", "num_bytes": 33693404566.41, "num_examples": 214354}, {"name": "test", "num_bytes": 70169720510.0, "num_examples": 447793}], "download_size": 34818002031, "dataset_size": 171555262245.114}}
2023-12-07T04:17:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "vqa_v2" More Information needed
[ "# Dataset Card for \"vqa_v2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"vqa_v2\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"vqa_v2\"\n\nMore Information needed" ]
60cc1da7d35566efa6d3e33060bb5568a86a4810
# Dataset Card for "mistral-intent-1K" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pankajemplay/mistral-intent-1K
[ "region:us" ]
2023-11-06T06:45:08+00:00
{"dataset_info": {"features": [{"name": "User Query", "dtype": "string"}, {"name": "Intent", "dtype": "string"}, {"name": "id type", "dtype": "string"}, {"name": "id value", "dtype": "string"}, {"name": "id slot filled", "dtype": "bool"}, {"name": "Task", "dtype": "string"}, {"name": "task slot filled", "dtype": "bool"}, {"name": "Bot Response", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 992882, "num_examples": 1308}], "download_size": 218767, "dataset_size": 992882}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-06T06:45:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "mistral-intent-1K" More Information needed
[ "# Dataset Card for \"mistral-intent-1K\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"mistral-intent-1K\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"mistral-intent-1K\"\n\nMore Information needed" ]