sha (stringlengths 40–40) | text (stringlengths 1–13.4M) | id (stringlengths 2–117) | tags (listlengths 1–7.91k) | created_at (stringlengths 25–25) | metadata (stringlengths 2–875k) | last_modified (stringlengths 25–25) | arxiv (listlengths 0–25) | languages (listlengths 0–7.91k) | tags_str (stringlengths 17–159k) | text_str (stringlengths 1–447k) | text_lists (listlengths 0–352) | processed_texts (listlengths 1–353) | tokens_length (listlengths 1–353) | input_texts (listlengths 1–40) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
64dd58348e0f7e1f09684f17addb55dc1a91dee8 | what is your name?My name is ram. Where do you work? I work at amdocs.
| purvansh97/GenAIDemo | [
"region:us"
]
| 2023-11-06T06:45:49+00:00 | {} | 2023-11-06T09:04:49+00:00 | []
| []
| TAGS
#region-us
| what is your name?My name is ram. Where do you work? I work at amdocs.
| []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
ea7dc2d108700214bbf5fd3dedfd825a089c3b36 | # Dataset Card for "test_voices"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | RustamovPY/test_voices | [
"region:us"
]
| 2023-11-06T06:53:21+00:00 | {"dataset_info": {"features": [{"name": "file_name", "dtype": "string"}, {"name": "voice", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "speaker", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 385, "num_examples": 3}], "download_size": 2746, "dataset_size": 385}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T07:23:34+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "test_voices"
More Information needed | [
"# Dataset Card for \"test_voices\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"test_voices\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"test_voices\"\n\nMore Information needed"
]
|
8298d9f7883e4f97dda500d718f8f3bc00907795 | # Dataset Card for "paradetox_editOps_preprocess"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | HamdanXI/paradetox_editOps_preprocess | [
"region:us"
]
| 2023-11-06T07:35:52+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "en_toxic_comment", "dtype": "string"}, {"name": "en_neutral_comment", "dtype": "string"}, {"name": "edit_ops", "sequence": {"sequence": "string"}}, {"name": "masked_comment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5469950, "num_examples": 19744}], "download_size": 0, "dataset_size": 5469950}} | 2023-11-06T07:50:50+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "paradetox_editOps_preprocess"
More Information needed | [
"# Dataset Card for \"paradetox_editOps_preprocess\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"paradetox_editOps_preprocess\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"paradetox_editOps_preprocess\"\n\nMore Information needed"
]
|
570223bc1ca108ec53de3e9e3a9e8dbe83e309ca | # Dataset Info C++ + Natural Description -> Doxygen Documentation
This dataset was created for my bachelor's thesis, which investigates how LLMs can be fine-tuned to generate Doxygen documentation. It was built from the “Source code analysis dataset”
by Gelman, Banjo Obayomi, Jessica Moore and David Slater (doi: 10.1016/j.dib.2019.104712).
The following SQL statement was used to pull the raw data from that dataset:
```
SELECT * FROM all_data
WHERE LENGTH(comment) > 300 AND LENGTH(code) > 100 AND LENGTH(code) < 800
AND code NOT LIKE '%//%' AND code NOT LIKE '%/*%' AND code NOT LIKE '%*/%'
AND filename LIKE '%.cpp%'
LIMIT 12000
```
After selecting the data, Code LLaMa Instruct 34B is tasked with combining the human-written description of the functionality and the function code into a Doxygen comment. Any results that included the sample Doxygen string, or no Doxygen string at all, were filtered from the set.
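As a rough sketch of the filtering step described above (this is not the author's pipeline code; `SAMPLE_DOXYGEN` and the example generations are made-up placeholders), one might drop generations that merely echo the sample Doxygen string or contain no Doxygen comment at all:

```python
# Hypothetical post-generation filter, assuming the generations are plain strings.
SAMPLE_DOXYGEN = "/**\n * @brief ...\n */"  # placeholder for the sample string shown to the model

def looks_like_doxygen(text: str) -> bool:
    # Doxygen blocks typically start with "/**" or "///" and contain @-commands.
    return ("/**" in text or "///" in text) and "@" in text

def keep(generation: str) -> bool:
    # Discard results that echo the sample verbatim or produced no Doxygen block.
    return SAMPLE_DOXYGEN not in generation and looks_like_doxygen(generation)

generations = [
    "/** @brief Adds two integers. */\nint add(int a, int b) { return a + b; }",
    "int add(int a, int b) { return a + b; }",  # no Doxygen comment -> filtered out
]
filtered = [g for g in generations if keep(g)]
print(len(filtered))  # 1
```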
| LukasSonn/DoxygenStrings-Long | [
"license:apache-2.0",
"doi:10.57967/hf/1328",
"region:us"
]
| 2023-11-06T08:00:31+00:00 | {"license": "apache-2.0"} | 2023-11-06T08:33:47+00:00 | []
| []
| TAGS
#license-apache-2.0 #doi-10.57967/hf/1328 #region-us
| # Dataset Info C++ + Natural Description -> Doxygen Documentation
This dataset was created for my bachelor's thesis, which investigates how LLMs can be fine-tuned to generate Doxygen documentation. It was built from the “Source code analysis dataset”
by Gelman, Banjo Obayomi, Jessica Moore and David Slater (doi: 10.1016/j.dib.2019.104712).
The following SQL statement was used to pull the raw data from that dataset:
After selecting the data, Code LLaMa Instruct 34B is tasked with combining the human-written description of the functionality and the function code into a Doxygen comment. Any results that included the sample Doxygen string, or no Doxygen string at all, were filtered from the set.
| [
"# Dataset Info C++ + Natural Description -> Doxygen Documentation\n\nThis dataset was created for my bachelors thesis investigating how LLMs can be fine-tuned to generate doxygen documentation. It was created by using the “Source code analysis dataset” \nby Gelman, Banjo Obayomi, Jessica Moore und David Slater (doi: 10.1016/j.dib.2019.104712).\n\nThe following SQL-Statement was used to grab raw data from the dataset:\n\n\nAfter selecting the Data Code LLaMa Instruct 34B is tasked to combine the human-written description of the functionality with the function code into a Doxygen-Comment. Any results which included the sample doxygen string or no doxygen string at all where filtered from the set."
]
| [
"TAGS\n#license-apache-2.0 #doi-10.57967/hf/1328 #region-us \n",
"# Dataset Info C++ + Natural Description -> Doxygen Documentation\n\nThis dataset was created for my bachelors thesis investigating how LLMs can be fine-tuned to generate doxygen documentation. It was created by using the “Source code analysis dataset” \nby Gelman, Banjo Obayomi, Jessica Moore und David Slater (doi: 10.1016/j.dib.2019.104712).\n\nThe following SQL-Statement was used to grab raw data from the dataset:\n\n\nAfter selecting the Data Code LLaMa Instruct 34B is tasked to combine the human-written description of the functionality with the function code into a Doxygen-Comment. Any results which included the sample doxygen string or no doxygen string at all where filtered from the set."
]
| [
26,
173
]
| [
"passage: TAGS\n#license-apache-2.0 #doi-10.57967/hf/1328 #region-us \n# Dataset Info C++ + Natural Description -> Doxygen Documentation\n\nThis dataset was created for my bachelors thesis investigating how LLMs can be fine-tuned to generate doxygen documentation. It was created by using the “Source code analysis dataset” \nby Gelman, Banjo Obayomi, Jessica Moore und David Slater (doi: 10.1016/j.dib.2019.104712).\n\nThe following SQL-Statement was used to grab raw data from the dataset:\n\n\nAfter selecting the Data Code LLaMa Instruct 34B is tasked to combine the human-written description of the functionality with the function code into a Doxygen-Comment. Any results which included the sample doxygen string or no doxygen string at all where filtered from the set."
]
|
9766e893bd86db712c8215fd7079cbbc9cbb0d70 | # Dataset Card for "text_messages"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | chirunder/text_messages | [
"region:us"
]
| 2023-11-06T08:03:04+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 786735647, "num_examples": 11615290}], "download_size": 563363348, "dataset_size": 786735647}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T08:03:35+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "text_messages"
More Information needed | [
"# Dataset Card for \"text_messages\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"text_messages\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"text_messages\"\n\nMore Information needed"
]
|
bfe48ff209110e3ea14e0da9e3f8c51eaf11525f | # Dataset Card for "xlsum_data-wiki_results"
rouge={'rouge1': 0.23617138944400812, 'rouge2': 0.05655501861336527, 'rougeL': 0.16424048383239956, 'rougeLsum': 0.16424048383239956}
Bert={'precision': 0.6961617108753749, 'recall': 0.6812583161563408, 'f1': 0.6883039236317944}
mover = 0.6015134358694214 | arthurmluz/xlsum_data-wiki_results | [
"region:us"
]
| 2023-11-06T08:03:10+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 25949078, "num_examples": 7175}], "download_size": 15637554, "dataset_size": 25949078}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-13T20:20:01+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "xlsum_data-wiki_results"
rouge={'rouge1': 0.23617138944400812, 'rouge2': 0.05655501861336527, 'rougeL': 0.16424048383239956, 'rougeLsum': 0.16424048383239956}
Bert={'precision': 0.6961617108753749, 'recall': 0.6812583161563408, 'f1': 0.6883039236317944}
mover = 0.6015134358694214 | [
"# Dataset Card for \"xlsum_data-wiki_results\"\n\nrouge={'rouge1': 0.23617138944400812, 'rouge2': 0.05655501861336527, 'rougeL': 0.16424048383239956, 'rougeLsum': 0.16424048383239956}\n\nBert={'precision': 0.6961617108753749, 'recall': 0.6812583161563408, 'f1': 0.6883039236317944}\n\nmover = 0.6015134358694214"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"xlsum_data-wiki_results\"\n\nrouge={'rouge1': 0.23617138944400812, 'rouge2': 0.05655501861336527, 'rougeL': 0.16424048383239956, 'rougeLsum': 0.16424048383239956}\n\nBert={'precision': 0.6961617108753749, 'recall': 0.6812583161563408, 'f1': 0.6883039236317944}\n\nmover = 0.6015134358694214"
]
| [
6,
138
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"xlsum_data-wiki_results\"\n\nrouge={'rouge1': 0.23617138944400812, 'rouge2': 0.05655501861336527, 'rougeL': 0.16424048383239956, 'rougeLsum': 0.16424048383239956}\n\nBert={'precision': 0.6961617108753749, 'recall': 0.6812583161563408, 'f1': 0.6883039236317944}\n\nmover = 0.6015134358694214"
]
|
c75486b1c4d208c4199ea700326589f76346f5dc | # Dataset Card for "dataset-public-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | male-2/dataset-public-v2 | [
"region:us"
]
| 2023-11-06T08:03:50+00:00 | {"dataset_info": {"features": [{"name": "conversation", "dtype": "string"}, {"name": "type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 842, "num_examples": 1}], "download_size": 6839, "dataset_size": 842}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T08:03:52+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dataset-public-v2"
More Information needed | [
"# Dataset Card for \"dataset-public-v2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dataset-public-v2\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"dataset-public-v2\"\n\nMore Information needed"
]
|
84ce131b68958204c352633dfe1696d90d320a21 | This is a Chinese dataset of paraphrases created by ChatGPT.
*For an English paraphrase dataset, see [humarin/chatgpt-paraphrases](https://huggingface.co/datasets/humarin/chatgpt-paraphrases).*
## We used this prompt to generate paraphrases
给下面这个问题生成5条相似的改写: *{text}* (English: “Generate 5 similar rewrites of the question below: *{text}*”)
This dataset is based on queries from Baidu and Zhihu.
We generated 5 paraphrases for each sample, so the dataset has about 238k rows in total. Each row yields 30 training pairs: with 6 texts per row (the original plus 5 paraphrases) there are 6x5 = 30 ordered pairs, which gives 6x5x238,000 ≈ 7.14 million bidirectional pairs, or 6x5x238,000/2 ≈ 3.57 million unique pairs.
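The pair construction just described can be illustrated with a short sketch (the example row is made up, but it follows the documented `text` and `paraphrases` fields):

```python
from itertools import permutations

# One made-up row following the documented structure: an original text plus 5 paraphrases.
row = {
    "text": "如何学习编程?",
    "paraphrases": ["怎样学编程?", "编程该如何入门?", "学编程有什么方法?", "如何开始学习写代码?", "编程怎么学?"],
}

texts = [row["text"]] + row["paraphrases"]    # 6 texts per row
pairs = list(permutations(texts, 2))          # 6x5 = 30 ordered (bidirectional) pairs
unique_pairs = {frozenset(p) for p in pairs}  # 15 unordered pairs per row

print(len(pairs), len(unique_pairs))          # 30 15
# Across ~238k rows this scales to roughly 7.14M ordered pairs (~3.57M unique pairs).
```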
## We used
- 82851 questions from the Baidu dataset
- 154885 questions from the Zhihu dataset
## Structure of the dataset
- text column - an original sentence or question from the datasets
- paraphrases - a list of 5 paraphrases
- category - question / sentence
- source - baidu / zhihu
## Legal disclaimer
Data is based on OpenAI’s gpt-3.5-turbo, whose [terms of use](https://openai.com/policies/terms-of-use) prohibit developing models that compete with OpenAI. So if you use this dataset to train a model, don't compete with OpenAI.
### BibTeX entry and citation info
```bibtex
@inproceedings{chinese_chatgpt_paraphrases_dataset,
author={Shen Huang},
title={Chinese ChatGPT Paraphrases Dataset},
year={2023}
}
``` | pangda/chatgpt-paraphrases-zh | [
"size_categories:100K<n<1M",
"language:zh",
"license:mit",
"region:us"
]
| 2023-11-06T08:05:13+00:00 | {"language": ["zh"], "license": "mit", "size_categories": ["100K<n<1M"]} | 2023-11-06T08:07:38+00:00 | []
| [
"zh"
]
| TAGS
#size_categories-100K<n<1M #language-Chinese #license-mit #region-us
| This is a Chinese dataset of paraphrases created by ChatGPT.
*For an English paraphrase dataset, see humarin/chatgpt-paraphrases.*
## We used this prompt to generate paraphrases
给下面这个问题生成5条相似的改写: *{text}* (English: “Generate 5 similar rewrites of the question below: *{text}*”)
This dataset is based on queries from Baidu and Zhihu.
We generated 5 paraphrases for each sample, so the dataset has about 238k rows in total. Each row yields 30 training pairs: with 6 texts per row (the original plus 5 paraphrases) there are 6x5 = 30 ordered pairs, which gives 6x5x238,000 ≈ 7.14 million bidirectional pairs, or 6x5x238,000/2 ≈ 3.57 million unique pairs.
## We used
- 82851 questions from the Baidu dataset
- 154885 questions from the Zhihu dataset
## Structure of the dataset
- text column - an original sentence or question from the datasets
- paraphrases - a list of 5 paraphrases
- category - question / sentence
- source - baidu / zhihu
## Legal disclaimer
Data is based on OpenAI’s gpt-3.5-turbo, whose terms of use prohibit developing models that compete with OpenAI. So if you use this dataset to train a model, don't compete with OpenAI.
### BibTeX entry and citation info
| [
"## We used this prompt to generate paraphrases\n给下面这个问题生成5条相似的改写: *{text}*\n\nThis dataset is based on the queries from Baidu and Zhihu.\n\nWe generated 5 paraphrases for each sample, totally this dataset has about 238k data rows. You can make 30 rows from a row from each sample. In this way you can make 7.1 millions train pairs (238k rows with 5 paraphrases -> 6x5x238000 = 7.14 millions of bidirected or 6x5x238000/2 = 3.57 millions of unique pairs).",
"## We used\n- 82851 questions from the Baidu dataset\n- 154885 questions from the Zhihu dataset",
"## Structure of the dataset\n- text column - an original sentence or question from the datasets\n- paraphrases - a list of 5 paraphrases\n- category - question / sentence\n- source - baidu / zhihu",
"## Legal disclaimer\n\nData is based on OpenAI’s gpt-3.5-turbo, whose terms of use prohibit developing models that compete with OpenAI. So if you use this dataset to train a model, don't compete with OpenAI.",
"### BibTeX entry and citation info"
]
| [
"TAGS\n#size_categories-100K<n<1M #language-Chinese #license-mit #region-us \n",
"## We used this prompt to generate paraphrases\n给下面这个问题生成5条相似的改写: *{text}*\n\nThis dataset is based on the queries from Baidu and Zhihu.\n\nWe generated 5 paraphrases for each sample, totally this dataset has about 238k data rows. You can make 30 rows from a row from each sample. In this way you can make 7.1 millions train pairs (238k rows with 5 paraphrases -> 6x5x238000 = 7.14 millions of bidirected or 6x5x238000/2 = 3.57 millions of unique pairs).",
"## We used\n- 82851 questions from the Baidu dataset\n- 154885 questions from the Zhihu dataset",
"## Structure of the dataset\n- text column - an original sentence or question from the datasets\n- paraphrases - a list of 5 paraphrases\n- category - question / sentence\n- source - baidu / zhihu",
"## Legal disclaimer\n\nData is based on OpenAI’s gpt-3.5-turbo, whose terms of use prohibit developing models that compete with OpenAI. So if you use this dataset to train a model, don't compete with OpenAI.",
"### BibTeX entry and citation info"
]
| [
28,
141,
25,
51,
55,
11
]
| [
"passage: TAGS\n#size_categories-100K<n<1M #language-Chinese #license-mit #region-us \n## We used this prompt to generate paraphrases\n给下面这个问题生成5条相似的改写: *{text}*\n\nThis dataset is based on the queries from Baidu and Zhihu.\n\nWe generated 5 paraphrases for each sample, totally this dataset has about 238k data rows. You can make 30 rows from a row from each sample. In this way you can make 7.1 millions train pairs (238k rows with 5 paraphrases -> 6x5x238000 = 7.14 millions of bidirected or 6x5x238000/2 = 3.57 millions of unique pairs).## We used\n- 82851 questions from the Baidu dataset\n- 154885 questions from the Zhihu dataset## Structure of the dataset\n- text column - an original sentence or question from the datasets\n- paraphrases - a list of 5 paraphrases\n- category - question / sentence\n- source - baidu / zhihu## Legal disclaimer\n\nData is based on OpenAI’s gpt-3.5-turbo, whose terms of use prohibit developing models that compete with OpenAI. So if you use this dataset to train a model, don't compete with OpenAI.### BibTeX entry and citation info"
]
|
b0464c110ce59c2fa44165ad068744099d950aaf | # Dataset Card for "zalo-ai-cris-1k2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lequocbinh04/zalo-ai-cris-1k2 | [
"region:us"
]
| 2023-11-06T08:09:51+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 516600, "num_examples": 1200}], "download_size": 144323, "dataset_size": 516600}} | 2023-11-06T08:09:52+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "zalo-ai-cris-1k2"
More Information needed | [
"# Dataset Card for \"zalo-ai-cris-1k2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"zalo-ai-cris-1k2\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"zalo-ai-cris-1k2\"\n\nMore Information needed"
]
|
ce5b1eddbfebd330b4e8cc3df6e9b2eda056cc75 | # Dataset Card for "tokenized_news_2gb_4096"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | leeseeun/tokenized_news_2gb_4096 | [
"region:us"
]
| 2023-11-06T08:11:47+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 2228931880, "num_examples": 136010}], "download_size": 0, "dataset_size": 2228931880}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T09:32:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "tokenized_news_2gb_4096"
More Information needed | [
"# Dataset Card for \"tokenized_news_2gb_4096\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"tokenized_news_2gb_4096\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"tokenized_news_2gb_4096\"\n\nMore Information needed"
]
|
a6038143e066efcfb48a5957afb291f8966bf489 | # APPS Dataset for Reinforcement Learning with AI Feedback
## Dataset Details
[APPS_RLAIF](https://huggingface.co/datasets/nmd2k/apps_rlaif/) is an extension of APPS [[1]]([^1])
that uses chat LLMs to create multiple variances of each solution to the defined problems.
For each solution, we use LLaMA 34B [[2]]([^2]) to transform the original solution into variances and rank them by score.
The generation flow is demonstrated below; each variance is created from the previous version of it in the chat.
We iterated over each solution `n=3` times.
<img src="https://cdn-uploads.huggingface.co/production/uploads/63733f7fd398fce0dd45125c/MhfwiSbafLQDvxQdTuR-2.png" width="600" />
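A rough sketch of this iteration (not the authors' code; `generate_variance` is a hypothetical stand-in for a call to the chat model):

```python
# Hypothetical sketch of the n=3 iterative variance generation described above.
def generate_variance(chat_history: list, previous_code: str) -> str:
    """Placeholder for a chat-LLM call that rewrites `previous_code` into a new variance."""
    return previous_code + "  # rewritten variant (dummy output)"

def make_variances(original_solution: str, n: int = 3) -> list:
    history, variances = [], []
    previous = original_solution
    for _ in range(n):
        variance = generate_variance(history, previous)  # conditioned on earlier turns
        history.append(variance)                         # each variance builds on the previous one
        variances.append(variance)
        previous = variance
    return variances

print(make_variances("print('hello')"))
```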
## Languages
The dataset contains problem descriptions in English and code solutions in Python.
## Dataset Structure
```python
from datasets import load_dataset
load_dataset("nmd2k/apps_rlaif")
DatasetDict({
train: Dataset({
features: ['problem_id', 'question', 'solutions', 'input_output', 'difficulty', 'url', 'starter_code', 'variances'],
num_rows: 4999
})
})
```
**How to use the dataset**
Each sample consists of a problem and its solution (from APPS [[1]]([^1])) and a list of solution variances generated by the LLM, stored in the `variances` field.
For example:
```json
{'problem_id': 0,
'question': 'Polycarp has $n$ different binary words. A word called binary if it contains only characters \'0\' and \'1\'. For example, these words are binary: "0001", "11", "0" and "0011100".\n\nPolycarp wants to offer his set of $n$ binary words to play a game "words". In this game, players name words and each next word (starting from the second) must start with the last character of the previous word. The first word can be any. For example, these sequence of words can be named during the game: "0101", "1", "10", "00", "00001".\n\nWord reversal is the operation of reversing the order of the characters. For example, the word "0111" after the reversal becomes "1110", the word "11010" after the reversal becomes "01011".\n\nProbably, Polycarp has such a set of words that there is no way to put them in the order correspondent to the game rules. In this situation, he wants to reverse some words from his set so that: the final set of $n$ words still contains different words (i.e. all words are unique); there is a way to put all words of the final set of words in the order so that the final sequence of $n$ words is consistent with the game rules. \n\nPolycarp wants to reverse minimal number of words. Please, help him.\n\n\n-----Input-----\n\nThe first line of the input contains one integer $t$ ($1 \\le t \\le 10^4$) — the number of test cases in the input. Then $t$ test cases follow.\n\nThe first line of a test case contains one integer $n$ ($1 \\le n \\le 2\\cdot10^5$) — the number of words in the Polycarp\'s set. Next $n$ lines contain these words. All of $n$ words aren\'t empty and contains only characters \'0\' and \'1\'. The sum of word lengths doesn\'t exceed $4\\cdot10^6$. All words are different.\n\nGuaranteed, that the sum of $n$ for all test cases in the input doesn\'t exceed $2\\cdot10^5$. Also, guaranteed that the sum of word lengths for all test cases in the input doesn\'t exceed $4\\cdot10^6$.\n\n\n-----Output-----\n\nPrint answer for all of $t$ test cases in the order they appear.\n\nIf there is no answer for the test case, print -1. Otherwise, the first line of the output should contain $k$ ($0 \\le k \\le n$) — the minimal number of words in the set which should be reversed. The second line of the output should contain $k$ distinct integers — the indexes of the words in the set which should be reversed. Words are numerated from $1$ to $n$ in the order they appear. If $k=0$ you can skip this line (or you can print an empty line). If there are many answers you can print any of them.\n\n\n-----Example-----\nInput\n4\n4\n0001\n1000\n0011\n0111\n3\n010\n101\n0\n2\n00000\n00001\n4\n01\n001\n0001\n00001\n\nOutput\n1\n3 \n-1\n0\n\n2\n1 2',
'solutions': "for _ in range(int(input())):\n n = int(input())\n mass = []\n zo = 0\n oz = 0\n zz = 0\n oo = 0\n ozs = []\n zos = []\n ozss = set()\n zoss = set()\n for j in range(n):\n k = input()\n mass.append(k)\n if k[0] == '0' and k[-1] == '1':\n zoss.add(k)\n zos.append(j + 1)\n zo += 1\n elif k[0] == '1' and k[-1] == '0':\n ozss.add(k)\n ozs.append(j + 1)\n oz += 1\n elif k[0] == '0' and k[-1] == '0':\n zz += 1\n else:\n oo += 1\n if zz and oo and not oz and not zo:\n print(-1)\n continue\n else:\n if zo > oz:\n print((zo - oz) // 2)\n ans = []\n need = (zo - oz) // 2\n i = 0\n while need:\n zzz = mass[zos[i] - 1][len(mass[zos[i] - 1]) - 1:: -1]\n if zzz not in ozss:\n ans.append(zos[i])\n need -= 1\n i += 1\n print(*ans)\n else:\n print((oz - zo) // 2)\n ans = []\n need = (oz - zo) // 2\n i = 0\n while need:\n zzz = mass[ozs[i] - 1][len(mass[ozs[i] - 1]) - 1:: -1]\n if zzz not in zoss:\n ans.append(ozs[i])\n need -= 1\n i += 1\n print(*ans)\n",
'input_output': '{\n "inputs": [\n "4\\n4\\n0001\\n1000\\n0011\\n0111\\n3\\n010\\n101\\n0\\n2\\n00000\\n00001\\n4\\n01\\n001\\n0001\\n00001\\n"\n ],\n "outputs": [\n "1\\n3 \\n-1\\n0\\n\\n2\\n1 2 \\n"\n ]\n}',
'difficulty': 'interview',
'url': 'https://codeforces.com/problemset/problem/1259/D',
'starter_code': '',
'variances': ["for _ in range(int(input())):\n n = int(input())\n numbers = []\n zero_start_one_end = 0\n one_start_zero_end = 0\n zero_start_zero_end = 0\n one_start_one_end = 0\n zero_start_one_end_indices = []\n one_start_zero_end_indices = []\n zero_start_one_end_set = set()\n one_start_zero_end_set = set()\n for j in range(n):\n k = input()\n numbers.append(k)\n if k[0] == '0' and k[-1] == '1':\n one_start_zero_end_set.add(k)\n one_start_zero_end_indices.append(j + 1)\n one_start_zero_end += 1\n elif k[0] == '1' and k[-1] == '0':\n zero_start_one_end_set.add(k)\n zero_start_one_end_indices.append(j + 1)\n zero_start_one_end += 1\n elif k[0] == '0' and k[-1] == '0':\n zero_start_zero_end += 1\n else:\n one_start_one_end += 1\n if zero_start_zero_end and one_start_one_end and not one_start_zero_end and not zero_start_one_end:\n print(-1)\n continue\n else:\n if zero_start_one_end > one_start_zero_end:\n print((zero_start_one_end - one_start_zero_end) // 2)\n result = []\n required = (zero_start_one_end - one_start_zero_end) // 2\n index = 0\n while required:\n reversed_str = numbers[zero_start_one_end_indices[index] - 1][len(numbers[zero_start_one_end_indices[index] - 1]) - 1:: -1]\n if reversed_str not in one_start_zero_end_set:\n result.append(zero_start_one_end_indices[index])\n required -= 1\n index += 1\n print(*result)\n else:\n print((one_start_zero_end - zero_start_one_end) // 2)\n result = []\n required = (one_start_zero_end - zero_start_one_end) // 2\n index = 0\n while required:\n reversed_str = numbers[one_start_zero_end_indices[index] - 1][len(numbers[one_start_zero_end_indices[index] - 1]) - 1:: -1]\n if reversed_str not in zero_start_one_end_set:\n result.append(one_start_zero_end_indices[index])\n required -= 1\n index += 1\n print(*result)",
"for _ in range(int(input())):\n n = int(input())\n sequence = []\n first_zero_last_one = 0\n first_one_last_zero = 0\n first_zero_last_zero = 0\n first_one_last_one = 0\n first_zero_last_one_positions = []\n first_one_last_zero_positions = []\n first_zero_last_one_set = set()\n first_one_last_zero_set = set()\n for i in range(n):\n element = input()\n sequence.append(element)\n if element[0] == '0' and element[-1] == '1':\n first_one_last_zero_set.add(element)\n first_one_last_zero_positions.append(i + 1)\n first_one_last_zero += 1\n elif element[0] == '1' and element[-1] == '0':\n first_zero_last_one_set.add(element)\n first_zero_last_one_positions.append(i + 1)\n first_zero_last_one += 1\n elif element[0] == '0' and element[-1] == '0':\n first_zero_last_zero += 1\n else:\n first_one_last_one += 1\n if first_zero_last_zero and first_one_last_one and not first_zero_last_one and not first_one_last_zero:\n print(-1)\n continue\n else:\n if first_zero_last_one > first_one_last_zero:\n print((first_zero_last_one - first_one_last_zero) // 2)\n solution = []\n necessary = (first_zero_last_one - first_one_last_zero) // 2\n position = 0\n while necessary:\n reversed_element = sequence[first_zero_last_one_positions[position] - 1][len(sequence[first_zero_last_one_positions[position] - 1]) - 1:: -1]\n if reversed_element not in first_one_last_zero_set:\n solution.append(first_zero_last_one_positions[position])\n necessary -= 1\n position += 1\n print(*solution)\n else:\n print((first_one_last_zero - first_zero_last_one) // 2)\n solution = []\n necessary = (first_one_last_zero - first_zero_last_one) // 2\n position = 0\n while necessary:\n reversed_element = sequence[first_one_last_zero_positions[position] - 1][len(sequence[first_one_last_zero_positions[position] - 1]) - 1:: -1]\n if reversed_element not in first_zero_last_one_set:\n solution.append(first_one_last_zero_positions[position])\n necessary -= 1\n position += 1\n print(*solution)",
"for _ in range(int(input())):\n number_of_cases = int(input())\n sequence_list = []\n zero_start_one_end_count = 0\n one_start_zero_end_count = 0\n zero_start_zero_end_count = 0\n one_start_one_end_count = 0\n zero_start_one_end_index_list = []\n one_start_zero_end_index_list = []\n zero_start_one_end_set = set()\n one_start_zero_end_set = set()\n for case_index in range(number_of_cases):\n sequence = input()\n sequence_list.append(sequence)\n if sequence[0] == '0' and sequence[-1] == '1':\n one_start_zero_end_set.add(sequence)\n one_start_zero_end_index_list.append(case_index + 1)\n one_start_zero_end_count += 1\n elif sequence[0] == '1' and sequence[-1] == '0':\n zero_start_one_end_set.add(sequence)\n zero_start_one_end_index_list.append(case_index + 1)\n zero_start_one_end_count += 1\n elif sequence[0] == '0' and sequence[-1] == '0':\n zero_start_zero_end_count += 1\n else:\n one_start_one_end_count += 1\n if zero_start_zero_end_count and one_start_one_end_count and not one_start_zero_end_count and not zero_start_one_end_count:\n print(-1)\n continue\n else:\n if zero_start_one_end_count > one_start_zero_end_count:\n print((zero_start_one_end_count - one_start_zero_end_count) // 2)\n output = []\n required_count = (zero_start_one_end_count - one_start_zero_end_count) // 2\n index = 0\n while required_count:\n reversed_sequence = sequence_list[zero_start_one_end_index_list[index] - 1][len(sequence_list[zero_start_one_end_index_list[index] - 1]) - 1:: -1]\n if reversed_sequence not in one_start_zero_end_set:\n output.append(zero_start_one_end_index_list[index])\n required_count -= 1\n index += 1\n print(*output)\n else:\n print((one_start_zero_end_count - zero_start_one_end_count) // 2)\n output = []\n required_count = (one_start_zero_end_count - zero_start_one_end_count) // 2\n index = 0\n while required_count:\n reversed_sequence = sequence_list[one_start_zero_end_index_list[index] - 1][len(sequence_list[one_start_zero_end_index_list[index] - 1]) - 1:: -1]\n if reversed_sequence not in zero_start_one_end_set:\n output.append(one_start_zero_end_index_list[index])\n required_count -= 1\n index += 1\n print(*output)\n### User Message\nCreate another variance this code. New variance:\n### Assistant\n\nfor _ in range(int(input())):\n number_of_cases = int(input())\n sequence_list = []\n count_start_end_zero_one = 0\n count_start_end_one_zero = 0\n count_start_zero_end_zero = 0\n count_start_one_end_one = 0\n index_start_end_zero_one = []\n index_start_end_one_zero = []\n set_start_end_zero_one = set()\n set_start_end_one_zero = set()\n for case_index"]
}
```
<!-- ## Dataset Creation
If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section.
## Citation
**BibTeX:**
```
@misc{apps_rlaif,
author = {Manh, Dung Nguyen and Hai, Nam Le and Bui, Nghi DQ},
title = {Code Alpaca: An Instruction-following LLaMA model for code generation},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/sahil280114/codealpaca}},
}
```
Naturally, you should also cite the original LLaMA-2 paper [[2]]([^2]) and the APPS paper [[1]]([^1]).
-->
[^1]: https://arxiv.org/abs/2105.09938
[^2]: https://arxiv.org/abs/2307.09288
| nmd2k/apps_rlaif | [
"task_categories:text-generation",
"task_categories:reinforcement-learning",
"size_categories:1K<n<10K",
"license:mit",
"code",
"arxiv:2105.09938",
"arxiv:2307.09288",
"region:us"
]
| 2023-11-06T08:15:38+00:00 | {"license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation", "reinforcement-learning"], "pretty_name": "apps_rlaif", "dataset_info": {"features": [{"name": "problem_id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "input_output", "dtype": "string"}, {"name": "difficulty", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "starter_code", "dtype": "string"}, {"name": "prefer_solution", "dtype": "string"}, {"name": "flaw_solution", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 196914903, "num_examples": 23129}], "download_size": 38020746, "dataset_size": 196914903}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["code"]} | 2023-11-27T17:41:45+00:00 | [
"2105.09938",
"2307.09288"
]
| []
| TAGS
#task_categories-text-generation #task_categories-reinforcement-learning #size_categories-1K<n<10K #license-mit #code #arxiv-2105.09938 #arxiv-2307.09288 #region-us
| # APPS Dataset for Reinforcement Learning with AI Feedback
## Dataset Details
APPS_RLAIF is an extension of APPS [[1]]([^1])
that uses chat LLMs to create multiple variances of each solution to the defined problems.
For each solution, we use LLaMA 34B [[2]]([^2]) to transform the original solution into variances and rank them by score.
The generation flow is demonstrated below; each variance is created from the previous version of it in the chat.
We iterated over each solution 'n=3' times.
<img src="URL width="600" />
## Languages
The dataset contains problem descriptions in English and code solutions in Python.
## Dataset Structure
How to use the dataset
Each sample consists of a problem and its solution (from APPS [[1]]([^1])) and a list of solution variances generated by the LLM, stored in the 'variances' field.
For example:
[^1]: URL
[^2]: URL
| [
"# APPS Dataset for Reinforcement Learning with AI Feedback",
"## Dataset Details\n\nAPPS_RLAIF is an extended work from APPS [[1]]([^1]) \nto use Chat LLMs to create multiple variances for each solution for defined problems. \nIn each solution, we use LLama 34B [[2]]([^2]) to transform the original solutions into variances and rank them by score.\nThe generated flow is demonstrated as below; each variance is created based on the previous version of it in the chat. \nWe iterated each solutions 'n=3' times\n<img src=\"URL width=\"600\" />",
"## Languages\n\nThe dataset contains problem description in English and code solutions in Python.",
"## Dataset Structure\n\n\n\nHow to use the dataset\n\nEach sample consists of a pair of problems and solutions (from APPS [[1]]([^1])) and a list of solution variances generated by LLM stored in the 'variances' field.\n\nFor example:\n\n\n\n\n\n[^1]: URL\n[^2]: URL"
]
| [
"TAGS\n#task_categories-text-generation #task_categories-reinforcement-learning #size_categories-1K<n<10K #license-mit #code #arxiv-2105.09938 #arxiv-2307.09288 #region-us \n",
"# APPS Dataset for Reinforcement Learning with AI Feedback",
"## Dataset Details\n\nAPPS_RLAIF is an extended work from APPS [[1]]([^1]) \nto use Chat LLMs to create multiple variances for each solution for defined problems. \nIn each solution, we use LLama 34B [[2]]([^2]) to transform the original solutions into variances and rank them by score.\nThe generated flow is demonstrated as below; each variance is created based on the previous version of it in the chat. \nWe iterated each solutions 'n=3' times\n<img src=\"URL width=\"600\" />",
"## Languages\n\nThe dataset contains problem description in English and code solutions in Python.",
"## Dataset Structure\n\n\n\nHow to use the dataset\n\nEach sample consists of a pair of problems and solutions (from APPS [[1]]([^1])) and a list of solution variances generated by LLM stored in the 'variances' field.\n\nFor example:\n\n\n\n\n\n[^1]: URL\n[^2]: URL"
]
| [
65,
13,
127,
18,
71
]
| [
"passage: TAGS\n#task_categories-text-generation #task_categories-reinforcement-learning #size_categories-1K<n<10K #license-mit #code #arxiv-2105.09938 #arxiv-2307.09288 #region-us \n# APPS Dataset for Reinforcement Learning with AI Feedback## Dataset Details\n\nAPPS_RLAIF is an extended work from APPS [[1]]([^1]) \nto use Chat LLMs to create multiple variances for each solution for defined problems. \nIn each solution, we use LLama 34B [[2]]([^2]) to transform the original solutions into variances and rank them by score.\nThe generated flow is demonstrated as below; each variance is created based on the previous version of it in the chat. \nWe iterated each solutions 'n=3' times\n<img src=\"URL width=\"600\" />## Languages\n\nThe dataset contains problem description in English and code solutions in Python.## Dataset Structure\n\n\n\nHow to use the dataset\n\nEach sample consists of a pair of problems and solutions (from APPS [[1]]([^1])) and a list of solution variances generated by LLM stored in the 'variances' field.\n\nFor example:\n\n\n\n\n\n[^1]: URL\n[^2]: URL"
]
|
fa59d6c60f037c87813b8c9f16f43ec986a39648 | # Dataset Card for "autotrain-data-Medical_Terminology_Zephyr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pseudolab/autotrain-data-Medical_Terminology_Zephyr | [
"region:us"
]
| 2023-11-06T08:37:53+00:00 | {"dataset_info": {"features": [{"name": "tags", "dtype": "string"}, {"name": "categories", "dtype": "string"}, {"name": "topics", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "es-title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "es-bite", "dtype": "string"}, {"name": "audience", "dtype": "string"}, {"name": "segment", "dtype": "string"}, {"name": "insurance-status", "dtype": "string"}, {"name": "state", "dtype": "string"}, {"name": "condition", "dtype": "string"}, {"name": "autotrain_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 123044, "num_examples": 257}, {"name": "validation", "num_bytes": 123044, "num_examples": 257}], "download_size": 128192, "dataset_size": 246088}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-06T08:54:09+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "autotrain-data-Medical_Terminology_Zephyr"
More Information needed | [
"# Dataset Card for \"autotrain-data-Medical_Terminology_Zephyr\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"autotrain-data-Medical_Terminology_Zephyr\"\n\nMore Information needed"
]
| [
6,
26
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"autotrain-data-Medical_Terminology_Zephyr\"\n\nMore Information needed"
]
|
aa2b83305d3e5cf450811f374ef2458dd5c9088b | # Dataset Info C++ + Natural Description -> Doxygen Documentation
This dataset was created for my bachelor's thesis, which investigates how LLMs can be fine-tuned to generate Doxygen documentation. It was built from the “Source code analysis dataset”
by Gelman, Banjo Obayomi, Jessica Moore and David Slater (doi: 10.1016/j.dib.2019.104712).
The following SQL statement was used to pull the raw data from that dataset:
```
SELECT * FROM all_data
WHERE LENGTH(comment) < 350 AND LENGTH(comment) > 10 AND LENGTH(code) > 100 AND LENGTH(code) < 800
AND code NOT LIKE '%//%' AND code NOT LIKE '%/*%' AND code NOT LIKE '%*/%'
AND filename LIKE '%.cpp%'
LIMIT 12000
```
After selecting the data, Code LLaMa Instruct 34B is tasked with combining the human-written description of the functionality and the function code into a Doxygen comment. Any results that included the sample Doxygen string, or no Doxygen string at all, were filtered from the set.
| LukasSonn/DoxygenStrings-Short | [
"license:apache-2.0",
"doi:10.57967/hf/1331",
"region:us"
]
| 2023-11-06T08:39:53+00:00 | {"license": "apache-2.0"} | 2023-11-06T08:44:30+00:00 | []
| []
| TAGS
#license-apache-2.0 #doi-10.57967/hf/1331 #region-us
| # Dataset Info C++ + Natural Description -> Doxygen Documentation
This dataset was created for my bachelor's thesis, which investigates how LLMs can be fine-tuned to generate Doxygen documentation. It was built from the “Source code analysis dataset”
by Gelman, Banjo Obayomi, Jessica Moore and David Slater (doi: 10.1016/j.dib.2019.104712).
The following SQL statement was used to pull the raw data from that dataset:
After selecting the data, Code LLaMa Instruct 34B is tasked with combining the human-written description of the functionality and the function code into a Doxygen comment. Any results that included the sample Doxygen string, or no Doxygen string at all, were filtered from the set.
| [
"# Dataset Info C++ + Natural Description -> Doxygen Documentation\n\nThis dataset was created for my bachelors thesis investigating how LLMs can be fine-tuned to generate doxygen documentation. It was created by using the “Source code analysis dataset” \nby Gelman, Banjo Obayomi, Jessica Moore und David Slater (doi: 10.1016/j.dib.2019.104712).\n\nThe following SQL-Statement was used to grab raw data from the dataset:\n\n\nAfter selecting the Data Code LLaMa Instruct 34B is tasked to combine the human-written description of the functionality with the function code into a Doxygen-Comment. Any results which included the sample doxygen string or no doxygen string at all where filtered from the set."
]
| [
"TAGS\n#license-apache-2.0 #doi-10.57967/hf/1331 #region-us \n",
"# Dataset Info C++ + Natural Description -> Doxygen Documentation\n\nThis dataset was created for my bachelors thesis investigating how LLMs can be fine-tuned to generate doxygen documentation. It was created by using the “Source code analysis dataset” \nby Gelman, Banjo Obayomi, Jessica Moore und David Slater (doi: 10.1016/j.dib.2019.104712).\n\nThe following SQL-Statement was used to grab raw data from the dataset:\n\n\nAfter selecting the Data Code LLaMa Instruct 34B is tasked to combine the human-written description of the functionality with the function code into a Doxygen-Comment. Any results which included the sample doxygen string or no doxygen string at all where filtered from the set."
]
| [
26,
173
]
| [
"passage: TAGS\n#license-apache-2.0 #doi-10.57967/hf/1331 #region-us \n# Dataset Info C++ + Natural Description -> Doxygen Documentation\n\nThis dataset was created for my bachelors thesis investigating how LLMs can be fine-tuned to generate doxygen documentation. It was created by using the “Source code analysis dataset” \nby Gelman, Banjo Obayomi, Jessica Moore und David Slater (doi: 10.1016/j.dib.2019.104712).\n\nThe following SQL-Statement was used to grab raw data from the dataset:\n\n\nAfter selecting the Data Code LLaMa Instruct 34B is tasked to combine the human-written description of the functionality with the function code into a Doxygen-Comment. Any results which included the sample doxygen string or no doxygen string at all where filtered from the set."
]
|
b2aad9e4dcd9f584d6108cc268c5d739cdaa4483 | # Dataset Card for "legal_document_vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ademax/legal_document_vi | [
"region:us"
]
| 2023-11-06T08:48:11+00:00 | {"dataset_info": {"features": [{"name": "subject", "dtype": "string"}, {"name": "meta", "struct": [{"name": "effective_date", "dtype": "string"}, {"name": "issuing_agency", "dtype": "string"}, {"name": "promulgation_date", "dtype": "string"}, {"name": "sign_number", "dtype": "string"}, {"name": "signer", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "text", "dtype": "string"}, {"name": "metadata_coQuanBanHanh", "dtype": "string"}, {"name": "metadata_coQuanBanHanh_conf", "dtype": "bool"}, {"name": "metadata_soHieu", "dtype": "string"}, {"name": "metadata_soHieu_conf", "dtype": "bool"}, {"name": "metadata_loaiVanBan", "dtype": "string"}, {"name": "metadata_loaiVanBan_conf", "dtype": "float64"}, {"name": "metadata_ngayBanHanh", "dtype": "string"}, {"name": "metadata_ngayBanHanh_conf", "dtype": "float64"}, {"name": "metadata_trichYeu", "dtype": "string"}, {"name": "metadata_trichYeu_conf", "dtype": "float64"}, {"name": "metadata_nguoiKy", "dtype": "string"}, {"name": "metadata_nguoiKy_conf", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 7768076795, "num_examples": 424062}], "download_size": 2688089919, "dataset_size": 7768076795}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T08:51:35+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "legal_document_vi"
More Information needed | [
"# Dataset Card for \"legal_document_vi\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"legal_document_vi\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"legal_document_vi\"\n\nMore Information needed"
]
|
6347826b4a732d4096792311e81bb6c20608b823 |
The largest query length is: <b>1815</b><br>
The average query length is: <b>61.19101432132145</b>
-------
The largest pos length is: <b>1312</b><br>
The average pos length is: <b>152.11767179632923</b>
-------
The largest neg length is: <b>1669</b><br>
The average neg length is: <b>224.2615171813766</b> | w95/triplets | [
"license:mit",
"region:us"
]
| 2023-11-06T08:48:31+00:00 | {"license": "mit"} | 2023-11-08T22:55:35+00:00 | []
| []
| TAGS
#license-mit #region-us
|
The largest query length is: <b>1815</b><br>
The average query length is: <b>61.19101432132145</b>
-------
The largest pos length is: <b>1312</b><br>
The average pos length is: <b>152.11767179632923</b>
-------
The largest neg length is: <b>1669</b><br>
The average neg length is: <b>224.2615171813766</b> | []
| [
"TAGS\n#license-mit #region-us \n"
]
| [
11
]
| [
"passage: TAGS\n#license-mit #region-us \n"
]
|
cd880e35869e76f41ac227bb79301b0db32baa99 | # Dataset Card for "autotrain-data-Medical_Terminology_Zephyr_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pseudolab/autotrain-data-Medical_Terminology_Zephyr_2 | [
"region:us"
]
| 2023-11-06T08:54:51+00:00 | {"dataset_info": {"features": [{"name": "tags", "dtype": "string"}, {"name": "categories", "dtype": "string"}, {"name": "topics", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "es-title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "es-bite", "dtype": "string"}, {"name": "audience", "dtype": "string"}, {"name": "segment", "dtype": "string"}, {"name": "insurance-status", "dtype": "string"}, {"name": "state", "dtype": "string"}, {"name": "condition", "dtype": "string"}, {"name": "autotrain_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 123044, "num_examples": 257}, {"name": "validation", "num_bytes": 123044, "num_examples": 257}], "download_size": 128192, "dataset_size": 246088}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-06T08:54:52+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "autotrain-data-Medical_Terminology_Zephyr_2"
More Information needed | [
"# Dataset Card for \"autotrain-data-Medical_Terminology_Zephyr_2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"autotrain-data-Medical_Terminology_Zephyr_2\"\n\nMore Information needed"
]
| [
6,
28
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"autotrain-data-Medical_Terminology_Zephyr_2\"\n\nMore Information needed"
]
|
590726b3765b1b90c5e53a17e3b1f77d92d3aa8a |
# Dataset Card for Argument-Quality-Ranking-30k Dataset
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Argument Quality Ranking](#argument-quality-ranking)
- [Argument Topic](#argument-topic)
- [Dataset Collection](#dataset-collection)
- [Argument Collection](#argument-collection)
- [Quality and Stance Labeling](#quality-and-stance-labeling)
- [Dataset Structure](#dataset-structure)
- [Quality Labels](#quality-labels)
- [Stance Labels](#stance-labels)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Summary
### Argument Quality Ranking
The dataset contains 30,497 crowd-sourced arguments for 71 debatable topics labeled for quality and stance, split into train, validation and test sets.
The dataset was originally published as part of our paper: [A Large-scale Dataset for Argument Quality Ranking: Construction and Analysis](https://arxiv.org/abs/1911.11408).
### Argument Topic
This subset contains 9,487 of the arguments, paired only with their topics, and uses a different train-validation-test split. Usage of this subset is TBA.
## Dataset Collection
### Argument Collection
For the purpose of collecting arguments for this dataset we conducted a crowd annotation task. We selected 71 common controversial topics for which arguments were collected (e.g., We should abolish capital punishment).
Annotators were presented with a single topic each time, and asked to contribute one supporting and one contesting argument for it, requiring arguments to be written using original language. To motivate high-quality contributions, contributors were informed that they would receive extra payment for high-quality arguments, as determined by the subsequent argument quality labeling task.
It was explained that an argument would be considered high quality if a person preparing a speech on the topic would be likely to use it as is in her speech.
We place a limit on argument length - a minimum of 35 characters and a maximum of 210 characters. In total, we collected 30,497 arguments from 280 contributors, each contributing no more than 6 arguments per topic.
### Quality and Stance Labeling
Annotators were presented with a binary question per argument, asking if they would recommend a friend to use that argument as is in a speech supporting/contesting the topic, regardless of personal opinion.
In addition, annotators were asked to mark the stance of the argument towards the topic (pro or con).
10 annotators labeled each instance.
## Dataset Structure
Each instance contains a string argument, a string topic, and quality and stance scores:
* WA - the quality label according to the weighted-average scoring function
* MACE-P - the quality label according to the MACE-P scoring function
* stance_WA - the stance label according to the weighted-average scoring function
* stance_WA_conf - the confidence in the stance label according to the weighted-average scoring function
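A minimal loading sketch (the configuration name comes from this card's metadata; the exact column names and the quality threshold are assumed for illustration):

```python
from datasets import load_dataset

# Load the quality-ranking configuration of this dataset.
ds = load_dataset("ibm/argument_quality_ranking_30k", "argument_quality_ranking")

# Keep high-quality "pro" arguments from the training split
# (threshold and column names are illustrative assumptions).
strong_pro = ds["train"].filter(lambda ex: ex["WA"] > 0.8 and ex["stance_WA"] == 1)
print(len(strong_pro))
print(strong_pro[0]["argument"], "|", strong_pro[0]["topic"])
```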
### Quality Labels
For an explanation of the quality labels presented in columns WA and MACE-P, please see section 4 in the paper.
### Stance Labels
There were three possible annotations for the stance task: 1 (pro), -1 (con) and 0 (neutral). The stance_WA_conf column refers to the weighted-average score of the winning label. The stance_WA column refers to the winning stance label itself.
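For intuition only, here is a simplified illustration of how a winning stance label and its confidence could be derived from the 10 annotations, assuming uniform annotator weights (the actual weighted-average function follows the paper):

```python
from collections import Counter

# Ten per-argument stance annotations: 1 = pro, -1 = con, 0 = neutral.
annotations = [1, 1, 1, 1, 1, 1, 1, -1, 0, 1]

# With uniform weights, each label's weighted-average score is simply its vote share.
shares = {label: count / len(annotations) for label, count in Counter(annotations).items()}

stance_WA = max(shares, key=shares.get)  # winning stance label
stance_WA_conf = shares[stance_WA]       # weighted-average score of the winning label

print(stance_WA, stance_WA_conf)  # 1 0.8
```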
## Licensing Information
The datasets are released under the following licensing and copyright terms:
* (c) Copyright [Wikipedia](https://en.wikipedia.org/wiki/Wikipedia:Copyrights#Reusers.27_rights_and_obligations)
* (c) Copyright IBM 2014. Released under [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/)
## Citation Information
```
@article{DBLP:journals/corr/abs-1911-11408,
author = {Shai Gretz and
Roni Friedman and
Edo Cohen{-}Karlik and
Assaf Toledo and
Dan Lahav and
Ranit Aharonov and
Noam Slonim},
title = {A Large-scale Dataset for Argument Quality Ranking: Construction and
Analysis},
journal = {CoRR},
volume = {abs/1911.11408},
year = {2019},
url = {http://arxiv.org/abs/1911.11408},
eprinttype = {arXiv},
eprint = {1911.11408},
timestamp = {Tue, 03 Dec 2019 20:41:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1911-11408.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | ibm/argument_quality_ranking_30k | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-3.0",
"arxiv:1911.11408",
"region:us"
]
| 2023-11-06T08:57:02+00:00 | {"language": ["en"], "license": "cc-by-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "pretty_name": "Argument-Quality-Ranking-30k", "configs": [{"config_name": "argument_quality_ranking", "data_files": [{"split": "train", "path": "train.csv"}, {"split": "validation", "path": "dev.csv"}, {"split": "test", "path": "test.csv"}]}, {"config_name": "argument_topic", "data_files": [{"split": "train", "path": "train_topic.csv"}, {"split": "validation", "path": "dev_topic.csv"}, {"split": "test", "path": "test_topic.csv"}]}]} | 2023-11-06T11:46:42+00:00 | [
"1911.11408"
]
| [
"en"
]
| TAGS
#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-cc-by-3.0 #arxiv-1911.11408 #region-us
|
# Dataset Card for Argument-Quality-Ranking-30k Dataset
## Table of Contents
- Dataset Summary
- Argument Quality Ranking
- Argument Topic
- Dataset Collection
- Argument Collection
- Quality and Stance Labeling
- Dataset Structure
- Quality Labels
- Stance Labels
- Licensing Information
- Citation Information
## Dataset Summary
### Argument Quality Ranking
The dataset contains 30,497 crowd-sourced arguments for 71 debatable topics labeled for quality and stance, split into train, validation and test sets.
The dataset was originally published as part of our paper: A Large-scale Dataset for Argument Quality Ranking: Construction and Analysis.
### Argument Topic
This subset contains 9,487 of the arguments, paired only with their topics, and uses a different train-validation-test split. Usage of this subset is TBA.
## Dataset Collection
### Argument Collection
For the purpose of collecting arguments for this dataset we conducted a crowd annotation task. We selected 71 common controversial topics for which arguments were collected (e.g., We should abolish capital punishment).
Annotators were presented with a single topic each time, and asked to contribute one supporting and one contesting argument for it, requiring arguments to be written using original language. To motivate high-quality contributions, contributors were informed that they would receive extra payment for high-quality arguments, as determined by the subsequent argument quality labeling task.
It was explained that an argument would be considered high quality if a person preparing a speech on the topic would be likely to use it as is in her speech.
We place a limit on argument length - a minimum of 35 characters and a maximum of 210 characters. In total, we collected 30,497 arguments from 280 contributors, each contributing no more than 6 arguments per topic.
### Quality and Stance Labeling
Annotators were presented with a binary question per argument, asking if they would recommend a friend to use that argument as is in a speech supporting/contesting the topic, regardless of personal opinion.
In addition, annotators were asked to mark the stance of the argument towards the topic (pro or con).
10 annotators labeled each instance.
## Dataset Structure
Each instance contains a string argument, a string topic, and quality and stance scores:
* WA - the quality label according to the weighted-average scoring function
* MACE-P - the quality label according to the MACE-P scoring function
* stance_WA - the stance label according to the weighted-average scoring function
* stance_WA_conf - the confidence in the stance label according to the weighted-average scoring function
### Quality Labels
For an explanation of the quality labels presented in columns WA and MACE-P, please see section 4 in the paper.
### Stance Labels
There were three possible annotations for the stance task: 1 (pro), -1 (con) and 0 (neutral). The stance_WA_conf column refers to the weighted-average score of the winning label. The stance_WA column refers to the winning stance label itself.
## Licensing Information
The datasets are released under the following licensing and copyright terms:
* (c) Copyright Wikipedia
* (c) Copyright IBM 2014. Released under CC-BY-SA 3.0
| [
"# Dataset Card for Argument-Quality-Ranking-30k Dataset",
"## Table of Contents\n\n- Dataset Summary\n - Argument Quality Ranking\n - Argument Topic\n- Dataset Collection\n - Argument Collection\n - Quality and Stance Labeling\n- Dataset Structure\n - Quality Labels\n - Stance Labels\n- Licensing Information\n- Citation Information",
"## Dataset Summary",
"### Argument Quality Ranking\n\nThe dataset contains 30,497 crowd-sourced arguments for 71 debatable topics labeled for quality and stance, split into train, validation and test sets.\n\nThe dataset was originally published as part of our paper: A Large-scale Dataset for Argument Quality Ranking: Construction and Analysis.",
"### Argument Topic\n\nThis subset contains 9,487 of the arguments only with their topics with a different train-validation-test split. Usage of this subset TBA.",
"## Dataset Collection",
"### Argument Collection\n\nFor the purpose of collecting arguments for this dataset we conducted a crowd annotation task. We selected 71 common controversial topics for which arguments were collected (e.g., We should abolish capital punishment).\nAnnotators were presented with a single topic each time, and asked to contribute one supporting and one contesting argument for it, requiring arguments to be written using original language. To motivate high-quality contributions, contributors were informed they will receive extra payment for high quality arguments, as determined by the subsequent argument quality labeling task.\nIt was explained that an argument will be considered as a high quality one, if a person preparing a speech on the topic will be likely to use this argument as is in her speech.\nWe place a limit on argument length - a minimum of 35 characters and a maximum of 210 characters. In total, we collected 30,497 arguments from 280 contributors, each contributing no more than 6 arguments per topic.",
"### Quality and Stance Labeling\n\nAnnotators were presented with a binary question per argument, asking if they would recommend a friend to use that argument as is in a speech supporting/contesting the topic, regardless of personal opinion. \nIn addition, annotators were asked to mark the stance of the argument towards the topic (pro or con).\n10 annotators labeled each instance.",
"## Dataset Structure\n\nEach instance contains a string argument, a string topic, and quality and stance scores:\n* WA - the quality label according to the weighted-average scoring function\n* MACE-P - the quality label according to the MACE-P scoring function\n* stance_WA - the stance label according to the weighted-average scoring function\n* stance_WA_conf - the confidence in the stance label according to the weighted-average scoring function",
"### Quality Labels\n\nFor an explanation of the quality labels presented in columns WA and MACE-P, please see section 4 in the paper.",
"### Stance Labels\n\nThere were three possible annotations for the stance task: 1 (pro), -1 (con) and 0 (neutral). The stance_WA_conf column refers to the weighted-average score of the winning label. The stance_WA column refers to the winning stance label itself.",
"## Licensing Information\n\nThe datasets are released under the following licensing and copyright terms:\n* (c) Copyright Wikipedia\n* (c) Copyright IBM 2014. Released under CC-BY-SA 3.0"
]
| [
"TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-cc-by-3.0 #arxiv-1911.11408 #region-us \n",
"# Dataset Card for Argument-Quality-Ranking-30k Dataset",
"## Table of Contents\n\n- Dataset Summary\n - Argument Quality Ranking\n - Argument Topic\n- Dataset Collection\n - Argument Collection\n - Quality and Stance Labeling\n- Dataset Structure\n - Quality Labels\n - Stance Labels\n- Licensing Information\n- Citation Information",
"## Dataset Summary",
"### Argument Quality Ranking\n\nThe dataset contains 30,497 crowd-sourced arguments for 71 debatable topics labeled for quality and stance, split into train, validation and test sets.\n\nThe dataset was originally published as part of our paper: A Large-scale Dataset for Argument Quality Ranking: Construction and Analysis.",
"### Argument Topic\n\nThis subset contains 9,487 of the arguments only with their topics with a different train-validation-test split. Usage of this subset TBA.",
"## Dataset Collection",
"### Argument Collection\n\nFor the purpose of collecting arguments for this dataset we conducted a crowd annotation task. We selected 71 common controversial topics for which arguments were collected (e.g., We should abolish capital punishment).\nAnnotators were presented with a single topic each time, and asked to contribute one supporting and one contesting argument for it, requiring arguments to be written using original language. To motivate high-quality contributions, contributors were informed they will receive extra payment for high quality arguments, as determined by the subsequent argument quality labeling task.\nIt was explained that an argument will be considered as a high quality one, if a person preparing a speech on the topic will be likely to use this argument as is in her speech.\nWe place a limit on argument length - a minimum of 35 characters and a maximum of 210 characters. In total, we collected 30,497 arguments from 280 contributors, each contributing no more than 6 arguments per topic.",
"### Quality and Stance Labeling\n\nAnnotators were presented with a binary question per argument, asking if they would recommend a friend to use that argument as is in a speech supporting/contesting the topic, regardless of personal opinion. \nIn addition, annotators were asked to mark the stance of the argument towards the topic (pro or con).\n10 annotators labeled each instance.",
"## Dataset Structure\n\nEach instance contains a string argument, a string topic, and quality and stance scores:\n* WA - the quality label according to the weighted-average scoring function\n* MACE-P - the quality label according to the MACE-P scoring function\n* stance_WA - the stance label according to the weighted-average scoring function\n* stance_WA_conf - the confidence in the stance label according to the weighted-average scoring function",
"### Quality Labels\n\nFor an explanation of the quality labels presented in columns WA and MACE-P, please see section 4 in the paper.",
"### Stance Labels\n\nThere were three possible annotations for the stance task: 1 (pro), -1 (con) and 0 (neutral). The stance_WA_conf column refers to the weighted-average score of the winning label. The stance_WA column refers to the winning stance label itself.",
"## Licensing Information\n\nThe datasets are released under the following licensing and copyright terms:\n* (c) Copyright Wikipedia\n* (c) Copyright IBM 2014. Released under CC-BY-SA 3.0"
]
| [
51,
16,
53,
5,
74,
42,
4,
217,
85,
109,
33,
76,
43
]
| [
"passage: TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-cc-by-3.0 #arxiv-1911.11408 #region-us \n# Dataset Card for Argument-Quality-Ranking-30k Dataset## Table of Contents\n\n- Dataset Summary\n - Argument Quality Ranking\n - Argument Topic\n- Dataset Collection\n - Argument Collection\n - Quality and Stance Labeling\n- Dataset Structure\n - Quality Labels\n - Stance Labels\n- Licensing Information\n- Citation Information## Dataset Summary### Argument Quality Ranking\n\nThe dataset contains 30,497 crowd-sourced arguments for 71 debatable topics labeled for quality and stance, split into train, validation and test sets.\n\nThe dataset was originally published as part of our paper: A Large-scale Dataset for Argument Quality Ranking: Construction and Analysis.### Argument Topic\n\nThis subset contains 9,487 of the arguments only with their topics with a different train-validation-test split. Usage of this subset TBA.## Dataset Collection### Argument Collection\n\nFor the purpose of collecting arguments for this dataset we conducted a crowd annotation task. We selected 71 common controversial topics for which arguments were collected (e.g., We should abolish capital punishment).\nAnnotators were presented with a single topic each time, and asked to contribute one supporting and one contesting argument for it, requiring arguments to be written using original language. To motivate high-quality contributions, contributors were informed they will receive extra payment for high quality arguments, as determined by the subsequent argument quality labeling task.\nIt was explained that an argument will be considered as a high quality one, if a person preparing a speech on the topic will be likely to use this argument as is in her speech.\nWe place a limit on argument length - a minimum of 35 characters and a maximum of 210 characters. In total, we collected 30,497 arguments from 280 contributors, each contributing no more than 6 arguments per topic."
]
|
48c909b5b51818ebd19b6a4da38daf6665ba831c | # Dataset Card for "vsr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Phando/vsr | [
"region:us"
]
| 2023-11-06T09:00:47+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image_link", "dtype": "string"}, {"name": "caption", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "relation", "dtype": "string"}, {"name": "subj", "dtype": "string"}, {"name": "obj", "dtype": "string"}, {"name": "annotator_id", "dtype": "int64"}, {"name": "vote_true_validator_id", "dtype": "string"}, {"name": "vote_false_validator_id", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 176262215.386, "num_examples": 3489}, {"name": "validation", "num_bytes": 17990271.0, "num_examples": 340}, {"name": "test", "num_bytes": 54289880.918, "num_examples": 1222}], "download_size": 239471235, "dataset_size": 248542367.30400002}} | 2023-11-06T09:01:54+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "vsr"
More Information needed | [
"# Dataset Card for \"vsr\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"vsr\"\n\nMore Information needed"
]
| [
6,
12
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"vsr\"\n\nMore Information needed"
]
|
6d07e53db51021afc2eb46018850bb823888397c | # Dataset Card for "rap-lyrics-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nateraw/rap-lyrics-v1 | [
"region:us"
]
| 2023-11-06T09:07:00+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "artist", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "full_title", "dtype": "string"}, {"name": "lyrics", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7948557, "num_examples": 2350}], "download_size": 4158696, "dataset_size": 7948557}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T09:07:02+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "rap-lyrics-v1"
More Information needed | [
"# Dataset Card for \"rap-lyrics-v1\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"rap-lyrics-v1\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"rap-lyrics-v1\"\n\nMore Information needed"
]
|
3fb2b97299bc38bd8cf19c3a61534bb9ed6a185a | # Dataset Card for "xlsum_data-wiki_1024_results"
rouge={'rouge1': 0.23987936461074316, 'rouge2': 0.05808943407195291, 'rougeL': 0.1650498622820974, 'rougeLsum': 0.1650498622820974}
Bert={'precision': 0.694144615811338, 'recall': 0.6835571901391192, 'f1': 0.6884710294942823}
mover = 0.6019846863815564 | arthurmluz/xlsum_data-wiki_1024_results | [
"region:us"
]
| 2023-11-06T09:11:20+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 26071294, "num_examples": 7175}], "download_size": 15718510, "dataset_size": 26071294}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-13T20:21:29+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "xlsum_data-wiki_1024_results"
rouge={'rouge1': 0.23987936461074316, 'rouge2': 0.05808943407195291, 'rougeL': 0.1650498622820974, 'rougeLsum': 0.1650498622820974}
Bert={'precision': 0.694144615811338, 'recall': 0.6835571901391192, 'f1': 0.6884710294942823}
mover = 0.6019846863815564 | [
"# Dataset Card for \"xlsum_data-wiki_1024_results\"\n\nrouge={'rouge1': 0.23987936461074316, 'rouge2': 0.05808943407195291, 'rougeL': 0.1650498622820974, 'rougeLsum': 0.1650498622820974}\n\nBert={'precision': 0.694144615811338, 'recall': 0.6835571901391192, 'f1': 0.6884710294942823}\n\nmover = 0.6019846863815564"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"xlsum_data-wiki_1024_results\"\n\nrouge={'rouge1': 0.23987936461074316, 'rouge2': 0.05808943407195291, 'rougeL': 0.1650498622820974, 'rougeLsum': 0.1650498622820974}\n\nBert={'precision': 0.694144615811338, 'recall': 0.6835571901391192, 'f1': 0.6884710294942823}\n\nmover = 0.6019846863815564"
]
| [
6,
137
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"xlsum_data-wiki_1024_results\"\n\nrouge={'rouge1': 0.23987936461074316, 'rouge2': 0.05808943407195291, 'rougeL': 0.1650498622820974, 'rougeLsum': 0.1650498622820974}\n\nBert={'precision': 0.694144615811338, 'recall': 0.6835571901391192, 'f1': 0.6884710294942823}\n\nmover = 0.6019846863815564"
]
|
433c4b8a6ee463fb0edde4db5e4c76824b3ee818 |
# Fantasy/Sci-fi Dataset
This dataset contains fantasy and sci-fi books in plain text format. Each line of the dataset is one sentence of the concatenated corpus built from the following books (a loading sketch follows the list):
1. 01 Horselords.txt
2. 01 The Second Generation.txt 02 Tantras.txt
3. R.A. Salvatore - The Icewind Dale Trilogy - 2 - Streams of Silver.txt
4. RA SalvatoreThe Legacy of The Drow - 2 - Starless Night.txt
5. R.A.Salvatore - Icewind Dale Trilogy 1 - The Crystal Shard.txt
6. Star Wars - [Thrawn Trilogy 02] - Dark Force Rising (by Timothy Zahn).txt
7. Robert Jordan - The Wheel of Time 01 - Eye of the world.txt
8. 03 Crusade.txt
9. Salvatore, RA - Cleric Quintet 5 -The Chaos Curse.txt
10. 03 Waterdeep.txt Clarke Arthur C - 3001 The Final Odissey.txt
11. Dragonlance Preludes 2 vol 2 - Flint the King.txt
12. 03 Dragons of Spring Dawning.txt
13. Lloyd Alexander - [Chronicles Of Prydain 4] Taran Wanderer.txt
14. 01 Dragons of Autumn Twilight.txt
15. 03 The Two Swords.txt
16. Robert Jordan - 12 - The Gathering Storm - Chapter One.txt
17. 02 War Of The Twins.txt
18. 01 - The Fellowship Of The Ring.txt
19. 02 The Lone Drow.txt
20. 01 The Thousand Orcs.txt Auel, Jean - Earth's Children
21. 03 - The Mammoth Hunters.txt 01 Shadowdale.txt Salvatore, RA - Cleric Quintet 3 - Night Masks.txt
22. Robert Jordan - The Strike at Shayol Ghul.txt
23. Salvatore, R.A. - Paths of Darkness 1 - The Silent Blade.txt
24. Clancy Tom - Patriot Games.txt
25. Lloyd Alexander - [Chronicles Of Prydain 1] Book of Three.txt
26. Lloyd Alexander - [Chronicles Of Prydain 2] Black Cauldron.txt
27. Salvatore, R.A. - Paths of Darkness 3 - Servant of the Shard.txt
28. 02 Crown of Fire.txt
29. 04 Prince of Lies.txt
30. Salvatore, R.A. - Paths of Darkness 2 - The Spine of the World.txt
31. Robert Jordan - The Wheel of Time 11 - Knife of Dreams.txt
32. Lloyd Alexander - [Chronicles Of Prydain 3] Castle Of Llyr.txt R.A. Salvatore - The Dark Elf Trilogy.txt
33. 02 Dragonwall.txt Frank Herbert - Dune.txt
34. 02 - The Two Towers.txt
35. Salvatore, RA - Cleric Quintet 4 - The Fallen Fortress.txt
36. Robert Jordan - The Wheel of Time 04 - The Shadow Rising.txt
37. Robert Jordan - The Wheel of Time 10 - Crossroads of Twilight.txt
38. Harry Potter 2 - Chamber of Secrets.txt
39. Auel, Jean - Earth's Children 01 - The Clan of the Cave Bear.txt
40. Harry Potter 6 - The Half Blood Prince.txt
41. Robert Jordan - The Wheel of Time 03 - The Dragon Reborn.txt
42. R.A. Salvatore - The Legacy of the Drow 1 - Legacy.txt
43. 01 Spellfire.txt Frank Herbert - Children of Dune.txt
44. 01 Time Of The Twins.txt
45. R.A. Salvatore - The Legacy of the Drow III - Siege of Darkness.txt
46. Robert Jordan - The Wheel of Time 08 - The Path of Daggers.txt
47. R.A. Salvatore - The Icewind Dale Trilogy - 3 - The Halfling's Gem.txt
48. Auel, Jean - Earth's Children 05 - The Shelters Of Stone.txt
49. Harry Potter 7 - Deathly Hollows.txt
50. Robert Jordan - The Wheel of Time 07 - A Crown of Swords.txt
51. Harry Potter 1 - Sorcerer's Stone.txt
52. 05 Crucible - The Trial Of Cyric The Mad.txt Star Wars - [Thrawn Trilogy 01] - Heir to the Empire (by Timothy Zahn).txt
53. Robert Jordan - The Wheel of Time 05 - The Fires of Heaven.txt Robert Jordan - The Wheel of Time Compendium.txt
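
Since every line is a single sentence, the corpus can be read with the generic line-oriented `text` loader. This is only a sketch and not part of the original card; the local file name is a placeholder:

```python
# Sketch (not from the card): load a local plain-text copy of the corpus,
# one sentence per line. "fantasy_corpus.txt" is a placeholder path.
from datasets import load_dataset

sentences = load_dataset("text", data_files={"train": "fantasy_corpus.txt"}, split="train")
print(len(sentences), sentences[0]["text"])  # the generic loader exposes a "text" column
```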
| StrangeCroissant/fantasy_dataset | [
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"books",
"fantasy",
"scifi",
"text",
"region:us"
]
| 2023-11-06T09:35:30+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "question-answering"], "tags": ["books", "fantasy", "scifi", "text"]} | 2023-11-06T14:05:20+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-generation #task_categories-question-answering #size_categories-10K<n<100K #language-English #books #fantasy #scifi #text #region-us
|
# Fantasy/Sci-fi Dataset
This dataset contains fantasy and scifi books in plain text format. Each line of the dataset represents each sentence of the concated corpus for the following books:
1. 01 URL
2. 01 The Second URL 02 URL
3. R.A. Salvatore - The Icewind Dale Trilogy - 2 - Streams of URL
4. RA SalvatoreThe Legacy of The Drow - 2 - Starless URL
5. R.A.Salvatore - Icewind Dale Trilogy 1 - The Crystal URL
6. Star Wars - [Thrawn Trilogy 02] - Dark Force Rising (by Timothy Zahn).txt
7. Robert Jordan - The Wheel of Time 01 - Eye of the URL
8. 03 URL
9. Salvatore, RA - Cleric Quintet 5 -The Chaos URL
10. 03 URL Clarke Arthur C - 3001 The Final URL
11. Dragonlance Preludes 2 vol 2 - Flint the URL
12. 03 Dragons of Spring URL
13. Lloyd Alexander - [Chronicles Of Prydain 4] Taran URL
14. 01 Dragons of Autumn URL
15. 03 The Two URL
16. Robert Jordan - 12 - The Gathering Storm - Chapter URL
17. 02 War Of The URL
18. 01 - The Fellowship Of The URL
19. 02 The Lone URL
20. 01 The Thousand URL Auel, Jean - Earth's Children
21. 03 - The Mammoth URL 01 URL Salvatore, RA - Cleric Quintet 3 - Night URL
22. Robert Jordan - The Strike at Shayol URL
23. Salvatore, R.A. - Paths of Darkness 1 - The Silent URL
24. Clancy Tom - Patriot URL
25. Lloyd Alexander - [Chronicles Of Prydain 1] Book of URL
26. Lloyd Alexander - [Chronicles Of Prydain 2] Black URL
27. Salvatore, R.A. - Paths of Darkness 3 - Servant of the URL
28. 02 Crown of URL
29. 04 Prince of URL
30. Salvatore, R.A. - Paths of Darkness 2 - The Spine of the URL
31. Robert Jordan - The Wheel of Time 11 - Knife of URL
32. Lloyd Alexander - [Chronicles Of Prydain 3] Castle Of URL R.A. Salvatore - The Dark Elf URL
33. 02 URL Frank Herbert - URL
34. 02 - The Two URL
35. Salvatore, RA - Cleric Quintet 4 - The Fallen URL
36. Robert Jordan - The Wheel of Time 04 - The Shadow URL
37. Robert Jordan - The Wheel of Time 10 - Crossroads of URL
38. Harry Potter 2 - Chamber of URL
39. Auel, Jean - Earth's Children 01 - The Clan of the Cave URL
40. Harry Potter 6 - The Half Blood URL
41. Robert Jordan - The Wheel of Time 03 - The Dragon URL
42. R.A. Salvatore - The Legacy of the Drow 1 - URL
43. 01 URL Frank Herbert - Children of URL
44. 01 Time Of The URL
45. R.A. Salvatore - The Legacy of the Drow III - Siege of URL
46. Robert Jordan - The Wheel of Time 08 - The Path of URL
47. R.A. Salvatore - The Icewind Dale Trilogy - 3 - The Halfling's URL
48. Auel, Jean - Earth's Children 05 - The Shelters Of URL
49. Harry Potter 7 - Deathly URL
50. Robert Jordan - The Wheel of Time 07 - A Crown of URL
51. Harry Potter 1 - Sorcerer's URL
52. 05 Crucible - The Trial Of Cyric The URL Star Wars - [Thrawn Trilogy 01] - Heir to the Empire (by Timothy Zahn).txt
53. Robert Jordan - The Wheel of Time 05 - The Fires of URL Robert Jordan - The Wheel of Time URL
| [
"# Fantasy/Sci-fi Dataset \n\nThis dataset contains fantasy and scifi books in plain text format. Each line of the dataset represents each sentence of the concated corpus for the following books:\n\n1. 01 URL \n2. 01 The Second URL 02 URL \n3. R.A. Salvatore - The Icewind Dale Trilogy - 2 - Streams of URL \n4. RA SalvatoreThe Legacy of The Drow - 2 - Starless URL \n5. R.A.Salvatore - Icewind Dale Trilogy 1 - The Crystal URL \n6. Star Wars - [Thrawn Trilogy 02] - Dark Force Rising (by Timothy Zahn).txt \n7. Robert Jordan - The Wheel of Time 01 - Eye of the URL \n8. 03 URL \n9. Salvatore, RA - Cleric Quintet 5 -The Chaos URL \n10. 03 URL Clarke Arthur C - 3001 The Final URL \n11. Dragonlance Preludes 2 vol 2 - Flint the URL \n12. 03 Dragons of Spring URL \n13. Lloyd Alexander - [Chronicles Of Prydain 4] Taran URL \n14. 01 Dragons of Autumn URL \n15. 03 The Two URL \n16. Robert Jordan - 12 - The Gathering Storm - Chapter URL \n17. 02 War Of The URL \n18. 01 - The Fellowship Of The URL \n19. 02 The Lone URL \n20. 01 The Thousand URL Auel, Jean - Earth's Children \n21. 03 - The Mammoth URL 01 URL Salvatore, RA - Cleric Quintet 3 - Night URL \n22. Robert Jordan - The Strike at Shayol URL \n23. Salvatore, R.A. - Paths of Darkness 1 - The Silent URL \n24. Clancy Tom - Patriot URL \n25. Lloyd Alexander - [Chronicles Of Prydain 1] Book of URL \n26. Lloyd Alexander - [Chronicles Of Prydain 2] Black URL \n27. Salvatore, R.A. - Paths of Darkness 3 - Servant of the URL \n28. 02 Crown of URL \n29. 04 Prince of URL \n30. Salvatore, R.A. - Paths of Darkness 2 - The Spine of the URL \n31. Robert Jordan - The Wheel of Time 11 - Knife of URL \n32. Lloyd Alexander - [Chronicles Of Prydain 3] Castle Of URL R.A. Salvatore - The Dark Elf URL \n33. 02 URL Frank Herbert - URL \n34. 02 - The Two URL \n35. Salvatore, RA - Cleric Quintet 4 - The Fallen URL \n36. Robert Jordan - The Wheel of Time 04 - The Shadow URL \n37. Robert Jordan - The Wheel of Time 10 - Crossroads of URL \n38. Harry Potter 2 - Chamber of URL \n39. Auel, Jean - Earth's Children 01 - The Clan of the Cave URL \n40. Harry Potter 6 - The Half Blood URL \n41. Robert Jordan - The Wheel of Time 03 - The Dragon URL \n42. R.A. Salvatore - The Legacy of the Drow 1 - URL \n43. 01 URL Frank Herbert - Children of URL \n44. 01 Time Of The URL \n45. R.A. Salvatore - The Legacy of the Drow III - Siege of URL \n46. Robert Jordan - The Wheel of Time 08 - The Path of URL \n47. R.A. Salvatore - The Icewind Dale Trilogy - 3 - The Halfling's URL \n48. Auel, Jean - Earth's Children 05 - The Shelters Of URL \n49. Harry Potter 7 - Deathly URL \n50. Robert Jordan - The Wheel of Time 07 - A Crown of URL \n51. Harry Potter 1 - Sorcerer's URL \n52. 05 Crucible - The Trial Of Cyric The URL Star Wars - [Thrawn Trilogy 01] - Heir to the Empire (by Timothy Zahn).txt \n53. Robert Jordan - The Wheel of Time 05 - The Fires of URL Robert Jordan - The Wheel of Time URL"
]
| [
"TAGS\n#task_categories-text-generation #task_categories-question-answering #size_categories-10K<n<100K #language-English #books #fantasy #scifi #text #region-us \n",
"# Fantasy/Sci-fi Dataset \n\nThis dataset contains fantasy and scifi books in plain text format. Each line of the dataset represents each sentence of the concated corpus for the following books:\n\n1. 01 URL \n2. 01 The Second URL 02 URL \n3. R.A. Salvatore - The Icewind Dale Trilogy - 2 - Streams of URL \n4. RA SalvatoreThe Legacy of The Drow - 2 - Starless URL \n5. R.A.Salvatore - Icewind Dale Trilogy 1 - The Crystal URL \n6. Star Wars - [Thrawn Trilogy 02] - Dark Force Rising (by Timothy Zahn).txt \n7. Robert Jordan - The Wheel of Time 01 - Eye of the URL \n8. 03 URL \n9. Salvatore, RA - Cleric Quintet 5 -The Chaos URL \n10. 03 URL Clarke Arthur C - 3001 The Final URL \n11. Dragonlance Preludes 2 vol 2 - Flint the URL \n12. 03 Dragons of Spring URL \n13. Lloyd Alexander - [Chronicles Of Prydain 4] Taran URL \n14. 01 Dragons of Autumn URL \n15. 03 The Two URL \n16. Robert Jordan - 12 - The Gathering Storm - Chapter URL \n17. 02 War Of The URL \n18. 01 - The Fellowship Of The URL \n19. 02 The Lone URL \n20. 01 The Thousand URL Auel, Jean - Earth's Children \n21. 03 - The Mammoth URL 01 URL Salvatore, RA - Cleric Quintet 3 - Night URL \n22. Robert Jordan - The Strike at Shayol URL \n23. Salvatore, R.A. - Paths of Darkness 1 - The Silent URL \n24. Clancy Tom - Patriot URL \n25. Lloyd Alexander - [Chronicles Of Prydain 1] Book of URL \n26. Lloyd Alexander - [Chronicles Of Prydain 2] Black URL \n27. Salvatore, R.A. - Paths of Darkness 3 - Servant of the URL \n28. 02 Crown of URL \n29. 04 Prince of URL \n30. Salvatore, R.A. - Paths of Darkness 2 - The Spine of the URL \n31. Robert Jordan - The Wheel of Time 11 - Knife of URL \n32. Lloyd Alexander - [Chronicles Of Prydain 3] Castle Of URL R.A. Salvatore - The Dark Elf URL \n33. 02 URL Frank Herbert - URL \n34. 02 - The Two URL \n35. Salvatore, RA - Cleric Quintet 4 - The Fallen URL \n36. Robert Jordan - The Wheel of Time 04 - The Shadow URL \n37. Robert Jordan - The Wheel of Time 10 - Crossroads of URL \n38. Harry Potter 2 - Chamber of URL \n39. Auel, Jean - Earth's Children 01 - The Clan of the Cave URL \n40. Harry Potter 6 - The Half Blood URL \n41. Robert Jordan - The Wheel of Time 03 - The Dragon URL \n42. R.A. Salvatore - The Legacy of the Drow 1 - URL \n43. 01 URL Frank Herbert - Children of URL \n44. 01 Time Of The URL \n45. R.A. Salvatore - The Legacy of the Drow III - Siege of URL \n46. Robert Jordan - The Wheel of Time 08 - The Path of URL \n47. R.A. Salvatore - The Icewind Dale Trilogy - 3 - The Halfling's URL \n48. Auel, Jean - Earth's Children 05 - The Shelters Of URL \n49. Harry Potter 7 - Deathly URL \n50. Robert Jordan - The Wheel of Time 07 - A Crown of URL \n51. Harry Potter 1 - Sorcerer's URL \n52. 05 Crucible - The Trial Of Cyric The URL Star Wars - [Thrawn Trilogy 01] - Heir to the Empire (by Timothy Zahn).txt \n53. Robert Jordan - The Wheel of Time 05 - The Fires of URL Robert Jordan - The Wheel of Time URL"
]
| [
55,
807
]
| [
"passage: TAGS\n#task_categories-text-generation #task_categories-question-answering #size_categories-10K<n<100K #language-English #books #fantasy #scifi #text #region-us \n"
]
|
2521757da6d03ef506d4c518a879681385c033e5 | # Dataset Card for "pimcore-docs-embeddings-gpe"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | fashxp/pimcore-docs-embeddings-gpe | [
"region:us"
]
| 2023-11-06T09:57:44+00:00 | {"dataset_info": {"features": [{"name": "heading", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 14945986, "num_examples": 3100}], "download_size": 15661734, "dataset_size": 14945986}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T09:57:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "pimcore-docs-embeddings-gpe"
More Information needed | [
"# Dataset Card for \"pimcore-docs-embeddings-gpe\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"pimcore-docs-embeddings-gpe\"\n\nMore Information needed"
]
| [
6,
23
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"pimcore-docs-embeddings-gpe\"\n\nMore Information needed"
]
|
dad01fc54cd06263605b2d8e716f18920e5fd061 | # Dataset Card for "metadata-legal-doc-ser"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ademax/metadata-legal-doc-ser | [
"region:us"
]
| 2023-11-06T10:03:06+00:00 | {"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 18870413203, "num_examples": 237467}], "download_size": 1661208233, "dataset_size": 18870413203}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T10:06:46+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "metadata-legal-doc-ser"
More Information needed | [
"# Dataset Card for \"metadata-legal-doc-ser\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"metadata-legal-doc-ser\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"metadata-legal-doc-ser\"\n\nMore Information needed"
]
|
67cf80bd53acfe899c00c87869b6f796559102f6 | # Dataset Card for "text_message_translations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | chirunder/text_message_translations_1k | [
"region:us"
]
| 2023-11-06T10:05:00+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "translations", "struct": [{"name": "chinese", "dtype": "string"}, {"name": "hindi", "dtype": "string"}, {"name": "russian", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 454875, "num_examples": 1000}], "download_size": 253800, "dataset_size": 454875}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T10:15:42+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "text_message_translations"
More Information needed | [
"# Dataset Card for \"text_message_translations\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"text_message_translations\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"text_message_translations\"\n\nMore Information needed"
]
|
b4914ae04908727a6e0681414fede2f2463d6840 | # Dataset Card for "mozilla_commonvoice_hackathon_preprocessed_train_batch_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Jayem-11/mozilla_commonvoice_hackathon_preprocessed_train_batch_1 | [
"region:us"
]
| 2023-11-06T10:10:00+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}, {"name": "input_length", "dtype": "int64"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}, {"name": "labels_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 15582750752.60228, "num_examples": 13687}], "download_size": 4763193239, "dataset_size": 15582750752.60228}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T12:15:28+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mozilla_commonvoice_hackathon_preprocessed_train_batch_1"
More Information needed | [
"# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_1\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_1\"\n\nMore Information needed"
]
| [
6,
32
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_1\"\n\nMore Information needed"
]
|
203d43836c2eb3491f2147a5f93b8cd575585572 |
# Dataset Card for PKU-PosterLayout
[](https://github.com/shunk031/huggingface-datasets_PKU-PosterLayout/actions/workflows/ci.yaml)
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://59.108.48.34/tiki/PosterLayout/
- **Repository:** https://github.com/shunk031/huggingface-datasets_PKU-PosterLayout
- **Paper (Preprint):** https://arxiv.org/abs/2303.15937
- **Paper (CVPR2023):** https://openaccess.thecvf.com/content/CVPR2023/html/Hsu_PosterLayout_A_New_Benchmark_and_Approach_for_Content-Aware_Visual-Textual_Presentation_CVPR_2023_paper.html
### Dataset Summary
PKU-PosterLayout is a new dataset and benchmark for content-aware visual-textual presentation layout.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language data in PKU-PosterLayout is in Chinese ([BCP-47 zh](https://www.rfc-editor.org/info/bcp47)).
## Dataset Structure
### Data Instances
To use PKU-PosterLayout dataset, you need to download the poster image and saliency maps via [PKU Netdisk](https://disk.pku.edu.cn/link/999C6E97BB354DF8AD0F9E1F9003BE05) or [Google Drive](https://drive.google.com/drive/folders/1Gk202RVs9Qy2zbJUNeurC1CaQYNU-Vuv?usp=share_link).
```
/path/to/datasets
├── train
│ ├── inpainted_poster.zip
│ ├── original_poster.zip
│ ├── saliencymaps_basnet.zip
│ └── saliencymaps_pfpn.zip
└── test
├── image_canvas.zip
├── saliencymaps_basnet.zip
└── saliencymaps_pfpn.zip
```
```python
import datasets as ds
dataset = ds.load_dataset(
path="shunk031/PKU-PosterLayout",
data_dir="/path/to/datasets/",
)
```
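
As an illustrative follow-up (not from the original card), assuming the call above returns a `DatasetDict`, the splits and columns it exposes can be inspected like this:

```python
# Sketch: inspect whatever splits/columns the builder returns; the exact names
# depend on the dataset loading script, so nothing below is guaranteed.
for split_name, split in dataset.items():
    print(split_name, len(split), split.column_names)

example = dataset["train"][0]  # assumes a "train" split exists
print({key: type(value) for key, value in example.items()})
```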
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@inproceedings{hsu2023posterlayout,
title={PosterLayout: A New Benchmark and Approach for Content-aware Visual-Textual Presentation Layout},
author={Hsu, Hsiao Yuan and He, Xiangteng and Peng, Yuxin and Kong, Hao and Zhang, Qing},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={6018--6026},
year={2023}
}
```
### Contributions
Thanks to [@PKU-ICST-MIPL](https://github.com/PKU-ICST-MIPL) for creating this dataset.
| pytorch-layout-generation/PKU-PosterLayout | [
"task_categories:other",
"annotations_creators:expert-generated",
"language_creators:found",
"source_datasets:extended|PosterErase",
"language:zh",
"license:cc-by-sa-4.0",
"layout-generation",
"graphic design",
"arxiv:2303.15937",
"region:us"
]
| 2023-11-06T10:11:50+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["zh"], "license": ["cc-by-sa-4.0"], "multilinguality": [], "size_categories": [], "source_datasets": ["extended|PosterErase"], "task_categories": ["other"], "task_ids": [], "pretty_name": "PKU-PosterLayout", "tags": ["layout-generation", "graphic design"]} | 2023-11-19T14:37:07+00:00 | [
"2303.15937"
]
| [
"zh"
]
| TAGS
#task_categories-other #annotations_creators-expert-generated #language_creators-found #source_datasets-extended|PosterErase #language-Chinese #license-cc-by-sa-4.0 #layout-generation #graphic design #arxiv-2303.15937 #region-us
|
# Dataset Card for PKU-PosterLayout
: URL
- Paper (CVPR2023): URL
### Dataset Summary
PKU-PosterLayout is a new dataset and benchmark for content-aware visual-textual presentation layout.
### Supported Tasks and Leaderboards
### Languages
The language data in PKU-PosterLayout is in Chinese (BCP-47 zh).
## Dataset Structure
### Data Instances
To use PKU-PosterLayout dataset, you need to download the poster image and saliency maps via PKU Netdisk or Google Drive.
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @PKU-ICST-MIPL for creating this dataset.
| [
"# Dataset Card for PKU-PosterLayout\n\n: URL\n- Paper (CVPR2023): URL",
"### Dataset Summary\n\nPKU-PosterLayout is a new dataset and benchmark for content-aware visual-textual presentation layout.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nThe language data in PKU-PosterLayout is in Chinese (BCP-47 zh).",
"## Dataset Structure",
"### Data Instances\n\nTo use PKU-PosterLayout dataset, you need to download the poster image and saliency maps via PKU Netdisk or Google Drive.",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @PKU-ICST-MIPL for creating this dataset."
]
| [
"TAGS\n#task_categories-other #annotations_creators-expert-generated #language_creators-found #source_datasets-extended|PosterErase #language-Chinese #license-cc-by-sa-4.0 #layout-generation #graphic design #arxiv-2303.15937 #region-us \n",
"# Dataset Card for PKU-PosterLayout\n\n: URL\n- Paper (CVPR2023): URL",
"### Dataset Summary\n\nPKU-PosterLayout is a new dataset and benchmark for content-aware visual-textual presentation layout.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nThe language data in PKU-PosterLayout is in Chinese (BCP-47 zh).",
"## Dataset Structure",
"### Data Instances\n\nTo use PKU-PosterLayout dataset, you need to download the poster image and saliency maps via PKU Netdisk or Google Drive.",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @PKU-ICST-MIPL for creating this dataset."
]
| [
83,
19,
162,
48,
33,
10,
26,
6,
41,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
22
]
| [
"passage: TAGS\n#task_categories-other #annotations_creators-expert-generated #language_creators-found #source_datasets-extended|PosterErase #language-Chinese #license-cc-by-sa-4.0 #layout-generation #graphic design #arxiv-2303.15937 #region-us \n# Dataset Card for PKU-PosterLayout\n\n: URL\n- Paper (CVPR2023): URL### Dataset Summary\n\nPKU-PosterLayout is a new dataset and benchmark for content-aware visual-textual presentation layout.### Supported Tasks and Leaderboards### Languages\n\nThe language data in PKU-PosterLayout is in Chinese (BCP-47 zh).## Dataset Structure### Data Instances\n\nTo use PKU-PosterLayout dataset, you need to download the poster image and saliency maps via PKU Netdisk or Google Drive.### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information"
]
|
662e52340311bcf52d927c207ae51c6c3389b5f3 |
# Dataset Card for "ceti_audio"
## Table of Contents
- [Dataset Card for "ceti\_audio"](#dataset-card-for-ceti_audio)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
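
While the sections above are still unfilled, the repository's `dataset_info` metadata (16 kHz `audio`, plus `coda_type`, `path` and `sampling_rate` columns, with `train`/`test` splits) suggests a minimal loading sketch. This is an illustrative addition, not part of the original card:

```python
# Sketch based on the repo metadata, not on the (unfilled) card text above.
from datasets import load_dataset

ceti = load_dataset("autumnjohnson/ceti_audio")
sample = ceti["train"][0]   # "train" and "test" splits per the metadata
audio = sample["audio"]     # decoded Audio feature: {"array", "sampling_rate", "path"}
print(sample["coda_type"], audio["sampling_rate"], len(audio["array"]))
```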
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@autumnjohnson](https://github.com/<github-username>) for adding this dataset. | autumnjohnson/ceti_audio | [
"size_categories:1K<n<10K",
"region:us"
]
| 2023-11-06T10:15:17+00:00 | {"size_categories": ["1K<n<10K"], "pretty_name": "Project CETI (Cetacean Translation Initiative) audio", "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "coda_type", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 295401207.9840547, "num_examples": 3160}, {"name": "test", "num_bytes": 32905451.01594533, "num_examples": 352}], "download_size": 162207534, "dataset_size": 328306659.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2024-01-27T22:23:14+00:00 | []
| []
| TAGS
#size_categories-1K<n<10K #region-us
|
# Dataset Card for "ceti_audio"
## Table of Contents
- Dataset Card for "ceti\_audio"
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Initial Data Collection and Normalization
- Who are the source language producers?
- Annotations
- Annotation process
- Who are the annotators?
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @autumnjohnson for adding this dataset. | [
"# Dataset Card for \"ceti_audio\"",
"## Table of Contents\n- Dataset Card for \"ceti\\_audio\"\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @autumnjohnson for adding this dataset."
]
| [
"TAGS\n#size_categories-1K<n<10K #region-us \n",
"# Dataset Card for \"ceti_audio\"",
"## Table of Contents\n- Dataset Card for \"ceti\\_audio\"\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @autumnjohnson for adding this dataset."
]
| [
18,
12,
168,
24,
6,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
20
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #region-us \n# Dataset Card for \"ceti_audio\"## Table of Contents\n- Dataset Card for \"ceti\\_audio\"\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @autumnjohnson for adding this dataset."
]
|
55eea341e4b8074275c5e7e3e6fb96402968d112 | # Dataset Card for "zac2023-math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nguyenthanhdo/zac2023-math | [
"region:us"
]
| 2023-11-06T10:16:55+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "public_test", "path": "data/public_test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 303871, "num_examples": 1200}, {"name": "public_test", "num_bytes": 31224, "num_examples": 189}], "download_size": 172884, "dataset_size": 335095}} | 2023-11-06T10:16:56+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "zac2023-math"
More Information needed | [
"# Dataset Card for \"zac2023-math\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"zac2023-math\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"zac2023-math\"\n\nMore Information needed"
]
|
ee18c6b9e66a6579875330f037cde3fff9aa5cd3 | # Dataset Card for "ira_ragas"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | CogwiseAI/ira_ragas | [
"region:us"
]
| 2023-11-06T10:21:44+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "baseline", "path": "data/baseline-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "ground_truths", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "contexts", "dtype": "string"}], "splits": [{"name": "baseline", "num_bytes": 11683, "num_examples": 10}], "download_size": 15470, "dataset_size": 11683}} | 2023-11-06T10:21:46+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ira_ragas"
More Information needed | [
"# Dataset Card for \"ira_ragas\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ira_ragas\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ira_ragas\"\n\nMore Information needed"
]
|
ad9dbd30cf1a9f0f4cbc6d8d9f0f2e335b71ef0f |
# Dataset Card for Nexdata/Hindi-Conversational-Speech-Data-by-Telephone
## Description
The 760 Hours - Hindi Conversational Speech Data involved more than 1,000 native speakers and was developed with a proper balance of gender ratio. Speakers chose a few familiar topics from a given list and held conversations to ensure the dialogues' fluency and naturalness. The recording devices are various mobile phones. The audio format is 8kHz, 16bit, uncompressed WAV, and all the speech data was recorded in quiet indoor environments. All the speech audio was manually transcribed with the text content, the start and end time of each effective sentence, and speaker identification. The accuracy rate of sentences is ≥ 95%.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1206?source=Huggingface
## Format
8kHz, 8bit, wav, mono channel;
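
A quick way to sanity-check these parameters on a delivered file is sketched below. This is an illustrative addition, not part of the card; the file path is a placeholder, and note that the description above states 16-bit while this section states 8-bit:

```python
# Sketch: verify a downloaded recording against the stated format (8 kHz mono,
# uncompressed WAV). The path is a placeholder; sample width may be 1 byte
# (8-bit, as listed here) or 2 bytes (16-bit, as in the description above).
import wave

with wave.open("example_call.wav", "rb") as wav_file:
    print("channels:", wav_file.getnchannels())     # expected: 1 (mono)
    print("sample rate:", wav_file.getframerate())  # expected: 8000
    print("sample width (bytes):", wav_file.getsampwidth())
```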
## Recording Environment
quiet indoor environment, without echo;
## Recording Content
dozens of topics are specified, and the speakers make dialogue under those topics while the recording is performed;
## Speaker
1,004 speakers totally, with 48% male and 52% female.
## Annotation
annotating for the transcription text, speaker identification and gender
## Device
Android mobile phone, iPhone;
## Language
Hindi
## Application scenarios
speech recognition; voiceprint recognition;
## Accuracy rate
95%
# Licensing Information
Commercial License | Nexdata/Hindi_Conversational_Speech_Data_by_Telephone | [
"task_categories:conversational",
"language:hi",
"region:us"
]
| 2023-11-06T10:25:09+00:00 | {"language": ["hi"], "task_categories": ["conversational"]} | 2023-11-10T07:39:45+00:00 | []
| [
"hi"
]
| TAGS
#task_categories-conversational #language-Hindi #region-us
|
# Dataset Card for Nexdata/Hindi-Conversational-Speech-Data-by-Telephone
## Description
The 760 Hours - Hindi Conversational Speech Data involved more than 1,000 native speakers, developed with proper balance of gender ratio, Speakers would choose a few familiar topics out of the given list and start conversations to ensure dialogues' fluency and naturalness. The recording devices are various mobile phones. The audio format is 8kHz, 16bit, uncompressed WAV, and all the speech data was recorded in quiet indoor environments. All the speech audio was manually transcribed with text content, the start and end time of each effective sentence, and speaker identification. The accuracy rate of sentences is ≥ 95%.
For more details, please refer to the link: URL
## Format
8kHz, 8bit, wav, mono channel;
## Recording Environment
quiet indoor environment, without echo;
## Recording Content
dozens of topics are specified, and the speakers make dialogue under those topics while the recording is performed;
## Speaker
1,004 speakers totally, with 48% male and 52% female.
## Annotation
annotating for the transcription text, speaker identification and gender
## Device
Android mobile phone, iPhone;
## Language
Hindi
## Application scenarios
speech recognition; voiceprint recognition;
## Accuracy rate
95%
# Licensing Information
Commercial License | [
"# Dataset Card for Nexdata/Hindi-Conversational-Speech-Data-by-Telephone",
"## Description\nThe 760 Hours - Hindi Conversational Speech Data involved more than 1,000 native speakers, developed with proper balance of gender ratio, Speakers would choose a few familiar topics out of the given list and start conversations to ensure dialogues' fluency and naturalness. The recording devices are various mobile phones. The audio format is 8kHz, 16bit, uncompressed WAV, and all the speech data was recorded in quiet indoor environments. All the speech audio was manually transcribed with text content, the start and end time of each effective sentence, and speaker identification. The accuracy rate of sentences is ≥ 95%.\n\nFor more details, please refer to the link: URL",
"## Format\n8kHz, 8bit, wav, mono channel;",
"## Recording Environment\nquiet indoor environment, without echo;",
"## Recording Content\ndozens of topics are specified, and the speakers make dialogue under those topics while the recording is performed;",
"## Speaker\n1,004 speakers totally, with 48% male and 52% female.",
"## Annotation\nannotating for the transcription text, speaker identification and gender",
"## Device\nAndroid mobile phone, iPhone;",
"## Language\nHindi",
"## Application scenarios\nspeech recognition; voiceprint recognition;",
"## Accuracy rate\n95%",
"# Licensing Information\nCommercial License"
]
| [
"TAGS\n#task_categories-conversational #language-Hindi #region-us \n",
"# Dataset Card for Nexdata/Hindi-Conversational-Speech-Data-by-Telephone",
"## Description\nThe 760 Hours - Hindi Conversational Speech Data involved more than 1,000 native speakers, developed with proper balance of gender ratio, Speakers would choose a few familiar topics out of the given list and start conversations to ensure dialogues' fluency and naturalness. The recording devices are various mobile phones. The audio format is 8kHz, 16bit, uncompressed WAV, and all the speech data was recorded in quiet indoor environments. All the speech audio was manually transcribed with text content, the start and end time of each effective sentence, and speaker identification. The accuracy rate of sentences is ≥ 95%.\n\nFor more details, please refer to the link: URL",
"## Format\n8kHz, 8bit, wav, mono channel;",
"## Recording Environment\nquiet indoor environment, without echo;",
"## Recording Content\ndozens of topics are specified, and the speakers make dialogue under those topics while the recording is performed;",
"## Speaker\n1,004 speakers totally, with 48% male and 52% female.",
"## Annotation\nannotating for the transcription text, speaker identification and gender",
"## Device\nAndroid mobile phone, iPhone;",
"## Language\nHindi",
"## Application scenarios\nspeech recognition; voiceprint recognition;",
"## Accuracy rate\n95%",
"# Licensing Information\nCommercial License"
]
| [
20,
25,
156,
15,
16,
30,
17,
17,
8,
3,
11,
6,
9
]
| [
"passage: TAGS\n#task_categories-conversational #language-Hindi #region-us \n# Dataset Card for Nexdata/Hindi-Conversational-Speech-Data-by-Telephone## Description\nThe 760 Hours - Hindi Conversational Speech Data involved more than 1,000 native speakers, developed with proper balance of gender ratio, Speakers would choose a few familiar topics out of the given list and start conversations to ensure dialogues' fluency and naturalness. The recording devices are various mobile phones. The audio format is 8kHz, 16bit, uncompressed WAV, and all the speech data was recorded in quiet indoor environments. All the speech audio was manually transcribed with text content, the start and end time of each effective sentence, and speaker identification. The accuracy rate of sentences is ≥ 95%.\n\nFor more details, please refer to the link: URL## Format\n8kHz, 8bit, wav, mono channel;## Recording Environment\nquiet indoor environment, without echo;## Recording Content\ndozens of topics are specified, and the speakers make dialogue under those topics while the recording is performed;## Speaker\n1,004 speakers totally, with 48% male and 52% female.## Annotation\nannotating for the transcription text, speaker identification and gender## Device\nAndroid mobile phone, iPhone;## Language\nHindi## Application scenarios\nspeech recognition; voiceprint recognition;## Accuracy rate\n95%# Licensing Information\nCommercial License"
]
|
ec4e2c2ec3e0c70087c67a28a7bce58b682b8109 |
# Dataset Card for Claim Stance Dataset
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Notes](#notes)
## Dataset Summary
### Claim Stance
This dataset contains 2,394 labeled Wikipedia claims for 55 topics. The dataset includes the stance (Pro/Con) of each claim towards the topic,
as well as fine-grained annotations, based on the semantic model of [Stance Classification of Context-Dependent Claims](https://aclanthology.org/E17-1024/) (topic target,
topic sentiment towards its target, claim target, claim sentiment towards its target, and the relation between the targets).
The dataset is divided into a training set (25 topics, 1,039 claims) and a test set (30 topics, 1,355 claims).
The information in this card refers to this subset of the dataset unless stated otherwise.
### Claim Stance Topic
This subset contains only the claims (column `text`) and their associated topic (column `label`), with a different train-validation-test split.
This subset can be utilized for topic classification tasks.
## Dataset Structure
* topicId - internal topic ID
* split - train or test
* topicText - the topic text
* topicTarget - sentiment target of topic
* topicSentiment - topic sentiment towards its target (1:positive/-1:negative)
* claims.claimId - claim internal ID
* claims.stance - PRO or CON
* claims.claimCorrectedText - the corrected version of the claim
* claims.claimOriginalText - the original version of the claim
* claims.Compatible - is the claim compatible with the semantic model of [Stance Classification of Context-Dependent Claims](https://aclanthology.org/E17-1024/)? (yes/no)
The following fine-grained annotations are specified only for "compatible" claims
* claims.claimTarget.text - claim sentiment target text (in the corrected version of the claim)
* claims.claimTarget.span.start - start offset of the claim target within the corrected claim text (e.g., 0)
* claims.claimTarget.span.end - end offset of the claim target within the corrected claim text (e.g., 31)
* claims.claimSentiment - claim's sentiment towards its target (1:positive/-1:negative)
* claims.targetsRelation - relation between claim target and topic target (1:consistent/-1:contrastive)
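As a quick illustration, both configurations listed in this card's metadata can be loaded with the Hugging Face `datasets` library (a minimal sketch; config and split names are taken from the metadata of this repository):
```python
# Minimal sketch: load the two configurations of ibm/claim_stance.
from datasets import load_dataset

# Claim-level annotations, split by topic into train (25 topics) and test (30 topics).
claim_stance = load_dataset("ibm/claim_stance", "claim_stance", split="train")

# Claim text paired with its topic label, with a train/validation/test split.
claim_topic = load_dataset("ibm/claim_stance", "claim_stance_topic", split="train")

print(claim_stance[0])  # one claim record with topic, stance and fine-grained fields
print(claim_topic[0])   # one (text, label) pair
```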
## Licensing Information
The datasets are released under the following licensing and copyright terms:
* (c) Copyright [Wikipedia](https://en.wikipedia.org/wiki/Wikipedia:Copyrights#Reusers.27_rights_and_obligations)
* (c) Copyright IBM 2014. Released under [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/)
## Citation Information
If you use this dataset, please cite the following paper:
```
@inproceedings{bar-haim-etal-2017-stance,
title = "Stance Classification of Context-Dependent Claims",
author = "Bar-Haim, Roy and
Bhattacharya, Indrajit and
Dinuzzo, Francesco and
Saha, Amrita and
Slonim, Noam",
editor = "Lapata, Mirella and
Blunsom, Phil and
Koller, Alexander",
booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
month = apr,
year = "2017",
address = "Valencia, Spain",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/E17-1024",
pages = "251--261",
abstract = "Recent work has addressed the problem of detecting relevant claims for a given controversial topic. We introduce the complementary task of Claim Stance Classification, along with the first benchmark dataset for this task. We decompose this problem into: (a) open-domain target identification for topic and claim (b) sentiment classification for each target, and (c) open-domain contrast detection between the topic and the claim targets. Manual annotation of the dataset confirms the applicability and validity of our model. We describe an implementation of our model, focusing on a novel algorithm for contrast detection. Our approach achieves promising results, and is shown to outperform several baselines, which represent the common practice of applying a single, monolithic classifier for stance classification.",
}
```
Improved stance classification results on this dataset were published in:
```
@inproceedings{bar-haim-etal-2017-improving,
title = "Improving Claim Stance Classification with Lexical Knowledge Expansion and Context Utilization",
author = "Bar-Haim, Roy and
Edelstein, Lilach and
Jochim, Charles and
Slonim, Noam",
editor = "Habernal, Ivan and
Gurevych, Iryna and
Ashley, Kevin and
Cardie, Claire and
Green, Nancy and
Litman, Diane and
Petasis, Georgios and
Reed, Chris and
Slonim, Noam and
Walker, Vern",
booktitle = "Proceedings of the 4th Workshop on Argument Mining",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W17-5104",
doi = "10.18653/v1/W17-5104",
pages = "32--38",
abstract = "Stance classification is a core component in on-demand argument construction pipelines. Previous work on claim stance classification relied on background knowledge such as manually-composed sentiment lexicons. We show that both accuracy and coverage can be significantly improved through automatic expansion of the initial lexicon. We also developed a set of contextual features that further improves the state-of-the-art for this task.",
}
```
## Notes
(1) Claim annotations and the experiments reported in [Stance Classification of Context-Dependent Claims](https://aclanthology.org/E17-1024/) and [Improving Claim Stance Classification with Lexical Knowledge Expansion and Context Utilization](https://aclanthology.org/W17-5104/)
are based on the corrected version of the claim. See [A Benchmark Dataset for Automatic Detection of Claims and Evidence in the Context of Controversial Topics](https://aclanthology.org/W14-2109/) for a description of how
the corrected version of each claim was generated. The original version is the claim as it is found in the clean version of
the article, with no further editing.
(2) The topics and claims partially overlap with the CE-EMNLP-2015 dataset:
Common topics IDs: 1, 21, 61, 81, 101, 121, 181, 221, 323, 381, 441, 442, 443, 481, 482, 483, 601, 602,
621, 641, 642, 644, 645, 648, 662, 663, 665, 681, 683, 701, 721, 742, 743, 744, 761, 801, 803, 841, 861,
881, 923, 926, 941, 942, 944, 946
Only this dataset: 603, 661, 922, 985, 987, 990, 994, 1005, 1065
Only the CE-EMNLP-2015 dataset: 643, 646, 647, 664, 821, 902, 921, 925, 943, 945, 947, 961
| ibm/claim_stance | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-3.0",
"region:us"
]
| 2023-11-06T10:29:47+00:00 | {"language": ["en"], "license": "cc-by-3.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "Claim Stance", "configs": [{"config_name": "claim_stance", "data_files": [{"split": "train", "path": "train.csv"}, {"split": "test", "path": "test.csv"}]}, {"config_name": "claim_stance_topic", "data_files": [{"split": "train", "path": "train_topic.csv"}, {"split": "validation", "path": "dev_topic.csv"}, {"split": "test", "path": "test_topic.csv"}]}]} | 2023-11-15T10:01:56+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-cc-by-3.0 #region-us
| ---
# Dataset Card for Claim Stance Dataset
## Table of Contents
- Dataset Summary
- Dataset Structure
- Licensing Information
- Citation Information
- Notes
## Dataset Summary
### Claim Stance
This dataset contains 2,394 labeled Wikipedia claims for 55 topics. The dataset includes the stance (Pro/Con) of each claim towards the topic,
as well as fine-grained annotations, based on the semantic model of Stance Classification of Context-Dependent Claims (topic target,
topic sentiment towards its target, claim target, claim sentiment towards its target, and the relation between the targets).
The dataset is divided into a training set (25 topics, 1,039 claims) and a test set (30 topics, 1,355 claims).
The information in this card refers to this subset of the dataset unless stated otherwise.
### Claim Stance Topic
This subset contains the claims (column 'text') only associated with the topic (column 'label') in a different split to train-validation-test.
This subset can be utilized for topic classification tasks.
## Dataset Structure
* topicId - internal topic ID
* split - train or test
* topicText - the topic text
* topicTarget - sentiment target of topic
* topicSentiment - topic sentiment towards its target (1:positive/-1:negative)
* claims.claimId - claim internal ID
* URL - PRO or CON
* claims.claimCorrectedText - the corrected version of the claim
* claims.claimOriginalText - the original version of the claim
* claims.Compatible - is the claim compatible with the semantic model of Stance Classification of Context-Dependent Claims? (yes/no)
The following fine-grained annotations are specified only for "compatible" claims
* URL - claim sentiment target text (in the corrected version of the claim)
* URL - 0,
* URL - 31
* claims.claimSentiment - claim's sentiment towards its target (1:positive/-1:negative)
* claims.targetsRelation - relation between claim target and topic target ((1:consistent/-1:contrastive))
## Licensing Information
The datasets are released under the following licensing and copyright terms:
* (c) Copyright Wikipedia
* (c) Copyright IBM 2014. Released under CC-BY-SA 3.0
If you use this dataset, please cite the following paper:
Improved stance classification results on this dataset were published in:
## Notes
(1) Claim annotations and the experiments reported in Stance Classification of Context-Dependent Claims and Improving Claim Stance Classification with Lexical Knowledge Expansion and Context Utilization
are based on the corrected version of the claim. See A Benchmark Dataset for Automatic Detection of Claims and Evidence in the Context of Controversial Topics for description of generating
corrected version for claims. The original version is the claim as it is found in the clean version of
the article, with no further editing.
(2) The topics and claims partially overlap with the CE-EMNLP-2015 dataset:
Common topics IDs: 1, 21, 61, 81, 101, 121, 181, 221, 323, 381, 441, 442, 443, 481, 482, 483, 601, 602,
621, 641, 642, 644, 645, 648, 662, 663, 665, 681, 683, 701, 721, 742, 743, 744, 761, 801, 803, 841, 861,
881, 923, 926, 941, 942, 944, 946
Only this dataset: 603, 661, 922, 985, 987, 990, 994, 1005, 1065
Only the CE-EMNLP-2015 dataset: 643, 646, 647, 664, 821, 902, 921, 925, 943, 945, 947, 961
| [
"# Dataset Card for Claim Stance Dataset",
"## Table of Contents\n\n- Dataset Summary\n- Dataset Structure\n- Licensing Information\n- Citation Information\n- Notes",
"## Dataset Summary",
"### Claim Stance\n\nThis dataset contains 2,394 labeled Wikipedia claims for 55 topics. The dataset includes the stance (Pro/Con) of each claim towards the topic, \nas well as fine-grained annotations, based on the semantic model of Stance Classification of Context-Dependent Claims (topic target,\ntopic sentiment towards its target, claim target, claim sentiment towards its target, and the relation between the targets). \n\nThe dataset is divided into a training set (25 topics, 1,039 claims) and a test set (30 topics, 1,355 claims).\n\nThe information in this card refers to this subset of the dataset unless stated otherwise.",
"### Claim Stance Topic\n\nThis subset contains the claims (column 'text') only associated with the topic (column 'label') in a different split to train-validation-test. \nThis subset can be utilized for topic classification tasks.",
"## Dataset Structure\n\n* topicId - internal topic ID\n* split - train or test\n* topicText - the topic text\n* topicTarget - sentiment target of topic\n* topicSentiment - topic sentiment towards its target (1:positive/-1:negative)\n* claims.claimId - claim internal ID\n* URL - PRO or CON\n* claims.claimCorrectedText - the corrected version of the claim\n* claims.claimOriginalText - the original version of the claim\n* claims.Compatible - is the claim compatible with the semantic model of Stance Classification of Context-Dependent Claims? (yes/no)\n \nThe following fine-grained annotations are specified only for \"compatible\" claims\n* URL - claim sentiment target text (in the corrected version of the claim)\n* URL - 0,\n* URL - 31\n* claims.claimSentiment - claim's sentiment towards its target (1:positive/-1:negative)\n* claims.targetsRelation - relation between claim target and topic target ((1:consistent/-1:contrastive))",
"## Licensing Information\n\nThe datasets are released under the following licensing and copyright terms:\n* (c) Copyright Wikipedia\n* (c) Copyright IBM 2014. Released under CC-BY-SA 3.0\n\n\n\nIf you use this dataset, please cite the following paper:\n\n\n\nImproved stance classification results on this dataset were published in:",
"## Notes\n\n(1) Claim annotations and the experiments reported in Stance Classification of Context-Dependent Claims and Improving Claim Stance Classification with Lexical Knowledge Expansion and Context Utilization\n are based on the corrected version of the claim. See A Benchmark Dataset for Automatic Detection of Claims and Evidence in the Context of Controversial Topics for description of generating \n corrected version for claims. The original version is the claim as it is found in the clean version of \n the article, with no further editing.\n\n(2) The topics and claims partially overlap with the CE-EMNLP-2015 dataset:\n Common topics IDs: 1, 21, 61, 81, 101, 121, 181, 221, 323, 381, 441, 442, 443, 481, 482, 483, 601, 602, \n 621, 641, 642, 644, 645, 648, 662, 663, 665, 681, 683, 701, 721, 742, 743, 744, 761, 801, 803, 841, 861, \n 881, 923, 926, 941, 942, 944, 946\n Only this dataset: 603, 661, 922, 985, 987, 990, 994, 1005, 1065\n Only the CE-EMNLP-2015 dataset: 643, 646, 647, 664, 821, 902, 921, 925, 943, 945, 947, 961"
]
| [
"TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-cc-by-3.0 #region-us \n",
"# Dataset Card for Claim Stance Dataset",
"## Table of Contents\n\n- Dataset Summary\n- Dataset Structure\n- Licensing Information\n- Citation Information\n- Notes",
"## Dataset Summary",
"### Claim Stance\n\nThis dataset contains 2,394 labeled Wikipedia claims for 55 topics. The dataset includes the stance (Pro/Con) of each claim towards the topic, \nas well as fine-grained annotations, based on the semantic model of Stance Classification of Context-Dependent Claims (topic target,\ntopic sentiment towards its target, claim target, claim sentiment towards its target, and the relation between the targets). \n\nThe dataset is divided into a training set (25 topics, 1,039 claims) and a test set (30 topics, 1,355 claims).\n\nThe information in this card refers to this subset of the dataset unless stated otherwise.",
"### Claim Stance Topic\n\nThis subset contains the claims (column 'text') only associated with the topic (column 'label') in a different split to train-validation-test. \nThis subset can be utilized for topic classification tasks.",
"## Dataset Structure\n\n* topicId - internal topic ID\n* split - train or test\n* topicText - the topic text\n* topicTarget - sentiment target of topic\n* topicSentiment - topic sentiment towards its target (1:positive/-1:negative)\n* claims.claimId - claim internal ID\n* URL - PRO or CON\n* claims.claimCorrectedText - the corrected version of the claim\n* claims.claimOriginalText - the original version of the claim\n* claims.Compatible - is the claim compatible with the semantic model of Stance Classification of Context-Dependent Claims? (yes/no)\n \nThe following fine-grained annotations are specified only for \"compatible\" claims\n* URL - claim sentiment target text (in the corrected version of the claim)\n* URL - 0,\n* URL - 31\n* claims.claimSentiment - claim's sentiment towards its target (1:positive/-1:negative)\n* claims.targetsRelation - relation between claim target and topic target ((1:consistent/-1:contrastive))",
"## Licensing Information\n\nThe datasets are released under the following licensing and copyright terms:\n* (c) Copyright Wikipedia\n* (c) Copyright IBM 2014. Released under CC-BY-SA 3.0\n\n\n\nIf you use this dataset, please cite the following paper:\n\n\n\nImproved stance classification results on this dataset were published in:",
"## Notes\n\n(1) Claim annotations and the experiments reported in Stance Classification of Context-Dependent Claims and Improving Claim Stance Classification with Lexical Knowledge Expansion and Context Utilization\n are based on the corrected version of the claim. See A Benchmark Dataset for Automatic Detection of Claims and Evidence in the Context of Controversial Topics for description of generating \n corrected version for claims. The original version is the claim as it is found in the clean version of \n the article, with no further editing.\n\n(2) The topics and claims partially overlap with the CE-EMNLP-2015 dataset:\n Common topics IDs: 1, 21, 61, 81, 101, 121, 181, 221, 323, 381, 441, 442, 443, 481, 482, 483, 601, 602, \n 621, 641, 642, 644, 645, 648, 662, 663, 665, 681, 683, 701, 721, 742, 743, 744, 761, 801, 803, 841, 861, \n 881, 923, 926, 941, 942, 944, 946\n Only this dataset: 603, 661, 922, 985, 987, 990, 994, 1005, 1065\n Only the CE-EMNLP-2015 dataset: 643, 646, 647, 664, 821, 902, 921, 925, 943, 945, 947, 961"
]
| [
42,
11,
28,
5,
150,
62,
240,
72,
358
]
| [
"passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-cc-by-3.0 #region-us \n# Dataset Card for Claim Stance Dataset## Table of Contents\n\n- Dataset Summary\n- Dataset Structure\n- Licensing Information\n- Citation Information\n- Notes## Dataset Summary### Claim Stance\n\nThis dataset contains 2,394 labeled Wikipedia claims for 55 topics. The dataset includes the stance (Pro/Con) of each claim towards the topic, \nas well as fine-grained annotations, based on the semantic model of Stance Classification of Context-Dependent Claims (topic target,\ntopic sentiment towards its target, claim target, claim sentiment towards its target, and the relation between the targets). \n\nThe dataset is divided into a training set (25 topics, 1,039 claims) and a test set (30 topics, 1,355 claims).\n\nThe information in this card refers to this subset of the dataset unless stated otherwise.### Claim Stance Topic\n\nThis subset contains the claims (column 'text') only associated with the topic (column 'label') in a different split to train-validation-test. \nThis subset can be utilized for topic classification tasks."
]
|
e36907544d44c3a247898ed81540310442329e20 |
# German STS Benchmark
This data is originally from https://github.com/t-systems-on-site-services-gmbh/german-STSbenchmark
The license information can be found under:
https://github.com/t-systems-on-site-services-gmbh/german-STSbenchmark/blob/master/LICENSE | jinaai/german-STSbenchmark | [
"region:us"
]
| 2023-11-06T10:51:50+00:00 | {} | 2024-01-24T14:43:39+00:00 | []
| []
| TAGS
#region-us
|
# German STS Benchmark
This data is orinally from URL
The license information can be found under:
URL | [
"# German STS Benchmark\n\nThis data is orinally from URL\n\nThe license information can be found under:\nURL"
]
| [
"TAGS\n#region-us \n",
"# German STS Benchmark\n\nThis data is orinally from URL\n\nThe license information can be found under:\nURL"
]
| [
6,
24
]
| [
"passage: TAGS\n#region-us \n# German STS Benchmark\n\nThis data is orinally from URL\n\nThe license information can be found under:\nURL"
]
|
a1644a9fab5f2bbca075936884b14e1af5cb9aa1 | # Dataset Card for "UD_Thai-PUD-prompt"
This dataset is the test set from the Parallel Universal Dependencies (PUD) treebanks.
See more [https://github.com/UniversalDependencies/UD_Thai-PUD](https://github.com/UniversalDependencies/UD_Thai-PUD)
## Template
```
Inputs: จงสร้างประโยคตามโครงสร้าง {pos}:
Targets: Thai sentence
```
pos: [All tag](https://universaldependencies.org/u/pos/)
Source code used to create the dataset: [https://github.com/PyThaiNLP/support-aya-datasets/blob/main/pos/ud_pud_thai.ipynb](https://github.com/PyThaiNLP/support-aya-datasets/blob/main/pos/ud_pud_thai.ipynb)
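For example, the prompt/target pairs can be inspected with the Hugging Face `datasets` library (a minimal sketch; the repository id, the single `test` split and the `inputs`/`targets` column names are taken from this card's metadata):
```python
# Minimal sketch: load the test split and look at one prompt/target pair.
from datasets import load_dataset

ds = load_dataset("pythainlp/UD_Thai-PUD-prompt", split="test")

sample = ds[0]
print(sample["inputs"])   # Thai instruction containing the POS-tag structure
print(sample["targets"])  # the Thai sentence matching that structure
```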
| pythainlp/UD_Thai-PUD-prompt | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:n<1K",
"language:th",
"license:cc-by-sa-3.0",
"region:us"
]
| 2023-11-06T10:59:51+00:00 | {"language": ["th"], "license": "cc-by-sa-3.0", "size_categories": ["n<1K"], "task_categories": ["text2text-generation", "text-generation"], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 475196, "num_examples": 1000}], "download_size": 171576, "dataset_size": 475196}} | 2023-11-06T11:02:40+00:00 | []
| [
"th"
]
| TAGS
#task_categories-text2text-generation #task_categories-text-generation #size_categories-n<1K #language-Thai #license-cc-by-sa-3.0 #region-us
| # Dataset Card for "UD_Thai-PUD-prompt"
This dataset is the test set from the Parallel Universal Dependencies (PUD) treebanks.
See more URL
## Template
pos: All tag
Source code for create dataset: URL
| [
"# Dataset Card for \"UD_Thai-PUD-prompt\"\n\nThis dataset is the test set from the Parallel Universal Dependencies (PUD) treebanks.\n\nSee more URL",
"## Template\n\npos: All tag\n\n\nSource code for create dataset: URL"
]
| [
"TAGS\n#task_categories-text2text-generation #task_categories-text-generation #size_categories-n<1K #language-Thai #license-cc-by-sa-3.0 #region-us \n",
"# Dataset Card for \"UD_Thai-PUD-prompt\"\n\nThis dataset is the test set from the Parallel Universal Dependencies (PUD) treebanks.\n\nSee more URL",
"## Template\n\npos: All tag\n\n\nSource code for create dataset: URL"
]
| [
56,
43,
14
]
| [
"passage: TAGS\n#task_categories-text2text-generation #task_categories-text-generation #size_categories-n<1K #language-Thai #license-cc-by-sa-3.0 #region-us \n# Dataset Card for \"UD_Thai-PUD-prompt\"\n\nThis dataset is the test set from the Parallel Universal Dependencies (PUD) treebanks.\n\nSee more URL## Template\n\npos: All tag\n\n\nSource code for create dataset: URL"
]
|
b3a23d29b974eabead03b97387d480179d943628 |
# Welcome to the HuggingFace repo for SentenceAx


The [Openie6 (O6) software](https://github.com/dair-iitd/openie6)
splits complex or
compound sentences into simple ones.
Simple sentences are essentially the same
as triples (subject, relationship, object), which,
when visualized as a directed or undirected graph,
form what is called a “knowledge graph”.
Sentence splitting is also a necessary step
in doing causal DAG extraction from text (causal DEFT),
as is done by my software [Mappa Mundi](https://github.com/rrtucci/mappa_mundi).
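As a purely illustrative example (invented for this card, not produced by SentenceAx itself), the kind of simple-sentence triples a splitter aims to extract from a compound sentence might look like this:
```python
# Hypothetical illustration of sentence splitting into triples.
# Both the sentence and the extractions are made up for this card.
compound_sentence = "Alice founded Acme in 2010 and now lives in Paris."

expected_triples = [
    ("Alice", "founded", "Acme in 2010"),
    ("Alice", "lives in", "Paris"),
]

for subj, rel, obj in expected_triples:
    print(f"{subj} | {rel} | {obj}")
```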
SentenceAx (Sax) is a complete rewrite, from stem to stern, of O6.
SentenceAx is a fine-tuning of BERT written with PyTorch and Lightning.
SentenceAx is a stand-alone app, but it is also a vital part of the Mappa Mundi
project for doing causal AI/ML and causal inference.
This repo contains various large data files (input data, weights)
necessary for training or generated by training Sax.
The Python source code for Sax can be found at
[its GitHub repo](https://github.com/rrtucci/SentenceAx).
| rrtucci/SentenceAx | [
"task_categories:summarization",
"task_categories:text-classification",
"license:lgpl-3.0",
"code",
"region:us"
]
| 2023-11-06T11:09:50+00:00 | {"license": "lgpl-3.0", "task_categories": ["summarization", "text-classification"], "tags": ["code"]} | 2024-02-15T21:34:32+00:00 | []
| []
| TAGS
#task_categories-summarization #task_categories-text-classification #license-lgpl-3.0 #code #region-us
|
# Welcome to the HuggingFace repo for SentenceAx
!SentenceAx
!SentenceAx Bayesian Network
The Openie6 (O6) software
splits complex or
compound sentences into simple ones.
Simple sentences are essentially the same
as the triples (subject, relationship, object) which,
when visualized as a directed or undirected graph,
is called a “knowledge graph”.
Sentence splitting is also a necessary step
in doing causal DAG extraction from text (causal DEFT),
as is done by my software Mappa Mundi.
SentenceAx (Sax) is a complete rewrite, from stem to stern, of O6.
SentenceAx is a fine-tuning of BERT written with PyTorch and Lightning.
SentenceAx is a stand-alone app, but it is also a vital part of the Mappa Mundi
project for doing causal AI/ML and causal inference.
This repo contains various large data files (input data, weights)
necessary for training or generated by training Sax.
The Python source code for Sax can be found at
its GitHub repo.
| [
"# Welcome to the HuggingFace repo for SentenceAx\n\n!SentenceAx\n!SentenceAx Bayesian Network\n\n\nThe Openie6 (O6) software \nsplits complex or\ncompound sentences into simple ones. \nSimple sentences are essentially the same \nas the triples (subject, relationship, object) which, \nwhen visualized as a directed or undirected graph, \nis called a “knowledge graph”. \nSentence splitting is also a necessary step \nin doing causal DAG extraction from text (causal DEFT), \nas is done by my software Mappa Mundi.\n\nSentenceAx (Sax) is a complete rewrite, from stem to stern, of O6.\n\nSentenceAx is a fine-tuning of BERT written with PyTorch and Lightning.\n\nSentenceAx is a stand-alone app, but it is also a vital part of the Mappa Mundi \nproject for doing causal AI/ML and causal inference.\n\nThis repo contains various large data files (input data, weights)\nnecessary for training or generated by training Sax.\nThe Python source code for Sax can be found at\nits GitHub repo."
]
| [
"TAGS\n#task_categories-summarization #task_categories-text-classification #license-lgpl-3.0 #code #region-us \n",
"# Welcome to the HuggingFace repo for SentenceAx\n\n!SentenceAx\n!SentenceAx Bayesian Network\n\n\nThe Openie6 (O6) software \nsplits complex or\ncompound sentences into simple ones. \nSimple sentences are essentially the same \nas the triples (subject, relationship, object) which, \nwhen visualized as a directed or undirected graph, \nis called a “knowledge graph”. \nSentence splitting is also a necessary step \nin doing causal DAG extraction from text (causal DEFT), \nas is done by my software Mappa Mundi.\n\nSentenceAx (Sax) is a complete rewrite, from stem to stern, of O6.\n\nSentenceAx is a fine-tuning of BERT written with PyTorch and Lightning.\n\nSentenceAx is a stand-alone app, but it is also a vital part of the Mappa Mundi \nproject for doing causal AI/ML and causal inference.\n\nThis repo contains various large data files (input data, weights)\nnecessary for training or generated by training Sax.\nThe Python source code for Sax can be found at\nits GitHub repo."
]
| [
37,
259
]
| [
"passage: TAGS\n#task_categories-summarization #task_categories-text-classification #license-lgpl-3.0 #code #region-us \n# Welcome to the HuggingFace repo for SentenceAx\n\n!SentenceAx\n!SentenceAx Bayesian Network\n\n\nThe Openie6 (O6) software \nsplits complex or\ncompound sentences into simple ones. \nSimple sentences are essentially the same \nas the triples (subject, relationship, object) which, \nwhen visualized as a directed or undirected graph, \nis called a “knowledge graph”. \nSentence splitting is also a necessary step \nin doing causal DAG extraction from text (causal DEFT), \nas is done by my software Mappa Mundi.\n\nSentenceAx (Sax) is a complete rewrite, from stem to stern, of O6.\n\nSentenceAx is a fine-tuning of BERT written with PyTorch and Lightning.\n\nSentenceAx is a stand-alone app, but it is also a vital part of the Mappa Mundi \nproject for doing causal AI/ML and causal inference.\n\nThis repo contains various large data files (input data, weights)\nnecessary for training or generated by training Sax.\nThe Python source code for Sax can be found at\nits GitHub repo."
]
|
5f738f426956b71fd6db6321fa39ac8ef52cee96 | Hello World | NeelBhatt/MotaGpt | [
"region:us"
]
| 2023-11-06T11:35:51+00:00 | {} | 2023-11-06T11:38:28+00:00 | []
| []
| TAGS
#region-us
| Hello World | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
afbdc7f3bd40ca5870759fe23d5201f4a952c30e | # Dataset Card for "instseg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | xrizs/instseg | [
"region:us"
]
| 2023-11-06T11:43:48+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "annotation", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 34491927.0, "num_examples": 58}, {"name": "val", "num_bytes": 12337041.0, "num_examples": 20}, {"name": "test", "num_bytes": 5255226.0, "num_examples": 9}], "download_size": 52063862, "dataset_size": 52084194.0}} | 2023-11-06T11:43:54+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "instseg"
More Information needed | [
"# Dataset Card for \"instseg\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"instseg\"\n\nMore Information needed"
]
| [
6,
13
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"instseg\"\n\nMore Information needed"
]
|
98ac60a673c0a1d80d7ef3ee482a5ab210fbeec2 | # Dataset Card for "llama-intent-1K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pankajemplay/llama-intent-1K | [
"region:us"
]
| 2023-11-06T11:59:44+00:00 | {"dataset_info": {"features": [{"name": "User Query", "dtype": "string"}, {"name": "Intent", "dtype": "string"}, {"name": "id type", "dtype": "string"}, {"name": "id value", "dtype": "string"}, {"name": "id slot filled", "dtype": "bool"}, {"name": "Task", "dtype": "string"}, {"name": "task slot filled", "dtype": "bool"}, {"name": "Bot Response", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 633182, "num_examples": 1308}], "download_size": 189305, "dataset_size": 633182}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T11:59:45+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "llama-intent-1K"
More Information needed | [
"# Dataset Card for \"llama-intent-1K\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"llama-intent-1K\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"llama-intent-1K\"\n\nMore Information needed"
]
|
64f2527965633404d6bcf37cf5c2fd4920b63d17 | # Dataset Card for "xlsum_data-wiki_cstnews_results"
rouge={'rouge1': 0.21997747471239348, 'rouge2': 0.056806669503149276, 'rougeL': 0.1360258101768815, 'rougeLsum': 0.1360258101768815}
Bert={'precision': 0.6605041286920421, 'recall': 0.7237377011402143, 'f1': 0.6903295458401537}
mover = 0.5717557827942945 | arthurmluz/xlsum_data-wiki_cstnews_results | [
"region:us"
]
| 2023-11-06T12:09:23+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 29309066, "num_examples": 7175}], "download_size": 17991258, "dataset_size": 29309066}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-13T20:25:00+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "xlsum_data-wiki_cstnews_results"
rouge={'rouge1': 0.21997747471239348, 'rouge2': 0.056806669503149276, 'rougeL': 0.1360258101768815, 'rougeLsum': 0.1360258101768815}
Bert={'precision': 0.6605041286920421, 'recall': 0.7237377011402143, 'f1': 0.6903295458401537}
mover = 0.5717557827942945 | [
"# Dataset Card for \"xlsum_data-wiki_cstnews_results\"\n\nrouge={'rouge1': 0.21997747471239348, 'rouge2': 0.056806669503149276, 'rougeL': 0.1360258101768815, 'rougeLsum': 0.1360258101768815}\n\nBert={'precision': 0.6605041286920421, 'recall': 0.7237377011402143, 'f1': 0.6903295458401537}\n\nmover = 0.5717557827942945"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"xlsum_data-wiki_cstnews_results\"\n\nrouge={'rouge1': 0.21997747471239348, 'rouge2': 0.056806669503149276, 'rougeL': 0.1360258101768815, 'rougeLsum': 0.1360258101768815}\n\nBert={'precision': 0.6605041286920421, 'recall': 0.7237377011402143, 'f1': 0.6903295458401537}\n\nmover = 0.5717557827942945"
]
| [
6,
140
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"xlsum_data-wiki_cstnews_results\"\n\nrouge={'rouge1': 0.21997747471239348, 'rouge2': 0.056806669503149276, 'rougeL': 0.1360258101768815, 'rougeLsum': 0.1360258101768815}\n\nBert={'precision': 0.6605041286920421, 'recall': 0.7237377011402143, 'f1': 0.6903295458401537}\n\nmover = 0.5717557827942945"
]
|
3afc7c3623abb95653c75e258266d0c4e65244c3 | # Dataset Card for "alpaca-for-lm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AlanRobotics/alpaca-for-lm | [
"region:us"
]
| 2023-11-06T12:09:45+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18063618.9710281, "num_examples": 26839}, {"name": "test", "num_bytes": 2007667.0289719, "num_examples": 2983}], "download_size": 9941583, "dataset_size": 20071286.0}} | 2023-11-06T12:09:57+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "alpaca-for-lm"
More Information needed | [
"# Dataset Card for \"alpaca-for-lm\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"alpaca-for-lm\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"alpaca-for-lm\"\n\nMore Information needed"
]
|
c13eb4b1d2522922d6f8ce56c7f58f99b3a0427b | # Dataset Card for "agent_action_full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Raihan004/agent_action_full | [
"region:us"
]
| 2023-11-06T12:10:08+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "\u0995\u09c1\u0995\u09c1\u09b0_\u0995\u09ae\u09cd\u09aa\u09bf\u0989\u099f\u09be\u09b0_\u09ac\u09cd\u09af\u09ac\u09b9\u09be\u09b0_\u0995\u09b0\u09be", "1": "\u0995\u09c1\u0995\u09c1\u09b0_\u0996\u09be\u0993\u09af\u09bc\u09be", "2": "\u0995\u09c1\u0995\u09c1\u09b0_\u0996\u09c7\u09b2\u09be_\u0995\u09b0\u09be", "3": "\u0995\u09c1\u0995\u09c1\u09b0_\u0998\u09c1\u09ae\u09be\u09a8\u09c7\u09be", "4": "\u0995\u09c1\u0995\u09c1\u09b0_\u09aa\u09a1\u09bc\u09be", "5": "\u0995\u09c1\u0995\u09c1\u09b0_\u09aa\u09be\u09a8_\u0995\u09b0\u09be", "6": "\u0995\u09c1\u0995\u09c1\u09b0_\u09b9\u09be\u0981\u099f\u09be", "7": "\u099b\u09c7\u09b2\u09c7_\u0995\u09a5\u09be_\u09ac\u09b2\u09be", "8": "\u099b\u09c7\u09b2\u09c7_\u0995\u09ae\u09cd\u09aa\u09bf\u0989\u099f\u09be\u09b0_\u09ac\u09cd\u09af\u09ac\u09b9\u09be\u09b0_\u0995\u09b0\u09be", "9": "\u099b\u09c7\u09b2\u09c7_\u0996\u09be\u0993\u09af\u09bc\u09be", "10": "\u099b\u09c7\u09b2\u09c7_\u0996\u09c7\u09b2\u09be_\u0995\u09b0\u09be", "11": "\u099b\u09c7\u09b2\u09c7_\u0998\u09c1\u09ae\u09be\u09a8\u09c7\u09be", "12": "\u099b\u09c7\u09b2\u09c7_\u09aa\u09a1\u09bc\u09be", "13": "\u099b\u09c7\u09b2\u09c7_\u09aa\u09be\u09a8_\u0995\u09b0\u09be", "14": "\u099b\u09c7\u09b2\u09c7_\u09b0\u09be\u09a8\u09cd\u09a8\u09be_\u0995\u09b0\u09be", "15": "\u099b\u09c7\u09b2\u09c7_\u09b2\u09c7\u0996\u09be", "16": "\u099b\u09c7\u09b2\u09c7_\u09b9\u09be\u0981\u099f\u09be", "17": "\u09ac\u09bf\u09a1\u09bc\u09be\u09b2_\u0995\u09ae\u09cd\u09aa\u09bf\u0989\u099f\u09be\u09b0_\u09ac\u09cd\u09af\u09ac\u09b9\u09be\u09b0_\u0995\u09b0\u09be", "18": "\u09ac\u09bf\u09a1\u09bc\u09be\u09b2_\u0996\u09be\u0993\u09af\u09bc\u09be", "19": "\u09ac\u09bf\u09a1\u09bc\u09be\u09b2_\u0996\u09c7\u09b2\u09be_\u0995\u09b0\u09be", "20": "\u09ac\u09bf\u09a1\u09bc\u09be\u09b2_\u0998\u09c1\u09ae\u09be\u09a8\u09c7\u09be", "21": "\u09ac\u09bf\u09a1\u09bc\u09be\u09b2_\u09aa\u09a1\u09bc\u09be", "22": "\u09ac\u09bf\u09a1\u09bc\u09be\u09b2_\u09aa\u09be\u09a8_\u0995\u09b0\u09be", "23": "\u09ac\u09bf\u09a1\u09bc\u09be\u09b2_\u09b9\u09be\u0981\u099f\u09be", "24": "\u09ae\u09c7\u09af\u09bc\u09c7_\u0995\u09a5\u09be_\u09ac\u09b2\u09be", "25": "\u09ae\u09c7\u09af\u09bc\u09c7_\u0995\u09ae\u09cd\u09aa\u09bf\u0989\u099f\u09be\u09b0_\u09ac\u09cd\u09af\u09ac\u09b9\u09be\u09b0_\u0995\u09b0\u09be", "26": "\u09ae\u09c7\u09af\u09bc\u09c7_\u0996\u09be\u0993\u09af\u09bc\u09be", "27": "\u09ae\u09c7\u09af\u09bc\u09c7_\u0996\u09c7\u09b2\u09be_\u0995\u09b0\u09be", "28": "\u09ae\u09c7\u09af\u09bc\u09c7_\u0998\u09c1\u09ae\u09be\u09a8\u09c7\u09be", "29": "\u09ae\u09c7\u09af\u09bc\u09c7_\u09aa\u09a1\u09bc\u09be", "30": "\u09ae\u09c7\u09af\u09bc\u09c7_\u09aa\u09be\u09a8_\u0995\u09b0\u09be", "31": "\u09ae\u09c7\u09af\u09bc\u09c7_\u09b0\u09be\u09a8\u09cd\u09a8\u09be_\u0995\u09b0\u09be", "32": "\u09ae\u09c7\u09af\u09bc\u09c7_\u09b2\u09c7\u0996\u09be", "33": "\u09ae\u09c7\u09af\u09bc\u09c7_\u09b9\u09be\u0981\u099f\u09be"}}}}], "splits": [{"name": "train", "num_bytes": 383592449.803037, "num_examples": 3954}, {"name": "test", "num_bytes": 79473638.26096302, "num_examples": 698}], "download_size": 495183643, "dataset_size": 463066088.064}} | 2023-11-06T12:31:13+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "agent_action_full"
More Information needed | [
"# Dataset Card for \"agent_action_full\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"agent_action_full\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"agent_action_full\"\n\nMore Information needed"
]
|
7297b2a9219f2a7cbe8033871c36e44361b53105 | A collection of good human-written datasets (no LLM-generated content) | lu-vae/natural-dataset | [
"region:us"
]
| 2023-11-06T12:18:45+00:00 | {} | 2023-11-06T12:22:48+00:00 | []
| []
| TAGS
#region-us
| Collection of good human dataset (No LLM-generated) | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
877550da579d29b8a4cc38e9eb5944d07fa257ea | Shopping bag images used to train the model in this project: https://docs.edgeimpulse.com/experts/image-projects/deter-shoplifting-with-computer-vision-ti-tda4vm
---
Contact
---
@RoniBandini
https://www.linkedin.com/in/ronibandini/
| ronibandini/shoppingBags | [
"license:mit",
"region:us"
]
| 2023-11-06T12:48:11+00:00 | {"license": "mit"} | 2023-11-06T12:56:54+00:00 | []
| []
| TAGS
#license-mit #region-us
| Shopping bag images used to train this project URL
---
Contact
---
@RoniBandini
URL
| []
| [
"TAGS\n#license-mit #region-us \n"
]
| [
11
]
| [
"passage: TAGS\n#license-mit #region-us \n"
]
|
82ddad517368f4713c94dccdfc2496b1cfcc78b8 | name: image
dtype: image
- name: faces | renhj/test_open | [
"task_categories:object-detection",
"task_ids:face-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-wider",
"language:en",
"license:cc-by-nc-nd-4.0",
"region:us"
]
| 2023-11-06T13:05:34+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-nd-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-wider"], "task_categories": ["object-detection"], "task_ids": ["face-detection"], "paperswithcode_id": "wider-face-1", "pretty_name": "WIDER FACE"} | 2023-11-07T09:42:47+00:00 | []
| [
"en"
]
| TAGS
#task_categories-object-detection #task_ids-face-detection #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-wider #language-English #license-cc-by-nc-nd-4.0 #region-us
| name: image
dtype: image
- name: faces | []
| [
"TAGS\n#task_categories-object-detection #task_ids-face-detection #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-wider #language-English #license-cc-by-nc-nd-4.0 #region-us \n"
]
| [
100
]
| [
"passage: TAGS\n#task_categories-object-detection #task_ids-face-detection #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-wider #language-English #license-cc-by-nc-nd-4.0 #region-us \n"
]
|
408109b8d6422179022f8c8f93ad6d2b2401e324 | A large dataset of chess games parsed from Lichess for training LLMs.
All games are converted to the same format, annotations are stripped, unfinished games are removed (so every game in which one player wins actually ends with a checkmate move), and the games are shuffled.
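Assuming the games are stored as standard PGN (an assumption, since the exact file format is not specified here), the "decisive games end in checkmate" property can be checked with python-chess; a minimal sketch with a placeholder file name:
```python
# Minimal sketch: check that decisive games end in checkmate.
# Assumes standard PGN input; "games.pgn" is a placeholder file name.
import chess.pgn

with open("games.pgn") as handle:
    while True:
        game = chess.pgn.read_game(handle)
        if game is None:
            break
        result = game.headers.get("Result", "*")
        if result in ("1-0", "0-1"):  # a decisive game
            final_board = game.end().board()
            print(result, "checkmate" if final_board.is_checkmate() else "no checkmate")
```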
| BlueSunflower/ChessGames | [
"license:apache-2.0",
"region:us"
]
| 2023-11-06T13:16:01+00:00 | {"license": "apache-2.0"} | 2023-11-06T15:18:45+00:00 | []
| []
| TAGS
#license-apache-2.0 #region-us
| Large dataset of chess games parsed from lichess for training LLMs
All games are converted to the same format, annotations removed, unfinished games removed (so all games where one player wins actually end with checkmate move), shuffled.
| []
| [
"TAGS\n#license-apache-2.0 #region-us \n"
]
| [
14
]
| [
"passage: TAGS\n#license-apache-2.0 #region-us \n"
]
|
622df8797000f0afa1ee37b961af8abe133c2ff7 | # Dataset Card for "indian_food_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ManoharEldhandi/indian_food_images | [
"region:us"
]
| 2023-11-06T13:19:03+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "burger", "1": "butter_naan", "2": "chai", "3": "chapati", "4": "chole_bhature", "5": "dal_makhani", "6": "dhokla", "7": "fried_rice", "8": "idli", "9": "jalebi", "10": "kaathi_rolls", "11": "kadai_paneer", "12": "kulfi", "13": "masala_dosa", "14": "momos", "15": "paani_puri", "16": "pakode", "17": "pav_bhaji", "18": "pizza", "19": "samosa"}}}}], "splits": [{"name": "train", "num_bytes": 1200414082.0794334, "num_examples": 5328}, {"name": "test", "num_bytes": 222276428.3925666, "num_examples": 941}], "download_size": 1601712089, "dataset_size": 1422690510.4720001}} | 2023-11-06T13:20:38+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "indian_food_images"
More Information needed | [
"# Dataset Card for \"indian_food_images\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"indian_food_images\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"indian_food_images\"\n\nMore Information needed"
]
|
2e23e32608fb9b008550c6d8d8af8c04a830fc72 | # Dataset Card for "text_message_transliteration_1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | chirunder/text_message_transliteration_1k | [
"region:us"
]
| 2023-11-06T13:27:06+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "translations", "struct": [{"name": "chinese", "dtype": "string"}, {"name": "hindi", "dtype": "string"}, {"name": "russian", "dtype": "string"}]}, {"name": "transliteration", "struct": [{"name": "chinese", "dtype": "string"}, {"name": "hindi", "dtype": "string"}, {"name": "russian", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 609895, "num_examples": 1000}], "download_size": 361488, "dataset_size": 609895}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T13:27:08+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "text_message_transliteration_1k"
More Information needed | [
"# Dataset Card for \"text_message_transliteration_1k\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"text_message_transliteration_1k\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"text_message_transliteration_1k\"\n\nMore Information needed"
]
|
719fe84e5fe39bf872efeea0f97e204b7c0df61a | Description:
The Text column is the scraped news article headline or lead.
The Code_frames column contains annotation frames that follow the Policy Issue Frames (1-15).
The Label column is the 0th element of Code_frames, provided for training purposes. | jmLuis/PhilippineFrameCorpus-PFC | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"region:us"
]
| 2023-11-06T13:27:20+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "pretty_name": "PFC"} | 2023-11-06T13:40:37+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #size_categories-10K<n<100K #language-English #region-us
| Description:
Text column is the scraped news article headline or lead.
Code_frames column are annotation frames that follow the Policy Issue Frames (1-15).
Label is the 0th index of the Code_frames, for training purposes | []
| [
"TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #region-us \n"
]
| [
33
]
| [
"passage: TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #region-us \n"
]
|
61754a8cb61c4bcdd5e3d9661ae0b5b6e0a455f5 | # Dataset Card for "mozilla_commonvoice_hackathon_preprocessed_train_batch_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Jayem-11/mozilla_commonvoice_hackathon_preprocessed_train_batch_2 | [
"region:us"
]
| 2023-11-06T13:36:39+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}, {"name": "input_length", "dtype": "int64"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}, {"name": "labels_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 15584501798.875, "num_examples": 13689}], "download_size": 4765376085, "dataset_size": 15584501798.875}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T13:41:54+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mozilla_commonvoice_hackathon_preprocessed_train_batch_2"
More Information needed | [
"# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_2\"\n\nMore Information needed"
]
| [
6,
33
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_2\"\n\nMore Information needed"
]
|
c0541a2cbe6b6163a09321c7ab8604a649990364 | # Dataset Card for "mozilla_commonvoice_hackathon_preprocessed_train_batch_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Jayem-11/mozilla_commonvoice_hackathon_preprocessed_train_batch_3 | [
"region:us"
]
| 2023-11-06T13:37:49+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}, {"name": "input_length", "dtype": "int64"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}, {"name": "labels_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 15580205067.875, "num_examples": 13689}], "download_size": 4759107017, "dataset_size": 15580205067.875}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T13:42:45+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mozilla_commonvoice_hackathon_preprocessed_train_batch_3"
More Information needed | [
"# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_3\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_3\"\n\nMore Information needed"
]
| [
6,
33
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_3\"\n\nMore Information needed"
]
|
4da5d4eee56966e1e9138b41cd17c36702bc4595 | # Dataset Card for "fm_updates"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | coastalcph/fm_updates | [
"region:us"
]
| 2023-11-06T13:57:39+00:00 | {"dataset_info": {"features": [{"name": "query", "struct": [{"name": "label", "dtype": "string"}, {"name": "objects", "list": [{"name": "label", "dtype": "string"}, {"name": "qid", "dtype": "string"}]}, {"name": "qid", "dtype": "string"}, {"name": "rel_id", "dtype": "string"}, {"name": "relation", "dtype": "string"}]}, {"name": "prediction", "struct": [{"name": "predictions", "list": [{"name": "answer", "dtype": "string"}, {"name": "first_token_probability", "dtype": "float64"}, {"name": "per_token_probability", "sequence": "float64"}, {"name": "perplexity", "dtype": "float64"}]}, {"name": "query", "dtype": "string"}]}, {"name": "relation", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "updates", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 1525467, "num_examples": 5080}], "download_size": 606338, "dataset_size": 1525467}} | 2023-11-06T14:46:21+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "fm_updates"
More Information needed | [
"# Dataset Card for \"fm_updates\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"fm_updates\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"fm_updates\"\n\nMore Information needed"
]
|
432ed6bfc2fb018deb1ffe39c5bc5c4d39619434 | # Dataset Card for "uemf_cer_chunked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | medmac01/uemf_cer_chunked | [
"region:us"
]
| 2023-11-06T14:09:20+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "page", "dtype": "int64"}, {"name": "ref", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 315387, "num_examples": 555}], "download_size": 151851, "dataset_size": 315387}} | 2023-11-06T14:09:23+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "uemf_cer_chunked"
More Information needed | [
"# Dataset Card for \"uemf_cer_chunked\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"uemf_cer_chunked\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"uemf_cer_chunked\"\n\nMore Information needed"
]
|
433087dcedcb1d8ca6c7fbb6864a43d74d450cb8 | # Dataset Card for "building_type_classification_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | spr1916/building_type_classification_train | [
"region:us"
]
| 2023-11-06T14:16:38+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 190605, "num_examples": 2351}], "download_size": 27315, "dataset_size": 190605}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T16:32:26+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "building_type_classification_train"
More Information needed | [
"# Dataset Card for \"building_type_classification_train\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"building_type_classification_train\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"building_type_classification_train\"\n\nMore Information needed"
]
|
006191ab52ab6df892ef3f5db95bdd79896e7b23 | # Dataset Card for "building_type_classification_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | spr1916/building_type_classification_test | [
"region:us"
]
| 2023-11-06T14:17:23+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 24630, "num_examples": 312}], "download_size": 4454, "dataset_size": 24630}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T16:32:33+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "building_type_classification_test"
More Information needed | [
"# Dataset Card for \"building_type_classification_test\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"building_type_classification_test\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"building_type_classification_test\"\n\nMore Information needed"
]
|
8f8ff229b171dcfbbbb3c8da7bc8cdcad4c6f065 | # llama.cpp scripts
These are scripts that have helped me to manage llama.cpp, llama models, etc.
## Install
Scripts are installed to `~/.local/bin`.
```bash
bash install.sh
```
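If `~/.local/bin` is not already on your `PATH`, the installed scripts will not be found by name. A typical, shell-dependent fix (not part of `install.sh` itself) is:

```bash
# Make the install directory visible to the current shell;
# add this line to your shell profile to make it permanent.
export PATH="$HOME/.local/bin:$PATH"
```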
| iandennismiller/llama-cpp-scripts | [
"language:code",
"license:mit",
"bash",
"llama.cpp",
"script",
"region:us"
]
| 2023-11-06T14:18:35+00:00 | {"language": ["code"], "license": "mit", "pretty_name": "These are scripts that have helped me to manage llama.cpp, llama models, etc.", "tags": ["bash", "llama.cpp", "script"]} | 2023-12-29T14:47:23+00:00 | []
| [
"code"
]
| TAGS
#language-code #license-mit #bash #llama.cpp #script #region-us
| # URL scripts
These are scripts that have helped me to manage URL, llama models, etc.
## Install
Scripts are installed to '~/.local/bin'.
| [
"# URL scripts\n\nThese are scripts that have helped me to manage URL, llama models, etc.",
"## Install\n\nScripts are installed to '~/.local/bin'."
]
| [
"TAGS\n#language-code #license-mit #bash #llama.cpp #script #region-us \n",
"# URL scripts\n\nThese are scripts that have helped me to manage URL, llama models, etc.",
"## Install\n\nScripts are installed to '~/.local/bin'."
]
| [
25,
21,
17
]
| [
"passage: TAGS\n#language-code #license-mit #bash #llama.cpp #script #region-us \n# URL scripts\n\nThese are scripts that have helped me to manage URL, llama models, etc.## Install\n\nScripts are installed to '~/.local/bin'."
]
|
8219d3e09c4237b774265d2b890e77c21e1bdd0d | # Dataset Card for "mozilla_commonvoice_hackathon_preprocessed_train_batch_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Jayem-11/mozilla_commonvoice_hackathon_preprocessed_train_batch_4 | [
"region:us"
]
| 2023-11-06T14:24:31+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}, {"name": "input_length", "dtype": "int64"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}, {"name": "labels_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 15590806829.875, "num_examples": 13689}], "download_size": 4768732812, "dataset_size": 15590806829.875}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T14:28:34+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mozilla_commonvoice_hackathon_preprocessed_train_batch_4"
More Information needed | [
"# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_4\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_4\"\n\nMore Information needed"
]
| [
6,
33
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mozilla_commonvoice_hackathon_preprocessed_train_batch_4\"\n\nMore Information needed"
]
|
03f74963da8261c395fd359c5d1c21e8c36a3bba |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset is a test result csv file from the zero-shot prompting experiment.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | sinandraide/zero_shot_test | [
"region:us"
]
| 2023-11-06T14:43:50+00:00 | {} | 2023-11-07T01:26:54+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset is a test result csv file from the zero-shot prompting experiment.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset is a test result csv file from the zero-shot prompting experiment.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset is a test result csv file from the zero-shot prompting experiment.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
6,
27,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset is a test result csv file from the zero-shot prompting experiment.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
]
|
e57fdf17835731e90d174792ca59434d236abc04 | first commit | zhangxing9292018/mydataset | [
"license:apache-2.0",
"region:us"
]
| 2023-11-06T14:44:35+00:00 | {"license": "apache-2.0"} | 2023-11-06T16:29:50+00:00 | []
| []
| TAGS
#license-apache-2.0 #region-us
| first commit | []
| [
"TAGS\n#license-apache-2.0 #region-us \n"
]
| [
14
]
| [
"passage: TAGS\n#license-apache-2.0 #region-us \n"
]
|
c80863d8532a25752b1f49c9c72f569ffd20bf50 | # Dataset Card for "All_10_Action"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Raihan004/All_10_Action | [
"region:us"
]
| 2023-11-06T14:51:54+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "\u0995\u09a5\u09be_\u09ac\u09b2\u09be", "1": "\u0995\u09ae\u09cd\u09aa\u09bf\u0989\u099f\u09be\u09b0_\u09ac\u09cd\u09af\u09ac\u09b9\u09be\u09b0_\u0995\u09b0\u09be", "2": "\u0996\u09be\u0993\u09af\u09bc\u09be", "3": "\u0996\u09c7\u09b2\u09be_\u0995\u09b0\u09be", "4": "\u0998\u09c1\u09ae\u09be\u09a8\u09c7\u09be", "5": "\u09aa\u09a1\u09bc\u09be", "6": "\u09aa\u09be\u09a8_\u0995\u09b0\u09be", "7": "\u09b0\u09be\u09a8\u09cd\u09a8\u09be_\u0995\u09b0\u09be", "8": "\u09b2\u09c7\u0996\u09be", "9": "\u09b9\u09be\u0981\u099f\u09be"}}}}], "splits": [{"name": "train", "num_bytes": 450039362.261335, "num_examples": 3972}, {"name": "test", "num_bytes": 64023200.75866496, "num_examples": 702}], "download_size": 494658461, "dataset_size": 514062563.02}} | 2023-11-06T15:25:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "All_10_Action"
More Information needed | [
"# Dataset Card for \"All_10_Action\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"All_10_Action\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"All_10_Action\"\n\nMore Information needed"
]
|
bc30dd3f0d4556ecbd196513e9fef1983644f963 | # Dataset Card for "ukabs_id_rename"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | CJWeiss/ukabs_id_rename | [
"region:us"
]
| 2023-11-06T15:28:40+00:00 | {"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 53147657, "num_examples": 594}, {"name": "test", "num_bytes": 10152794, "num_examples": 120}, {"name": "valid", "num_bytes": 8112656, "num_examples": 79}], "download_size": 33052341, "dataset_size": 71413107}} | 2023-11-06T15:28:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ukabs_id_rename"
More Information needed | [
"# Dataset Card for \"ukabs_id_rename\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ukabs_id_rename\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ukabs_id_rename\"\n\nMore Information needed"
]
|
80ce1c2bacbc7a1d6c1185dba3ca7663b2784a5e | # Dataset Card for "xlsum_data-wiki_cstnews_1024_results"
rouge={'rouge1': 0.21341914887481395, 'rouge2': 0.055688000356489714, 'rougeL': 0.13199232049785112, 'rougeLsum': 0.13199232049785112}
Bert={'precision': 0.6575144631688188, 'recall': 0.7245010691569658, 'f1': 0.6890005766888528}
mover = 0.568884719739508 | arthurmluz/xlsum_data-wiki_cstnews_1024_results | [
"region:us"
]
| 2023-11-06T15:34:09+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "gen_summary", "dtype": "string"}, {"name": "rouge", "struct": [{"name": "rouge1", "dtype": "float64"}, {"name": "rouge2", "dtype": "float64"}, {"name": "rougeL", "dtype": "float64"}, {"name": "rougeLsum", "dtype": "float64"}]}, {"name": "bert", "struct": [{"name": "f1", "sequence": "float64"}, {"name": "hashcode", "dtype": "string"}, {"name": "precision", "sequence": "float64"}, {"name": "recall", "sequence": "float64"}]}, {"name": "moverScore", "dtype": "float64"}], "splits": [{"name": "validation", "num_bytes": 29742589, "num_examples": 7175}], "download_size": 18278365, "dataset_size": 29742589}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-13T20:28:37+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "xlsum_data-wiki_cstnews_1024_results"
rouge={'rouge1': 0.21341914887481395, 'rouge2': 0.055688000356489714, 'rougeL': 0.13199232049785112, 'rougeLsum': 0.13199232049785112}
Bert={'precision': 0.6575144631688188, 'recall': 0.7245010691569658, 'f1': 0.6890005766888528}
mover = 0.568884719739508 | [
"# Dataset Card for \"xlsum_data-wiki_cstnews_1024_results\"\n\nrouge={'rouge1': 0.21341914887481395, 'rouge2': 0.055688000356489714, 'rougeL': 0.13199232049785112, 'rougeLsum': 0.13199232049785112}\n\nBert={'precision': 0.6575144631688188, 'recall': 0.7245010691569658, 'f1': 0.6890005766888528}\n\nmover = 0.568884719739508"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"xlsum_data-wiki_cstnews_1024_results\"\n\nrouge={'rouge1': 0.21341914887481395, 'rouge2': 0.055688000356489714, 'rougeL': 0.13199232049785112, 'rougeLsum': 0.13199232049785112}\n\nBert={'precision': 0.6575144631688188, 'recall': 0.7245010691569658, 'f1': 0.6890005766888528}\n\nmover = 0.568884719739508"
]
| [
6,
138
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"xlsum_data-wiki_cstnews_1024_results\"\n\nrouge={'rouge1': 0.21341914887481395, 'rouge2': 0.055688000356489714, 'rougeL': 0.13199232049785112, 'rougeLsum': 0.13199232049785112}\n\nBert={'precision': 0.6575144631688188, 'recall': 0.7245010691569658, 'f1': 0.6890005766888528}\n\nmover = 0.568884719739508"
]
|
abd5e3176dbacf18bdbc1cf2732c990ac1151370 | # Dataset Card for Instrucat
## Dataset Description
### Dataset Summary
InstruCat is a dataset consisting of 216826 instructions in Catalan.
### Dataset contains data converted to instructions format from the following datasets:
- caBreu : The instructions were created in the form of summarization tasks. There are 2 types of summarization categories in the dataset: extreme and abstractive. The extreme one summarizes a text into one sentence and the abstractive one into shorter texts of around 3-5 sentences.
- CatalanQA : The instructions correspond to questions in CatalanQA.
- CaWikiTC : The instructions were created as 2 different kinds of text classification tasks with a 70% - 30% distribution. The first kind asks to define the category of a given text. The second kind asks whether a given text belongs to a certain category, in the form of an alternative question.
- ceil : The instructions were created as 2 different kinds of Named Entity Recognition tasks with a 70% - 30% distribution. The first kind asks to list all the Named Entities found. The second kind asks to list only the Named Entities of a particular category.
- CoqCat : The instructions correspond to the first questions of CoqCat conversations.
- GuiaCat : The instructions were created in the form of sentiment analysis tasks.
- IntoxiCat : The instructions were created in the form of binary classification tasks. The task is to determine whether a given text is toxic or not.
- NLUCat : The instructions were created in the form of phrase generation tasks to express a given intent.
- Parafraseja : The instructions were created in the form of text generation tasks. The task is to generate a text equivalent in meaning to a given text.
- PAWS-ca : The instructions were created in the form of text generation tasks. The task is to generate a text equivalent in meaning to a given text.
- sts-ca : The instructions were created in the form of text generation tasks. The task is to generate a text equivalent in meaning to a given text.
- teca : The instructions were created in 2 different ways with a 70% - 30% distribution. The first is in the form of entailment generation tasks. The second is to decide whether one given text is an entailment of another given text.
- WikiCat : The instructions were created as 2 different kinds of text classification tasks with a 70% - 30% distribution. The first kind asks to define the category of a given text. The second kind asks whether a given text belongs to a certain category, in the form of an alternative question.
## Dataset Structure
#### Data Splits
- train.jsonl: 165100 instructions
- validation.jsonl: 25351 instructions
- test.jsonl: 26375 instructions
### Data Instances
Three JSONL files, one for each split.
An example of 'test' looks as follows:
```
{
"ID": "Parafraseja_8977",
"instruction": "Reescriu aquesta frase sense alterar-ne el significat:",
"context": "Es tracta d'un tipus que ens falla ja que a ell li falla aquesta falta d'interès per tal d'exercir el domini sobre l'ambient.",
"response": "Es tracta d'un tipus que ens falla perquè a ell li falla aquesta falta d'interès per exercir el domini sobre l'ambient.",
"category": "paraphrasis"
}
```
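Since the splits are plain JSONL files, they can be read with the standard library alone. The snippet below is a minimal sketch that assumes `test.jsonl` has been downloaded locally and uses the field names shown in the example above.

```python
import json

# Minimal sketch: read one InstruCat split from a local JSONL file
# (assumes test.jsonl has been downloaded from this repository).
with open("test.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

example = records[0]
print(example["instruction"])  # the task prompt, e.g. a paraphrasing request
print(example["context"])      # the input text the instruction applies to
print(example["response"])     # the expected output
print(example["category"])     # the task category, e.g. "paraphrasis"
```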
### Category Distribution
| Category | Number of instructions | % |
|----------------|----------|------ |
| ner | 59410 | 27.39% |
| paraphrasis | 34695 | 16.00% |
| text_classification | 33393 | 15.40% |
| toxicity | 29809 | 13.74% |
| qa | 27427 | 12.64% |
| phrase_generation | 11873 | 5.47% |
| entailment_generation | 6354 | 2.93% |
| sentiment_analysis | 5750 | 2.65% |
| abstractive_summarization | 2999 | 1.38% |
| extreme_summarization | 2999 | 1.38% |
| entailment | 2117 | 0.97% |
### Acknowledgments
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the [project ILENIA](https://proyectoilenia.es/) with reference 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 y 2022/TL22/00215334 | projecte-aina/InstruCAT | [
"language:ca",
"region:us"
]
| 2023-11-06T15:38:40+00:00 | {"language": ["ca"]} | 2024-01-23T09:11:14+00:00 | []
| [
"ca"
]
| TAGS
#language-Catalan #region-us
| Dataset Card for Instrucat
==========================
Dataset Description
-------------------
### Dataset Summary
InstruCat is a dataset consisting of 216826 instructions in Catalan.
### Dataset contains data converted to instructions format from the following datasets:
* caBreu : The instructions were created in form of summarization tasks. There are 2 types of summarization categories in the dataset: extreme and abstractive. The extreme one summarizes text into one sentence and the abstractive into shorter texts around 3-5 sentences.
* CatalanQA : The instructions correspond to questions in CatalanQA.
* CaWikiTC : The instructions were created in 2 different ways of text classification tasks with the distribution 70% - 30%. The first way is to define a category of a given text. The second way is to answer where a given text belongs to a certain category in a form of alternative question.
* ceil : The instructions were created in 2 different ways of Named Entity Recognition tasks with the distribution 70% - 30%. The first way is to list all the found Named Entities. The second way is to list only Named Entities of a particular category.
* CoqCat : The instructions correspond to the first questions of CoqCat conversations.
* GuiaCat : The instructions were created in form of sentiment analysis tasks.
* IntoxiCat : The instructions were created in form of binary classification tasks. The task is to define wether a given text is toxic or no.
* NLUCat : The instructions were created in form of phrase generation tasks to express a given intent.
* Parafraseja : The instructions were created in form of text generation tasks. The task is to generate a text equivalent by meaning to a given text.
* PAWS-ca : The instructions were created in form of text generation tasks. The task is to generate a text equivalent by meaning to a given text.
* sts-ca : The instructions were created in form of text generation tasks. The task is to generate a text equivalent by meaning to a given text.
* teca : The instructions were created in 2 different ways with the distribution 70% - 30%. The first way is in form of entailment generation tasks. The second way is to define whether one given text is an entailment of another given text.
* WikiCat : The instructions were created in 2 different ways of text classification tasks with the distribution 70% - 30%. The first way is to define a category of a given text. The second way is to answer where a given text belongs to a certain category in a form of alternative question.
Dataset Structure
-----------------
#### Data Splits
* URL: 165100 instructions
* URL: 25351 instructions
* URL: 26375 instructions
### Data Instances
Three JSONL files, one for each split.
An example of 'test' looks as follows:
### Category Distribution
Category: ner, Number of instructions: 59410, %: 27.39%
Category: paraphrasis, Number of instructions: 34695, %: 16.00%
Category: text\_classification, Number of instructions: 33393, %: 15.40%
Category: toxicity, Number of instructions: 29809, %: 13.74%
Category: qa, Number of instructions: 27427, %: 12.64%
Category: phrase\_generation, Number of instructions: 11873, %: 5.47%
Category: entailment\_generation, Number of instructions: 6354, %: 2.93%
Category: sentiment\_analysis, Number of instructions: 5750, %: 2.65%
Category: abstractive\_summarization, Number of instructions: 2999, %: 1.38%
Category: extreme\_summarization, Number of instructions: 2999, %: 1.38%
Category: entailment, Number of instructions: 2117, %: 0.97%
### Acknowledgments
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project ILENIA with reference 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 y 2022/TL22/00215334
| [
"### Dataset Summary\n\n\nInstruCat is a dataset consisting of 216826 instructions in Catalan.",
"### Dataset contains data converted to instructions format from the following datasets:\n\n\n* caBreu : The instructions were created in form of summarization tasks. There are 2 types of summarization categories in the dataset: extreme and abstractive. The extreme one summarizes text into one sentence and the abstractive into shorter texts around 3-5 sentences.\n* CatalanQA : The instructions correspond to questions in CatalanQA.\n* CaWikiTC : The instructions were created in 2 different ways of text classification tasks with the distribution 70% - 30%. The first way is to define a category of a given text. The second way is to answer where a given text belongs to a certain category in a form of alternative question.\n* ceil : The instructions were created in 2 different ways of Named Entity Recognition tasks with the distribution 70% - 30%. The first way is to list all the found Named Entities. The second way is to list only Named Entities of a particular category.\n* CoqCat : The instructions correspond to the first questions of CoqCat conversations.\n* GuiaCat : The instructions were created in form of sentiment analysis tasks.\n* IntoxiCat : The instructions were created in form of binary classification tasks. The task is to define wether a given text is toxic or no.\n* NLUCat : The instructions were created in form of phrase generation tasks to express a given intent.\n* Parafraseja : The instructions were created in form of text generation tasks. The task is to generate a text equivalent by meaning to a given text.\n* PAWS-ca : The instructions were created in form of text generation tasks. The task is to generate a text equivalent by meaning to a given text.\n* sts-ca : The instructions were created in form of text generation tasks. The task is to generate a text equivalent by meaning to a given text.\n* teca : The instructions were created in 2 different ways with the distribution 70% - 30%. The first way is in form of entailment generation tasks. The second way is to define whether one given text is an entailment of another given text.\n* WikiCat : The instructions were created in 2 different ways of text classification tasks with the distribution 70% - 30%. The first way is to define a category of a given text. The second way is to answer where a given text belongs to a certain category in a form of alternative question.\n\n\nDataset Structure\n-----------------",
"#### Data Splits\n\n\n* URL: 165100 instructions\n* URL: 25351 instructions\n* URL: 26375 instructions",
"### Data Instances\n\n\nThree JSONL files, one for each split.\n\n\nAn example of 'test' looks as follows:",
"### Category Distibution\n\n\nCategory: ner, Number of instructions: 59410, %: 27.39%\nCategory: paraphrasis, Number of instructions: 34695, %: 16.00%\nCategory: text\\_classification, Number of instructions: 33393, %: 15.40%\nCategory: toxicity, Number of instructions: 29809, %: 13.74%\nCategory: qa, Number of instructions: 27427, %: 12.64%\nCategory: phrase\\_generation, Number of instructions: 11873, %: 5.47%\nCategory: entailment\\_generation, Number of instructions: 6354, %: 2.93%\nCategory: sentiment\\_analysis, Number of instructions: 5750, %: 2.65%\nCategory: abstractive\\_summarization, Number of instructions: 2999, %: 1.38%\nCategory: extreme\\_summarization, Number of instructions: 2999, %: 1.38%\nCategory: entailment, Number of instructions: 2117, %: 0.97%",
"### Acknowledgments\n\n\nThis work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project ILENIA with reference 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 y 2022/TL22/00215334"
]
| [
"TAGS\n#language-Catalan #region-us \n",
"### Dataset Summary\n\n\nInstruCat is a dataset consisting of 216826 instructions in Catalan.",
"### Dataset contains data converted to instructions format from the following datasets:\n\n\n* caBreu : The instructions were created in form of summarization tasks. There are 2 types of summarization categories in the dataset: extreme and abstractive. The extreme one summarizes text into one sentence and the abstractive into shorter texts around 3-5 sentences.\n* CatalanQA : The instructions correspond to questions in CatalanQA.\n* CaWikiTC : The instructions were created in 2 different ways of text classification tasks with the distribution 70% - 30%. The first way is to define a category of a given text. The second way is to answer where a given text belongs to a certain category in a form of alternative question.\n* ceil : The instructions were created in 2 different ways of Named Entity Recognition tasks with the distribution 70% - 30%. The first way is to list all the found Named Entities. The second way is to list only Named Entities of a particular category.\n* CoqCat : The instructions correspond to the first questions of CoqCat conversations.\n* GuiaCat : The instructions were created in form of sentiment analysis tasks.\n* IntoxiCat : The instructions were created in form of binary classification tasks. The task is to define wether a given text is toxic or no.\n* NLUCat : The instructions were created in form of phrase generation tasks to express a given intent.\n* Parafraseja : The instructions were created in form of text generation tasks. The task is to generate a text equivalent by meaning to a given text.\n* PAWS-ca : The instructions were created in form of text generation tasks. The task is to generate a text equivalent by meaning to a given text.\n* sts-ca : The instructions were created in form of text generation tasks. The task is to generate a text equivalent by meaning to a given text.\n* teca : The instructions were created in 2 different ways with the distribution 70% - 30%. The first way is in form of entailment generation tasks. The second way is to define whether one given text is an entailment of another given text.\n* WikiCat : The instructions were created in 2 different ways of text classification tasks with the distribution 70% - 30%. The first way is to define a category of a given text. The second way is to answer where a given text belongs to a certain category in a form of alternative question.\n\n\nDataset Structure\n-----------------",
"#### Data Splits\n\n\n* URL: 165100 instructions\n* URL: 25351 instructions\n* URL: 26375 instructions",
"### Data Instances\n\n\nThree JSONL files, one for each split.\n\n\nAn example of 'test' looks as follows:",
"### Category Distibution\n\n\nCategory: ner, Number of instructions: 59410, %: 27.39%\nCategory: paraphrasis, Number of instructions: 34695, %: 16.00%\nCategory: text\\_classification, Number of instructions: 33393, %: 15.40%\nCategory: toxicity, Number of instructions: 29809, %: 13.74%\nCategory: qa, Number of instructions: 27427, %: 12.64%\nCategory: phrase\\_generation, Number of instructions: 11873, %: 5.47%\nCategory: entailment\\_generation, Number of instructions: 6354, %: 2.93%\nCategory: sentiment\\_analysis, Number of instructions: 5750, %: 2.65%\nCategory: abstractive\\_summarization, Number of instructions: 2999, %: 1.38%\nCategory: extreme\\_summarization, Number of instructions: 2999, %: 1.38%\nCategory: entailment, Number of instructions: 2117, %: 0.97%",
"### Acknowledgments\n\n\nThis work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project ILENIA with reference 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 y 2022/TL22/00215334"
]
| [
11,
22,
530,
23,
28,
216,
81
]
| [
"passage: TAGS\n#language-Catalan #region-us \n### Dataset Summary\n\n\nInstruCat is a dataset consisting of 216826 instructions in Catalan."
]
|
825dcaabaddcaa0836ad79ed115d256db6e7ed76 | # Dataset Card for "esc50"
This is a mirror for the ESC-50 dataset. Original sources:
https://github.com/karolpiczak/ESC-50
K. J. Piczak. ESC: Dataset for Environmental Sound Classification. Proceedings of the 23rd Annual ACM Conference on Multimedia, Brisbane, Australia, 2015.
[DOI: http://dx.doi.org/10.1145/2733373.2806390]
The dataset is available under the terms of the Creative Commons Attribution Non-Commercial license.
## Exploring the dataset
You can visualize the dataset using Renumics Spotlight:
```python
import datasets
from renumics import spotlight
ds = datasets.load_dataset('renumics/esc50', split='train')
spotlight.show(ds)
```
## Explore enriched dataset
To fully understand the dataset, you can leverage model results such as embeddings or predictions.
Here is an example of how to use zero-shot classification with MS CLAP for this purpose:
```python
ds_results = datasets.load_dataset("renumics/esc50-clap2023-results",split='train')
ds = datasets.concatenate_datasets([ds, ds_results], axis=1)
spotlight.show(ds, dtype={'text_embedding': spotlight.Embedding, 'audio_embedding': spotlight.Embedding})
```

| renumics/esc50 | [
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"license:cc-by-nc-2.0",
"region:us"
]
| 2023-11-06T15:46:01+00:00 | {"license": "cc-by-nc-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["audio-classification"], "dataset_info": {"features": [{"name": "src_file", "dtype": "string"}, {"name": "fold", "dtype": "int64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "dog", "1": "rooster", "2": "pig", "3": "cow", "4": "frog", "5": "cat", "6": "hen", "7": "insects", "8": "sheep", "9": "crow", "10": "rain", "11": "sea_waves", "12": "crackling_fire", "13": "crickets", "14": "chirping_birds", "15": "water_drops", "16": "wind", "17": "pouring_water", "18": "toilet_flush", "19": "thunderstorm", "20": "crying_baby", "21": "sneezing", "22": "clapping", "23": "breathing", "24": "coughing", "25": "footsteps", "26": "laughing", "27": "brushing_teeth", "28": "snoring", "29": "drinking_sipping", "30": "door_wood_knock", "31": "mouse_click", "32": "keyboard_typing", "33": "door_wood_creaks", "34": "can_opening", "35": "washing_machine", "36": "vacuum_cleaner", "37": "clock_alarm", "38": "clock_tick", "39": "glass_breaking", "40": "helicopter", "41": "chainsaw", "42": "siren", "43": "car_horn", "44": "engine", "45": "train", "46": "church_bells", "47": "airplane", "48": "fireworks", "49": "hand_saw"}}}}, {"name": "esc10", "dtype": "bool"}, {"name": "take", "dtype": "string"}, {"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 882179256, "num_examples": 2000}], "download_size": 773038488, "dataset_size": 882179256}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-09T09:17:07+00:00 | []
| []
| TAGS
#task_categories-audio-classification #size_categories-1K<n<10K #license-cc-by-nc-2.0 #region-us
| # Dataset Card for "esc50"
This is a mirror for the ESC-50 dataset. Original sources:
URL
K. J. Piczak. ESC: Dataset for Environmental Sound Classification. Proceedings of the 23rd Annual ACM Conference on Multimedia, Brisbane, Australia, 2015.
[DOI: URL
The dataset is available under the terms of the Creative Commons Attribution Non-Commercial license.
## Exploring the dataset
You can visualize the dataset using Renumics Spotlight:
## Explore enriched dataset
To fully understand the dataset, you can leverage model results such as embeddings or predictions.
Here is an example how to use zero-shot classification with MS CLAP for this purpose:
!image/png
| [
"# Dataset Card for \"esc50\"\n\nThis is a mirror for the ESC-50 dataset. Original sources:\n\nURL\nK. J. Piczak. ESC: Dataset for Environmental Sound Classification. Proceedings of the 23rd Annual ACM Conference on Multimedia, Brisbane, Australia, 2015.\n[DOI: URL\n\nThe dataset is available under the terms of the Creative Commons Attribution Non-Commercial license.",
"## Exploring the dataset\n\nYou can visualize the dataset using Renumics Spotlight:",
"## Explore enriched dataset\n\nTo fully understand the dataset, you can leverage model results such as embeddings or predictions.\n\nHere is an example how to use zero-shot classification with MS CLAP for this purpose:\n\n\n\n\n!image/png"
]
| [
"TAGS\n#task_categories-audio-classification #size_categories-1K<n<10K #license-cc-by-nc-2.0 #region-us \n",
"# Dataset Card for \"esc50\"\n\nThis is a mirror for the ESC-50 dataset. Original sources:\n\nURL\nK. J. Piczak. ESC: Dataset for Environmental Sound Classification. Proceedings of the 23rd Annual ACM Conference on Multimedia, Brisbane, Australia, 2015.\n[DOI: URL\n\nThe dataset is available under the terms of the Creative Commons Attribution Non-Commercial license.",
"## Exploring the dataset\n\nYou can visualize the dataset using Renumics Spotlight:",
"## Explore enriched dataset\n\nTo fully understand the dataset, you can leverage model results such as embeddings or predictions.\n\nHere is an example how to use zero-shot classification with MS CLAP for this purpose:\n\n\n\n\n!image/png"
]
| [
41,
88,
20,
55
]
| [
"passage: TAGS\n#task_categories-audio-classification #size_categories-1K<n<10K #license-cc-by-nc-2.0 #region-us \n# Dataset Card for \"esc50\"\n\nThis is a mirror for the ESC-50 dataset. Original sources:\n\nURL\nK. J. Piczak. ESC: Dataset for Environmental Sound Classification. Proceedings of the 23rd Annual ACM Conference on Multimedia, Brisbane, Australia, 2015.\n[DOI: URL\n\nThe dataset is available under the terms of the Creative Commons Attribution Non-Commercial license.## Exploring the dataset\n\nYou can visualize the dataset using Renumics Spotlight:## Explore enriched dataset\n\nTo fully understand the dataset, you can leverage model results such as embeddings or predictions.\n\nHere is an example how to use zero-shot classification with MS CLAP for this purpose:\n\n\n\n\n!image/png"
]
|
f007443378a6c09f80e73287ce914c61292fcf35 | # Dataset Card for "imdb-card-pred-decimal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuanbiao/imdb-card-pred-decimal | [
"region:us"
]
| 2023-11-06T15:47:19+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 25224623, "num_examples": 100000}], "download_size": 4207011, "dataset_size": 25224623}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-09T01:08:13+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "imdb-card-pred-decimal"
More Information needed | [
"# Dataset Card for \"imdb-card-pred-decimal\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"imdb-card-pred-decimal\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"imdb-card-pred-decimal\"\n\nMore Information needed"
]
|
ee1ffbdf941f1a1b41bb7e0c38adf9f216086f70 |
# Acrobot-v1 - Imitation Learning Datasets
This is a dataset created by [Imitation Learning Datasets](https://github.com/NathanGavenski/IL-Datasets) project.
It was created by using Stable Baselines weights from a DQN policy from [HuggingFace](https://huggingface.co/sb3/dqn-Acrobot-v1).
## Description
The dataset consists of 1,000 episodes with an average episodic reward of `-69.852`.
Each entry consists of:
```
obs (list): observation with length 6.
action (int): action (0, 1 or 2).
reward (float): reward point for that timestep.
episode_returns (bool): if that state was the initial timestep for an episode.
```
## Usage
Feel free to download and use the `teacher.jsonl` dataset as you please.
If you are interested in using our PyTorch Dataset implementation, feel free to check the [IL Datasets](https://github.com/NathanGavenski/IL-Datasets/blob/main/src/imitation_datasets/dataset/dataset.py) project.
There, we implement a base Dataset that downloads this dataset and all other datasets directly from HuggingFace.
The Baseline Dataset also allows for more control over train and test splits and how many episodes you want to use (in cases where the 1k episodes are not necessary).
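As a rough sketch of using the raw file without the IL-Datasets wrapper — assuming `teacher.jsonl` stores one JSON record per line with the fields listed above — the episodes can be reassembled like this:

```python
import json

# Minimal sketch: group the flat list of transitions back into episodes,
# using episode_returns as the "this is the first timestep" marker.
episodes, current = [], []
with open("teacher.jsonl", encoding="utf-8") as f:
    for line in f:
        step = json.loads(line)
        if step["episode_returns"] and current:
            episodes.append(current)
            current = []
        current.append((step["obs"], step["action"], step["reward"]))
if current:
    episodes.append(current)

print(f"{len(episodes)} episodes, first episode has {len(episodes[0])} steps")
```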
## Citation
Coming soon. | NathanGavenski/Acrobot-v1 | [
"size_categories:10M<n<100M",
"license:mit",
"Imitation Learning",
"Expert Trajectory",
"region:us"
]
| 2023-11-06T15:50:16+00:00 | {"license": "mit", "size_categories": ["10M<n<100M"], "pretty_name": "Acrobot-v1 Expert Dataset", "tags": ["Imitation Learning", "Expert Trajectory"]} | 2023-11-06T15:53:25+00:00 | []
| []
| TAGS
#size_categories-10M<n<100M #license-mit #Imitation Learning #Expert Trajectory #region-us
|
# Acrobot-v1 - Imitation Learning Datasets
This is a dataset created by Imitation Learning Datasets project.
It was created by using Stable Baselines weights from a DQN policy from HuggingFace.
## Description
The dataset consists of 1,000 episodes with an average episodic reward of '-69.852'.
Each entry consists of:
## Usage
Feel free to download and use the 'URL' dataset as you please.
If you are interested in using our PyTorch Dataset implementation, feel free to check the IL Datasets project.
There, we implement a base Dataset that downloads this dataset and all other datasets directly from HuggingFace.
The Baseline Dataset also allows for more control over train and test splits and how many episodes you want to use (in cases where the 1k episodes are not necessary).
Coming soon. | [
"# Acrobot-v1 - Imitation Learning Datasets\n\nThis is a dataset created by Imitation Learning Datasets project. \nIt was created by using Stable Baselines weights from a DQN policy from HuggingFace.",
"## Description\n\nThe dataset consists of 1,000 episodes with an average episodic reward of '-69.852'.\nEach entry consists of:",
"## Usage\n\nFeel free to download and use the 'URL' dataset as you please.\nIf you are interested in using our PyTorch Dataset implementation, feel free to check the IL Datasets project.\nThere, we implement a base Dataset that downloads this dataset and all other datasets directly from HuggingFace.\nThe Baseline Dataset also allows for more control over train and test splits and how many episodes you want to use (in cases where the 1k episodes are not necessary).\n\nComing soon."
]
| [
"TAGS\n#size_categories-10M<n<100M #license-mit #Imitation Learning #Expert Trajectory #region-us \n",
"# Acrobot-v1 - Imitation Learning Datasets\n\nThis is a dataset created by Imitation Learning Datasets project. \nIt was created by using Stable Baselines weights from a DQN policy from HuggingFace.",
"## Description\n\nThe dataset consists of 1,000 episodes with an average episodic reward of '-69.852'.\nEach entry consists of:",
"## Usage\n\nFeel free to download and use the 'URL' dataset as you please.\nIf you are interested in using our PyTorch Dataset implementation, feel free to check the IL Datasets project.\nThere, we implement a base Dataset that downloads this dataset and all other datasets directly from HuggingFace.\nThe Baseline Dataset also allows for more control over train and test splits and how many episodes you want to use (in cases where the 1k episodes are not necessary).\n\nComing soon."
]
| [
33,
51,
32,
113
]
| [
"passage: TAGS\n#size_categories-10M<n<100M #license-mit #Imitation Learning #Expert Trajectory #region-us \n# Acrobot-v1 - Imitation Learning Datasets\n\nThis is a dataset created by Imitation Learning Datasets project. \nIt was created by using Stable Baselines weights from a DQN policy from HuggingFace.## Description\n\nThe dataset consists of 1,000 episodes with an average episodic reward of '-69.852'.\nEach entry consists of:## Usage\n\nFeel free to download and use the 'URL' dataset as you please.\nIf you are interested in using our PyTorch Dataset implementation, feel free to check the IL Datasets project.\nThere, we implement a base Dataset that downloads this dataset and all other datasets directly from HuggingFace.\nThe Baseline Dataset also allows for more control over train and test splits and how many episodes you want to use (in cases where the 1k episodes are not necessary).\n\nComing soon."
]
|
79ed74e3898e7c9f01354892aad4decb3ec6f57b |
It contains 585 English riddles. The top 173 were adjusted by GPT-4. | flyingfishinwater/riddle | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"region:us"
]
| 2023-11-06T15:58:26+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["question-answering", "text2text-generation"], "pretty_name": "riddles"} | 2023-11-06T16:15:53+00:00 | []
| [
"en"
]
| TAGS
#task_categories-question-answering #task_categories-text2text-generation #size_categories-n<1K #language-English #license-apache-2.0 #region-us
|
It contains 585 English riddles. The top 173 were adjusted by GPT-4. | []
| [
"TAGS\n#task_categories-question-answering #task_categories-text2text-generation #size_categories-n<1K #language-English #license-apache-2.0 #region-us \n"
]
| [
53
]
| [
"passage: TAGS\n#task_categories-question-answering #task_categories-text2text-generation #size_categories-n<1K #language-English #license-apache-2.0 #region-us \n"
]
|
23f802e6212503fef847e666876495fdc577111b |
Metadata for PMC-Patients that might facilitate reproduction or usage of our dataset, consisting of the following files (most of which can be derived from our main files above).
## PMIDs.json
PMIDs of articles from which PMC-Patients are extracted.
List of string, length 140,897.
## train_PMIDs.json & dev_PMIDs.json & test_PMIDs.json
PMIDs of articles in training / dev / test split.
List of string.
## train_patient_uids.json & dev_patient_uids.json & test_patient_uids.json
Patient_uids of notes in training / dev / test split.
List of string.
## patient2article_relevance.json
Full patient-to-article dataset.
A dict where the keys are `patient_uid` of queries and each entry is a list of `PMID`, representing articles relevant to the query.
The 3-point relevance can be obtained by checking whether the `PMID` is in `PMIDs.json`.
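A minimal sketch of how the graded labels could be derived, assuming the convention that relevant articles which are themselves PMC-Patients source articles (i.e. appear in `PMIDs.json`) receive the higher grade:

```python
import json

# Hypothetical derivation of 3-point relevance for one query patient.
with open("PMIDs.json") as f:
    source_pmids = set(json.load(f))
with open("patient2article_relevance.json") as f:
    par = json.load(f)

query = next(iter(par))  # any patient_uid key
relevance = {pmid: 2 if pmid in source_pmids else 1 for pmid in par[query]}
# PMIDs not listed for this query would implicitly get relevance 0.
```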
## patient2patient_similarity.json
Full patient-to-patient similarity dataset.
A dict where the keys are `patient_uid` of queries and each entry is a list of `patient_uid`, representing similar patients to the query.
The 3-point similarity can be obtained by checking whether the similar patient shares the `PMID` (the string before '-' in `patient_uid`) with the query patient.
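The same idea applies to the similarity annotations; the sketch below follows the stated rule, with the grade assignment itself being an assumption:

```python
import json

# Hypothetical derivation of 3-point similarity for one query patient:
# patients sharing the query's source PMID (the part of patient_uid
# before '-') get the higher grade.
with open("patient2patient_similarity.json") as f:
    pps = json.load(f)

query = next(iter(pps))
query_pmid = query.split("-")[0]
similarity = {uid: 2 if uid.split("-")[0] == query_pmid else 1
              for uid in pps[query]}
```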
## PMID2Mesh.json
Dict of PMIDs to MeSH terms of the article.
## MeSH_Humans_patient_uids.json
`patient_uid` of the patients in PMC-Patients-Humans (extracted from articles with "Humans" MeSH term).
List of string.
## PMC-Patients_citations.json
Citations for all articles we used to collect our dataset.
A dict where the keys are `patient_uid` and each entry is the citation of the source article.
## human_PMIDs.json
PMIDs of the 500 randomly sampled articles for human evaluation.
List of string.
## PMC-Patients_human_eval.json
Expert annotation results of the 500 articles in `human_PMIDs.json`, including manually annotated patient note, demographics, and relations of the top 5 retrieved articles / patients.
List of dict, and the keys are almost identical to `PMC-Patients.json`, with the exception of `human_patient_id` and `human_patient_uid`.
The relational annotations are different from the automatic ones. They are strings indicating on which dimension(s) the patient-article / patient-patient pair is relevant / similar.
"0", "1", "2", and "3" represent "Irrelevant", "Diagnosis", "Test", "Treatment" in ReCDS-PAR, and represent "Dissimilar", "Features", "Outcomes", "Exposure" in ReCDS-PPR.
Note that a pair can be relevant / similar on multiple dimensions at the same time.
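For reference, the stated encoding can be written down as simple lookup tables; the decoding helper below assumes the digits of a multi-dimension annotation are simply concatenated in the string, which is not specified above:

```python
# Label meanings of the expert relation annotations, as stated above.
PAR_LABELS = {"0": "Irrelevant", "1": "Diagnosis", "2": "Test", "3": "Treatment"}
PPR_LABELS = {"0": "Dissimilar", "1": "Features", "2": "Outcomes", "3": "Exposure"}

def decode(annotation: str, labels: dict) -> list:
    # Assumes a multi-dimension annotation is a concatenation of digits.
    return [labels[c] for c in annotation if c in labels]
```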
## PAR_PMIDs.json
PMIDs of the 11.7M articles used as PAR corpus.
List of string.
| zhengyun21/PMC-Patients-MetaData | [
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-sa-4.0",
"medical",
"region:us"
]
| 2023-11-06T16:29:19+00:00 | {"language": ["en"], "license": "cc-by-nc-sa-4.0", "size_categories": ["100K<n<1M"], "tags": ["medical"]} | 2023-11-06T16:49:53+00:00 | []
| [
"en"
]
| TAGS
#size_categories-100K<n<1M #language-English #license-cc-by-nc-sa-4.0 #medical #region-us
|
Meta data for PMC-Patients that might facilitate reproduction or usage of our dataset, consisting of the following files (most of which can be derived from our main files above).
## URL
PMIDs of articles from which PMC-Patients are extracted.
List of string, length 140,897.
## train_PMIDs.json & dev_PMIDs.json & test_PMIDs.json
PMIDs of articles in training / dev / test split.
List of string.
## train_patient_uids.json & dev_patient_uids.json & test_patient_uids.json
Patient_uids of notes in training / dev / test split.
List of string.
## patient2article_relevance.json
Full patient-to-article dataset.
A dict where the keys are 'patient_uid' of queries and each entry is a list of 'PMID', representing articles relevant to the query.
The 3-point relevance can be obtained by checking whether the 'PMID' is in 'URL'.
## patient2patient_similarity.json
Full patient-to-patient similarity dataset.
A dict where the keys are 'patient_uid' of queries and each entry is a list of 'patient_uid', representing similar patients to the query.
The 3-point similarity can be obtained by checking whether the similar patient share the 'PMID' (the string before '-' in 'patient_uid') with the query patient.
## URL
Dict of PMIDs to MeSH terms of the article.
## MeSH_Humans_patient_uids.json
'patient_uid' of the patients in PMC-Patients-Humans (extracted from articles with "Humans" MeSH term).
List of string.
## PMC-Patients_citations.json
Citations for all articles we used to collect our dataset.
A dict where the keys are 'patient_uid' and each entry is the citation of the source article.
## human_PMIDs.json
PMIDs of the 500 randomly sampled articles for human evaluation.
List of string.
## PMC-Patients_human_eval.json
Expert annotation results of the 500 articles in 'human_PMIDs.json', including manually annotated patient note, demographics, and relations of the top 5 retrieved articles / patients.
List of dict, and the keys are almost identical to 'URL', with the exception of 'human_patient_id' and 'human_patient_uid'.
The relational annotations are different from automatic ones. They are strings indicating on which dimension(s) are the patient-article / patient-patient pair relevant / similar.
"0", "1", "2", and "3" represent "Irrelevant", "Diagnosis", "Test", "Treatment" in ReCDS-PAR, and represent "Dissimilar", "Features", "Outcomes", "Exposure" in ReCDS-PPR.
Note that a pair can be relevant / similar on multiple dimensions at the same time.
## PAR_PMIDs.json
PMIDs of the 11.7M articles used as PAR corpus.
List of string.
| [
"## URL\n\nPMIDs of articles from which PMC-Patients are extracted.\nList of string, length 140,897.",
"## train_PMIDs.json & dev_PMIDs.json & test_PMIDs.json\n\nPMIDs of articles in training / dev / test split.\nList of string.",
"## train_patient_uids.json & dev_patient_uids.json & test_patient_uids.json\n\nPatient_uids of notes in training / dev / test split.\nList of string.",
"## patient2article_relevance.json\n\nFull patient-to-article dataset.\nA dict where the keys are 'patient_uid' of queries and each entry is a list of 'PMID', representing articles relevant to the query.\n\nThe 3-point relevance can be obtained by checking whether the 'PMID' is in 'URL'.",
"## patient2patient_similarity.json\n\nFull patient-to-patient similarity dataset.\nA dict where the keys are 'patient_uid' of queries and each entry is a list of 'patient_uid', representing similar patients to the query.\n\nThe 3-point similarity can be obtained by checking whether the similar patient share the 'PMID' (the string before '-' in 'patient_uid') with the query patient.",
"## URL\n\nDict of PMIDs to MeSH terms of the article.",
"## MeSH_Humans_patient_uids.json\n\n'patient_uid' of the patients in PMC-Patients-Humans (extracted from articles with \"Humans\" MeSH term).\nList of string.",
"## PMC-Patients_citations.json\n\nCitations for all articles we used to collect our dataset.\nA dict where the keys are 'patient_uid' and each entry is the citation of the source article.",
"## human_PMIDs.json\n\nPMIDs of the 500 randomly sampled articles for human evaluation.\nList of string.",
"## PMC-Patients_human_eval.json\n\nExpert annotation results of the 500 articles in 'human_PMIDs.json', including manually annotated patient note, demographics, and relations of the top 5 retrieved articles / patients.\nList of dict, and the keys are almost identical to 'URL', with the exception of 'human_patient_id' and 'human_patient_uid'.\n\nThe relational annotations are different from automatic ones. They are strings indicating on which dimension(s) are the patient-article / patient-patient pair relevant / similar. \n\"0\", \"1\", \"2\", and \"3\" represent \"Irrelevant\", \"Diagnosis\", \"Test\", \"Treatment\" in ReCDS-PAR, and represent \"Dissimilar\", \"Features\", \"Outcomes\", \"Exposure\" in ReCDS-PPR.\nNote that a pair can be relevant / similar on multiple dimensions at the same time.",
"## PAR_PMIDs.json\n\nPMIDs of the 11.7M articles used as PAR corpus.\nList of string."
]
| [
"TAGS\n#size_categories-100K<n<1M #language-English #license-cc-by-nc-sa-4.0 #medical #region-us \n",
"## URL\n\nPMIDs of articles from which PMC-Patients are extracted.\nList of string, length 140,897.",
"## train_PMIDs.json & dev_PMIDs.json & test_PMIDs.json\n\nPMIDs of articles in training / dev / test split.\nList of string.",
"## train_patient_uids.json & dev_patient_uids.json & test_patient_uids.json\n\nPatient_uids of notes in training / dev / test split.\nList of string.",
"## patient2article_relevance.json\n\nFull patient-to-article dataset.\nA dict where the keys are 'patient_uid' of queries and each entry is a list of 'PMID', representing articles relevant to the query.\n\nThe 3-point relevance can be obtained by checking whether the 'PMID' is in 'URL'.",
"## patient2patient_similarity.json\n\nFull patient-to-patient similarity dataset.\nA dict where the keys are 'patient_uid' of queries and each entry is a list of 'patient_uid', representing similar patients to the query.\n\nThe 3-point similarity can be obtained by checking whether the similar patient share the 'PMID' (the string before '-' in 'patient_uid') with the query patient.",
"## URL\n\nDict of PMIDs to MeSH terms of the article.",
"## MeSH_Humans_patient_uids.json\n\n'patient_uid' of the patients in PMC-Patients-Humans (extracted from articles with \"Humans\" MeSH term).\nList of string.",
"## PMC-Patients_citations.json\n\nCitations for all articles we used to collect our dataset.\nA dict where the keys are 'patient_uid' and each entry is the citation of the source article.",
"## human_PMIDs.json\n\nPMIDs of the 500 randomly sampled articles for human evaluation.\nList of string.",
"## PMC-Patients_human_eval.json\n\nExpert annotation results of the 500 articles in 'human_PMIDs.json', including manually annotated patient note, demographics, and relations of the top 5 retrieved articles / patients.\nList of dict, and the keys are almost identical to 'URL', with the exception of 'human_patient_id' and 'human_patient_uid'.\n\nThe relational annotations are different from automatic ones. They are strings indicating on which dimension(s) are the patient-article / patient-patient pair relevant / similar. \n\"0\", \"1\", \"2\", and \"3\" represent \"Irrelevant\", \"Diagnosis\", \"Test\", \"Treatment\" in ReCDS-PAR, and represent \"Dissimilar\", \"Features\", \"Outcomes\", \"Exposure\" in ReCDS-PPR.\nNote that a pair can be relevant / similar on multiple dimensions at the same time.",
"## PAR_PMIDs.json\n\nPMIDs of the 11.7M articles used as PAR corpus.\nList of string."
]
| [
38,
29,
44,
48,
81,
104,
16,
51,
52,
28,
224,
27
]
| [
"passage: TAGS\n#size_categories-100K<n<1M #language-English #license-cc-by-nc-sa-4.0 #medical #region-us \n## URL\n\nPMIDs of articles from which PMC-Patients are extracted.\nList of string, length 140,897.## train_PMIDs.json & dev_PMIDs.json & test_PMIDs.json\n\nPMIDs of articles in training / dev / test split.\nList of string.## train_patient_uids.json & dev_patient_uids.json & test_patient_uids.json\n\nPatient_uids of notes in training / dev / test split.\nList of string.## patient2article_relevance.json\n\nFull patient-to-article dataset.\nA dict where the keys are 'patient_uid' of queries and each entry is a list of 'PMID', representing articles relevant to the query.\n\nThe 3-point relevance can be obtained by checking whether the 'PMID' is in 'URL'.## patient2patient_similarity.json\n\nFull patient-to-patient similarity dataset.\nA dict where the keys are 'patient_uid' of queries and each entry is a list of 'patient_uid', representing similar patients to the query.\n\nThe 3-point similarity can be obtained by checking whether the similar patient share the 'PMID' (the string before '-' in 'patient_uid') with the query patient.## URL\n\nDict of PMIDs to MeSH terms of the article.## MeSH_Humans_patient_uids.json\n\n'patient_uid' of the patients in PMC-Patients-Humans (extracted from articles with \"Humans\" MeSH term).\nList of string.## PMC-Patients_citations.json\n\nCitations for all articles we used to collect our dataset.\nA dict where the keys are 'patient_uid' and each entry is the citation of the source article.## human_PMIDs.json\n\nPMIDs of the 500 randomly sampled articles for human evaluation.\nList of string."
]
|
1e8efb0616a6e5cbbec2ba627a2ebb02387a571f | # Dataset Card for "movie_posters-genres-80k-transformed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | anforsm/movie_posters-genres-80k-transformed | [
"region:us"
]
| 2023-11-06T16:42:07+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "genres", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 23128566147.416473, "num_examples": 78352}, {"name": "test", "num_bytes": 295187948.58352655, "num_examples": 1000}], "download_size": 22030369211, "dataset_size": 23423754096.0}} | 2023-11-06T17:00:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "movie_posters-genres-80k-transformed"
More Information needed | [
"# Dataset Card for \"movie_posters-genres-80k-transformed\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"movie_posters-genres-80k-transformed\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"movie_posters-genres-80k-transformed\"\n\nMore Information needed"
]
|
eed683b74cdbe98d4b7b0db9d2f9a85c3302bd6a | # Dataset Card for "medical_meadow_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hippocrates/medical_meadow_train | [
"region:us"
]
| 2023-11-06T16:45:23+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 25828936, "num_examples": 33955}, {"name": "valid", "num_bytes": 25828936, "num_examples": 33955}, {"name": "test", "num_bytes": 25828936, "num_examples": 33955}], "download_size": 31650800, "dataset_size": 77486808}} | 2023-11-06T16:45:28+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "medical_meadow_train"
More Information needed | [
"# Dataset Card for \"medical_meadow_train\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"medical_meadow_train\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"medical_meadow_train\"\n\nMore Information needed"
]
|
5559af817747155eebda159bef49b0ace891e669 | # Dataset Card for "DocVQA_layoutLM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Sharka/DocVQA_layoutLM | [
"region:us"
]
| 2023-11-06T16:54:09+00:00 | {"dataset_info": {"features": [{"name": "image", "sequence": {"sequence": {"sequence": "uint8"}}}, {"name": "answers", "sequence": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "bbox", "sequence": {"sequence": "int64"}}, {"name": "start_positions", "dtype": "int64"}, {"name": "end_positions", "dtype": "int64"}, {"name": "questions", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6674557036, "num_examples": 38174}, {"name": "validation", "num_bytes": 882472789, "num_examples": 5047}], "download_size": 2458338968, "dataset_size": 7557029825}} | 2023-11-07T23:28:00+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "DocVQA_layoutLM"
More Information needed | [
"# Dataset Card for \"DocVQA_layoutLM\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"DocVQA_layoutLM\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"DocVQA_layoutLM\"\n\nMore Information needed"
]
|
2b1cf24ebf6773335cf220a686370e14cc3c6f70 | # Dataset Card for "sentiment_labelled_data"
* This dataset is manually labelled and validated.
* model used for raw prediction: Harvinder6766/news_sentiment_distillbert, then manually reviewed and annotated
* Now this data will be used to train a model (a loading sketch is shown below)
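
A minimal loading sketch with the datasets library (the repository id and label names follow this card; everything else is an assumption):

```python
from datasets import load_dataset

ds = load_dataset("Harvinder6766/sentiment_labelled_data")["train"]

# Label names per the card: NEGATIVE, NEUTRAL, POSITIVE.
print(ds.features["label"].names)
print(ds[0]["text"], ds[0]["label"])
```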
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Harvinder6766/sentiment_labelled_data | [
"region:us"
]
| 2023-11-06T17:20:57+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "NEGATIVE", "1": "NEUTRAL", "2": "POSITIVE"}}}}], "splits": [{"name": "train", "num_bytes": 378043, "num_examples": 1572}], "download_size": 259617, "dataset_size": 378043}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T17:23:50+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "sentiment_labelled_data"
* This dataset is manually labelled and validated.
* model used for raw prediction : Harvinder6766/news_sentiment_distillbert & then manually reviews and annonated
* Now this data will be used to train a model
More Information needed | [
"# Dataset Card for \"sentiment_labelled_data\"\n\n* This dataset is manually labelled and validated.\n* model used for raw prediction : Harvinder6766/news_sentiment_distillbert & then manually reviews and annonated\n* Now this data will be used to train a model\n\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"sentiment_labelled_data\"\n\n* This dataset is manually labelled and validated.\n* model used for raw prediction : Harvinder6766/news_sentiment_distillbert & then manually reviews and annonated\n* Now this data will be used to train a model\n\n\nMore Information needed"
]
| [
6,
71
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"sentiment_labelled_data\"\n\n* This dataset is manually labelled and validated.\n* model used for raw prediction : Harvinder6766/news_sentiment_distillbert & then manually reviews and annonated\n* Now this data will be used to train a model\n\n\nMore Information needed"
]
|
7f4e15373fe7cd2549b4d2b41c9c2e26f2de4185 | # Dataset Card for "SFD_7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AdvayK/SFD_7 | [
"region:us"
]
| 2023-11-06T17:32:07+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 382894422.7379618, "num_examples": 625}, {"name": "test", "num_bytes": 164473290.26203808, "num_examples": 268}], "download_size": 444577398, "dataset_size": 547367712.9999999}} | 2023-11-06T17:32:48+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "SFD_7"
More Information needed | [
"# Dataset Card for \"SFD_7\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"SFD_7\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"SFD_7\"\n\nMore Information needed"
]
|
a229db006a18616d150c0318fe82d68dcc9203b3 | # Dataset Card for "medical_meadow_advice_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hippocrates/medical_meadow_advice_train | [
"region:us"
]
| 2023-11-06T17:42:58+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9431718, "num_examples": 8676}], "download_size": 2439830, "dataset_size": 9431718}} | 2023-11-06T17:43:00+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "medical_meadow_advice_train"
More Information needed | [
"# Dataset Card for \"medical_meadow_advice_train\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"medical_meadow_advice_train\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"medical_meadow_advice_train\"\n\nMore Information needed"
]
|
7a4d9db7c30963841ab5ac607d849cbce34ce152 | # Dataset Card for "LogDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Lollitor/LogDataset | [
"region:us"
]
| 2023-11-06T18:07:23+00:00 | {"dataset_info": {"features": [{"name": "-logKd/Ki", "dtype": "float64"}, {"name": "inputs", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22099679, "num_examples": 18926}], "download_size": 8110526, "dataset_size": 22099679}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-06T18:07:27+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "LogDataset"
More Information needed | [
"# Dataset Card for \"LogDataset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"LogDataset\"\n\nMore Information needed"
]
| [
6,
13
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"LogDataset\"\n\nMore Information needed"
]
|
54d9cf2e2d247168c9ffe56329200e09e4049d37 | # Dataset Card for "scb_mt_2020_en2th_prompt"
This dataset was made from [scb_mt_enth_2020](https://huggingface.co/datasets/scb_mt_enth_2020), with nus_sms and paracrawl removed from the source.
Source code for creating the dataset: [https://github.com/PyThaiNLP/support-aya-datasets/blob/main/translation/scb_mt.ipynb](https://github.com/PyThaiNLP/support-aya-datasets/blob/main/translation/scb_mt.ipynb)
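
A minimal usage sketch with the datasets library (the split name and the exact column contents are assumptions based on this card; see the Template section below for the prompt format):

```python
from datasets import load_dataset

ds = load_dataset("pythainlp/scb_mt_2020_en2th_prompt")
sample = ds["train"][0]

print(sample["inputs"])   # Thai instruction followed by the English source sentence
print(sample["targets"])  # the Thai translation
```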
## Template
```
Inputs: แปลประโยคหรือย่อหน้าต่อไปนี้จากภาษาอังกฤษเป็นภาษาไทย:\n{en}
Targets: Thai sentence
``` | pythainlp/scb_mt_2020_en2th_prompt | [
"task_categories:text2text-generation",
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:th",
"license:cc-by-sa-4.0",
"region:us"
]
| 2023-11-06T18:16:05+00:00 | {"language": ["th"], "license": "cc-by-sa-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text2text-generation", "text-classification"], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 500257169, "num_examples": 801402}, {"name": "validation", "num_bytes": 61671631, "num_examples": 88927}, {"name": "test", "num_bytes": 61225544, "num_examples": 88931}], "download_size": 212863737, "dataset_size": 623154344}} | 2023-11-06T18:28:35+00:00 | []
| [
"th"
]
| TAGS
#task_categories-text2text-generation #task_categories-text-classification #size_categories-100K<n<1M #language-Thai #license-cc-by-sa-4.0 #region-us
| # Dataset Card for "scb_mt_2020_en2th_prompt"
This dataset made from scb_mt_enth_2020 that removed nus_sms and paracrawl from source.
Source code for create dataset: URL
## Template
| [
"# Dataset Card for \"scb_mt_2020_en2th_prompt\"\n\nThis dataset made from scb_mt_enth_2020 that removed nus_sms and paracrawl from source.\n\nSource code for create dataset: URL",
"## Template"
]
| [
"TAGS\n#task_categories-text2text-generation #task_categories-text-classification #size_categories-100K<n<1M #language-Thai #license-cc-by-sa-4.0 #region-us \n",
"# Dataset Card for \"scb_mt_2020_en2th_prompt\"\n\nThis dataset made from scb_mt_enth_2020 that removed nus_sms and paracrawl from source.\n\nSource code for create dataset: URL",
"## Template"
]
| [
58,
59,
2
]
| [
"passage: TAGS\n#task_categories-text2text-generation #task_categories-text-classification #size_categories-100K<n<1M #language-Thai #license-cc-by-sa-4.0 #region-us \n# Dataset Card for \"scb_mt_2020_en2th_prompt\"\n\nThis dataset made from scb_mt_enth_2020 that removed nus_sms and paracrawl from source.\n\nSource code for create dataset: URL## Template"
]
|
7096451efb9a4db736cd43bfdb7b75838bdf2154 | # Dataset Card for "scb_mt_2020_th2en_prompt"
This dataset was made from [scb_mt_enth_2020](https://huggingface.co/datasets/scb_mt_enth_2020), with nus_sms and paracrawl removed from the source.
Source code for creating the dataset: [https://github.com/PyThaiNLP/support-aya-datasets/blob/main/translation/scb_mt.ipynb](https://github.com/PyThaiNLP/support-aya-datasets/blob/main/translation/scb_mt.ipynb)
## Template
```
Inputs: แปลประโยคหรือย่อหน้าต่อไปนี้จากภาษาไทยเป็นภาษาอังกฤษ:\n{th}
Targets: English sentence
``` | pythainlp/scb_mt_2020_th2en_prompt | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:th",
"license:cc-by-sa-4.0",
"region:us"
]
| 2023-11-06T18:20:59+00:00 | {"language": ["th"], "license": "cc-by-sa-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text2text-generation", "text-generation"], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 500257169, "num_examples": 801402}, {"name": "validation", "num_bytes": 61671631, "num_examples": 88927}, {"name": "test", "num_bytes": 61225544, "num_examples": 88931}], "download_size": 212800258, "dataset_size": 623154344}} | 2023-11-06T18:28:57+00:00 | []
| [
"th"
]
| TAGS
#task_categories-text2text-generation #task_categories-text-generation #size_categories-100K<n<1M #language-Thai #license-cc-by-sa-4.0 #region-us
| # Dataset Card for "scb_mt_2020_th2en_prompt"
This dataset made from scb_mt_enth_2020 that removed nus_sms and paracrawl from source.
Source code for create dataset: URL
## Template
| [
"# Dataset Card for \"scb_mt_2020_th2en_prompt\"\n\n\nThis dataset made from scb_mt_enth_2020 that removed nus_sms and paracrawl from source.\n\nSource code for create dataset: URL",
"## Template"
]
| [
"TAGS\n#task_categories-text2text-generation #task_categories-text-generation #size_categories-100K<n<1M #language-Thai #license-cc-by-sa-4.0 #region-us \n",
"# Dataset Card for \"scb_mt_2020_th2en_prompt\"\n\n\nThis dataset made from scb_mt_enth_2020 that removed nus_sms and paracrawl from source.\n\nSource code for create dataset: URL",
"## Template"
]
| [
58,
59,
2
]
| [
"passage: TAGS\n#task_categories-text2text-generation #task_categories-text-generation #size_categories-100K<n<1M #language-Thai #license-cc-by-sa-4.0 #region-us \n# Dataset Card for \"scb_mt_2020_th2en_prompt\"\n\n\nThis dataset made from scb_mt_enth_2020 that removed nus_sms and paracrawl from source.\n\nSource code for create dataset: URL## Template"
]
|
ee3cc271cefed2579b937b589ed611502da2b7af | # Dataset Card for "thai_usembassy_en2th_prompt"
This dataset was made from [pythainlp/thai_usembassy](https://huggingface.co/datasets/pythainlp/thai_usembassy).
Source code for creating the dataset: [https://github.com/PyThaiNLP/support-aya-datasets/blob/main/translation/thai_usembassy.ipynb](https://github.com/PyThaiNLP/support-aya-datasets/blob/main/translation/thai_usembassy.ipynb)
## Template
```
Inputs: แปลประโยคหรือย่อหน้าต่อไปนี้จากภาษาอังกฤษเป็นภาษาไทย:\n{en}
Targets: Thai sentence
``` | pythainlp/thai_usembassy_en2th_prompt | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:n<1K",
"language:th",
"license:cc0-1.0",
"region:us"
]
| 2023-11-06T18:31:48+00:00 | {"language": ["th"], "license": "cc0-1.0", "size_categories": ["n<1K"], "task_categories": ["text2text-generation", "text-generation"], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4932896, "num_examples": 615}], "download_size": 1971462, "dataset_size": 4932896}} | 2023-11-06T18:34:42+00:00 | []
| [
"th"
]
| TAGS
#task_categories-text2text-generation #task_categories-text-generation #size_categories-n<1K #language-Thai #license-cc0-1.0 #region-us
| # Dataset Card for "thai_usembassy_en2th_prompt"
This dataset made from pythainlp/thai_usembassy.
Source code for create dataset: URL
## Template
| [
"# Dataset Card for \"thai_usembassy_en2th_prompt\"\n\nThis dataset made from pythainlp/thai_usembassy.\n\nSource code for create dataset: URL",
"## Template"
]
| [
"TAGS\n#task_categories-text2text-generation #task_categories-text-generation #size_categories-n<1K #language-Thai #license-cc0-1.0 #region-us \n",
"# Dataset Card for \"thai_usembassy_en2th_prompt\"\n\nThis dataset made from pythainlp/thai_usembassy.\n\nSource code for create dataset: URL",
"## Template"
]
| [
53,
44,
2
]
| [
"passage: TAGS\n#task_categories-text2text-generation #task_categories-text-generation #size_categories-n<1K #language-Thai #license-cc0-1.0 #region-us \n# Dataset Card for \"thai_usembassy_en2th_prompt\"\n\nThis dataset made from pythainlp/thai_usembassy.\n\nSource code for create dataset: URL## Template"
]
|
c8cd9850ba3d9ff961e65b78ce56880d3fe81b78 | # Dataset Card for "thai_usembassy_th2en_prompt"
This dataset was made from [pythainlp/thai_usembassy](https://huggingface.co/datasets/pythainlp/thai_usembassy).
Source code for creating the dataset: [https://github.com/PyThaiNLP/support-aya-datasets/blob/main/translation/thai_usembassy.ipynb](https://github.com/PyThaiNLP/support-aya-datasets/blob/main/translation/thai_usembassy.ipynb)
## Template
```
Inputs: แปลประโยคหรือย่อหน้าต่อไปนี้จากภาษาไทยเป็นภาษาอังกฤษ:\n{th}
Targets: English sentence
``` | pythainlp/thai_usembassy_th2en_prompt | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:n<1K",
"language:th",
"license:cc0-1.0",
"region:us"
]
| 2023-11-06T18:32:34+00:00 | {"language": ["th"], "license": "cc0-1.0", "size_categories": ["n<1K"], "task_categories": ["text2text-generation", "text-generation"], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4932896, "num_examples": 615}], "download_size": 1969489, "dataset_size": 4932896}} | 2023-11-06T18:35:51+00:00 | []
| [
"th"
]
| TAGS
#task_categories-text2text-generation #task_categories-text-generation #size_categories-n<1K #language-Thai #license-cc0-1.0 #region-us
| # Dataset Card for "thai_usembassy_th2en_prompt"
This dataset made from pythainlp/thai_usembassy.
Source code for create dataset: URL
## Template
| [
"# Dataset Card for \"thai_usembassy_th2en_prompt\"\n\nThis dataset made from pythainlp/thai_usembassy.\n\nSource code for create dataset: URL",
"## Template"
]
| [
"TAGS\n#task_categories-text2text-generation #task_categories-text-generation #size_categories-n<1K #language-Thai #license-cc0-1.0 #region-us \n",
"# Dataset Card for \"thai_usembassy_th2en_prompt\"\n\nThis dataset made from pythainlp/thai_usembassy.\n\nSource code for create dataset: URL",
"## Template"
]
| [
53,
44,
2
]
| [
"passage: TAGS\n#task_categories-text2text-generation #task_categories-text-generation #size_categories-n<1K #language-Thai #license-cc0-1.0 #region-us \n# Dataset Card for \"thai_usembassy_th2en_prompt\"\n\nThis dataset made from pythainlp/thai_usembassy.\n\nSource code for create dataset: URL## Template"
]
|
7855668c5c2386b7b78fc99111304418119ee76b | # Dataset Card for "medical_meadow_mediqa_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hippocrates/medical_meadow_mediqa_train | [
"region:us"
]
| 2023-11-06T18:42:10+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30570668, "num_examples": 2208}], "download_size": 12800020, "dataset_size": 30570668}} | 2023-11-06T18:42:12+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "medical_meadow_mediqa_train"
More Information needed | [
"# Dataset Card for \"medical_meadow_mediqa_train\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"medical_meadow_mediqa_train\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"medical_meadow_mediqa_train\"\n\nMore Information needed"
]
|
bc50ac0f3cf16f22051531a24fd513d7c3277df4 | # Dataset Card for "hi_en_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ShrinivasSK/hi_en_1 | [
"region:us"
]
| 2023-11-06T18:54:23+00:00 | {"dataset_info": {"features": [{"name": "idx", "dtype": "int64"}, {"name": "tgt", "dtype": "string"}, {"name": "src", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6349061.7, "num_examples": 18000}, {"name": "test", "num_bytes": 705451.3, "num_examples": 2000}], "download_size": 3779852, "dataset_size": 7054513.0}} | 2023-11-06T19:06:53+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "hi_en_1"
More Information needed | [
"# Dataset Card for \"hi_en_1\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"hi_en_1\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"hi_en_1\"\n\nMore Information needed"
]
|
57e4fcb42558857034218b40600ed30165d24531 | # Dataset Card for "hi_en_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ShrinivasSK/hi_en_2 | [
"region:us"
]
| 2023-11-06T18:54:29+00:00 | {"dataset_info": {"features": [{"name": "idx", "dtype": "int64"}, {"name": "tgt", "dtype": "string"}, {"name": "src", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6376404.6, "num_examples": 18000}, {"name": "test", "num_bytes": 708489.4, "num_examples": 2000}], "download_size": 3796444, "dataset_size": 7084894.0}} | 2023-11-06T19:07:04+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "hi_en_2"
More Information needed | [
"# Dataset Card for \"hi_en_2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"hi_en_2\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"hi_en_2\"\n\nMore Information needed"
]
|
4f20da2f1a81aab9d331a9fd9834e00d148e5ca1 | # Dataset Card for "hi_en_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ShrinivasSK/hi_en_3 | [
"region:us"
]
| 2023-11-06T18:54:34+00:00 | {"dataset_info": {"features": [{"name": "idx", "dtype": "int64"}, {"name": "tgt", "dtype": "string"}, {"name": "src", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6366803.4, "num_examples": 18000}, {"name": "test", "num_bytes": 707422.6, "num_examples": 2000}], "download_size": 3789240, "dataset_size": 7074226.0}} | 2023-11-06T19:07:15+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "hi_en_3"
More Information needed | [
"# Dataset Card for \"hi_en_3\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"hi_en_3\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"hi_en_3\"\n\nMore Information needed"
]
|
425ae79370135ca632f98f26bb06fda8275dab29 | # Dataset Card for "kn_en_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ShrinivasSK/kn_en_1 | [
"region:us"
]
| 2023-11-06T18:54:38+00:00 | {"dataset_info": {"features": [{"name": "idx", "dtype": "int64"}, {"name": "tgt", "dtype": "string"}, {"name": "src", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3986574.3, "num_examples": 18000}, {"name": "test", "num_bytes": 442952.7, "num_examples": 2000}], "download_size": 2373508, "dataset_size": 4429527.0}} | 2023-11-06T19:07:26+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "kn_en_1"
More Information needed | [
"# Dataset Card for \"kn_en_1\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"kn_en_1\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"kn_en_1\"\n\nMore Information needed"
]
|
ef0c9a9c098723b806828ef4eb6981b49f9f920b | # Dataset Card for "kn_en_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ShrinivasSK/kn_en_2 | [
"region:us"
]
| 2023-11-06T18:54:43+00:00 | {"dataset_info": {"features": [{"name": "idx", "dtype": "int64"}, {"name": "tgt", "dtype": "string"}, {"name": "src", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3982082.4, "num_examples": 18000}, {"name": "test", "num_bytes": 442453.6, "num_examples": 2000}], "download_size": 2369798, "dataset_size": 4424536.0}} | 2023-11-06T19:07:36+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "kn_en_2"
More Information needed | [
"# Dataset Card for \"kn_en_2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"kn_en_2\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"kn_en_2\"\n\nMore Information needed"
]
|
2761b2f5928d7118ded58bd9b3b8839e42d3b05b | # Dataset Card for "kn_en_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ShrinivasSK/kn_en_3 | [
"region:us"
]
| 2023-11-06T18:54:47+00:00 | {"dataset_info": {"features": [{"name": "idx", "dtype": "int64"}, {"name": "tgt", "dtype": "string"}, {"name": "src", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3978395.1, "num_examples": 18000}, {"name": "test", "num_bytes": 442043.9, "num_examples": 2000}], "download_size": 2367278, "dataset_size": 4420439.0}} | 2023-11-06T19:07:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "kn_en_3"
More Information needed | [
"# Dataset Card for \"kn_en_3\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"kn_en_3\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"kn_en_3\"\n\nMore Information needed"
]
|
e50b84e6ebd06cdc37795fb980163eb94d431485 | # Dataset Card for "mr_en_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ShrinivasSK/mr_en_1 | [
"region:us"
]
| 2023-11-06T18:54:52+00:00 | {"dataset_info": {"features": [{"name": "idx", "dtype": "int64"}, {"name": "tgt", "dtype": "string"}, {"name": "src", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4586634.0, "num_examples": 18000}, {"name": "test", "num_bytes": 509626.0, "num_examples": 2000}], "download_size": 2687176, "dataset_size": 5096260.0}} | 2023-11-06T19:07:57+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "mr_en_1"
More Information needed | [
"# Dataset Card for \"mr_en_1\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"mr_en_1\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"mr_en_1\"\n\nMore Information needed"
]
|
972b0a00678c67e7b2350b0634c07b67f1e3cd33 | # Dataset Card for "te_en_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ShrinivasSK/te_en_1 | [
"region:us"
]
| 2023-11-06T18:54:57+00:00 | {"dataset_info": {"features": [{"name": "idx", "dtype": "int64"}, {"name": "tgt", "dtype": "string"}, {"name": "src", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4096206.9, "num_examples": 18000}, {"name": "test", "num_bytes": 455134.1, "num_examples": 2000}], "download_size": 2442401, "dataset_size": 4551341.0}} | 2023-11-06T19:08:08+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "te_en_1"
More Information needed | [
"# Dataset Card for \"te_en_1\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"te_en_1\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"te_en_1\"\n\nMore Information needed"
]
|
d1f0dbcf30f02872b6706d76d09e3a4bff4ccefd | # Dataset Card for "te_en_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ShrinivasSK/te_en_2 | [
"region:us"
]
| 2023-11-06T18:55:01+00:00 | {"dataset_info": {"features": [{"name": "idx", "dtype": "int64"}, {"name": "tgt", "dtype": "string"}, {"name": "src", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4065421.5, "num_examples": 18000}, {"name": "test", "num_bytes": 451713.5, "num_examples": 2000}], "download_size": 2431811, "dataset_size": 4517135.0}} | 2023-11-06T19:08:19+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "te_en_2"
More Information needed | [
"# Dataset Card for \"te_en_2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"te_en_2\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"te_en_2\"\n\nMore Information needed"
]
|
1c31b16e854ef383b50a2b8be9e0b2f5be231db0 | # Dataset Card for "te_en_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ShrinivasSK/te_en_3 | [
"region:us"
]
| 2023-11-06T18:55:06+00:00 | {"dataset_info": {"features": [{"name": "idx", "dtype": "int64"}, {"name": "tgt", "dtype": "string"}, {"name": "src", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4067874.0, "num_examples": 18000}, {"name": "test", "num_bytes": 451986.0, "num_examples": 2000}], "download_size": 2432870, "dataset_size": 4519860.0}} | 2023-11-06T19:12:56+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "te_en_3"
More Information needed | [
"# Dataset Card for \"te_en_3\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"te_en_3\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"te_en_3\"\n\nMore Information needed"
]
|
f815b8438ba47314961c597d196b046d05ab9ae7 |
## How to Start
```python
from datasets import load_dataset
import json

# Q&A pairs: question, answer, reference lines, related song id/title, ...
qas = load_dataset("hobeter/JJQA", "qa")["train"]
# Song information: id, title, name and full lyrics
songs = load_dataset("hobeter/JJQA", "song")["train"]
# Mapping from song id to the corresponding row index in `songs`
song_index = json.loads(load_dataset("hobeter/JJQA", "song_index")["train"]["dic"][0])[0]
```
# JJQA: a Chinese QA dataset on the lyrics of JJ Lin's songs
**GitHub: https://github.com/bebetterest/JJQA**
Large Language Models (LLMs) have shown powerful capabilities in text understanding, analysis and generation. They seem to be a good tool for text-style knowledge based question answering (QA), where semantically retrieving related texts, understanding them and generating correct answers are required.
However, many existing QA datasets are not challenging enough. First, the given text-style knowledge might be easy to perceive and analyse. Second, the questions & answers follow commonsense. Thus, LLMs may benefit from their language-modeling training and even take shortcuts. For this reason, we want to build a new text-style knowledge based logical QA dataset where the text-style knowledge is tricky and LLMs are unlikely to give correct answers without successfully retrieving and reasoning over the related texts.
Chinese is a language where each single character could contain abundant meanings while just a few words, especially pieces of lyrics, are able to express complex conceptions, feelings and impressions. <font color=Purple size="">Besides, Junjie Lin, known as **[JJ Lin](https://www.jjlin.com/home/news/all)**, is a famous Singaporean Mandarin singer.</font> The lyrics of his songs are always imaginative, poetic and romantic.
Hence, we propose JJQA, a Chinese text-style knowledge based question answering dataset on the lyrics of JJ Lin's songs, where related lyrics are provided as text-style knowledge for retrieval while the questions and answers are based on the lyrics. The Q&As are always abstract and anti-commonsense. For example, according to the related lyrics of a song called "爱情Yogurt", the question is "热量有什么作用?" ("What is the impact of heat?") and the answer is "降低爱情的过敏反应。" ("Ease the anaphylaxis of love"). It is indeed ridiculous and funny (you can find more in the dataset)🤪. Even human beings could not give the right answer without knowing the related lyrics. In addition, LLMs are not likely to naturally generate the right answer from their training alone. Therefore, only if the related lyrics are retrieved and understood can the right answers be generated by LLMs.
## Dataset Details
Using [QQMusicSpider](https://github.com/yangjianxin1/QQMusicSpider), we crawled the lyrics of all of JJ Lin's songs from [QQMusic](https://y.qq.com/). After data cleaning and label annotation, 648 Q&As with 181 related song lyrics are included.
Three fields ("qa", "song", "song_index") are included in JJQA.
"qa" contains Q&As with 6 features. "q" and "a" are a question and the corresponding answer. "song_title" and "song_id" are the title and the corresponding id of the related song. "id" is the id for the Q&A. "rf" locates the lines of lyrics for reference, splited by a space " ".
"song" contains information of songs with 4 features. "title" and "name" are the title and the corresponding name of the song. "id" is the id of the song. "lyric" is the lyrics of the song, where each line is splited by "\n".
"song_index" contains one dictionary, whose keys are the ids of songs and values are indexes of the corresponding song in "song" field, to align QAs with the corresponding songs.
## Baselines
We evaluate three baseline methods on JJQA. The first one (*wo_info*) "asks" the question directly without any additional lyrics, which shows the performance of uninformed LLMs; the second one (*w_song*) includes the whole lyrics of the related song as in-context information; the third one (*w_rf*) includes only the related lyric lines. *w_song* and *w_rf* serve as two reference lines for retrieval-based methods.
Six LLMs (*ernie-turbo*, *chatglm2_6b_32k*, *qwen-turbo*, *baichuan2-7b-chat-v1*, *gpt-4*, *gpt-3.5-turbo*) are included. We run *ernie-turbo* and *chatglm2_6b_32k* on the [qianfan platform](https://cloud.baidu.com/product/wenxinworkshop); *qwen-turbo* and *baichuan2-7b-chat-v1* on the [dashscope platform](https://dashscope.aliyun.com/); and *gpt-4* and *gpt-3.5-turbo* on the [openai platform](https://platform.openai.com/).
We consider [BERTScore](https://github.com/Tiiiger/bert_score) with *rescale_with_baseline=True* as the metric.
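
For reference, here is a minimal sketch of how such scores could be computed with the bert_score package (treating the answers as Chinese, i.e. lang="zh", is an assumption):

```python
from bert_score import score  # pip install bert-score

predictions = ["降低爱情的过敏反应。"]  # model outputs
references = ["降低爱情的过敏反应。"]   # gold answers from the "a" field

# rescale_with_baseline=True matches the setting stated above.
P, R, F1 = score(predictions, references, lang="zh", rescale_with_baseline=True)
print(F1.mean().item())
```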
The results are as follows.
|LLM|Method|Precision|Recall|F1|Date|
|:---:|:---:|:---:|:---:|:---:|:---:|
|*ernie-turbo*|*wo_info*|-0.0350|0.1568|0.0511|2023/11/06|
|*ernie-turbo*|*w_song*|0.2472|0.5765|0.3895|2023/11/06|
|*ernie-turbo*|*w_rf*|0.3600|0.6528|0.4864|2023/11/06|
|*chatglm2_6b_32k*|*wo_info*|0.0466|0.1787|0.1066|2023/11/05|
|*chatglm2_6b_32k*|*w_song*|0.2361|0.4606|0.3335|2023/11/05|
|*chatglm2_6b_32k*|*w_rf*|0.4650|0.6477|0.5436|2023/11/05|
|*qwen-turbo*|*wo_info*|0.2331|0.2150|0.2208|2023/11/05|
|*qwen-turbo*|*w_song*|0.7673|0.8041|0.7804|2023/11/05|
|*qwen-turbo*|*w_rf*|0.8600|0.8251|0.8386|2023/11/05|
|*baichuan2-7b-chat-v1*|*wo_info*|0.1755|0.2012|0.1857|2023/11/05|
|*baichuan2-7b-chat-v1*|*w_song*|0.4635|0.6324|0.5371|2023/11/05|
|*baichuan2-7b-chat-v1*|*w_rf*|0.6567|0.7272|0.6851|2023/11/05|
|*gpt-3.5-turbo*|*wo_info*|0.2201|0.1983|0.2061|2023/11/06|
|*gpt-3.5-turbo*|*w_song*|0.8031|0.7812|0.7884|2023/11/06|
|*gpt-3.5-turbo*|*w_rf*|0.8110|0.7484|0.7758|2023/11/06|
|*gpt-4*|*wo_info*|0.2426|0.2377|0.2376|2023/11/06|
|*gpt-4*|*w_song*|0.8405|0.8587|0.8464|2023/11/06|
|*gpt-4*|*w_rf*|0.8865|0.8643|0.8732|2023/11/06|
It is worth noting that *Date* stands for the evaluation time (UTC+8). In addition, a small number of samples could not be evaluated on the dashscope platform because of its safety system; we simply skip these Q&As. (1 sample for *qwen-turbo* *wo_info*; 3 samples for *qwen-turbo* *w_song*; 3 samples for *baichuan2-7b-chat-v1* *w_song*)
| hobeter/JJQA | [
"task_categories:question-answering",
"size_categories:n<1K",
"license:apache-2.0",
"music",
"art",
"region:us"
]
| 2023-11-06T18:55:51+00:00 | {"license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["question-answering"], "dataset_info": [{"config_name": "qa", "features": [{"name": "q", "dtype": "string"}, {"name": "a", "dtype": "string"}, {"name": "rf", "dtype": "string"}, {"name": "song_title", "dtype": "string"}, {"name": "song_id", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 67824, "num_examples": 648}], "download_size": 134589, "dataset_size": 67824}, {"config_name": "song", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "lyric", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 253605, "num_examples": 181}], "download_size": 276024, "dataset_size": 253605}, {"config_name": "song_index", "features": [{"name": "dic", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2872, "num_examples": 1}], "download_size": 4168, "dataset_size": 2872}], "tags": ["music", "art"]} | 2023-11-06T19:30:08+00:00 | []
| []
| TAGS
#task_categories-question-answering #size_categories-n<1K #license-apache-2.0 #music #art #region-us
| How to Start
------------
JJQA: a Chinese QA dataset on the lyrics of JJ Lin's songs
==========================================================
GitHub: URL
Large Language Models (LLMs) have shown powerful capability of text understanding, analysis and generation. It seems a good tool for text-style knowledge based question answering (QA) where semantically retrieving related texts, understanding them and generating correct answers are required.
However, many feasible QA datasets are not challenging enough. First, given text-style knowledge might be easy to perceive and analyse. Second, the questions & answers follow commonsense. Thus, LLMs may benefit from training of language modeling and even take a shortcut. In this case, we want to build a new text-style knowledge based logical QA dataset where the text-style knowledge is tricky and LLMs are not likely to give correct answers without successfully retrieving and reasoning related texts.
Chinese is a language where each single character could contain abundant meanings while just a few words, especially pieces of lyrics, are able to express complex conceptions, feelings and impressions. Besides, Junjie Lin, known as JJ Lin, is a famous Singaporean Mandarin singer. The lyrics of his songs are always imaginative, poetic and romantic.
Hense, we propose JJQA, a Chinese text-style knowledge based question answering dataset on the lyrics of JJ Lin's songs, where related lyrics are provided as text-style knowledge for retrieval while the questions and answers are based on the lyrics. The Q&As are always abstract and follow anti-commonsense. For example, according to the related lyrics of a song called "爱情Yogurt", the question is "热量有什么作用?" ("What is the impact of heat?") and the answer is "降低爱情的过敏反应。" ("Ease the anaphylaxis of love"). It is indeed ridiculous and funny (you could find more in the dataset). Even human beings could not give the right answer without knowing the related lyrics. In addition, LLMs are not likely to naturally generate the right answer with the capability from training. Therefore, only if the related lyrics are retrieved and understood, are the right answers possibly generated By LLMs.
Dataset Details
---------------
According to QQMusicSpider, we crawled lyrics of all songs of JJ Lin from QQMusic. After data cleaning and label annotation, 648 Q&As with 181 related song lyrics are included.
Three fields ("qa", "song", "song\_index") are included in JJQA.
"qa" contains Q&As with 6 features. "q" and "a" are a question and the corresponding answer. "song\_title" and "song\_id" are the title and the corresponding id of the related song. "id" is the id for the Q&A. "rf" locates the lines of lyrics for reference, splited by a space " ".
"song" contains information of songs with 4 features. "title" and "name" are the title and the corresponding name of the song. "id" is the id of the song. "lyric" is the lyrics of the song, where each line is splited by "\n".
"song\_index" contains one dictionary, whose keys are the ids of songs and values are indexes of the corresponding song in "song" field, to align QAs with the corresponding songs.
Baselines
---------
We evaluate three baseline methods on JJQA. The first one (*wo\_info*) is to "ask" the question directly without any additional lyric, which is to show the performance of uninformed LLMs; the second one (*w\_song*) is to include whole lyrics of the related song as in-contexts; the third one (*w\_rf*) is to just include related lyrics. *w\_song* and *w\_rf* are two reference lines for retrieval-based method.
Six feasible LLMs (*ernie-turbo*, *chatglm2\_6b\_32k*, *qwen-turbo*, *baichuan2-7b-chat-v1*, *gpt-4*, *gpt-3.5-turbo*) are included. We apply *ernie-turbo* and *chatglm2\_6b\_32k* in qianfan platform; *qwen-turbo* and *baichuan2-7b-chat-v1* in dashscope platform; *gpt-4* and *gpt-3.5-turbo* in openai platform.
We consider BERTScore with *rescale\_with\_baseline=True* as the metric.
The results are as follows.
It is worth noting that *Date* stands for the time (UTC+8) for evaluation. In addition, a small number of samples are not feasible in the dashscope platform because of its safety system. We just skip these Q&As. (1 sample for *qwen-turbo* *wo\_info*; 3 samples for *qwen-turbo* *w\_song*; 3 samples for *baichuan2-7b-chat-v1* *w\_song*)
| []
| [
"TAGS\n#task_categories-question-answering #size_categories-n<1K #license-apache-2.0 #music #art #region-us \n"
]
| [
40
]
| [
"passage: TAGS\n#task_categories-question-answering #size_categories-n<1K #license-apache-2.0 #music #art #region-us \n"
]
|
7cb6e3e1d2612cbacfed8be2a0ef928214e875aa | # Dataset Card for "medical_meadow_mmmlu_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hippocrates/medical_meadow_mmmlu_train | [
"region:us"
]
| 2023-11-06T19:07:48+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3507993, "num_examples": 3787}], "download_size": 1633148, "dataset_size": 3507993}} | 2023-11-06T19:07:49+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "medical_meadow_mmmlu_train"
More Information needed | [
"# Dataset Card for \"medical_meadow_mmmlu_train\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"medical_meadow_mmmlu_train\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"medical_meadow_mmmlu_train\"\n\nMore Information needed"
]
|
bec23c96e91f9b67cc11a503a0caf46e0381816a | # 📚🤖 Querypls-prompt2sql
## Dataset Information
The Querypls-prompt2sql dataset is designed for text classification tasks related to generating SQL queries. It contains the following features:
- **Context:** String
- **Answer:** String
- **Autotrain Text:** String
The dataset is split into two parts:
- **Training Set:**
- Number of Examples: 78,577
- Size: 17,419,604 bytes
- **Validation Set:**
- Number of Examples: 78,577
- Size: 17,419,604 bytes
The total download size of the dataset is 13,675,124 bytes, and the dataset size is 34,839,208 bytes.
## Dataset Configuration
The default configuration includes the following data files:
- **Training Split:**
- Path: data/train-*
- **Validation Split:**
- Path: data/validation-*
The dataset is licensed under Apache-2.0.
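
A minimal loading sketch with the datasets library (the repository id follows this card; what each column contains is inferred from the feature list above):

```python
from datasets import load_dataset

ds = load_dataset("samadpls/querypls-prompt2sql-dataset")
train, validation = ds["train"], ds["validation"]

example = train[0]
print(example["context"])  # prompt / schema context (inferred)
print(example["answer"])   # target SQL query (inferred)
```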
## Task Categories
- Text Classification
## Language
- English
## How to Contribute
For information on contributing to the dataset cards, please refer to the [Hugging Face Datasets Contribution Guidelines](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards).
| samadpls/querypls-prompt2sql-dataset | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"region:us"
]
| 2023-11-06T19:09:13+00:00 | {"language": ["en"], "license": "apache-2.0", "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "autotrain_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 17419604, "num_examples": 78577}, {"name": "validation", "num_bytes": 17419604, "num_examples": 78577}], "download_size": 13675124, "dataset_size": 34839208}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-26T19:21:08+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #language-English #license-apache-2.0 #region-us
| # Querypls-prompt2sql
## Dataset Information
The Querypls-prompt2sql dataset is designed for text classification tasks related to generating SQL queries. It contains the following features:
- Context: String
- Answer: String
- Autotrain Text: String
The dataset is split into two parts:
- Training Set:
- Number of Examples: 78,577
- Size: 17,419,604 bytes
- Validation Set:
- Number of Examples: 78,577
- Size: 17,419,604 bytes
The total download size of the dataset is 13,675,124 bytes, and the dataset size is 34,839,208 bytes.
## Dataset Configuration
The default configuration includes the following data files:
- Training Split:
- Path: data/train-*
- Validation Split:
- Path: data/validation-*
The dataset is licensed under Apache-2.0.
## Task Categories
- Text Classification
## Language
- English
## How to Contribute
For information on contributing to the dataset cards, please refer to the Hugging Face Datasets Contribution Guidelines.
| [
"# Querypls-prompt2sql",
"## Dataset Information\n\nThe Querypls-prompt2sql dataset is designed for text classification tasks related to generating SQL queries. It contains the following features:\n\n- Context: String\n- Answer: String\n- Autotrain Text: String\n\nThe dataset is split into two parts:\n\n- Training Set:\n - Number of Examples: 78,577\n - Size: 17,419,604 bytes\n\n- Validation Set:\n - Number of Examples: 78,577\n - Size: 17,419,604 bytes\n\nThe total download size of the dataset is 13,675,124 bytes, and the dataset size is 34,839,208 bytes.",
"## Dataset Configuration\n\nThe default configuration includes the following data files:\n\n- Training Split:\n - Path: data/train-*\n\n- Validation Split:\n - Path: data/validation-*\n\nThe dataset is licensed under Apache-2.0.",
"## Task Categories\n\n- Text Classification",
"## Language\n\n- English",
"## How to Contribute\n\nFor information on contributing to the dataset cards, please refer to the Hugging Face Datasets Contribution Guidelines."
]
| [
"TAGS\n#task_categories-text-classification #language-English #license-apache-2.0 #region-us \n",
"# Querypls-prompt2sql",
"## Dataset Information\n\nThe Querypls-prompt2sql dataset is designed for text classification tasks related to generating SQL queries. It contains the following features:\n\n- Context: String\n- Answer: String\n- Autotrain Text: String\n\nThe dataset is split into two parts:\n\n- Training Set:\n - Number of Examples: 78,577\n - Size: 17,419,604 bytes\n\n- Validation Set:\n - Number of Examples: 78,577\n - Size: 17,419,604 bytes\n\nThe total download size of the dataset is 13,675,124 bytes, and the dataset size is 34,839,208 bytes.",
"## Dataset Configuration\n\nThe default configuration includes the following data files:\n\n- Training Split:\n - Path: data/train-*\n\n- Validation Split:\n - Path: data/validation-*\n\nThe dataset is licensed under Apache-2.0.",
"## Task Categories\n\n- Text Classification",
"## Language\n\n- English",
"## How to Contribute\n\nFor information on contributing to the dataset cards, please refer to the Hugging Face Datasets Contribution Guidelines."
]
| [
29,
12,
145,
56,
8,
4,
31
]
| [
"passage: TAGS\n#task_categories-text-classification #language-English #license-apache-2.0 #region-us \n# Querypls-prompt2sql## Dataset Information\n\nThe Querypls-prompt2sql dataset is designed for text classification tasks related to generating SQL queries. It contains the following features:\n\n- Context: String\n- Answer: String\n- Autotrain Text: String\n\nThe dataset is split into two parts:\n\n- Training Set:\n - Number of Examples: 78,577\n - Size: 17,419,604 bytes\n\n- Validation Set:\n - Number of Examples: 78,577\n - Size: 17,419,604 bytes\n\nThe total download size of the dataset is 13,675,124 bytes, and the dataset size is 34,839,208 bytes.## Dataset Configuration\n\nThe default configuration includes the following data files:\n\n- Training Split:\n - Path: data/train-*\n\n- Validation Split:\n - Path: data/validation-*\n\nThe dataset is licensed under Apache-2.0.## Task Categories\n\n- Text Classification## Language\n\n- English## How to Contribute\n\nFor information on contributing to the dataset cards, please refer to the Hugging Face Datasets Contribution Guidelines."
]
|
4afc06be39fdc1919b42651abd4e1f14b0812b75 | # Dataset Card for "multiwoz_turns_v22"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Brendan/multiwoz_turns_v22 | [
"region:us"
]
| 2023-11-06T19:14:21+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid_20p_ablation", "path": "data/valid_20p_ablation-*"}, {"split": "valid_10p", "path": "data/valid_10p-*"}, {"split": "valid_50p", "path": "data/valid_50p-*"}, {"split": "1p_train_v1", "path": "data/1p_train_v1-*"}, {"split": "1p_train_v2", "path": "data/1p_train_v2-*"}, {"split": "1p_train_v3", "path": "data/1p_train_v3-*"}, {"split": "5p_train_v1", "path": "data/5p_train_v1-*"}, {"split": "5p_train_v2", "path": "data/5p_train_v2-*"}, {"split": "5p_train_v3", "path": "data/5p_train_v3-*"}, {"split": "10p_train_v1", "path": "data/10p_train_v1-*"}, {"split": "10p_train_v2", "path": "data/10p_train_v2-*"}, {"split": "10p_train_v3", "path": "data/10p_train_v3-*"}, {"split": "train_evaluable_only", "path": "data/train_evaluable_only-*"}, {"split": "valid_evaluable_only", "path": "data/valid_evaluable_only-*"}]}], "dataset_info": {"features": [{"name": "dialogue_id", "dtype": "string"}, {"name": "turn_id", "dtype": "int8"}, {"name": "domains", "sequence": "string"}, {"name": "system_utterances", "sequence": "string"}, {"name": "user_utterances", "sequence": "string"}, {"name": "slot_values", "struct": [{"name": "hotel", "struct": [{"name": "price range", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "parking", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book stay", "dtype": "string"}, {"name": "stars", "dtype": "string"}, {"name": "internet", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "area", "dtype": "string"}]}, {"name": "train", "struct": [{"name": "arrive by", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}]}, {"name": "attraction", "struct": [{"name": "area", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "restaurant", "struct": [{"name": "price range", "dtype": "string"}, {"name": "area", "dtype": "string"}, {"name": "food", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book time", "dtype": "string"}]}, {"name": "hospital", "struct": [{"name": "department", "dtype": "string"}]}, {"name": "taxi", "struct": [{"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "arrive by", "dtype": "string"}]}, {"name": "bus", "struct": [{"name": "departure", "dtype": "string"}, {"name": "destination", "dtype": "string"}, {"name": "leave at", "dtype": "string"}, {"name": "day", "dtype": "string"}]}, {"name": "police", "struct": [{"name": "name", "dtype": "string"}]}]}, {"name": "turn_slot_values", "struct": [{"name": "hotel", "struct": [{"name": "price range", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "parking", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book stay", "dtype": "string"}, {"name": "stars", "dtype": "string"}, {"name": "internet", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "area", "dtype": "string"}]}, {"name": 
"train", "struct": [{"name": "arrive by", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}]}, {"name": "attraction", "struct": [{"name": "area", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "restaurant", "struct": [{"name": "price range", "dtype": "string"}, {"name": "area", "dtype": "string"}, {"name": "food", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book time", "dtype": "string"}]}, {"name": "hospital", "struct": [{"name": "department", "dtype": "string"}]}, {"name": "taxi", "struct": [{"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "arrive by", "dtype": "string"}]}, {"name": "bus", "struct": [{"name": "departure", "dtype": "string"}, {"name": "destination", "dtype": "string"}, {"name": "leave at", "dtype": "string"}, {"name": "day", "dtype": "string"}]}, {"name": "police", "struct": [{"name": "name", "dtype": "string"}]}]}, {"name": "last_slot_values", "struct": [{"name": "hotel", "struct": [{"name": "price range", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "parking", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book stay", "dtype": "string"}, {"name": "stars", "dtype": "string"}, {"name": "internet", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "area", "dtype": "string"}]}, {"name": "train", "struct": [{"name": "arrive by", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}]}, {"name": "attraction", "struct": [{"name": "area", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "restaurant", "struct": [{"name": "price range", "dtype": "string"}, {"name": "area", "dtype": "string"}, {"name": "food", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "book day", "dtype": "string"}, {"name": "book people", "dtype": "string"}, {"name": "book time", "dtype": "string"}]}, {"name": "hospital", "struct": [{"name": "department", "dtype": "string"}]}, {"name": "taxi", "struct": [{"name": "leave at", "dtype": "string"}, {"name": "destination", "dtype": "string"}, {"name": "departure", "dtype": "string"}, {"name": "arrive by", "dtype": "string"}]}, {"name": "bus", "struct": [{"name": "departure", "dtype": "string"}, {"name": "destination", "dtype": "string"}, {"name": "leave at", "dtype": "string"}, {"name": "day", "dtype": "string"}]}, {"name": "police", "struct": [{"name": "name", "dtype": "string"}]}]}, {"name": "last_system_response_acts", "sequence": "string"}, {"name": "system_response_acts", "sequence": "string"}, {"name": "system_response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 84139088, "num_examples": 56776}, {"name": "validation", "num_bytes": 11271758, "num_examples": 7374}, {"name": "test", "num_bytes": 11295224, "num_examples": 7372}, {"name": "valid_20p_ablation", "num_bytes": 2273000.2910225117, "num_examples": 1487}, {"name": "valid_10p", "num_bytes": 1114335.7176566315, 
"num_examples": 729}, {"name": "valid_50p", "num_bytes": 5667979.2058584215, "num_examples": 3708}, {"name": "1p_train_v1", "num_bytes": 798770.0512892772, "num_examples": 539}, {"name": "1p_train_v2", "num_bytes": 890650.8364097506, "num_examples": 601}, {"name": "1p_train_v3", "num_bytes": 861011.8734676624, "num_examples": 581}, {"name": "5p_train_v1", "num_bytes": 4245781.441454136, "num_examples": 2865}, {"name": "5p_train_v2", "num_bytes": 4103514.419332112, "num_examples": 2769}, {"name": "5p_train_v3", "num_bytes": 4220588.32295336, "num_examples": 2848}, {"name": "10p_train_v1", "num_bytes": 8368561.186698605, "num_examples": 5647}, {"name": "10p_train_v2", "num_bytes": 8447104.438495139, "num_examples": 5700}, {"name": "10p_train_v3", "num_bytes": 8398200.149640692, "num_examples": 5667}, {"name": "train_evaluable_only", "num_bytes": 83498886.4004509, "num_examples": 56344}, {"name": "valid_evaluable_only", "num_bytes": 11261057.931380527, "num_examples": 7367}], "download_size": 39840521, "dataset_size": 250855512.26610973}} | 2023-11-11T07:21:13+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "multiwoz_turns_v22"
More Information needed | [
"# Dataset Card for \"multiwoz_turns_v22\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"multiwoz_turns_v22\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"multiwoz_turns_v22\"\n\nMore Information needed"
]
|
7a27044f6da6f1eb38a0c7b920550590adf70b61 | # Dataset Card for "hi_te_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ShrinivasSK/hi_te_1 | [
"region:us"
]
| 2023-11-06T19:21:57+00:00 | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5287422.6, "num_examples": 18000}, {"name": "test", "num_bytes": 587491.4, "num_examples": 2000}], "download_size": 2682481, "dataset_size": 5874914.0}} | 2023-11-06T19:22:07+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "hi_te_1"
More Information needed | [
"# Dataset Card for \"hi_te_1\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"hi_te_1\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"hi_te_1\"\n\nMore Information needed"
]
|
2612eca86c09c0c48035df5e71728fcdf5501ee4 | # Dataset Card for "hi_kn_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ShrinivasSK/hi_kn_1 | [
"region:us"
]
| 2023-11-06T19:22:07+00:00 | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5155860.6, "num_examples": 18000}, {"name": "test", "num_bytes": 572873.4, "num_examples": 2000}], "download_size": 2612672, "dataset_size": 5728734.0}} | 2023-11-06T19:22:17+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "hi_kn_1"
More Information needed | [
"# Dataset Card for \"hi_kn_1\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"hi_kn_1\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"hi_kn_1\"\n\nMore Information needed"
]
|
29ecd727501b7f027fcaf0b0df882b6ba8f0a0a7 | # Dataset Card for "cord_donut_multitask"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | zyxleo/cord_donut_multitask | [
"region:us"
]
| 2023-11-06T19:32:25+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "task", "dtype": "string"}, {"name": "image_path", "dtype": "string"}, {"name": "ground_truth", "dtype": "string"}, {"name": "labels", "sequence": "int64"}, {"name": "input_ids", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 1260759, "num_examples": 800}, {"name": "test", "num_bytes": 93059, "num_examples": 100}, {"name": "validation", "num_bytes": 86619, "num_examples": 100}], "download_size": 299877, "dataset_size": 1440437}} | 2023-11-07T04:44:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "cord_donut_multitask"
More Information needed | [
"# Dataset Card for \"cord_donut_multitask\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"cord_donut_multitask\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"cord_donut_multitask\"\n\nMore Information needed"
]
|
71257694f1dfec4579d746e311756b80e49f684e | # Dataset Card for "contrastive-matters-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ajax-law/contrastive-matters-2 | [
"region:us"
]
| 2023-11-06T19:35:17+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text_a", "dtype": "string"}, {"name": "text_b", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3232779, "num_examples": 6549}, {"name": "test", "num_bytes": 41112, "num_examples": 90}], "download_size": 100367, "dataset_size": 3273891}} | 2023-11-06T19:35:19+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "contrastive-matters-2"
More Information needed | [
"# Dataset Card for \"contrastive-matters-2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"contrastive-matters-2\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"contrastive-matters-2\"\n\nMore Information needed"
]
|
1a5e71bb7f1ad0cac7172fe5085bfd5259f347c7 | # Dataset Card for "humansleepproject-rr-pretraining"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | emi429/humansleepproject-rr-pretraining | [
"region:us"
]
| 2023-11-06T19:48:00+00:00 | {"dataset_info": {"features": [{"name": "rr_intervals", "sequence": "float64"}, {"name": "patient_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 119094553, "num_examples": 469}], "download_size": 19848443, "dataset_size": 119094553}} | 2023-11-08T18:46:04+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "humansleepproject-rr-pretraining"
More Information needed | [
"# Dataset Card for \"humansleepproject-rr-pretraining\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"humansleepproject-rr-pretraining\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"humansleepproject-rr-pretraining\"\n\nMore Information needed"
]
|
355188db0439b89ae0032b401c1596a87b831d0c | # Dataset Card for "ticket_donut_multitask"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | zyxleo/ticket_donut_multitask | [
"region:us"
]
| 2023-11-06T19:59:20+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "task", "dtype": "string"}, {"name": "ground_truth", "dtype": "string"}, {"name": "labels", "sequence": "int64"}, {"name": "input_ids", "sequence": "int64"}, {"name": "image_path", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1770622, "num_examples": 1520}, {"name": "test", "num_bytes": 305841, "num_examples": 398}], "download_size": 296797, "dataset_size": 2076463}} | 2023-11-07T04:55:46+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ticket_donut_multitask"
More Information needed | [
"# Dataset Card for \"ticket_donut_multitask\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ticket_donut_multitask\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ticket_donut_multitask\"\n\nMore Information needed"
]
|
823da0c598dfde5e648cc52c08d8f0e528051486 | # Dataset Card for "iCliniq_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hippocrates/iCliniq_train | [
"region:us"
]
| 2023-11-06T20:21:45+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12755267, "num_examples": 7321}], "download_size": 6748421, "dataset_size": 12755267}} | 2023-11-06T20:21:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "iCliniq_train"
More Information needed | [
"# Dataset Card for \"iCliniq_train\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"iCliniq_train\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"iCliniq_train\"\n\nMore Information needed"
]
|
e6b9e49ee23f3c3bd44b35582a5c81b39de665ef | # Dataset Card for "HealthCareMagic_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hippocrates/HealthCareMagic_train | [
"region:us"
]
| 2023-11-06T20:29:52+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 202222080, "num_examples": 112165}], "download_size": 112037983, "dataset_size": 202222080}} | 2023-11-06T20:30:01+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "HealthCareMagic_train"
More Information needed | [
"# Dataset Card for \"HealthCareMagic_train\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"HealthCareMagic_train\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"HealthCareMagic_train\"\n\nMore Information needed"
]
|