| Column | Type | Min length | Max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | list | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | list | 0 | 25 |
| languages | list | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | list | 0 | 352 |
| processed_texts | list | 1 | 353 |
| tokens_length | list | 1 | 353 |
| input_texts | list | 1 | 40 |
**sha:** 3027ae53ccba297be9e16ae0b4728f2a06639057

**text:**

# Dataset Card for "yarn-train-tokenized-32k-mistral"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

**id:** emozilla/yarn-train-tokenized-32k-mistral

**tags:** ["region:us"]

**created_at:** 2023-10-21T03:48:16+00:00

**metadata:** {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 44335107704, "num_examples": 104074}], "download_size": 12138496030, "dataset_size": 44335107704}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}

**last_modified:** 2023-10-21T03:56:08+00:00

**arxiv:** []

**languages:** []
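The metadata above describes a single pre-tokenized `train` split (104,074 examples, roughly 44 GB) with `input_ids`, `attention_mask`, and `labels` sequences. As a minimal sketch, assuming the `datasets` library, one might inspect a row in streaming mode rather than downloading the whole split:

```python
# Minimal sketch (assumes the `datasets` library is installed).
# Streaming avoids downloading the ~44 GB train split outright.
from datasets import load_dataset

ds = load_dataset("emozilla/yarn-train-tokenized-32k-mistral",
                  split="train", streaming=True)

row = next(iter(ds))
# Features per the metadata: input_ids (int32), attention_mask (int8),
# labels (int64).
print(sorted(row.keys()), len(row["input_ids"]))
```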
**sha:** c459612fbd74d57d18e924371cc85c0b1f310dda

**text:**
# DIFrauD - Domain Independent Fraud Detection Benchmark
The Domain Independent Fraud Detection Benchmark is a labeled corpus containing 95,854 samples of deceptive
and truthful texts from a number of independent domains and tasks. Deception takes many forms; for this corpus
we made sure to gather strictly real examples of deception that are intentionally malicious and cause real harm,
even though they often have very little in common. Covering seven domains, the benchmark is designed to serve
as a representative slice of the security challenges that remain open problems today.
## DATASET
The entire dataset contains 95,854 samples: 37,282 deceptive and 58,572 non-deceptive, across 7 independent domains.
Each task is (or has been converted to) a binary classification problem where `y` is an indicator of deception.
1) **Phishing** (2020 Email phishing benchmark with manually labeled emails)
*- total: 15272 deceptive: 6074 non-deceptive: 9198*
2) **Fake News** (News Articles)
*- total: 20456 deceptive: 8832 non-deceptive: 11624*
3) **Political Statements** (claims and statements by politicians and other entities, derived from PolitiFact by relabeling the LIAR dataset)
*- total: 12497 deceptive: 8042 non-deceptive: 4455*
4) **Product Reviews** (Amazon product reviews)
*- total: 20971 deceptive: 10492 non-deceptive: 10479*
5) **Job Scams** (Job postings on an online board)
*- total: 14295 deceptive: 599 non-deceptive: 13696*
6) **SMS** (combination of the SMS Spam collection from the UCI repository and an SMS phishing dataset)
*- total: 6574 deceptive: 1274 non-deceptive: 5300*
7) **Twitter Rumours** (collection of rumours from the PHEME dataset, covering multiple topics)
*- total: 5789 deceptive: 1969 non-deceptive: 3820*
Each domain was constructed from one or more source datasets. Some tasks were not initially binary and had to be relabeled.
The inputs vary wildly both stylistically and syntactically, as well as in the goal of the deception
(or the absence thereof) in the context of each dataset. Nonetheless, all seven datasets contain a significant
fraction of texts that are meant to deceive the reader one way or another.
Each subdirectory/config contains the domain/individual dataset split into three files,
`train.jsonl`, `test.jsonl`, and `validation.jsonl`,
which contain the train, test, and validation sets, respectively.
The splits are:
- train: 80%
- test: 10%
- validation: 10%
The sampling process was random with seed=42. It was stratified with respect to `y` (label) for each domain.
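The stated protocol (random sampling, seed=42, stratified on `y`, 80/10/10) can be reproduced with a two-stage split. The sketch below is illustrative rather than the authors' actual script, and it assumes scikit-learn:

```python
# Illustrative sketch of the split protocol: random 80/10/10 with
# seed=42, stratified on the label y (not the authors' code).
from sklearn.model_selection import train_test_split

def split_80_10_10(texts, y, seed=42):
    # First carve off 20% of the data, stratified on y...
    x_train, x_rest, y_train, y_rest = train_test_split(
        texts, y, test_size=0.2, random_state=seed, stratify=y)
    # ...then split that 20% evenly into test and validation sets.
    x_test, x_valid, y_test, y_valid = train_test_split(
        x_rest, y_rest, test_size=0.5, random_state=seed, stratify=y_rest)
    return (x_train, y_train), (x_test, y_test), (x_valid, y_valid)
```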
### Fields
Each `jsonl` file has two fields (columns): `text` (string) and `label` (integer).
`text` contains a statement or a claim that is either deceptive or truthful.
It is guaranteed to be valid Unicode, less than 1 million characters, and contains no empty entries or missing values.
`label` answers the question of whether the text is deceptive: `1` means yes, it is deceptive; `0` means no,
the text is not deceptive (it is truthful).
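Since each domain is exposed as a config with `train`/`test`/`validation` splits (see the metadata at the end of this card), a single domain can be loaded in one call. A minimal sketch, assuming the Hub id `difraud/difraud` and the `datasets` library:

```python
# Minimal sketch: load one domain config and inspect a record.
from datasets import load_dataset

sms = load_dataset("difraud/difraud", "sms")  # train / test / validation
example = sms["train"][0]
print(example["text"][:80], example["label"])  # label: 1 = deceptive, 0 = truthful
```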
### Processing and Cleaning
Each dataset has been cleaned using Cleanlab. Non-English entries, erroneous (parser-error) entries, empty entries, duplicate entries,
and entries shorter than 2 characters or longer than 1,000,000 characters were all removed.
Labels were manually curated and corrected in cases of clear error.
Whitespace, quotes, bullet points, and Unicode were normalized.
### Layout
The directory layout of `difraud` is as follows:

```
difraud/
    fake_news/
        train.jsonl
        test.jsonl
        validation.jsonl
        README.md
    ...
    sms/
        train.jsonl
        test.jsonl
        validation.jsonl
        README.md
    README.md
    LICENSE.txt
```
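Each split file is plain JSON Lines, so it can also be read without any special tooling. A small sketch (the path assumes the layout above, relative to wherever the dataset was downloaded):

```python
import json

def read_jsonl(path):
    """Read one split file into a list of {"text": ..., "label": ...} dicts."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

rows = read_jsonl("difraud/sms/train.jsonl")
print(len(rows), rows[0]["text"][:60], rows[0]["label"])
```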
### Documentation
The primary documentation is this README file. Each dataset's directory contains a `README.md` file with additional details;
the contents of these files are also included at the end of this document in the Appendix.
`LICENSE.txt` contains the MIT license this dataset is distributed under.
## CHANGES
This dataset is a successor to [the GDD dataset](https://zenodo.org/record/6512468).
Notable changes from GDD are:
1) Addition of the SMS and Twitter Rumours datasets, for a total of 7 deception datasets from different domains
2) Re-labeling of the Political Statements dataset using a scheme that better fits prior published work and is stricter about the criteria for accepting a statement as non-deceptive (see the README specific to that dataset within its directory)
3) The Job Scams dataset's labels were previously inverted, with ~13500 samples labeled as deceptive (is_deceptive=True) and ~600 as non-deceptive. This could cause issues with metrics such as the F1 score, which for binary classification is computed for the class considered positive. This has been fixed: deceptive texts are labeled 1 (positive/True) and non-deceptive texts 0 (negative/False)
4) All datasets have been processed using Cleanlab, with problematic samples manually examined and issues addressed where needed. See the details in each individual dataset's README file.
5) All datasets now come in two formats: the entire data in a single jsonl file located in the `data/` subdirectory of each dataset, and a standard 80-10-10 stratified train-test-validation split in three separate jsonl files.
6) All datasets have two fields: "text" (string) and "label" (integer, 0 or 1; 0 indicates that the text is non-deceptive, 1 that it is deceptive)
7) '\n' has been normalized to ' ' for all datasets, as it causes issues with BERT's tokenizer in some cases (and to be in line with general whitespace normalization). Broken Unicode has been fixed. Whitespace, quotations, and bullet points were normalized (a sketch of this normalization follows the list). Text is limited to 1,000,000 characters in length and guaranteed to be non-empty. Duplicates within the same dataset (even on text alone) were dropped, as were empty and None values.
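As a rough illustration of the normalization in item 7, here is a hedged sketch; the `ftfy` dependency and the exact replacement rules are assumptions for illustration, not the authors' actual pipeline:

```python
import re

import ftfy  # third-party Unicode repair; an assumed stand-in, not necessarily what was used

# Map curly quotes to their ASCII equivalents.
QUOTES = str.maketrans({"\u201c": '"', "\u201d": '"', "\u2018": "'", "\u2019": "'"})

def normalize(text: str) -> str:
    text = ftfy.fix_text(text)        # repair mojibake / broken Unicode
    text = text.translate(QUOTES)     # normalize quotation marks
    text = re.sub(r"\s+", " ", text)  # '\n' and other whitespace runs -> single space
    return text.strip()
```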
## LICENSE
This dataset is published under the MIT license and can be used and modified by anyone free of charge.
See the `LICENSE.txt` file for details.
## CITING
If you find this dataset useful in your research, please consider citing it as:
TODO: ADD our paper reference
## REFERENCES
Original GDD paper:
@inproceedings{10.1145/3508398.3519358,
author = {Zeng, Victor and Liu, Xuting and Verma, Rakesh M.},
title = {Does Deception Leave a Content Independent Stylistic Trace?},
year = {2022},
isbn = {9781450392204},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3508398.3519358},
doi = {10.1145/3508398.3519358},
abstract = {A recent survey claims that there are \emph{no} general linguistic cues for deception. Since Internet societies are plagued with deceptive attacks such as phishing and fake news, this claim means that we must build individual datasets and detectors for each kind of attack. It also implies that when a new scam (e.g., Covid) arrives, we must start the whole process of data collection, annotation, and model building from scratch. In this paper, we put this claim to the test by building a quality domain-independent deception dataset and investigating whether a model can perform well on more than one form of deception.},
booktitle = {Proceedings of the Twelfth ACM Conference on Data and Application Security and Privacy},
pages = {349–351},
numpages = {3},
keywords = {domain-independent deception detection, dataset quality/cleaning},
location = {Baltimore, MD, USA},
series = {CODASPY '22}
}
## APPENDIX: Dataset and Domain Details
This section describes each domain/dataset in greater detail.
### FAKE NEWS
The Fake News domain uses WELFake as its basis. The WELFake dataset combines 72,134 news articles from four pre-existing datasets
(Kaggle, McIntire, Reuters, and BuzzFeed Political). The dataset was cleaned of data leaks in the form of citations of
often-reputable sources, such as "[claim] (Reuters)". It contains 35,028 real and 37,106 fake news articles.
We found a number of out-of-domain statements that are clearly not relevant to news, such as "Cool", which is a potential
problem for transfer learning as well as classification.
The training set contains 16364 samples; the validation and the test sets have 2046 samples each.
### JOB SCAMS
The Employment Scam Aegean Dataset, henceforth referred to as the Job Scams dataset, consists of 17,880 human-annotated job postings labeled as fraudulent or not.
#### Relabeling
The labels of the original dataset were inverted when it was released. The problem is now fixed, and the labels are correct.
#### Cleaning
It was cleaned by removing all HTML tags, empty descriptions, and duplicates.
The final dataset is heavily imbalanced, with 599 deceptive and 13696 non-deceptive samples out of the 14295 total.
### PHISHING
This dataset consists of various phishing attacks as well as benign emails collected from real users.
The training set contains 12217 samples, the validation and the test sets have 1527 and 1528 samples, respectively.
### POLITICAL STATEMENTS
This corpus was created from the LIAR dataset, which consists of political statements made by US speakers, each assigned
a fine-grained truthfulness label by PolitiFact.
#### Labeling
The primary difference from GDD is the change in the relabeling scheme used when converting the task from multiclass to binary.
#### Old scheme
We use the claim field as the text and map the labels “pants-fire,” “false,”
and “barely-true” to deceptive and “half-true,” “mostly-true,” and “true”
to non-deceptive, resulting in 5,669 deceptive and 7,167 truthful
statements.
#### New scheme
Following
*Upadhayay, B., Behzadan, V.: "Sentimental liar: Extended corpus and deep learning models for fake claim classification" (2020)*
and
*Shahriar, Sadat, Arjun Mukherjee, and Omprakash Gnawali. "Deception Detection with Feature-Augmentation by Soft Domain Transfer."
International Conference on Social Informatics. Cham: Springer International Publishing, 2022.*
we map the labels “pants-fire,” “false,”
“barely-true,” **and “half-true”** to deceptive; the labels “mostly-true” and “true” are mapped to non-deceptive.
Statements that are only half-true are now considered deceptive, making the criterion for a statement to be non-deceptive stricter:
now 2 out of 6 labels map to non-deceptive and 4 map to deceptive.
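In code, the new scheme is simply a six-to-two label mapping; the following sketch is illustrative, not the authors' script:

```python
# New binary relabeling of the six LIAR truthfulness labels (illustrative).
DECEPTIVE = {"pants-fire", "false", "barely-true", "half-true"}
NON_DECEPTIVE = {"mostly-true", "true"}

def to_binary(liar_label: str) -> int:
    """Return 1 for deceptive statements, 0 for non-deceptive ones."""
    if liar_label in DECEPTIVE:
        return 1
    if liar_label in NON_DECEPTIVE:
        return 0
    raise ValueError(f"unexpected LIAR label: {liar_label!r}")
```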
#### Cleaning
The dataset has been cleaned using Cleanlab, with visual inspection of the problems it found. Partial sentences, such as "On Iran nuclear deal"
or "On inflation", were removed, as were texts with a large number of parser-induced errors
and statements in languages other than English (namely, Spanish).
The training set contains 9997 samples; the validation and the test sets have 1250 samples each.
### PRODUCT REVIEWS
The dataset is produced from English Amazon reviews labeled as either real or fake, relabeled as deceptive and non-deceptive, respectively.
The reviews cover a variety of products, with no particular product dominating the dataset. Although the dataset authors filtered out
non-English reviews, outlier detection revealed that the dataset still contains reviews in Spanish and other languages.
Problematic-label detection flags 6713 samples as potentially mislabeled; since this technique is error-prone,
we visually examined the 67 reviews found to be the largest potential sources of error (the top percentile) and confirmed that
most of them do appear to be mislabeled. The final dataset of 20,971 reviews is evenly balanced, with 10,492 deceptive and 10,479
non-deceptive samples.
The training set contains 16776 samples; the validation and the test sets have 2097 and 2098 samples, respectively.
### SMS
This dataset was created from the SMS Spam Collection and the SMS Phishing Dataset for Machine Learning and Pattern Recognition,
which contain 5,574 and 5,971 real English SMS messages, respectively. As the two datasets overlap, the final dataset after
de-duplication is made up of 6574 texts released by a private UK-based wireless operator; 1274 of them are deceptive,
and the remaining 5300 are not.
The training set contains 5259 samples; the validation and the test sets have 657 and 658 samples, respectively.
### TWITTER RUMOURS
This deception dataset was created from the PHEME dataset of rumours and non-rumours:
https://figshare.com/articles/dataset/PHEME_dataset_of_rumours_and_non-rumours/4010619/1
We took source tweets only and ignored the replies to them, using each source tweet's rumour/non-rumour label
to mark it as deceptive or non-deceptive.
The training set contains 4631 samples; the validation and the test sets have 579 samples each.
**id:** difraud/difraud

**tags:** ["task_categories:text-classification", "task_categories:zero-shot-classification", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:mit", "fraud-detection", "deception-detection", "phishing", "fake-news", "benchmark", "opinion-spam", "multi-domain", "region:us"]

**created_at:** 2023-10-21T04:16:53+00:00

**metadata:** {"language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "zero-shot-classification"], "pretty_name": "DIFrauD - Domain-Independent Fraud Detection benchmark", "tags": ["fraud-detection", "deception-detection", "phishing", "fake-news", "benchmark", "opinion-spam", "multi-domain"], "configs": [{"config_name": "fake_news", "data_files": [{"split": "train", "path": "fake_news/train.jsonl"}, {"split": "test", "path": "fake_news/test.jsonl"}, {"split": "validation", "path": "fake_news/validation.jsonl"}]}, {"config_name": "job_scams", "data_files": [{"split": "train", "path": "job_scams/train.jsonl"}, {"split": "test", "path": "job_scams/test.jsonl"}, {"split": "validation", "path": "job_scams/validation.jsonl"}]}, {"config_name": "phishing", "data_files": [{"split": "train", "path": "phishing/train.jsonl"}, {"split": "test", "path": "phishing/test.jsonl"}, {"split": "validation", "path": "phishing/validation.jsonl"}]}, {"config_name": "political_statements", "data_files": [{"split": "train", "path": "political_statements/train.jsonl"}, {"split": "test", "path": "political_statements/test.jsonl"}, {"split": "validation", "path": "political_statements/validation.jsonl"}]}, {"config_name": "product_reviews", "data_files": [{"split": "train", "path": "product_reviews/train.jsonl"}, {"split": "test", "path": "product_reviews/test.jsonl"}, {"split": "validation", "path": "product_reviews/validation.jsonl"}]}, {"config_name": "sms", "data_files": [{"split": "train", "path": "sms/train.jsonl"}, {"split": "test", "path": "sms/test.jsonl"}, {"split": "validation", "path": "sms/validation.jsonl"}]}, {"config_name": "twitter_rumours", "data_files": [{"split": "train", "path": "twitter_rumours/train.jsonl"}, {"split": "test", "path": "twitter_rumours/test.jsonl"}, {"split": "validation", "path": "twitter_rumours/validation.jsonl"}]}]}

**last_modified:** 2023-10-21T04:46:50+00:00

**arxiv:** []

**languages:** ["en"]
"passage: ### FAKE NEWS\n\nFake News used WELFake as a basis. The WELFake dataset combines 72,134 news articles from four pre-existing datasets \n(Kaggle, McIntire, Reuters, and BuzzFeed Political). The dataset was cleaned of data leaks in the form of citations of \noften reputable sources, such as \"[claim] (Reuters)\". It contains 35,028 real news articles and 37,106 fake news articles. \nWe found a number of out-of-domain statements that are clearly not relevant to news, such as \"Cool\", which is a potential\nproblem for transfer learning as well as classification. \n\nThe training set contains 16364 samples, the validation and the test sets have 2064 and 2064 samles, respectively.### JOB SCAMS\n\nThe Employment Scam Aegean Dataset, henceforth referred to as the Job Scams dataset, consisted of 17,880 human-annotated job listings of\njob descriptions labeled as fraudulent or not.#### Relabeling\n\nThe original Job Labels dataset had the labels inverted when released. The problem is now fixed, the labels are correct.#### Cleaning \n\nIt was cleaned by removing all HTML tags, empty descriptions, and duplicates. \nThe final dataset is heavily imbalanced, with 599 deceptive and 13696 non-deceptive samples out of the 14295 total.### PHISHING\n\nThis dataset consists of various phishing attacks as well as benign emails collected from real users.\n\nThe training set contains 12217 samples, the validation and the test sets have 1527 and 1528 samples, respectively.### POLITICAL STATEMENTS\n\nThis corpus was created from the Liar dataset which consists of political statements made by US speakers assigned\na fine-grain truthfulness label by PolitiFact.#### Labeling\n\nThe primary difference is the change in the re-labeling scheme when converting the task from multiclass to binary.#### Old scheme\n\nWe use the claim field as the text and map labels “pants-fire,” “false,”\n“barely-true,” to deceptive and “half-true,” “mostly-true,” and “true”\nto non-deceptive, resulting in 5,669 deceptive and 7,167 truthful\nstatements.",
"passage: #### New scheme\n\nFollowing \n\n*Upadhayay, B., Behzadan, V.: \"Sentimental liar: Extended corpus and deep learning models for fake claim classification\" (2020)*\n\nand\n\n*Shahriar, Sadat, Arjun Mukherjee, and Omprakash Gnawali. \"Deception Detection with Feature-Augmentation by Soft Domain Transfer.\" \nInternational Conference on Social Informatics. Cham: Springer International Publishing, 2022.*\n\nwe map the labels map labels “pants-fire,” “false,”\n“barely-true,” and “half-true,” to deceptive; the labels \"mostly-true\" and \"true\" are mapped to non-deceptive. \nThe statements that are only half-true are now considered to be deceptive, making the criterion for statement being non-deceptive stricter: \nnow 2 out of 6 labels map to non-deceptive and 4 map to deceptive.#### Cleaning\n\nThe dataset has been cleaned using cleanlab with visual inspection of problems found. Partial sentences, such as \"On Iran nuclear deal\", \n\"On inflation\", were removed. Text with large number of errors induced by a parser were also removed.\nStatements in language other than English (namely, Spanish) were also removed. \n\nThe training set contains 9997 samples, the validation and the test sets have 1250 samples each in them.### PRODUCT REVIEWS\n\nThe dataset is produced from English Amazon Reviews labeled as either real or fake, relabeled as deceptive and non-deceptive respectively. \nThe reviews cover a variety of products with no particular product dominating the dataset. Although the dataset authors filtered out \nnon-English reviews, through outlier detection we found that the dataset still contains reviews in Spanish and other languages. \nProblematic label detection shows that over 6713 samples are potentially mislabeled; since this technique is error-prone,\nwe visually examine 67 reviews that are found to be the largest potential sources of error (the top percentile) and confirm that\nmost of them appear to be mislabeled. The final dataset of 20,971 reviews is evenly balanced with 10,492 deceptive and 10,479 \nnon-deceptive samples.\n\nThe training set contains 16776 samples, the validation and the test sets have 2097 and 2098 samples, respectively.### SMS\n\nThis dataset was created from the SMS Spam Collection and SMS Phishing Dataset for Machine Learning and Pattern Recognition, \nwhich contained 5,574 and 5,971 real English SMS messages, respectively. As these two datasets overlap, after de-duplication, \nthe final dataset is made up of 6574 texts released by a private UK-based wireless operator; 1274 of them are deceptive, \nand the remaining 5300 are not.\n\nThe training set contains 5259 samples, the validation and the test sets have 657 and 658 samples, \nrespectively."
] |
ad794cba698c253ffdb6d18eff3fc87b004c1135
|
DrawBench dataset from [Imagen](https://imagen.research.google/).
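A minimal way to pull the prompts with the `datasets` library (a sketch: the repo id comes from this record, but the splits and column names are not documented in the card, so we simply load and inspect):

```python
from datasets import load_dataset

# Repo id taken from this record; print the result since the schema is undocumented.
ds = load_dataset("sayakpaul/drawbench")
print(ds)
```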
|
sayakpaul/drawbench
|
[
"license:apache-2.0",
"region:us"
] |
2023-10-21T04:24:45+00:00
|
{"license": "apache-2.0"}
|
2023-10-21T04:25:29+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
DrawBench dataset from Imagen.
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
f10e71b6212adbe80c82ea240c18c831d36100f0
|
# Episode-Specific Spoilers
This is the spoiler matching dataset as presented in Spoiler Detection as Semantic Text Matching. It consists of comments discussing episodes from various TV shows. Unlike other spoiler datasets, this dataset assigns an episode number (and show name) to each comment, enabling matching to specific episodes and very fine-grained spoiler detection. This dataset also includes an episode summary for each (show, episode) pair. For a given show, the task is to rank the summaries for each comment.
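Since the card describes the task but does not prescribe a model, here is a minimal sketch of the ranking setup using off-the-shelf sentence embeddings; the comment and summaries below are made up, and the model choice is illustrative only:

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical inputs: one comment and the candidate episode summaries for its show.
comment = "I can't believe they killed him off at the wedding!"
summaries = ["Episode 1: ...", "Episode 2: ...", "Episode 3: ..."]

model = SentenceTransformer("all-MiniLM-L6-v2")
scores = util.cos_sim(model.encode(comment), model.encode(summaries))[0]

# Rank the summaries by similarity; the top-ranked episode is the predicted match.
print(scores.argsort(descending=True))
```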
# Usage
See the [Spoiler Matching repository](https://github.com/bobotran/spoiler-matching) for examples on how to train a spoiler matching model on this dataset.
# Annotation
522,991 comments from 13 TV shows were scraped from episode discussion threads on Reddit. Of these comments, some are actually *relevant* to their respective episode discussion and others are *irrelevant*. A subset of these comments (11,032) were hand-labeled as *irrelevant* or *relevant*. This hand-labeled dataset was used to train an irrelevant/relevant classifier which auto-labeled the remaining comments. All *relevant* comments were formatted into the `matching` dataset.
# Details
The `matching` folder contains the spoiler matching dataset, and the `filtering` folder contains intermediate data from the auto-labeling step.
## matching/
This folder contains the datasets for training spoiler matching models. All comments in these files were determined to be *relevant*, whether that was done by a human annotator or the auto-labeler. `summaries.json` contains the summary for each episode. `summaries.json` and `test.json` are the same across `with_autolabels` and `handlabeled_only`.
### matching/with_autolabels/
The `with_autolabels` folder contains the main dataset. `test.json` and `val.json` consist of hand-labeled *relevants* while `train.json` contains auto-labeled *relevants*. To measure the performance of spoiler matching models on unseen shows, `test.json` was constructed such that it consists of comments from 4 TV shows which are neither present in `val.json` nor `train.json`.
### matching/handlabeled_only/
The `handlabeled_only` folder shares the same `test.json` with `matching/with_autolabels/`, but its `train.json` and `val.json` are an 80-20 split of `matching/with_autolabels/val.json`.
## filtering/
This folder contains data from the auto-labeling step.
### filtering/handlabeled
This folder contains the dataset used to train the autolabeler. Comments with a `1` in the first column were hand-labeled as `irrelevant`. Comments with a `0` in the first column were hand-labeled as `relevant`. The last two columns are the show name and episode number respectively, which are not used during this step.
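A sketch of reading this file with pandas; the file name, the comma delimiter, and the presence of a comment-text column between the label and the show/episode columns are assumptions, since the card only fixes the first and last two columns:

```python
import pandas as pd

# Hypothetical file name and delimiter; the card specifies the column order only:
# label (1 = irrelevant, 0 = relevant), ..., show name, episode number.
df = pd.read_csv("filtering/handlabeled/labels.csv",
                 names=["label", "comment", "show", "episode"])
print((df["label"] == 0).sum(), "hand-labeled relevant comments")
```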
### filtering/unlabeled
The unlabeled comments were split into two chunks to make them more manageable to load into memory during inference. All comments have a `-1` in the first column to represent that they are unlabeled.
|
bobotran/spoiler-matching
|
[
"task_categories:sentence-similarity",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] |
2023-10-21T04:26:50+00:00
|
{"language": ["en"], "license": "cc-by-sa-3.0", "size_categories": ["100K<n<1M"], "task_categories": ["sentence-similarity"]}
|
2023-10-22T05:51:14+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-sentence-similarity #size_categories-100K<n<1M #language-English #license-cc-by-sa-3.0 #region-us
|
# Episode-Specific Spoilers
This is the spoiler matching dataset as presented in Spoiler Detection as Semantic Text Matching. It consists of comments discussing episodes from various TV shows. Unlike other spoiler datasets, this dataset assigns an episode number (and show name) to each comment, enabling matching to specific episodes and very fine-grain spoiler detection. This dataset also includes an episode summary for each (show, episode) pair. For a given show, the task is to rank the summaries for each comment.
# Usage
See the Spoiler Matching repository for examples on how to train a spoiler matching model on this dataset.
# Annotation
522,991 comments from 13 TV shows were scraped from episode discussion threads on Reddit. Of these comments, some are actually *relevant* to their respective episode discussion and others are *irrelevant*. A subset of these comments (11,032) were hand-labeled as *irrelevant* or *relevant*. This hand-labeled dataset was used to train an irrelevant/relevant classifier which auto-labeled the remaining comments. All *relevant* comments were formatted into the 'matching' dataset.
# Details
The 'matching' folder contains the spoiler matching dataset, and the 'filtering' folder contains intermediate data from the auto-labeling step.
## matching/
This folder contains the datasets for training spoiler matching models. All comments in these files were determined to be *relevant*, whether that was done by a human annotator or the auto-labeler. 'URL' contains the summary for each episode. 'URL' and 'URL' are the same across 'with_autolabels' and 'handlabeled_only'.
### matching/with_autolabels/
The 'with_autolabels' folder contains the main dataset. 'URL' and 'URL' consist of hand-labeled *relevants* while 'URL' contains auto-labeled *relevants*. To measure the performance of spoiler matching models on unseen shows, 'URL' was constructed such that it consists of comments from 4 TV shows which are neither present in 'URL' nor 'URL'.
### matching/handlabeled_only/
The 'handlabeled_only' folder shares the same 'URL' with 'matching/with_autolabels/', but 'URL' and 'URL' are split 80-20 from 'matching/with_autolabels/URL' respectively.
## filtering/
This folder contains data from the auto-labeling step.
### filtering/handlabeled
This folder contains the dataset used to train the autolabeler. Comments with a '1' in the first column were hand-labeled as 'irrelevant'. Comments with a '0' in the first column were hand-labeled as 'relevant'. The last two columns are the show name and episode number respectively, which are not used during this step.
### filtering/unlabeled
The unlabeled comments were split into two chunks to make them more manageable to load into memory during inference. All comments have a '-1' in the first column to represent that they are unlabeled.
|
[
"# Episode-Specific Spoilers\nThis is the spoiler matching dataset as presented in Spoiler Detection as Semantic Text Matching. It consists of comments discussing episodes from various TV shows. Unlike other spoiler datasets, this dataset assigns an episode number (and show name) to each comment, enabling matching to specific episodes and very fine-grain spoiler detection. This dataset also includes an episode summary for each (show, episode) pair. For a given show, the task is to rank the summaries for each comment.",
"# Usage\nSee the Spoiler Matching repository for examples on how to train a spoiler matching model on this dataset.",
"# Annotation\n522,991 comments from 13 TV shows were scraped from episode discussion threads on Reddit. Of these comments, some are actually *relevant* to their respective episode discussion and others are *irrelevant*. A subset of these comments (11,032) were hand-labeled as *irrelevant* or *relevant*. This hand-labeled dataset was used to train an irrelevant/relevant classifier which auto-labeled the remaining comments. All *relevant* comments were formatted into the 'matching' dataset.",
"# Details\nThe 'matching' folder contains the spoiler matching dataset, and the 'filtering' folder contains intermediate data from the auto-labeling step.",
"## matching/\nThis folder contains the datasets for training spoiler matching models. All comments in these files were determined to be *relevant*, whether that was done by a human annotator or the auto-labeler. 'URL' contains the summary for each episode. 'URL' and 'URL' are the same across 'with_autolabels' and 'handlabeled_only'.",
"### matching/with_autolabels/\nThe 'with_autolabels' folder contains the main dataset. 'URL' and 'URL' consist of hand-labeled *relevants* while 'URL' contains auto-labeled *relevants*. To measure the performance of spoiler matching models on unseen shows, 'URL' was constructed such that it consists of comments from 4 TV shows which are neither present in 'URL' nor 'URL'.",
"### matching/handlabeled_only/\nThe 'handlabeled_only' folder shares the same 'URL' with 'matching/with_autolabels/', but 'URL' and 'URL' are split 80-20 from 'matching/with_autolabels/URL' respectively.",
"## filtering/\nThis folder contains data from the auto-labeling step.",
"### filtering/handlabeled\nThis folder contains the dataset used to train the autolabeler. Comments with a '1' in the first column were hand-labeled as 'irrelevant'. Comments with a '0' in the first column were hand-labeled as 'relevant'. The last two columns are the show name and episode number respectively, which are not used during this step.",
"### filtering/unlabeled\nThe unlabeled comments were split into two chunks to make them more manageable to load into memory during inference. All comments have a '-1' in the first column to represent that they are unlabeled."
] |
[
"TAGS\n#task_categories-sentence-similarity #size_categories-100K<n<1M #language-English #license-cc-by-sa-3.0 #region-us \n",
"# Episode-Specific Spoilers\nThis is the spoiler matching dataset as presented in Spoiler Detection as Semantic Text Matching. It consists of comments discussing episodes from various TV shows. Unlike other spoiler datasets, this dataset assigns an episode number (and show name) to each comment, enabling matching to specific episodes and very fine-grain spoiler detection. This dataset also includes an episode summary for each (show, episode) pair. For a given show, the task is to rank the summaries for each comment.",
"# Usage\nSee the Spoiler Matching repository for examples on how to train a spoiler matching model on this dataset.",
"# Annotation\n522,991 comments from 13 TV shows were scraped from episode discussion threads on Reddit. Of these comments, some are actually *relevant* to their respective episode discussion and others are *irrelevant*. A subset of these comments (11,032) were hand-labeled as *irrelevant* or *relevant*. This hand-labeled dataset was used to train an irrelevant/relevant classifier which auto-labeled the remaining comments. All *relevant* comments were formatted into the 'matching' dataset.",
"# Details\nThe 'matching' folder contains the spoiler matching dataset, and the 'filtering' folder contains intermediate data from the auto-labeling step.",
"## matching/\nThis folder contains the datasets for training spoiler matching models. All comments in these files were determined to be *relevant*, whether that was done by a human annotator or the auto-labeler. 'URL' contains the summary for each episode. 'URL' and 'URL' are the same across 'with_autolabels' and 'handlabeled_only'.",
"### matching/with_autolabels/\nThe 'with_autolabels' folder contains the main dataset. 'URL' and 'URL' consist of hand-labeled *relevants* while 'URL' contains auto-labeled *relevants*. To measure the performance of spoiler matching models on unseen shows, 'URL' was constructed such that it consists of comments from 4 TV shows which are neither present in 'URL' nor 'URL'.",
"### matching/handlabeled_only/\nThe 'handlabeled_only' folder shares the same 'URL' with 'matching/with_autolabels/', but 'URL' and 'URL' are split 80-20 from 'matching/with_autolabels/URL' respectively.",
"## filtering/\nThis folder contains data from the auto-labeling step.",
"### filtering/handlabeled\nThis folder contains the dataset used to train the autolabeler. Comments with a '1' in the first column were hand-labeled as 'irrelevant'. Comments with a '0' in the first column were hand-labeled as 'relevant'. The last two columns are the show name and episode number respectively, which are not used during this step.",
"### filtering/unlabeled\nThe unlabeled comments were split into two chunks to make them more manageable to load into memory during inference. All comments have a '-1' in the first column to represent that they are unlabeled."
] |
[
46,
124,
29,
116,
37,
89,
104,
70,
17,
92,
55
] |
[
"passage: TAGS\n#task_categories-sentence-similarity #size_categories-100K<n<1M #language-English #license-cc-by-sa-3.0 #region-us \n# Episode-Specific Spoilers\nThis is the spoiler matching dataset as presented in Spoiler Detection as Semantic Text Matching. It consists of comments discussing episodes from various TV shows. Unlike other spoiler datasets, this dataset assigns an episode number (and show name) to each comment, enabling matching to specific episodes and very fine-grain spoiler detection. This dataset also includes an episode summary for each (show, episode) pair. For a given show, the task is to rank the summaries for each comment.# Usage\nSee the Spoiler Matching repository for examples on how to train a spoiler matching model on this dataset.# Annotation\n522,991 comments from 13 TV shows were scraped from episode discussion threads on Reddit. Of these comments, some are actually *relevant* to their respective episode discussion and others are *irrelevant*. A subset of these comments (11,032) were hand-labeled as *irrelevant* or *relevant*. This hand-labeled dataset was used to train an irrelevant/relevant classifier which auto-labeled the remaining comments. All *relevant* comments were formatted into the 'matching' dataset.# Details\nThe 'matching' folder contains the spoiler matching dataset, and the 'filtering' folder contains intermediate data from the auto-labeling step.## matching/\nThis folder contains the datasets for training spoiler matching models. All comments in these files were determined to be *relevant*, whether that was done by a human annotator or the auto-labeler. 'URL' contains the summary for each episode. 'URL' and 'URL' are the same across 'with_autolabels' and 'handlabeled_only'."
] |
af65dd7b9e2443503503e98fe39ddd8fe058e54c
|
# Synthetic Clinical Notes Dataset
This dataset, generated using LLAMA2, is designed to mimic FHIR Document Reference Clinical Notes. It follows a layout similar to MIMIC, but it's important to note that this dataset contains no Personal Health Information (PHI) or Personally Identifiable Information (PII).
## Dataset Details
- **Name**: Synthetic Clinical Notes Dataset
## Key Features
- **Synthetic Data**: All data in this dataset is synthetic, ensuring no risk of exposing real patient information.
- **FHIR Document Reference Layout**: The dataset closely mirrors the structure and format of FHIR Document Reference Clinical Notes, making it suitable for healthcare-related machine learning tasks.
- **MIMIC-style Layout**: For researchers familiar with the MIMIC dataset, this synthetic dataset offers a similar layout, facilitating a smoother transition.
## Usage
The dataset is compatible with the Hugging Face Datasets library. Here's a quick start guide:
```python
from datasets import load_dataset
# Load the synthetic clinical notes dataset
dataset = load_dataset("your_huggingface_dataset_name_here")
# Exploring the dataset
print(dataset["train"][0]) # Print the first entry from the training set
|
dlyog/synth_clin_notes
|
[
"region:us"
] |
2023-10-21T05:00:48+00:00
|
{}
|
2023-10-21T05:07:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Synthetic Clinical Notes Dataset
This dataset, generated using LLAMA2, is designed to mimic FHIR Document Reference Clinical Notes. It follows a layout similar to MIMIC, but it's important to note that this dataset contains no Personal Health Information (PHI) or Personally Identifiable Information (PII).
## Dataset Details
- Name: Synthetic Clinical Notes Dataset
## Key Features
- Synthetic Data: All data in this dataset is synthetic, ensuring no risk of exposing real patient information.
- FHIR Document Reference Layout: The dataset closely mirrors the structure and format of FHIR Document Reference Clinical Notes, making it suitable for healthcare-related machine learning tasks.
- MIMIC-style Layout: For researchers familiar with the MIMIC dataset, this synthetic dataset offers a similar layout, facilitating a smoother transition.
## Usage
The dataset is compatible with the Hugging Face Datasets library. Here's a quick start guide:
'''python
from datasets import load_dataset
# Load the synthetic clinical notes dataset
dataset = load_dataset("your_huggingface_dataset_name_here")
# Exploring the dataset
print(dataset["train"][0]) # Print the first entry from the training set
|
[
"# Synthetic Clinical Notes Dataset\n\nThis dataset, generated using LLAMA2, is designed to mimic FHIR Document Reference Clinical Notes. It follows a layout similar to MIMIC, but it's important to note that this dataset contains no Personal Health Information (PHI) or Personally Identifiable Information (PII).",
"## Dataset Details\n\n- Name: Synthetic Clinical Notes Dataset",
"## Key Features\n\n- Synthetic Data: All data in this dataset is synthetic, ensuring no risk of exposing real patient information.\n- FHIR Document Reference Layout: The dataset closely mirrors the structure and format of FHIR Document Reference Clinical Notes, making it suitable for healthcare-related machine learning tasks.\n- MIMIC-style Layout: For researchers familiar with the MIMIC dataset, this synthetic dataset offers a similar layout, facilitating a smoother transition.",
"## Usage\n\nThe dataset is compatible with the Hugging Face Datasets library. Here's a quick start guide:\n\n'''python\nfrom datasets import load_dataset",
"# Load the synthetic clinical notes dataset\ndataset = load_dataset(\"your_huggingface_dataset_name_here\")",
"# Exploring the dataset\nprint(dataset[\"train\"][0]) # Print the first entry from the training set"
] |
[
"TAGS\n#region-us \n",
"# Synthetic Clinical Notes Dataset\n\nThis dataset, generated using LLAMA2, is designed to mimic FHIR Document Reference Clinical Notes. It follows a layout similar to MIMIC, but it's important to note that this dataset contains no Personal Health Information (PHI) or Personally Identifiable Information (PII).",
"## Dataset Details\n\n- Name: Synthetic Clinical Notes Dataset",
"## Key Features\n\n- Synthetic Data: All data in this dataset is synthetic, ensuring no risk of exposing real patient information.\n- FHIR Document Reference Layout: The dataset closely mirrors the structure and format of FHIR Document Reference Clinical Notes, making it suitable for healthcare-related machine learning tasks.\n- MIMIC-style Layout: For researchers familiar with the MIMIC dataset, this synthetic dataset offers a similar layout, facilitating a smoother transition.",
"## Usage\n\nThe dataset is compatible with the Hugging Face Datasets library. Here's a quick start guide:\n\n'''python\nfrom datasets import load_dataset",
"# Load the synthetic clinical notes dataset\ndataset = load_dataset(\"your_huggingface_dataset_name_here\")",
"# Exploring the dataset\nprint(dataset[\"train\"][0]) # Print the first entry from the training set"
] |
[
6,
76,
16,
110,
40,
34,
28
] |
[
"passage: TAGS\n#region-us \n# Synthetic Clinical Notes Dataset\n\nThis dataset, generated using LLAMA2, is designed to mimic FHIR Document Reference Clinical Notes. It follows a layout similar to MIMIC, but it's important to note that this dataset contains no Personal Health Information (PHI) or Personally Identifiable Information (PII).## Dataset Details\n\n- Name: Synthetic Clinical Notes Dataset## Key Features\n\n- Synthetic Data: All data in this dataset is synthetic, ensuring no risk of exposing real patient information.\n- FHIR Document Reference Layout: The dataset closely mirrors the structure and format of FHIR Document Reference Clinical Notes, making it suitable for healthcare-related machine learning tasks.\n- MIMIC-style Layout: For researchers familiar with the MIMIC dataset, this synthetic dataset offers a similar layout, facilitating a smoother transition.## Usage\n\nThe dataset is compatible with the Hugging Face Datasets library. Here's a quick start guide:\n\n'''python\nfrom datasets import load_dataset# Load the synthetic clinical notes dataset\ndataset = load_dataset(\"your_huggingface_dataset_name_here\")# Exploring the dataset\nprint(dataset[\"train\"][0]) # Print the first entry from the training set"
] |
65bddc9999d1a9e8e07b66ceab8c3e7ea4cd745b
|
# Dataset Card for "PE_augment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Imran1/PE_augment
|
[
"region:us"
] |
2023-10-21T05:09:08+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Elbove Extension", "1": "KNEE Flexion", "2": "NECK Exercise", "3": "PlanterFlexion of Foot", "4": "Trunk Extension", "5": "Trunk Flexion", "6": "Wrist Extension", "7": "Wrist Flexion"}}}}], "splits": [{"name": "train", "num_bytes": 5056424123.125, "num_examples": 9125}], "download_size": 4562640948, "dataset_size": 5056424123.125}}
|
2023-10-21T05:13:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "PE_augment"
More Information needed
|
[
"# Dataset Card for \"PE_augment\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"PE_augment\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"PE_augment\"\n\nMore Information needed"
] |
798de861740e774b67a7bb1b2e3ba1a44cd432f1
|
# Dataset Card for "merged-no-pad-text-16384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
shossain/merged-no-pad-text-16384
|
[
"region:us"
] |
2023-10-21T05:10:01+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 372439533, "num_examples": 6401}], "download_size": 184155020, "dataset_size": 372439533}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-11-06T21:54:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "merged-no-pad-text-16384"
More Information needed
|
[
"# Dataset Card for \"merged-no-pad-text-16384\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"merged-no-pad-text-16384\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"merged-no-pad-text-16384\"\n\nMore Information needed"
] |
90dff60fe17dbd339c34ddcef5841e4afe33df99
|
In the pursuit of better health and well-being, finding the right supplements is paramount. One name that's been making waves in the world of wellness is **Flexavico Pills**. If you're wondering how Flexavico Pills can boost your vitality, relieve discomfort, or enhance your overall quality of life, you've come to the right place.
_We've prepared a detailed guide to help you navigate the world of Flexavico Pills. From its benefits and uses to personal experiences and expert insights, we've got it all covered. So, let's dive into the world of Flexavico Pills and unlock its secrets for a healthier you._
Flexavico Pills is a remarkable supplement known for its exceptional properties. It is designed to promote overall well-being, tackle discomfort, and enhance the quality of life. Let's explore this incredible supplement further.
[Click Here To Get Flexavico Pills From Official Website](https://www.glitco.com/es-flexavico)
**Flexavico Pills** is a game changer when it comes to wellness. Its unique blend of ingredients is carefully crafted to target various aspects of your health, making it a versatile addition to your daily routine.
### **Unlocking the Benefits**
➣ Boosts Energy Levels: Flexavico Pills are known to provide a natural energy boost, helping you stay active and focused throughout the day.
➣ Relieves Discomfort: It's a great companion for those dealing with discomfort, as it offers relief without any harsh side effects.
➣ Supports Joint Health: If you're looking to improve joint mobility and reduce joint-related issues, Flexavico Pills are worth considering.
➣ Enhances Immune System: A robust immune system is crucial for overall health, and Flexavico Pills are designed to support and strengthen your body's defenses.
➣ Promotes Relaxation: Stress and anxiety can take a toll on your well-being. Flexavico Pills have ingredients that help promote relaxation and reduce stress.
Incorporating Flexavico Pills into Your Routine
--------------------------------------------------
Flexavico Pills are easy to incorporate into your daily life. They are available in convenient pill form, making it simple to add to your supplement regimen. For the best results, follow the recommended dosage instructions provided on the packaging or by your healthcare provider.
What Do the Experts Say?
---------------------
To provide you with the most accurate and trustworthy information, we've consulted with experts in the field. Here are some insights from healthcare professionals:
⏩ Dr. Sarah Miller, a renowned nutritionist, recommends Flexavico Pills for their natural and effective approach to wellness.
⏩ Dr. James Anderson, a leading rheumatologist, highlights the benefits of **Flexavico Pills** in supporting joint health.
⏩ Dr. Emily Carter, a psychologist, emphasizes the importance of stress reduction in maintaining overall well-being, and she recommends Flexavico Pills as part of a holistic approach.
Real Stories, Real Results
--------------------------
Many individuals have experienced positive changes in their lives after incorporating **Flexavico Pills** into their daily routine. Here are a few real-life stories:
➣ **John's Journey** : John, a 45-year-old office worker, shares his experience with Flexavico Pills. "I used to feel tired all the time, and my joints were constantly bothering me. Since I started taking Flexavico Pills, I have more energy, and my joint discomfort has significantly reduced."
**➣ Sarah's Story** : Sarah, a busy mother of three, says, "Flexavico Pills have made a noticeable difference in my stress levels. I feel more relaxed and better equipped to handle the daily challenges of motherhood."
**➣ Lisa's Testimonial** : Lisa, a retiree, shares, "**Flexavico Pills** have improved my overall vitality. I can now enjoy my retirement years to the fullest."
_These personal experiences illustrate the positive impact of Flexavico Pills on various aspects of life._
_[Click Here To Get Flexavico Pills From Official Website](https://www.glitco.com/es-flexavico)_
Flexavico Pills: Frequently Asked Questions
-------------------------------------------
**Q: Are Flexavico Pills safe to use?** A: Yes, Flexavico Pills are generally safe to use. However, it's always a good idea to consult with a healthcare professional before adding any new supplement to your routine, especially if you have underlying health conditions or are taking **medications.**
**Q: How long does it take to see results with Flexavico Pills?** A: The time it takes to experience results can vary from person to person. Some individuals may notice changes in a few weeks, while others might take a bit longer. Consistency in taking the supplement is key.
**Q: Can Flexavico Pills be taken with other medications?** A: It's best to consult with your healthcare provider if you are taking other medications. They can provide guidance on potential interactions and the best way to incorporate Flexavico Pills into your routine.
**Q: Are there any side effects associated with Flexavico Pills?** A: Flexavico Pills are formulated to minimize side effects. However, some individuals may experience mild digestive issues initially. If you have any concerns, consult with a healthcare professional.
**Q: Can I take Flexavico Pills on an empty stomach?** A: Flexavico Pills are generally well-tolerated on an empty stomach, but it's advisable to follow the dosage instructions provided on the packaging.
**Q: Where can I purchase Flexavico Pills?** A: Flexavico Pills are available at reputable health stores and online retailers. Ensure you purchase from a trusted source to guarantee product authenticity.
[Click Here To Get Flexavico Pills From Official Website](https://www.glitco.com/es-flexavico)
Conclusion
----------
Flexavico Pills offer a promising path to enhanced wellness. With the power to boost energy, relieve discomfort, support joint health, enhance the immune system, and promote relaxation, they have become a valuable addition to many people's daily lives. It's important to consult with a healthcare professional before starting any new supplement, but the potential benefits of **Flexavico Pills** are certainly worth considering.
|
flexavicospain/Flexavico
|
[
"region:us"
] |
2023-10-21T05:10:43+00:00
|
{}
|
2023-10-21T05:11:36+00:00
|
[] |
[] |
TAGS
#region-us
|
In the pursuit of better health and well-being, finding the right supplements is paramount. One name that's been making waves in the world of wellness is Flexavico Pills. If you're wondering how Flexavico Pills can boost your vitality, relieve discomfort, or enhance your overall quality of life, you've come to the right place.
_We've prepared a detailed guide to help you navigate the world of Flexavico Pills. From its benefits and uses to personal experiences and expert insights, we've got it all covered. So, let's dive into the world of Flexavico Pills and unlock its secrets for a healthier you._
Flexavico Pills is a remarkable supplement known for its exceptional properties. It is designed to promote overall well-being, tackle discomfort, and enhance the quality of life. Let's explore this incredible supplement further.
Click Here To Get Flexavico Pills From Official Website
Flexavico Pills is a game changer when it comes to wellness. Its unique blend of ingredients is carefully crafted to target various aspects of your health, making it a versatile addition to your daily routine.
Flexavico Pills: Frequently Asked Questions
-------------------------------------------
Q: Are Flexavico Pills safe to use? A: Yes, Flexavico Pills are generally safe to use. However, it's always a good idea to consult with a healthcare professional before adding any new supplement to your routine, especially if you have underlying health conditions or are taking medications.
Q: How long does it take to see results with Flexavico Pills? A: The time it takes to experience results can vary from person to person. Some individuals may notice changes in a few weeks, while others might take a bit longer. Consistency in taking the supplement is key.
Q: Can Flexavico Pills be taken with other medications? A: It's best to consult with your healthcare provider if you are taking other medications. They can provide guidance on potential interactions and the best way to incorporate Flexavico Pills into your routine.
Q: Are there any side effects associated with Flexavico Pills? A: Flexavico Pills are formulated to minimize side effects. However, some individuals may experience mild digestive issues initially. If you have any concerns, consult with a healthcare professional.
Q: Can I take Flexavico Pills on an empty stomach? A: Flexavico Pills are generally well-tolerated on an empty stomach, but it's advisable to follow the dosage instructions provided on the packaging.
Q: Where can I purchase Flexavico Pills? A: Flexavico Pills are available at reputable health stores and online retailers. Ensure you purchase from a trusted source to guarantee product authenticity.
Click Here To Get Flexavico Pills From Official Website
Conclusion
----------
Flexavico Pills offer a promising path to enhanced wellness. With the power to boost energy, relieve discomfort, support joint health, enhance the immune system, and promote relaxation, they have become a valuable addition to many people's daily lives. It's important to consult with a healthcare professional before starting any new supplement, but the potential benefits of Flexavico Pills are certainly worth considering.
|
[
"### Unlocking the Benefits\n\n Boosts Energy Levels: Flexavico Pills are known to provide a natural energy boost, helping you stay active and focused throughout the day. \n \n Relieves Discomfort: It's a great companion for those dealing with discomfort, as it offers relief without any harsh side effects. \n \n Supports Joint Health: If you're looking to improve joint mobility and reduce joint-related issues, Flexavico Pills are worth considering. \n \n Enhances Immune System: A robust immune system is crucial for overall health, and Flexavico Pills are designed to support and strengthen your body's defenses. \n \n Promotes Relaxation: Stress and anxiety can take a toll on your well-being. Flexavico Pills have ingredients that help promote relaxation and reduce stress.\n\n \nIncorporating Flexavico Pills into Your Routine\n--------------------------------------------------\n\n \nFlexavico Pills are easy to incorporate into your daily life. They are available in convenient pill form, making it simple to add to your supplement regimen. For the best results, follow the recommended dosage instructions provided on the packaging or by your healthcare provider.\n\nWhat the Experts Say?\n---------------------\n\n \nTo provide you with the most accurate and trustworthy information, we've consulted with experts in the field. Here are some insights from healthcare professionals: \n \n⏩ Dr. Sarah Miller, a renowned nutritionist, recommends Flexavico Pills for their natural and effective approach to wellness. \n \n⏩ Dr. James Anderson, a leading rheumatologist, highlights the benefits of Flexavico Pills in supporting joint health. \n \n⏩ Dr. Emily Carter, a psychologist, emphasizes the importance of stress reduction in maintaining overall well-being, and she recommends Flexavico Pills as part of a holistic approach.\n\nReal Stories, Real Results\n--------------------------\n\nMany individuals have experienced positive changes in their lives after incorporating Flexavico Pills into their daily routine. Here are a few real-life stories: \n \n John's Journey : John, a 45-year-old office worker, shares his experience with Flexavico Pills. \"I used to feel tired all the time, and my joints were constantly bothering me. Since I started taking Flexavico Pills, I have more energy, and my joint discomfort has significantly reduced.\" \n \n Sarah's Story : Sarah, a busy mother of three, says, \"Flexavico Pills have made a noticeable difference in my stress levels. I feel more relaxed and better equipped to handle the daily challenges of motherhood.\" \n \n Lisa's Testimonial : Lisa, a retiree, shares, \"Flexavico Pills have improved my overall vitality. I can now enjoy my retirement years to the fullest.\" \n \n_These personal experiences illustrate the positive impact of Flexavico Pills on various aspects of life._\n\n_Click Here To Get Flexavico Pills From Official Website_\n\n_](URL\n\nFlexavico Pills: Frequently Asked Questions\n-------------------------------------------\n\nQ: Are Flexavico Pills safe to use? A: Yes, Flexavico Pills are generally safe to use. However, it's always a good idea to consult with a healthcare professional before adding any new supplement to your routine, especially if you have underlying health conditions or are taking medications. \n \nQ: How long does it take to see results with Flexavico Pills? A: The time it takes to experience results can vary from person to person. Some individuals may notice changes in a few weeks, while others might take a bit longer. 
Consistency in taking the supplement is key. \n \nQ: Can Flexavico Pills be taken with other medications? A: It's best to consult with your healthcare provider if you are taking other medications. They can provide guidance on potential interactions and the best way to incorporate Flexavico Pills into your routine. \n \nQ: Are there any side effects associated with Flexavico Pills? A: Flexavico Pills are formulated to minimize side effects. However, some individuals may experience mild digestive issues initially. If you have any concerns, consult with a healthcare professional. \n \nQ: Can I take Flexavico Pills on an empty stomach? A: Flexavico Pills are generally well-tolerated on an empty stomach, but it's advisable to follow the dosage instructions provided on the packaging. \n \nQ: Where can I purchase Flexavico Pills? A: Flexavico Pills are available at reputable health stores and online retailers. Ensure you purchase from a trusted source to guarantee product authenticity.\n\nClick Here To Get Flexavico Pills From Official Website\n\nConclusion\n----------\n\nFlexavico Pills offer a promising path to enhanced wellness. With the power to boost energy, relieve discomfort, support joint health, enhance the immune system, and promote relaxation, they have become a valuable addition to many people's daily lives. It's important to consult with a healthcare professional before starting any new supplement, but the potential benefits of Flexavico Pills are certainly worth considering."
] |
[
6,
1120
] |
[
"passage: TAGS\n#region-us \n"
] |
ee04d1205fff68270b38872271efe84d56978691
|
# Dataset Card for "expertllama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
diwank/expertllama
|
[
"region:us"
] |
2023-10-21T05:22:48+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "expert_identity", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 94172791, "num_examples": 52002}], "download_size": 51939845, "dataset_size": 94172791}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-21T05:30:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "expertllama"
More Information needed
|
[
"# Dataset Card for \"expertllama\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"expertllama\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"expertllama\"\n\nMore Information needed"
] |
7d52e3f7e51e39fc731a5b053d7e1850da7fd90b
|
# Dataset Card for "vivos_mms"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aiface/vivos_mms
|
[
"region:us"
] |
2023-10-21T05:52:05+00:00
|
{"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "input_length", "dtype": "int64"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 3443268452, "num_examples": 11660}, {"name": "test", "num_bytes": 172149180, "num_examples": 760}], "download_size": 3175004057, "dataset_size": 3615417632}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
|
2023-10-21T05:55:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "vivos_mms"
More Information needed
|
[
"# Dataset Card for \"vivos_mms\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"vivos_mms\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"vivos_mms\"\n\nMore Information needed"
] |
3736ab2b221f6ee58da52b7d6fb6697742fdd1a2
|
# Dataset Card for "undl_text_split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ranWang/undl_text_split
|
[
"region:us"
] |
2023-10-21T05:57:38+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "token_total", "dtype": "int32"}, {"name": "split_symbol", "dtype": "string"}, {"name": "record", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 29153560, "num_examples": 68352}], "download_size": 11816238, "dataset_size": 29153560}}
|
2023-10-21T05:57:47+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "undl_text_split"
More Information needed
|
[
"# Dataset Card for \"undl_text_split\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"undl_text_split\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"undl_text_split\"\n\nMore Information needed"
] |
683d449776f906949674b945b8697d31221afeb8
|
# Dataset Card for "drawbench-upsampled-zephyr-7b-alpha"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sayakpaul/drawbench-upsampled-zephyr-7b-alpha
|
[
"region:us"
] |
2023-10-21T06:37:21+00:00
|
{"dataset_info": {"features": [{"name": "Prompt", "dtype": "string"}, {"name": "Upsampled Prompt", "dtype": "string"}, {"name": "Category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 87868, "num_examples": 200}], "download_size": 53341, "dataset_size": 87868}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-21T06:37:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "drawbench-upsampled-zephyr-7b-alpha"
More Information needed
|
[
"# Dataset Card for \"drawbench-upsampled-zephyr-7b-alpha\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"drawbench-upsampled-zephyr-7b-alpha\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"drawbench-upsampled-zephyr-7b-alpha\"\n\nMore Information needed"
] |
414a2f4d5902d839563d282a2a56c73f3006e517
|
To use the dataset
```py
from datasets import load_dataset
dataset = load_dataset("sulpha/anime-sceneries")
```
This is a web-scraped dataset of (mostly) anime sceneries/paintings, initially scraped to train an unconditional image generation model.
An example fastGAN model utilizing this dataset can be viewed [here](https://github.com/sulphatet/gan-anime-sceneries)
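For GAN training, images usually need to be fixed-size tensors scaled to [-1, 1]. A sketch of that preprocessing; it assumes the common imagefolder layout with an `image` column and a `train` split, neither of which this card confirms:

```python
from datasets import load_dataset
from torchvision import transforms

ds = load_dataset("sulpha/anime-sceneries", split="train")  # split name assumed

# Typical GAN preprocessing: resize, square crop, scale to [-1, 1]
tfm = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])
x = tfm(ds[0]["image"].convert("RGB"))  # "image" column assumed
print(x.shape)  # torch.Size([3, 256, 256])
```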
|
sulpha/anime-sceneries
|
[
"task_categories:unconditional-image-generation",
"license:apache-2.0",
"images",
"region:us"
] |
2023-10-21T07:11:56+00:00
|
{"license": "apache-2.0", "task_categories": ["unconditional-image-generation"], "tags": ["images"]}
|
2023-10-21T08:04:29+00:00
|
[] |
[] |
TAGS
#task_categories-unconditional-image-generation #license-apache-2.0 #images #region-us
|
To use the dataset
This is a web scraped dataset of (mostly) anime sceneries/paintings. Initially scraped to train an unconditional image generation model.
An example fastGAN model utilizing this dataset can be view here
|
[] |
[
"TAGS\n#task_categories-unconditional-image-generation #license-apache-2.0 #images #region-us \n"
] |
[
32
] |
[
"passage: TAGS\n#task_categories-unconditional-image-generation #license-apache-2.0 #images #region-us \n"
] |
7d12fbc3be4a1f7de2050a0e932577bc459da379
|
# Dataset Card for JHumanEval: Japanese Hand-Translated HumanEval
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/KuramitsuLab/jhuman-eval)
## Dataset Summary
This is a Japanese-translated version of HumanEval, the standard benchmark of LLM code-generation ability, serving as an evaluation harness for the problem-solving dataset described in the paper "Evaluating Large Language Models Trained on Code".
All machine-translation output (DeepL, GPT-4) was revised by hand, and Japanese programmers checked that every translated docstring can be read, understood, and implemented as code.
However, mistakes in the original English HumanEval were deliberately left uncorrected, so that, like HumanEval itself, this benchmark measures generation from imperfect documentation.
Please use it as a benchmark for Japanese LLMs.
## Languages
The programming problems are written in Python and contain English and Japanese natural text in comments and docstrings; the English and Japanese versions are kept in separate fields.
## Dataset Structure
```python
from datasets import load_dataset
load_dataset("kogi-jwu/jhumaneval")
DatasetDict({
test: Dataset({
features: ['task_id', 'prompt_en', 'prompt', 'entry_point', 'canonical_solution', 'test'],
num_rows: 164
})
})
```
## Data Instances
An example of a dataset instance:
```
{
"task_id": "test/0",
"prompt_en": "def return1():\n \"\"\"\n A simple function that returns the integer 1.\n \"\"\"\n",
"prompt": "def return1():\n \"\"\"\n 整数1を返すシンプルな関数。\n \"\"\"\n",
"canonical_solution": " return 1",
"test": "def check(candidate):\n assert candidate() == 1",
"entry_point": "return1"
}
```
## Data Fields
- `task_id` : Unique identifier for a task.
- `prompt_en` : Function header and English docstrings as model input.
- `prompt` : Function header and Japanese docstrings, parallel to prompt_en.
- `canonical_solution` : The expected function implementation.
- `test` : Function to verify the correctness of generated code.
- `entry_point` : Function name to initiate the test.
## Data Splits
The dataset only consists of a test split with 164 samples.
## How to Use
An example of computing pass@1 with the reference solutions:
```python
import os
from datasets import load_dataset
from evaluate import load
os.environ["HF_ALLOW_CODE_EVAL"] = "1"
ds = load_dataset("kogi-jwu/jhumaneval")['test']
code_eval = load("code_eval")
candidates = []
test_cases = []
for d in ds:
    # FIXME: the reference solution is used as the prediction here; replace it with model-generated code
    candidates.append([d['prompt'] + d['canonical_solution']])
    # Turn each test into an executable program that calls check() on the entry point
    test_cases.append(d['test'] + f"\n\ncheck({d['entry_point']})\n")

pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1])
print(pass_at_k)
```
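For k > 1, each problem needs several sampled completions rather than the single reference solution; the only change is the contents of the inner candidate lists (the generations below are placeholders):

```python
# Each inner list holds n sampled completions for one problem, with n >= max k, e.g.:
# candidates.append([generation_1, generation_2, ..., generation_10])
pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1, 10])
print(pass_at_k)  # {'pass@1': ..., 'pass@10': ...}
```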
## Additional Information
### Licensing Information
MIT License
|
kogi-jwu/jhumaneval
|
[
"task_categories:text2text-generation",
"size_categories:n<1K",
"source_datasets:openai_humaneval",
"language:ja",
"language:en",
"license:mit",
"region:us"
] |
2023-10-21T07:20:14+00:00
|
{"language": ["ja", "en"], "license": "mit", "size_categories": ["n<1K"], "source_datasets": ["openai_humaneval"], "task_categories": ["text2text-generation"], "dataset_info": {"config_name": "jhumaneval", "features": [{"name": "task_id", "dtype": "string"}, {"name": "prompt_en", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "entry_point", "dtype": "string"}, {"name": "canonical_solution", "dtype": "string"}, {"name": "test", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 275012, "num_examples": 164}], "download_size": 125206, "dataset_size": 275012}, "configs": [{"config_name": "jhumaneval", "data_files": [{"split": "test", "path": "jhumaneval/test-*"}]}]}
|
2024-01-10T21:52:35+00:00
|
[] |
[
"ja",
"en"
] |
TAGS
#task_categories-text2text-generation #size_categories-n<1K #source_datasets-openai_humaneval #language-Japanese #language-English #license-mit #region-us
|
# Dataset Card for JHumanEval: Japanese Hand-Translated HumanEval
## Dataset Description
- Repository: GitHub Repository
## Dataset Summary
This is a Japanese translated version of HumanEval, an evaluation harness for the HumanEval problem solving dataset described in the paper "Evaluating Large Language Models Trained on Code".
LLM のコード生成能力の標準ベンチマーク HumanEval の日本語翻訳版です。
機械翻訳(DeepL, GPT-4)の翻訳結果を全て人手によって再修正し、 訳文を日本人のプログラマが読んで理解し、コードが書ける内容かチェックしました。
ただし、英語版 HumanEval の間違いは、修正せずに残して、 HumanEval 同様に不完全なドキュメントからの生成能力を見るようになっています。
日本語LLM のベンチマークとしてお使いください。
## Languages
The programming problems are written in Python and contain English and Japanese natural text in comments and docstrings.
Python で書かれたプログラミング問題のデータセットには、英語と日本語のコメントやドキュメント文字列がそれぞれ別々に含まれています。
## Dataset Structure
## Data Instances
An example of a dataset instance:
## Data Fields
- 'task_id' : Unique identifier for a task.
- 'prompt_en' : Function header and English docstrings as model input.
- 'prompt' : Function header and Japanese docstrings, parallel to prompt_en.
- 'canonical_solution' : The expected function implementation.
- 'test' : Function to verify the correctness of generated code.
- 'entry_point' : Function name to initiate the test.
## Data Splits
The dataset only consists of a test split with 164 samples.
## How to Use
An example of computing pass@1 using the reference code:
## Additional Information
### Licensing Information
MIT License
|
[
"# Dataset Card for JHumanEval: Japanese Hand-Translated HumanEval",
"## Dataset Description\n\n- Repository: GitHub Repository",
"## Dataset Summary\nThis is a Japanese translated version of HumanEval, an evaluation harness for the HumanEval problem solving dataset described in the paper \"Evaluating Large Language Models Trained on Code\".\n\nLLM のコード生成能力の標準ベンチマーク HumanEval の日本語翻訳版です。 \n機械翻訳(DeepL, GPT-4)の翻訳結果を全て人手によって再修正し、 訳文を日本人のプログラマが読んで理解し、コードが書ける内容かチェックしました。 \nただし、英語版 HumanEval の間違いは、修正せずに残して、 HumanEval 同様に不完全なドキュメントからの生成能力を見るようになっています。 \n日本語LLM のベンチマークとしてお使いください。",
"## Languages\nThe programming problems are written in Python and contain English and Japanese natural text in comments and docstrings. \n\nPython で書かれたプログラミング問題のデータセットには、英語と日本語のコメントやドキュメント文字列がそれぞれ別々に含まれています。",
"## Dataset Structure",
"## Data Instances\nAn example of a dataset instance:",
"## Data Fields\n- 'task_id' : Unique identifier for a task.\n- 'prompt_en' : Function header and English docstrings as model input.\n- 'prompt' : Function header and Japanese docstrings, parallel to prompt_en.\n- 'canonical_solution' : The expected function implementation.\n- 'test' : Function to verify the correctness of generated code.\n- 'entry_point' : Function name to initiate the test.",
"## Data Splits\nThe dataset only consists of a test split with 164 samples.",
"## How to Use\n\n参照コードで pass@1 を算出する例:",
"## Additional Information",
"### Licensing Information\nMIT License"
] |
[
"TAGS\n#task_categories-text2text-generation #size_categories-n<1K #source_datasets-openai_humaneval #language-Japanese #language-English #license-mit #region-us \n",
"# Dataset Card for JHumanEval: Japanese Hand-Translated HumanEval",
"## Dataset Description\n\n- Repository: GitHub Repository",
"## Dataset Summary\nThis is a Japanese translated version of HumanEval, an evaluation harness for the HumanEval problem solving dataset described in the paper \"Evaluating Large Language Models Trained on Code\".\n\nLLM のコード生成能力の標準ベンチマーク HumanEval の日本語翻訳版です。 \n機械翻訳(DeepL, GPT-4)の翻訳結果を全て人手によって再修正し、 訳文を日本人のプログラマが読んで理解し、コードが書ける内容かチェックしました。 \nただし、英語版 HumanEval の間違いは、修正せずに残して、 HumanEval 同様に不完全なドキュメントからの生成能力を見るようになっています。 \n日本語LLM のベンチマークとしてお使いください。",
"## Languages\nThe programming problems are written in Python and contain English and Japanese natural text in comments and docstrings. \n\nPython で書かれたプログラミング問題のデータセットには、英語と日本語のコメントやドキュメント文字列がそれぞれ別々に含まれています。",
"## Dataset Structure",
"## Data Instances\nAn example of a dataset instance:",
"## Data Fields\n- 'task_id' : Unique identifier for a task.\n- 'prompt_en' : Function header and English docstrings as model input.\n- 'prompt' : Function header and Japanese docstrings, parallel to prompt_en.\n- 'canonical_solution' : The expected function implementation.\n- 'test' : Function to verify the correctness of generated code.\n- 'entry_point' : Function name to initiate the test.",
"## Data Splits\nThe dataset only consists of a test split with 164 samples.",
"## How to Use\n\n参照コードで pass@1 を算出する例:",
"## Additional Information",
"### Licensing Information\nMIT License"
] |
[
57,
19,
15,
169,
59,
6,
13,
115,
19,
18,
5,
8
] |
[
"passage: TAGS\n#task_categories-text2text-generation #size_categories-n<1K #source_datasets-openai_humaneval #language-Japanese #language-English #license-mit #region-us \n# Dataset Card for JHumanEval: Japanese Hand-Translated HumanEval## Dataset Description\n\n- Repository: GitHub Repository## Dataset Summary\nThis is a Japanese translated version of HumanEval, an evaluation harness for the HumanEval problem solving dataset described in the paper \"Evaluating Large Language Models Trained on Code\".\n\nLLM のコード生成能力の標準ベンチマーク HumanEval の日本語翻訳版です。 \n機械翻訳(DeepL, GPT-4)の翻訳結果を全て人手によって再修正し、 訳文を日本人のプログラマが読んで理解し、コードが書ける内容かチェックしました。 \nただし、英語版 HumanEval の間違いは、修正せずに残して、 HumanEval 同様に不完全なドキュメントからの生成能力を見るようになっています。 \n日本語LLM のベンチマークとしてお使いください。## Languages\nThe programming problems are written in Python and contain English and Japanese natural text in comments and docstrings. \n\nPython で書かれたプログラミング問題のデータセットには、英語と日本語のコメントやドキュメント文字列がそれぞれ別々に含まれています。## Dataset Structure## Data Instances\nAn example of a dataset instance:## Data Fields\n- 'task_id' : Unique identifier for a task.\n- 'prompt_en' : Function header and English docstrings as model input.\n- 'prompt' : Function header and Japanese docstrings, parallel to prompt_en.\n- 'canonical_solution' : The expected function implementation.\n- 'test' : Function to verify the correctness of generated code.\n- 'entry_point' : Function name to initiate the test.## Data Splits\nThe dataset only consists of a test split with 164 samples.## How to Use\n\n参照コードで pass@1 を算出する例:## Additional Information### Licensing Information\nMIT License"
] |
551acb4ed85612e53700be8402d89fb4f8d16775
|
* Updated 2023-12-04: improved the answer format; every answer is now required to quote the source passage before answering. The old version of the QA data has been moved to the `old` folder.
# Chinese Multi-Document QA Dataset
* The source reference documents all come from the [WuDao open-source 200 GB corpus](https://data.baai.ac.cn/data).
* Questions and answers were generated automatically by a large language model (gpt-3.5), but they are of high quality.
* In the raw dataset, each sample contains <font color=red> one reference document, 99 irrelevant documents, one question, and one answer based on the reference document</font>. It can be used to train a model's ability to extract key information from a large set of documents. Documents from different domains are stored in separate JSON files.
* After filtering and consolidation, the raw data is converted into ChatML-style instruction-tuning data in which each record contains roughly 30 reference documents and 5 corresponding question-answer pairs.
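A minimal loading sketch; the file path below is hypothetical (pick one of the per-domain JSON files in the repository), and the field names should be inspected rather than assumed:

```python
from datasets import load_dataset

# Hypothetical file name: substitute one of the domain-specific JSON files
ds = load_dataset("json", data_files="path/to/one_domain.json", split="train")
print(ds[0].keys())  # inspect the actual schema before relying on field names
```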
|
yuyijiong/Multi-Doc-QA-Chinese
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:zh",
"license:cc-by-nc-4.0",
"region:us"
] |
2023-10-21T07:23:55+00:00
|
{"language": ["zh"], "license": "cc-by-nc-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"]}
|
2023-12-06T04:38:57+00:00
|
[] |
[
"zh"
] |
TAGS
#task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #license-cc-by-nc-4.0 #region-us
|
* Updated 2023-12-04: improved the answer format; every answer is now required to quote the source passage before answering. The old version of the QA data has been moved to the `old` folder.
# Chinese Multi-Document QA Dataset
* The source reference documents all come from the WuDao open-source 200 GB corpus.
* Questions and answers were generated automatically by a large language model (gpt-3.5), but they are of high quality.
* In the raw dataset, each sample contains <font color=red> one reference document, 99 irrelevant documents, one question, and one answer based on the reference document</font>. It can be used to train a model's ability to extract key information from a large set of documents. Documents from different domains are stored in separate JSON files.
* After filtering and consolidation, the raw data is converted into ChatML-style instruction-tuning data in which each record contains roughly 30 reference documents and 5 corresponding question-answer pairs.
|
[
"# 中文多文档问答数据集\n* 参考文档源数据均来自悟道开源200G数据\n* 问题和回答是通过大语言模型(gpt-3.5)自动生成的,但质量很高。\n* raw数据集中,每个样本包含 <font color=red> 一个参考文档、99个无关文档、一个问题、一个基于参考文档的回答</font>。可以训练模型从大量文档中抽取关键信息的能力。不同领域的文档保存在不同json文件中。\n* 原始数据经过筛选、整合转化为chatml形式的指令微调数据后,每条数据大约包含30个参考文档,以及5个对应的问答对。"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #license-cc-by-nc-4.0 #region-us \n",
"# 中文多文档问答数据集\n* 参考文档源数据均来自悟道开源200G数据\n* 问题和回答是通过大语言模型(gpt-3.5)自动生成的,但质量很高。\n* raw数据集中,每个样本包含 <font color=red> 一个参考文档、99个无关文档、一个问题、一个基于参考文档的回答</font>。可以训练模型从大量文档中抽取关键信息的能力。不同领域的文档保存在不同json文件中。\n* 原始数据经过筛选、整合转化为chatml形式的指令微调数据后,每条数据大约包含30个参考文档,以及5个对应的问答对。"
] |
[
45,
158
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #license-cc-by-nc-4.0 #region-us \n# 中文多文档问答数据集\n* 参考文档源数据均来自悟道开源200G数据\n* 问题和回答是通过大语言模型(gpt-3.5)自动生成的,但质量很高。\n* raw数据集中,每个样本包含 <font color=red> 一个参考文档、99个无关文档、一个问题、一个基于参考文档的回答</font>。可以训练模型从大量文档中抽取关键信息的能力。不同领域的文档保存在不同json文件中。\n* 原始数据经过筛选、整合转化为chatml形式的指令微调数据后,每条数据大约包含30个参考文档,以及5个对应的问答对。"
] |
46ced9820aeaab312140eca3863eaa7fceba859c
|
# Dataset Card for "KAP4ICL-C4-UL2-15k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
crumb/KAP4ICL-C4-UL2-15k
|
[
"region:us"
] |
2023-10-21T07:24:20+00:00
|
{"dataset_info": {"features": [{"name": "combined_facts_text", "dtype": "string"}, {"name": "raw_text", "dtype": "string"}, {"name": "raw_facts", "sequence": "string"}, {"name": "raw_fact_prompts", "sequence": "string"}, {"name": "raw_topics", "sequence": "string"}, {"name": "raw_topic_prompts", "sequence": "string"}, {"name": "len_text", "dtype": "int64"}, {"name": "num_identifications", "dtype": "int64"}, {"name": "base_topic_count", "dtype": "int64"}, {"name": "len_raw_text", "dtype": "int64"}, {"name": "len_raw_facts", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 82413380, "num_examples": 15000}], "download_size": 47256239, "dataset_size": 82413380}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-21T07:24:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "KAP4ICL-C4-UL2-15k"
More Information needed
|
[
"# Dataset Card for \"KAP4ICL-C4-UL2-15k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"KAP4ICL-C4-UL2-15k\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"KAP4ICL-C4-UL2-15k\"\n\nMore Information needed"
] |
7ff517dc4911678ae27fae86e4ba6ea866c1e2af
|
# Chinese Book Summarization Dataset
Each sample contains:
<font color=red> one chapter of a book, a summary of that chapter, and the book's title</font>, which can be used to train a model's ability to summarize long texts.\
The data comes mainly from well-known Chinese-language novels.
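A minimal loading sketch (whether a default config exists, and the exact field names, are assumptions to verify against the repository files):

```python
from datasets import load_dataset

ds = load_dataset("yuyijiong/Book_Summary_Chinese", split="train")
example = ds[0]
print(example.keys())  # expected: the chapter text, its summary, and the book title
```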
|
yuyijiong/Book_Summary_Chinese
|
[
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:zh",
"license:cc-by-nc-4.0",
"region:us"
] |
2023-10-21T07:30:30+00:00
|
{"language": ["zh"], "license": "cc-by-nc-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"]}
|
2023-10-21T07:35:15+00:00
|
[] |
[
"zh"
] |
TAGS
#task_categories-text-generation #size_categories-1K<n<10K #language-Chinese #license-cc-by-nc-4.0 #region-us
|
# Chinese Book Summarization Dataset
Each sample contains:
<font color=red> one chapter of a book, a summary of that chapter, and the book's title</font>, which can be used to train a model's ability to summarize long texts.\
The data comes mainly from well-known Chinese-language novels.
|
[
"# 中文图书总结数据集\n每个样本包含:\n<font color=red> 图书的一个章节、此章节的总结、图书名字</font>,可以训练模型总结长文本的能力。\\\n数据主要来自较为著名的中文版小说。"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-Chinese #license-cc-by-nc-4.0 #region-us \n",
"# 中文图书总结数据集\n每个样本包含:\n<font color=red> 图书的一个章节、此章节的总结、图书名字</font>,可以训练模型总结长文本的能力。\\\n数据主要来自较为著名的中文版小说。"
] |
[
45,
56
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-Chinese #license-cc-by-nc-4.0 #region-us \n# 中文图书总结数据集\n每个样本包含:\n<font color=red> 图书的一个章节、此章节的总结、图书名字</font>,可以训练模型总结长文本的能力。\\\n数据主要来自较为著名的中文版小说。"
] |
8f6263ea00fb4ad4de691aac83a4d0b77e2c8840
|
# Dataset Card for "tamil-alpaca"
This repository includes Tamil-translated versions of the [Alpaca dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned) and a subset of the [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset.
This dataset is part of the release of the Tamil LLaMA family of models, an important step in advancing LLMs for the Tamil language. To dive deep into the development and capabilities of these models, please read the [research paper](https://arxiv.org/abs/2311.05845) and the introductory blog post (WIP) that outline our journey and the models' potential impact.
**GitHub Repository:** [https://github.com/abhinand5/tamil-llama](https://github.com/abhinand5/tamil-llama)
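To experiment with the data, it can be pulled with the standard `datasets` loader; a minimal sketch (the `train` split name follows this repository's config):

```python
from datasets import load_dataset

ds = load_dataset("abhinand/tamil-alpaca-orca", split="train")
print(ds[0])
```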
## Models trained using this dataset
| Model | Type | Data | Base Model | # Params | Download Links |
|--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------|
| Tamil LLaMA 7B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) |
| Tamil LLaMA 13B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1) |
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)
## Citation
If you use this model or any of the Tamil-Llama datasets in your research, please cite:
```bibtex
@misc{balachandran2023tamilllama,
title={Tamil-Llama: A New Tamil Language Model Based on Llama 2},
author={Abhinand Balachandran},
year={2023},
eprint={2311.05845},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
abhinand/tamil-alpaca-orca
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:ta",
"license:gpl-3.0",
"arxiv:2311.05845",
"region:us"
] |
2023-10-21T07:33:54+00:00
|
{"language": ["ta"], "license": "gpl-3.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "tamil-alpaca-orca", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-11-24T14:39:23+00:00
|
[
"2311.05845"
] |
[
"ta"
] |
TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-Tamil #license-gpl-3.0 #arxiv-2311.05845 #region-us
|
Dataset Card for "tamil-alpaca"
===============================
This repository includes Tamil-translated versions of the Alpaca dataset and a subset of the OpenOrca dataset.
This dataset is part of the release of the Tamil LLaMA family of models, an important step in advancing LLMs for the Tamil language. To dive deep into the development and capabilities of these models, please read the research paper and the introductory blog post (WIP) that outline our journey and the models' potential impact.
GitHub Repository: URL
Models trained using this dataset
---------------------------------
Meet the Developers
-------------------
Get to know the creators behind this innovative model and follow their contributions to the field:
* Abhinand Balachandran
If you use this model or any of the Tamil-Llama datasets in your research, please cite:
|
[] |
[
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Tamil #license-gpl-3.0 #arxiv-2311.05845 #region-us \n"
] |
[
50
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Tamil #license-gpl-3.0 #arxiv-2311.05845 #region-us \n"
] |
3002582f3d66cce5fa5849cc02f0bd8e212a1f4b
|
# Dataset Card for "twolabels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jay401521/twolabels
|
[
"region:us"
] |
2023-10-21T07:34:10+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "domain", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "rank", "dtype": "int64"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6505957, "num_examples": 70594}], "download_size": 0, "dataset_size": 6505957}}
|
2023-10-21T08:26:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "twolabels"
More Information needed
|
[
"# Dataset Card for \"twolabels\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"twolabels\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"twolabels\"\n\nMore Information needed"
] |
fd7f7deeb63311ca34e5af37a344a39fc5e0fea0
|
# Dataset Card for "imagenet-1k-rand_canny_colorgrid"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
acozma/imagenet-1k-rand_canny_colorgrid
|
[
"region:us"
] |
2023-10-21T07:39:14+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "params", "struct": [{"name": "downsample", "dtype": "int64"}, {"name": "grid_size", "dtype": "int64"}, {"name": "high_threshold", "dtype": "int64"}, {"name": "low_threshold", "dtype": "int64"}, {"name": "sigma", "dtype": "float64"}]}], "splits": [{"name": "train", "num_bytes": 189495511555.0, "num_examples": 500000}], "download_size": 14160878993, "dataset_size": 189495511555.0}}
|
2023-10-31T08:53:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "imagenet-1k-rand_canny_colorgrid"
More Information needed
|
[
"# Dataset Card for \"imagenet-1k-rand_canny_colorgrid\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"imagenet-1k-rand_canny_colorgrid\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"imagenet-1k-rand_canny_colorgrid\"\n\nMore Information needed"
] |
315983a34e9729fdafb31ade8dbc762e01cd1228
|
#### **TLDR**
- website: [Vikatan-MY](https://www.vikatan.com/topics/malaysia)
- num. of webpages scraped: 65 (7 locked behind paywall)
- link to dataset: https://huggingface.co/datasets/wanadzhar913/crawl-vikatan-my/resolve/main/vikatan-my-scraped-data.jsonl
- date of scraping: 21st October 2023
- pull request: mesolitica/malaysian-dataset#353
- contributed to: https://github.com/mesolitica/malaysian-dataset
|
wanadzhar913/crawl-vikatan-my
|
[
"language:ta",
"license:apache-2.0",
"region:us"
] |
2023-10-21T07:55:43+00:00
|
{"language": ["ta"], "license": "apache-2.0"}
|
2023-10-21T10:41:57+00:00
|
[] |
[
"ta"
] |
TAGS
#language-Tamil #license-apache-2.0 #region-us
|
#### TLDR
- website: Vikatan-MY
- num. of webpages scraped: 65 (7 locked behind paywall)
- link to dataset: URL
- date of scraping: 21st October 2023
- pull request: mesolitica/malaysian-dataset#353
- contributed to: URL
|
[
"#### TLDR\n- website: Vikatan-MY\n- num. of webpages scraped: 65 (7 locked behind paywall)\n- link to dataset: URL\n- date of scraping: 21st October 2023\n- pull request: mesolitica/malaysian-dataset#353\n- contributed to: URL"
] |
[
"TAGS\n#language-Tamil #license-apache-2.0 #region-us \n",
"#### TLDR\n- website: Vikatan-MY\n- num. of webpages scraped: 65 (7 locked behind paywall)\n- link to dataset: URL\n- date of scraping: 21st October 2023\n- pull request: mesolitica/malaysian-dataset#353\n- contributed to: URL"
] |
[
18,
68
] |
[
"passage: TAGS\n#language-Tamil #license-apache-2.0 #region-us \n#### TLDR\n- website: Vikatan-MY\n- num. of webpages scraped: 65 (7 locked behind paywall)\n- link to dataset: URL\n- date of scraping: 21st October 2023\n- pull request: mesolitica/malaysian-dataset#353\n- contributed to: URL"
] |
f3ee7212619e0afd682a1101828221959bf42d1b
|
# Dataset Card for TExtPhish
## Dataset Description
### Dataset Summary
This dataset card aims to describe the **TExtPhish** collection and its intended use.
### Languages
The current version includes only data samples in English, drawn in part from Reddit users' posts in the [r/Scams](https://www.reddit.com/r/Scams/comments/n00kg3/the_blackmail_email_scam_part_7/###) blackmail threads.
In the future, we would like to explore more languages. Collaborators are encouraged to contact the authors to extend the current version with more diverse extortion emails in different languages.
## Dataset Structure
### Initial Data Collection and Sanitization
First, we select benign samples from publicly available datasets such as Enron and SpamAssassin.
We extract each email from its thread and tokenize personally sensitive information using named entity recognition, regular expressions, and synthetic replacement.
Second, we collect extortion attacks from [r/Scams](https://www.reddit.com/r/Scams/comments/n00kg3/the_blackmail_email_scam_part_7/###) Reddit posts and botnet ransomware emails from the [Malware Traffic Analysis repository](https://www.malware-traffic-analysis.net).
We remove unnecessary comments from the Reddit threads and keep only the extortion emails.
To make the dataset challenging, we keep only the benign emails that are most semantically similar to the extortion attacks.
For semantic textual similarity, we first apply sentence transformers (SBERT) to obtain contextual sentence embeddings of the benign and extortion samples.
Then we apply Facebook AI Similarity Search (FAISS) to retrieve the benign instances most similar to the extortion attacks.
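A minimal sketch of this filtering step, assuming `benign_texts` and `extortion_texts` are lists of email strings; the specific SBERT checkpoint and the number of neighbors are illustrative choices, not details from the dataset construction:

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder SBERT checkpoint

# Encode and L2-normalize so that inner product equals cosine similarity
benign_emb = model.encode(benign_texts, normalize_embeddings=True)
extortion_emb = model.encode(extortion_texts, normalize_embeddings=True)

index = faiss.IndexFlatIP(benign_emb.shape[1])  # exact inner-product search
index.add(np.asarray(benign_emb, dtype="float32"))

# For each extortion email, retrieve the most similar benign emails
_, neighbors = index.search(np.asarray(extortion_emb, dtype="float32"), 5)
hard_benign = {int(i) for row in neighbors for i in row}  # indices of retained benign emails
```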
### Data Instances
|Extortion Class| Examples from Sentence-level subset|
|---|---|
|Blackmail| - I will delete the corresponding recording and I will not blackmail you ever again.|
|Ransomware| - Tap to Download Attachment Xinalink_servicescom (10.3 KB).|
|Sextortion| - In case you ignore me, within 96 h, ur sex tape will be posted on the net.|
### Data Sources
The following tables describe the data sources used to generate this dataset.
* **Extortion Data**
|Source|Total number of Emails| Total number of Sentences|
|---|---|---|
|[r/Scams](https://www.reddit.com/r/Scams/comments/n00kg3/the_blackmail_email_scam_part_7/###) Extortion Emails | 1,113 | 17,393 |
|Botnet Ransomware Emails | 150 | 1,510 |
* **Benign Data**
|Source|Total number of Emails| Total number of Sentences|
|---|---|---|
|[Enron](https://www.cs.cmu.edu/~enron/)| 1,360 | 26,835 |
|[SpamAssasin](https://spamassassin.apache.org/old/publiccorpus/)| 1,010 | 12,348 |
### Data Fields
The dataset is structured as follows:
```
list[{
    "src": str,      # Data source (e.g., SpamAssassin, Enron, Reddit)
    "content": str,  # Content (sentence-level or email-level)
    "label": str,    # Extortion label (blackmail, ransomware, sextortion) or benign label
}]
```
### Loading TExtPhish Dataset
To load the email-level subset, use the following instructions:
```python
from datasets import load_dataset

email_subset = load_dataset("TExtPhish/TExtPhish", data_dir="email-level", split="train", sep=";")
```
To load the sentence-level subset, use the following instructions:
```python
sentence_subset = load_dataset("TExtPhish/TExtPhish", data_dir="sentence-level", split="train", sep=";")
```
To load the homograph-perturbed subset on sentences, use the following instructions:
```python
homograph_subset = load_dataset("TExtPhish/TExtPhish", data_dir="homograph-perturbed-sentences", split="train", sep=";")
```
### Splitting TExtPhish Dataset
If you would like to load the dataset in a cross-validation setting,
you can load the train or test folds, which divide the data into k parts (example below, k=10).
```python
test_folds = load_dataset('TExtPhish/TExtPhish', split=[f"train[{k}%:{k+10}%]" for k in range(0, 100, 10)], data_dir="sentence-level", sep=';')
train_folds = load_dataset('TExtPhish/TExtPhish', split=[f"train[:{k}%]+train[{k+10}%:]" for k in range(0, 100, 10)], data_dir="sentence-level", sep=';')
```
These ready-to-use folds divide TExtPhish randomly into k=10 parts.
Nine of the parts are used for training while the remaining tenth is reserved for testing.
The procedure is repeated k=10 times, each time reserving a different tenth for testing. In other words, each test set is a 10% chunk, and the training set is the complementary 90% chunk.
### Binarize Labels
```python
from sklearn.preprocessing import LabelEncoder

# Transform text labels into encoded integer labels
multibin = LabelEncoder()
Y_train = multibin.fit_transform(Y_train)
Y_test = multibin.transform(Y_test)  # reuse the encoder fitted on the training labels
```
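As an illustration of the intended use, a small baseline classifier on the sentence-level subset; the TF-IDF plus logistic-regression pipeline is our illustrative choice, not part of the dataset itself, and `X_train`/`X_test` are assumed to hold the `content` field:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline

# X_train / X_test: raw sentences; Y_train / Y_test: encoded labels from above
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
baseline.fit(X_train, Y_train)
print(classification_report(Y_test, baseline.predict(X_test)))
```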
### Personal and Sensitive Information
We ensured that all personal and sensitive information was removed before uploading our dataset.
The emails provided in this corpus are stripped of sensitive information, which is replaced with tokens (e.g., url_token), synthetically substituted, or originally obfuscated (***) in order to anonymize the data.
## Considerations for Using the Data
### Intended Uses
Our collection may only be used for linguistic non-profit research including but not limited to Information Retrieval, Text Classification, Natural Language Processing, Machine Learning, Phishing Detection, Data Privacy and Security, and related fields.
### Social Impact of Dataset
Users are totally responsible for any misuse of the dataset that goes against the original intended use of this dataset.
The extortion dataset should not be used for any harmful means to institute and propagate attacks.
*Positive Social Impact*
* Researchers can use **TExtPhish** to study the tactics and techniques used by attackers, identify vulnerabilities, and develop effective countermeasures against extortion.
* Educators can use **TExtPhish** to teach students about online safety, how to recognize phishing extortion attempts, and best practices for protecting personal information and financial loss.
* Cybersecurity professionals can use **TExtPhish** to train machine learning models to detect and block phishing emails with money extortion attempts, improving incident response strategies, and minimizing financial loss exposure.
*Negative Social Impact*
* Attackers might use **TExtPhish** to create automatic botnets that generate better extortion attacks.
* Attackers might use **TExtPhish** to propagate deception and propaganda online.
* Attackers might attempt to use **TExtPhish** as an initializing phase to perform malware, ransomware, or embed trojans within a targeted system to gain remote access.
## Additional Information
### Licensing Information
As the maintainers of this dataset, we have chosen the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license to ensure that the dataset remains non-commercial and that no document from the Collection can be distributed or reproduced, in whole or in part.
A portion of our dataset was downloaded using Reddit's API wrapper, the PRAW package for the Python programming language. Re-use of this data is subject to Reddit's API terms, which include:
* Users shall not encourage or promote illegal activity throughout the use of this dataset.
* Users shall not use this dataset with the intent of introducing any viruses, worms, defects, Trojan horses, malware, or any other items of a destructive nature.
* Users shall not sell, lease, or sublicense this data whether for direct commercial or monetary gain.
### Citation Information
Information about citation will soon be updated.
|
TExtPhish/TExtPhish
|
[
"task_categories:text-classification",
"task_categories:sentence-similarity",
"language:en",
"license:cc-by-nc-nd-4.0",
"security",
"ML",
"NLP",
"sentiment",
"region:us"
] |
2023-10-21T07:57:29+00:00
|
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["text-classification", "sentence-similarity"], "pretty_name": "TExtPhish", "tags": ["security", "ML", "NLP", "sentiment"], "extra_gated_heading": "You need to agree to share your contact information to access TExtPhish", "extra_gated_prompt": "The emails in the **TExtPhish** Email Collection Corpus are under the license of ***cc-by-nc-nd-4.0***, and their use is governed by the following agreements: \n - You agree to not distribute or reproduce any derivatives, in whole or in part, any document from the Collection. \n- You agree to not attempt to identify, or speculate on the identity of, any individual in **TExtPhish** Collection, even if that information is available from public sources.\n - Re-use of this data is also subject to Reddit API terms which includes: \n * not encouraging or promoting illegal activity. \n * not using this dataset with the intent of introducing any viruses, worms, defects, Trojan horses, malware, or any other items of a destructive nature. \n * no selling, leasing, or sublicensing this data whether for direct commercial or monetary gain. \n\nIn the event that End User violates the terms of this agreement, then upon notice from the dataset maintainers, end users shall cease use of the collection and destroy all copies of the collection and other documents that contain excerpts from the Collection.\n\nWe would like to keep track of this dataset users for statistics purposes (how many users and affiliations) and agreement only.", "I agree to use TExtPhish dataset for non-commercial intended use ONLY": "checkbox", "extra_gated_button_content": "Acknowledge License", "viewer": false}
|
2023-10-21T08:09:59+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #task_categories-sentence-similarity #language-English #license-cc-by-nc-nd-4.0 #security #ML #NLP #sentiment #region-us
|
Dataset Card for TExtPhish
==========================
Dataset Description
-------------------
### Dataset Summary
This dataset card aims to describe the TExtPhish collection and its intended use.
### Languages
The current version includes only data samples in English, drawn in part from Reddit users' posts in the r/Scams blackmail threads.
In the future, we would like to explore more languages. Collaborators are encouraged to contact the authors to extend the current version with more diverse extortion emails in different languages.
Dataset Structure
-----------------
### Initial Data Collection and Sanitization
First, we select benign samples from publicly available datasets such as Enron and SpamAssassin.
We extract each email from its thread and tokenize personally sensitive information using named entity recognition, regular expressions, and synthetic replacement.
Second, we collect extortion attacks from r/Scams Reddit posts and botnet ransomware emails from the Malware Traffic Analysis repository.
We remove unnecessary comments from the Reddit threads and keep only the extortion emails.
To make the dataset challenging, we keep only the benign emails that are most semantically similar to the extortion attacks.
For semantic textual similarity, we first apply sentence transformers (SBERT) to obtain contextual sentence embeddings of the benign and extortion samples.
Then we apply Facebook AI Similarity Search (FAISS) to retrieve the benign instances most similar to the extortion attacks.
### Data Instances
### Data Sources
The following tables describe the data sources used to generate this dataset.
* Extortion Data
Source: r/Scams Extortion Emails, Total number of Emails: 1,113, Total number of Sentences: 17,393
Source: Botnet Ransomware Emails, Total number of Emails: 150, Total number of Sentences: 1,510
* Benign Data
Source: Enron, Total number of Emails: 1,360, Total number of Sentences: 26,835
Source: SpamAssasin, Total number of Emails: 1,010, Total number of Sentences: 12,348
### Data Fields
The dataset is structured as follows:
```
list[{
"src": str, # Data source (e.g, SpamAssassin, Enron, Reddit)
"content": str, # Content (sentence-level or email-level)
"label": str, # Extortion label (blackmail, ransomware, sextortion) or benign label
}]
```
### Loading TExtPhish Dataset
To load the email-level subset, use the following instructions:
```
email_subset = load_dataset("TExtPhish/TExtPhish", data_dir="email-level", split="train", sep=";")
```
To load the sentence-level subset, use the following instructions:
```
sentence_subset = load_dataset("TExtPhish/TExtPhish", data_dir="sentence-level", split="train", sep=";")
```
To load the Homograph-Perturbed subset on sentences, use the following instructions:
```
homograph_subset = load_dataset("TExtPhish/TExtPhish", data_dir="homograph-perturbed-sentences", split="train", sep=";")
```
### Splitting TExtPhish Dataset
If you would like to load the dataset under cross validation setting,
you can load (train or test) which will be divided into k folds (example below k=10).
```
test_folds = load_dataset('TExtPhish/TExtPhish', split=[f"train[{k}%:{k+10}%]" for k in range(0, 100, 10)], data_dir="sentence-level", sep=';')
train_folds = load_dataset('TExtPhish/TExtPhish',split=[f"train[:{k}%]+train[{k+10}%:]" for k in range(0, 100, 10)], data_dir="sentence-level", sep=';')
```
These ready-to-use folds divide TExtPhish randomly into k=10 parts.
Nine of the parts are used for training while the remaining tenth is reserved for testing.
The procedure is repeated k=10 times, each time reserving a different tenth for testing. In other words, each test set is a 10% chunk, and the training set is the complementary 90% chunk.
### Binarize Labels
```
from sklearn.preprocessing import LabelEncoder
# Transform text labels into encoded integer labels
multibin = LabelEncoder()
Y_train = multibin.fit_transform(Y_train)
Y_test = multibin.transform(Y_test)  # reuse the encoder fitted on the training labels
```
### Personal and Sensitive Information
We ensured that all personal and sensitive information was removed before uploading our dataset.
The emails provided in this corpus are stripped of sensitive information, which is replaced with tokens (e.g., url\_token), synthetically substituted, or originally obfuscated (\*) in order to anonymize the data.
Considerations for Using the Data
---------------------------------
### Intended Uses
Our collection may only be used for linguistic non-profit research including but not limited to Information Retrieval, Text Classification, Natural Language Processing, Machine Learning, Phishing Detection, Data Privacy and Security, and related fields.
### Social Impact of Dataset
Users are totally responsible for any misuse of the dataset that goes against the original intended use of this dataset.
The extortion dataset should not be used for any harmful means to institute and propagate attacks.
*Positive Social Impact*
* Researchers can use TExtPhish to study the tactics and techniques used by attackers, identify vulnerabilities, and develop effective countermeasures against extortion.
* Educators can use TExtPhish to teach students about online safety, how to recognize phishing extortion attempts, and best practices for protecting personal information and financial loss.
* Cybersecurity professionals can use TExtPhish to train machine learning models to detect and block phishing emails with money extortion attempts, improving incident response strategies, and minimizing financial loss exposure.
*Negative Social Impact*
* Attackers might use TExtPhish to create automatic botnets that generate better extortion attacks.
* Attackers might use TExtPhish to propagate deception and propaganda online.
* Attackers might attempt to use TExtPhish as an initializing phase to perform malware, ransomware, or embed trojans within a targeted system to gain remote access.
Additional Information
----------------------
### Licensing Information
As the maintainers of this dataset, we have chosen the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license to ensure that the dataset remains non-commercial and that no document from the Collection can be distributed or reproduced, in whole or in part.
A portion of our dataset was downloaded using Reddit's API wrapper, the PRAW package for the Python programming language. Re-use of this data is subject to Reddit's API terms, which include:
* Users shall not encourage or promote illegal activity throughout the use of this dataset.
* Users shall not use this dataset with the intent of introducing any viruses, worms, defects, Trojan horses, malware, or any other items of a destructive nature.
* Users shall not sell, lease, or sublicense this data whether for direct commercial or monetary gain.
Information about citation will soon be updated.
|
[
"### Dataset Summary\n\n\nThis dataset card aims to describe the TExtPhish collection and its intended use.",
"### Languages\n\n\nThe current version only includes data samples in English, as spoken partially by Reddit users on the r/Scams blackmail subreddits.\nIn the Future, we would like to explore more in different languages. Collaborators are encouraged to contact the authors to extend the current version with more diverse extortion emails in different languages.\n\n\nDataset Structure\n-----------------",
"### Initial Data Collection and Sanitization\n\n\nFirst, we select benign samples from the publicly available dataset, such as Enron and SpamAssassin.\nWe extract each email from email threads and tokenize personally sensitive information using name entity recognition, regular expression and synthetically replaced information.\n\n\nSecond, we collect extortion attacks from reddit posts |r/Scams and botnet ransomware emails from |Malware Traffic Analysis repository.\nWe remove unecessary comment from the reddit thread and we only keep extortion emails.\n\n\nTo make the dataset challenging, we keep only the most semantically similar benign emails to the extortion attacks.\nFor semantic textual similarity, we first applied sentence transformers (SBERT) to get contextual sentence embeddings of benign and extortion samples.\nThen, we apply the Facebook AI Similarity Search (FAISS) measure to search for similar benign instances to extortion attacks.",
"### Data Instances",
"### Data Sources\n\n\nThe following tables describe the data sources used to generate this dataset.\n\n\n* Extortion Data\n\n\nSource: r/Scams Extortion Emails, Total number of Emails: 1,113, Total number of Sentences: 17,393\nSource: Botnet Ransomware Emails, Total number of Emails: 150, Total number of Sentences: 1,510\n\n\n* Benign Data\n\n\nSource: Enron, Total number of Emails: 1,360, Total number of Sentences: 26,835\nSource: SpamAssasin, Total number of Emails: 1,010, Total number of Sentences: 12,348",
"### Data Fields\n\n\nThe dataset is structered as follow:\n\n\n\n```\nlist[{\n \"src\": str, # Data source (e.g, SpamAssassin, Enron, Reddit)\n \"content\": str, # Content (sentence-level or email-level)\n \"label\": str, # Extortion label (blackmail, ransomware, sextortion) or benign label\n }]\n\n```",
"### Loading TExtPhish Dataset\n\n\nTo load the email-level subset, use the following instructions:\n\n\n\n```\nemail_subset = load_dataset(\"TExtPhish/TExtPhish\", data_dir=\"email-level\", split=\"train\", sep=\";\")\n\n```\n\nTo load the sentence-level subset, use the following instructions:\n\n\n\n```\nsentence_subset = load_dataset(\"TExtPhish/TExtPhish\", data_dir=\"sentence-level\", split=\"train\", sep=\";\")\n\n```\n\nTo load the Homograph-Perturbed subset on sentences, use the following instructions:\n\n\n\n```\nhomograph_subset = load_dataset(\"TExtPhish/TExtPhish\", data_dir=\"homograph-perturbed-sentences\", split=\"train\", sep=\";\")\n\n```",
"### Splitting TExtPhish Dataset\n\n\nIf you would like to load the dataset under cross validation setting,\nyou can load (train or test) which will be divided into k folds (example below k=10).\n\n\n\n```\ntest_folds = load_dataset('TExtPhish/TExtPhish', split=[f\"train[{k}%:{k+10}%]\" for k in range(0, 100, 10)], data_dir=\"sentence-level\", sep=';')\ntrain_folds = load_dataset('TExtPhish/TExtPhish',split=[f\"train[:{k}%]+train[{k+10}%:]\" for k in range(0, 100, 10)], data_dir=\"sentence-level\", sep=';')\n\n```\n\nThis easy and ready-to-use divided folds consist of dividing randomly TExtPhish into k=10 parts.\nNine of these parts are used for training while one tenth is reserved for testing.\nThis procedure will be repeated k=10 times each time reserving a different tenth for testing. In other words, each testing set is a 10% chunk, and the training set makes up the remaining complementary 90% chunk.",
"### Binarize Labels\n\n\n\n```\nfrom sklearn.preprocessing import LabelEncoder",
"# Transforming text labels to encoded labels using the MultiLabelBinarizer\nmultibin = LabelEncoder()\nY_train = multibin.fit_transform(Y_train)\nY_test = multibin.fit_transform(Y_test)\n\n```",
"### Personal and Sensitive Information\n\n\nWe ensure to remove any personal and sensitive information before uploading our dataset.\nThe emails provided in this corpus are stripped from sensitive information that are replaced with tokens (e.g., url\\_token), synthetically replaced, or originally obfuscated (\\*) in order to anonymize the data.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Intended Uses\n\n\nOur collection may only be used for linguistic non-profit research including but not limited to Information Retrieval, Text Classification, Natural Language Processing, Machine Learning, Phishing Detection, Data Privacy and Security, and like fields.",
"### Social Impact of Dataset\n\n\nUsers are totally responsible for any misuse of the dataset that goes against the original intended use of this dataset.\nThe extortion dataset should not be used for any harmful means to institute and propagate attacks.\n\n\n*Positive Social Impact*\n\n\n* Researchers can use TExtPhish to study the tactics and techniques used by attackers, identify vulnerabilities, and develop effective countermeasures against extortion.\n* Educators can use TExtPhish to teach students about online safety, how to recognize phishing extortion attempts, and best practices for protecting personal information and financial loss.\n* Cybersecurity professionals can use TExtPhish to train machine learning models to detect and block phishing emails with money extortion attempts, improving incident response strategies, and minimizing financial loss exposure.\n\n\n*Negative Social Impact*\n\n\n* Attackers might use TExtPhish to create automatic botnets that generate better extortion attacks.\n* Attackers might use TExtPhish to propagate deception and propaganda online.\n* Attackers might attempt to use TExtPhish as an initializing phase to perform malware, ransomware, or embed trojans within a targeted system to gain remote access.\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nAs the maintainers of this dataset, we choose to follow licensing Attribution- NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) to ensure that the dataset is non-commercial and it cannot be distributed or reproduced, in whole or in part, any document from the Collection.\nA portion of our dataset was downloaded using Reddit's API Wrapper through the PRAW package for the python programming language. Re-use of this data is subject to Reddit API terms, which include:\n\n\n* Users shall not encourage or promote illegal activity throughout the use of this dataset.\n* Users shall not use this dataset with the intent of introducing any viruses, worms, defects, Trojan horses, malware, or any other items of a destructive nature.\n* Users shall not sell, lease, or sublicense this data whether for direct commercial or monetary gain.\n\n\nInformation about citation will soon be updated."
] |
[
"TAGS\n#task_categories-text-classification #task_categories-sentence-similarity #language-English #license-cc-by-nc-nd-4.0 #security #ML #NLP #sentiment #region-us \n",
"### Dataset Summary\n\n\nThis dataset card aims to describe the TExtPhish collection and its intended use.",
"### Languages\n\n\nThe current version only includes data samples in English, as spoken partially by Reddit users on the r/Scams blackmail subreddits.\nIn the Future, we would like to explore more in different languages. Collaborators are encouraged to contact the authors to extend the current version with more diverse extortion emails in different languages.\n\n\nDataset Structure\n-----------------",
"### Initial Data Collection and Sanitization\n\n\nFirst, we select benign samples from the publicly available dataset, such as Enron and SpamAssassin.\nWe extract each email from email threads and tokenize personally sensitive information using name entity recognition, regular expression and synthetically replaced information.\n\n\nSecond, we collect extortion attacks from reddit posts |r/Scams and botnet ransomware emails from |Malware Traffic Analysis repository.\nWe remove unecessary comment from the reddit thread and we only keep extortion emails.\n\n\nTo make the dataset challenging, we keep only the most semantically similar benign emails to the extortion attacks.\nFor semantic textual similarity, we first applied sentence transformers (SBERT) to get contextual sentence embeddings of benign and extortion samples.\nThen, we apply the Facebook AI Similarity Search (FAISS) measure to search for similar benign instances to extortion attacks.",
"### Data Instances",
"### Data Sources\n\n\nThe following tables describe the data sources used to generate this dataset.\n\n\n* Extortion Data\n\n\nSource: r/Scams Extortion Emails, Total number of Emails: 1,113, Total number of Sentences: 17,393\nSource: Botnet Ransomware Emails, Total number of Emails: 150, Total number of Sentences: 1,510\n\n\n* Benign Data\n\n\nSource: Enron, Total number of Emails: 1,360, Total number of Sentences: 26,835\nSource: SpamAssasin, Total number of Emails: 1,010, Total number of Sentences: 12,348",
"### Data Fields\n\n\nThe dataset is structered as follow:\n\n\n\n```\nlist[{\n \"src\": str, # Data source (e.g, SpamAssassin, Enron, Reddit)\n \"content\": str, # Content (sentence-level or email-level)\n \"label\": str, # Extortion label (blackmail, ransomware, sextortion) or benign label\n }]\n\n```",
"### Loading TExtPhish Dataset\n\n\nTo load the email-level subset, use the following instructions:\n\n\n\n```\nemail_subset = load_dataset(\"TExtPhish/TExtPhish\", data_dir=\"email-level\", split=\"train\", sep=\";\")\n\n```\n\nTo load the sentence-level subset, use the following instructions:\n\n\n\n```\nsentence_subset = load_dataset(\"TExtPhish/TExtPhish\", data_dir=\"sentence-level\", split=\"train\", sep=\";\")\n\n```\n\nTo load the Homograph-Perturbed subset on sentences, use the following instructions:\n\n\n\n```\nhomograph_subset = load_dataset(\"TExtPhish/TExtPhish\", data_dir=\"homograph-perturbed-sentences\", split=\"train\", sep=\";\")\n\n```",
"### Splitting TExtPhish Dataset\n\n\nIf you would like to load the dataset under cross validation setting,\nyou can load (train or test) which will be divided into k folds (example below k=10).\n\n\n\n```\ntest_folds = load_dataset('TExtPhish/TExtPhish', split=[f\"train[{k}%:{k+10}%]\" for k in range(0, 100, 10)], data_dir=\"sentence-level\", sep=';')\ntrain_folds = load_dataset('TExtPhish/TExtPhish',split=[f\"train[:{k}%]+train[{k+10}%:]\" for k in range(0, 100, 10)], data_dir=\"sentence-level\", sep=';')\n\n```\n\nThis easy and ready-to-use divided folds consist of dividing randomly TExtPhish into k=10 parts.\nNine of these parts are used for training while one tenth is reserved for testing.\nThis procedure will be repeated k=10 times each time reserving a different tenth for testing. In other words, each testing set is a 10% chunk, and the training set makes up the remaining complementary 90% chunk.",
"### Binarize Labels\n\n\n\n```\nfrom sklearn.preprocessing import LabelEncoder",
"# Transforming text labels to encoded labels using the MultiLabelBinarizer\nmultibin = LabelEncoder()\nY_train = multibin.fit_transform(Y_train)\nY_test = multibin.fit_transform(Y_test)\n\n```",
"### Personal and Sensitive Information\n\n\nWe ensure to remove any personal and sensitive information before uploading our dataset.\nThe emails provided in this corpus are stripped from sensitive information that are replaced with tokens (e.g., url\\_token), synthetically replaced, or originally obfuscated (\\*) in order to anonymize the data.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Intended Uses\n\n\nOur collection may only be used for linguistic non-profit research including but not limited to Information Retrieval, Text Classification, Natural Language Processing, Machine Learning, Phishing Detection, Data Privacy and Security, and like fields.",
"### Social Impact of Dataset\n\n\nUsers are totally responsible for any misuse of the dataset that goes against the original intended use of this dataset.\nThe extortion dataset should not be used for any harmful means to institute and propagate attacks.\n\n\n*Positive Social Impact*\n\n\n* Researchers can use TExtPhish to study the tactics and techniques used by attackers, identify vulnerabilities, and develop effective countermeasures against extortion.\n* Educators can use TExtPhish to teach students about online safety, how to recognize phishing extortion attempts, and best practices for protecting personal information and financial loss.\n* Cybersecurity professionals can use TExtPhish to train machine learning models to detect and block phishing emails with money extortion attempts, improving incident response strategies, and minimizing financial loss exposure.\n\n\n*Negative Social Impact*\n\n\n* Attackers might use TExtPhish to create automatic botnets that generate better extortion attacks.\n* Attackers might use TExtPhish to propagate deception and propaganda online.\n* Attackers might attempt to use TExtPhish as an initializing phase to perform malware, ransomware, or embed trojans within a targeted system to gain remote access.\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nAs the maintainers of this dataset, we choose to follow licensing Attribution- NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) to ensure that the dataset is non-commercial and it cannot be distributed or reproduced, in whole or in part, any document from the Collection.\nA portion of our dataset was downloaded using Reddit's API Wrapper through the PRAW package for the python programming language. Re-use of this data is subject to Reddit API terms, which include:\n\n\n* Users shall not encourage or promote illegal activity throughout the use of this dataset.\n* Users shall not use this dataset with the intent of introducing any viruses, worms, defects, Trojan horses, malware, or any other items of a destructive nature.\n* Users shall not sell, lease, or sublicense this data whether for direct commercial or monetary gain.\n\n\nInformation about citation will soon be updated."
] |
[
57,
24,
87,
218,
6,
133,
95,
192,
302,
21,
64,
91,
57,
280,
219
] |
[
"passage: TAGS\n#task_categories-text-classification #task_categories-sentence-similarity #language-English #license-cc-by-nc-nd-4.0 #security #ML #NLP #sentiment #region-us \n### Dataset Summary\n\n\nThis dataset card aims to describe the TExtPhish collection and its intended use.### Languages\n\n\nThe current version only includes data samples in English, as spoken partially by Reddit users on the r/Scams blackmail subreddits.\nIn the Future, we would like to explore more in different languages. Collaborators are encouraged to contact the authors to extend the current version with more diverse extortion emails in different languages.\n\n\nDataset Structure\n-----------------### Initial Data Collection and Sanitization\n\n\nFirst, we select benign samples from the publicly available dataset, such as Enron and SpamAssassin.\nWe extract each email from email threads and tokenize personally sensitive information using name entity recognition, regular expression and synthetically replaced information.\n\n\nSecond, we collect extortion attacks from reddit posts |r/Scams and botnet ransomware emails from |Malware Traffic Analysis repository.\nWe remove unecessary comment from the reddit thread and we only keep extortion emails.\n\n\nTo make the dataset challenging, we keep only the most semantically similar benign emails to the extortion attacks.\nFor semantic textual similarity, we first applied sentence transformers (SBERT) to get contextual sentence embeddings of benign and extortion samples.\nThen, we apply the Facebook AI Similarity Search (FAISS) measure to search for similar benign instances to extortion attacks.### Data Instances",
"passage: ### Data Sources\n\n\nThe following tables describe the data sources used to generate this dataset.\n\n\n* Extortion Data\n\n\nSource: r/Scams Extortion Emails, Total number of Emails: 1,113, Total number of Sentences: 17,393\nSource: Botnet Ransomware Emails, Total number of Emails: 150, Total number of Sentences: 1,510\n\n\n* Benign Data\n\n\nSource: Enron, Total number of Emails: 1,360, Total number of Sentences: 26,835\nSource: SpamAssasin, Total number of Emails: 1,010, Total number of Sentences: 12,348### Data Fields\n\n\nThe dataset is structered as follow:\n\n\n\n```\nlist[{\n \"src\": str, # Data source (e.g, SpamAssassin, Enron, Reddit)\n \"content\": str, # Content (sentence-level or email-level)\n \"label\": str, # Extortion label (blackmail, ransomware, sextortion) or benign label\n }]\n\n```### Loading TExtPhish Dataset\n\n\nTo load the email-level subset, use the following instructions:\n\n\n\n```\nemail_subset = load_dataset(\"TExtPhish/TExtPhish\", data_dir=\"email-level\", split=\"train\", sep=\";\")\n\n```\n\nTo load the sentence-level subset, use the following instructions:\n\n\n\n```\nsentence_subset = load_dataset(\"TExtPhish/TExtPhish\", data_dir=\"sentence-level\", split=\"train\", sep=\";\")\n\n```\n\nTo load the Homograph-Perturbed subset on sentences, use the following instructions:\n\n\n\n```\nhomograph_subset = load_dataset(\"TExtPhish/TExtPhish\", data_dir=\"homograph-perturbed-sentences\", split=\"train\", sep=\";\")\n\n```",
"passage: ### Splitting TExtPhish Dataset\n\n\nIf you would like to load the dataset under cross validation setting,\nyou can load (train or test) which will be divided into k folds (example below k=10).\n\n\n\n```\ntest_folds = load_dataset('TExtPhish/TExtPhish', split=[f\"train[{k}%:{k+10}%]\" for k in range(0, 100, 10)], data_dir=\"sentence-level\", sep=';')\ntrain_folds = load_dataset('TExtPhish/TExtPhish',split=[f\"train[:{k}%]+train[{k+10}%:]\" for k in range(0, 100, 10)], data_dir=\"sentence-level\", sep=';')\n\n```\n\nThis easy and ready-to-use divided folds consist of dividing randomly TExtPhish into k=10 parts.\nNine of these parts are used for training while one tenth is reserved for testing.\nThis procedure will be repeated k=10 times each time reserving a different tenth for testing. In other words, each testing set is a 10% chunk, and the training set makes up the remaining complementary 90% chunk.### Binarize Labels\n\n\n\n```\nfrom sklearn.preprocessing import LabelEncoder# Transforming text labels to encoded labels using the MultiLabelBinarizer\nmultibin = LabelEncoder()\nY_train = multibin.fit_transform(Y_train)\nY_test = multibin.fit_transform(Y_test)\n\n```### Personal and Sensitive Information\n\n\nWe ensure to remove any personal and sensitive information before uploading our dataset.\nThe emails provided in this corpus are stripped from sensitive information that are replaced with tokens (e.g., url\\_token), synthetically replaced, or originally obfuscated (\\*) in order to anonymize the data.\n\n\nConsiderations for Using the Data\n---------------------------------### Intended Uses\n\n\nOur collection may only be used for linguistic non-profit research including but not limited to Information Retrieval, Text Classification, Natural Language Processing, Machine Learning, Phishing Detection, Data Privacy and Security, and like fields."
] |
35acaad53c719d31f339d74e7103632419969c42
|
# Dataset Card for "drawbench-sdxl"
The dataset was generated using https://github.com/sayakpaul/caption-upsampling. Refer to the repository for more details.
|
sayakpaul/drawbench-sdxl
|
[
"region:us"
] |
2023-10-21T08:00:44+00:00
|
{"dataset_info": {"features": [{"name": "Prompt", "dtype": "string"}, {"name": "Image", "dtype": "image"}, {"name": "Upsampled_Prompt", "dtype": "string"}, {"name": "Image_With_Upsampled_Prompt", "dtype": "image"}, {"name": "model_name", "dtype": "string"}, {"name": "seed", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 625589974.0, "num_examples": 200}], "download_size": 625589110, "dataset_size": 625589974.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-21T08:08:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "drawbench-sdxl"
The dataset was generated using URL Refer to the repository for more details.
|
[
"# Dataset Card for \"drawbench-sdxl\"\n\nThe dataset was generated using URL Refer to the repository for more details."
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"drawbench-sdxl\"\n\nThe dataset was generated using URL Refer to the repository for more details."
] |
[
6,
32
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"drawbench-sdxl\"\n\nThe dataset was generated using URL Refer to the repository for more details."
] |
caf2601696110b1e463c779b9a3a71fb25d65671
|
# Dataset Card for "twolabels_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jay401521/twolabels_test
|
[
"region:us"
] |
2023-10-21T08:18:33+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "domain", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "rank", "dtype": "int64"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1845580.6666666667, "num_examples": 20014}], "download_size": 911747, "dataset_size": 1845580.6666666667}}
|
2023-10-21T08:26:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "twolabels_test"
More Information needed
|
[
"# Dataset Card for \"twolabels_test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"twolabels_test\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"twolabels_test\"\n\nMore Information needed"
] |
d313c39c643ac8d84a0031b1e7add1cdd9d482d9
|
Follow these steps to set up and upload your audio dataset to Hugging Face:
* **Create a Virtual Environment**
   - Start by creating a virtual environment on your machine and installing the dependencies. Run the following commands:

   # On Windows
   ```
   python -m venv env
   ./env/Scripts/activate
   pip install -r requirements.txt
   ```

   # On macOS/Linux
   ```
   python3 -m venv env
   source env/bin/activate
   pip install -r requirements.txt
   ```
* **Generate a Hugging Face Token**
- To interact with Hugging Face and push datasets, you'll need a Hugging Face access token. Follow these steps to generate one:
- Go to [Hugging Face Settings](https://huggingface.co/settings/tokens).
- Click on "New Token."
- Give the token a name and select the Role as "Write."
- Copy the generated token.
* **Configure Your Token**
- Run the following command, replacing `'YOUR_TOKEN_HERE'` with the token you obtained from Hugging Face:
```bash
python -c "from huggingface_hub.hf_api import HfFolder; HfFolder.save_token('YOUR_TOKEN_HERE')"
```
This command will configure your environment with your Hugging Face token.
* **Modify `main.py`**
- In the `main.py` file, make the following changes:
- Replace `'Enter-Your-hub-name'` with the name of your dataset. For example, use `'AneeqMalik/test_audio_clips'`.
```python
audio_dataset.push_to_hub("Enter-Your-hub-name")
```
This line specifies where your dataset will be pushed on Hugging Face.
* **Run the Code**
- To push your audio dataset to Hugging Face, execute the following command:
```bash
python main.py
```
Your audio dataset will be uploaded to Hugging Face under the specified name.
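For orientation, here is a minimal sketch of what a `main.py` along these lines might contain. The folder layout and placeholder labels are assumptions for illustration, not the contents of the actual script; only the `push_to_hub` call and the feature names mirror this repository.

```python
import os
from datasets import Audio, ClassLabel, Dataset

# Hypothetical layout: a ./clips folder of .wav files; the labels below are
# placeholders for illustration.
label_names = ["bad", "okay", "good", "great"]
clip_dir = "clips"
files = sorted(f for f in os.listdir(clip_dir) if f.endswith(".wav"))

audio_dataset = Dataset.from_dict(
    {
        "audio": [os.path.join(clip_dir, f) for f in files],
        "audio_names": files,
        "class_label": [label_names.index("good")] * len(files),  # placeholder labels
    }
)
# Decode the file paths as audio and encode the integer labels as a ClassLabel.
audio_dataset = audio_dataset.cast_column("audio", Audio())
audio_dataset = audio_dataset.cast_column("class_label", ClassLabel(names=label_names))

# Uses the Hugging Face token configured in the earlier step.
audio_dataset.push_to_hub("AneeqMalik/test_audio_clips")
```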
|
AneeqMalik/test_audio_clips
|
[
"region:us"
] |
2023-10-21T08:24:04+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "audio_names", "dtype": "string"}, {"name": "class_label", "dtype": {"class_label": {"names": {"0": "bad", "1": "okay", "2": "good", "3": "great"}}}}], "splits": [{"name": "train", "num_bytes": 12388426.0, "num_examples": 6}], "download_size": 12391305, "dataset_size": 12388426.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-25T18:02:10+00:00
|
[] |
[] |
TAGS
#region-us
|
Follow these steps to set up and upload your audio dataset to Hugging Face:
* Create a Virtual Environment
- Start by creating a virtual environment on your machine. Run the following commands:
# On Windows
# On macOS/Linux
* Generate a Hugging Face Token
- To interact with Hugging Face and push datasets, you'll need a Hugging Face access token. Follow these steps to generate one:
- Go to Hugging Face Settings.
- Click on "New Token."
- Give the token a name and select the Role as "Write."
- Copy the generated token.
* Configure Your Token
- Run the following command, replacing ''YOUR_TOKEN_HERE'' with the token you obtained from Hugging Face:
This command will configure your environment with your Hugging Face token.
* Modify 'URL'
- In the 'URL' file, make the following changes:
- Replace ''Enter-Your-hub-name'' with the name of your dataset. For example, use ''AneeqMalik/test_audio_clips''.
This line specifies where your dataset will be pushed on Hugging Face.
* Run the Code
- To push your audio dataset to Hugging Face, execute the following command:
Your audio dataset will be uploaded to Hugging Face under the specified name.
|
[
"# On Windows\n \n # On macOS/Linux\n \n\n* Generate a Hugging Face Token\n - To interact with Hugging Face and push datasets, you'll need a Hugging Face access token. Follow these steps to generate one:\n - Go to Hugging Face Settings.\n - Click on \"New Token.\"\n - Give the token a name and select the Role as \"Write.\"\n - Copy the generated token.\n\n* Configure Your Token\n - Run the following command, replacing ''YOUR_TOKEN_HERE'' with the token you obtained from Hugging Face:\n \n This command will configure your environment with your Hugging Face token.\n\n* Modify 'URL'\n - In the 'URL' file, make the following changes:\n - Replace ''Enter-Your-hub-name'' with the name of your dataset. For example, use ''AneeqMalik/test_audio_clips''.\n \n This line specifies where your dataset will be pushed on Hugging Face.\n\n* Run the Code\n - To push your audio dataset to Hugging Face, execute the following command:\n \n Your audio dataset will be uploaded to Hugging Face under the specified name."
] |
[
"TAGS\n#region-us \n",
"# On Windows\n \n # On macOS/Linux\n \n\n* Generate a Hugging Face Token\n - To interact with Hugging Face and push datasets, you'll need a Hugging Face access token. Follow these steps to generate one:\n - Go to Hugging Face Settings.\n - Click on \"New Token.\"\n - Give the token a name and select the Role as \"Write.\"\n - Copy the generated token.\n\n* Configure Your Token\n - Run the following command, replacing ''YOUR_TOKEN_HERE'' with the token you obtained from Hugging Face:\n \n This command will configure your environment with your Hugging Face token.\n\n* Modify 'URL'\n - In the 'URL' file, make the following changes:\n - Replace ''Enter-Your-hub-name'' with the name of your dataset. For example, use ''AneeqMalik/test_audio_clips''.\n \n This line specifies where your dataset will be pushed on Hugging Face.\n\n* Run the Code\n - To push your audio dataset to Hugging Face, execute the following command:\n \n Your audio dataset will be uploaded to Hugging Face under the specified name."
] |
[
6,
262
] |
[
"passage: TAGS\n#region-us \n# On Windows\n \n # On macOS/Linux\n \n\n* Generate a Hugging Face Token\n - To interact with Hugging Face and push datasets, you'll need a Hugging Face access token. Follow these steps to generate one:\n - Go to Hugging Face Settings.\n - Click on \"New Token.\"\n - Give the token a name and select the Role as \"Write.\"\n - Copy the generated token.\n\n* Configure Your Token\n - Run the following command, replacing ''YOUR_TOKEN_HERE'' with the token you obtained from Hugging Face:\n \n This command will configure your environment with your Hugging Face token.\n\n* Modify 'URL'\n - In the 'URL' file, make the following changes:\n - Replace ''Enter-Your-hub-name'' with the name of your dataset. For example, use ''AneeqMalik/test_audio_clips''.\n \n This line specifies where your dataset will be pushed on Hugging Face.\n\n* Run the Code\n - To push your audio dataset to Hugging Face, execute the following command:\n \n Your audio dataset will be uploaded to Hugging Face under the specified name."
] |
2c2c84022a6b02ca7c2d094ab8577bd3345aa5ae
|
* Update 2023.12.18: added a multi-document QA dataset (about 100 samples) in which the reference documents do not contain the answer (i.e., the question is unanswerable).
* Update 2023.12.18: uploaded the consolidated Chinese-English long-text instruction fine-tuning dataset in ChatML format.
# Long-Text Instruction Fine-Tuning Data
* This dataset is a combination of multiple long-text task datasets.
* It contains a Chinese part and an English part.
* [Paper](https://arxiv.org/abs/2312.11193)

## Source Data
Links to the individual source datasets are collected here. You can also visit my profile page to browse all of them.
### Chinese
1. [Book summaries](https://huggingface.co/datasets/yuyijiong/Book_Summary_Chinese)
2. [Paper abstracts](https://huggingface.co/datasets/yuyijiong/Chinese_Paper_Abstract)
Involves CNKI data; access is restricted.
3. [Paper QA](https://huggingface.co/datasets/yuyijiong/Chinese_Paper_QA)
Involves CNKI data; access is restricted.
4. [Multi-document QA (retrieval)](https://huggingface.co/datasets/yuyijiong/Multi-Doc-QA-Chinese)
### English
1. [Multi-document QA (retrieval)](https://huggingface.co/datasets/yuyijiong/Multi-Doc-QA-CommonCrawl)
### Chinese and English
1. [Long-paper multitask](https://huggingface.co/datasets/yuyijiong/LongPaper_multitask)
2. [Long conversations filtered from ShareGPT (Chinese and English)](https://huggingface.co/datasets/yuyijiong/Sharegpt-long-conversation)
3. Long-text pretraining corpus (Chinese and English): [LongData-Corpus](https://huggingface.co/datasets/yuyijiong/LongData-Corpus)
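
A minimal loading sketch, assuming the consolidated data loads with the default configuration; field names vary across the source tasks, so inspect the loaded features before use.

```python
from datasets import load_dataset

# Split name and field layout are assumptions; check ds.features after loading.
ds = load_dataset("yuyijiong/Long-Instruction", split="train")
print(ds[0])  # inspect the ChatML-style fields of one sample
```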
|
yuyijiong/Long-Instruction
|
[
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:translation",
"task_categories:summarization",
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"arxiv:2312.11193",
"region:us"
] |
2023-10-21T08:34:43+00:00
|
{"language": ["zh", "en"], "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "text-classification", "translation", "summarization", "conversational"]}
|
2024-01-16T07:57:15+00:00
|
[
"2312.11193"
] |
[
"zh",
"en"
] |
TAGS
#task_categories-text-generation #task_categories-text-classification #task_categories-translation #task_categories-summarization #task_categories-conversational #size_categories-10K<n<100K #language-Chinese #language-English #arxiv-2312.11193 #region-us
|
* Update 2023.12.18: added a multi-document QA dataset (about 100 samples) in which the reference documents do not contain the answer (i.e., the question is unanswerable).
* Update 2023.12.18: uploaded the consolidated Chinese-English long-text instruction fine-tuning dataset in ChatML format.
# Long-Text Instruction Fine-Tuning Data
* This dataset is a combination of multiple long-text task datasets.
* It contains a Chinese part and an English part.
* Paper

## Source Data
Links to the individual source datasets are collected here. You can also visit my profile page to browse all of them.
### Chinese
1. Book summaries
2. Paper abstracts
Involves CNKI data; access is restricted.
3. Paper QA
Involves CNKI data; access is restricted.
4. Multi-document QA (retrieval)
### English
1. Multi-document QA (retrieval)
### Chinese and English
1. Long-paper multitask
2. Long conversations filtered from ShareGPT (Chinese and English)
3. Long-text pretraining corpus (Chinese and English): LongData-Corpus
|
[
"# 长文本指令微调数据\n* 此数据集由多种长文本任务数据集组合而成。\n* 包含中文和英文两部分\n* Paper\n\n",
"## 源数据\n此处给出各个数据集的链接集合。也可以直接点击我的个人主页查看所有数据集。",
"### 中文\n1. 图书总结\n\n2. 论文摘要 \n涉及到知网数据,受限访问。\n3. 论文问答\n涉及到知网数据,受限访问。\n\n4. 多文档问答(检索)",
"### 英文\n1. 多文档问答(检索)",
"### 中英\n\n\n1. 长论文多任务\n\n2. 从ShareGPT中筛选的长对话(中英)\n\n3. 预训练长文本语料库(中英)LongData-Corpus"
] |
[
"TAGS\n#task_categories-text-generation #task_categories-text-classification #task_categories-translation #task_categories-summarization #task_categories-conversational #size_categories-10K<n<100K #language-Chinese #language-English #arxiv-2312.11193 #region-us \n",
"# 长文本指令微调数据\n* 此数据集由多种长文本任务数据集组合而成。\n* 包含中文和英文两部分\n* Paper\n\n",
"## 源数据\n此处给出各个数据集的链接集合。也可以直接点击我的个人主页查看所有数据集。",
"### 中文\n1. 图书总结\n\n2. 论文摘要 \n涉及到知网数据,受限访问。\n3. 论文问答\n涉及到知网数据,受限访问。\n\n4. 多文档问答(检索)",
"### 英文\n1. 多文档问答(检索)",
"### 中英\n\n\n1. 长论文多任务\n\n2. 从ShareGPT中筛选的长对话(中英)\n\n3. 预训练长文本语料库(中英)LongData-Corpus"
] |
[
87,
54,
26,
48,
15,
45
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-text-classification #task_categories-translation #task_categories-summarization #task_categories-conversational #size_categories-10K<n<100K #language-Chinese #language-English #arxiv-2312.11193 #region-us \n# 长文本指令微调数据\n* 此数据集由多种长文本任务数据集组合而成。\n* 包含中文和英文两部分\n* Paper\n\n## 源数据\n此处给出各个数据集的链接集合。也可以直接点击我的个人主页查看所有数据集。### 中文\n1. 图书总结\n\n2. 论文摘要 \n涉及到知网数据,受限访问。\n3. 论文问答\n涉及到知网数据,受限访问。\n\n4. 多文档问答(检索)### 英文\n1. 多文档问答(检索)### 中英\n\n\n1. 长论文多任务\n\n2. 从ShareGPT中筛选的长对话(中英)\n\n3. 预训练长文本语料库(中英)LongData-Corpus"
] |
c2e1007612cc1c6e3418437acb9704059176cef4
|
# Dataset Card for "label0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jay401521/label0
|
[
"region:us"
] |
2023-10-21T08:55:49+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "domain", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "rank", "dtype": "int64"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 922790.3333333334, "num_examples": 10007}], "download_size": 441033, "dataset_size": 922790.3333333334}}
|
2023-10-23T11:25:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "label0"
More Information needed
|
[
"# Dataset Card for \"label0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"label0\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"label0\"\n\nMore Information needed"
] |
b98c0265f4f4e8618d599772be788a738cf3972f
|
# Dataset Card for "label1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jay401521/label1
|
[
"region:us"
] |
2023-10-21T08:55:54+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "domain", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "rank", "dtype": "int64"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 922790.3333333334, "num_examples": 10007}], "download_size": 475363, "dataset_size": 922790.3333333334}}
|
2023-10-23T11:26:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "label1"
More Information needed
|
[
"# Dataset Card for \"label1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"label1\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"label1\"\n\nMore Information needed"
] |
4f171c67df1cf30124c4aa913d73dd9fd152daf1
|
# Dataset Card for "label2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jay401521/label2
|
[
"region:us"
] |
2023-10-21T08:56:00+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "domain", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "rank", "dtype": "int64"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 922790.3333333334, "num_examples": 10007}], "download_size": 463496, "dataset_size": 922790.3333333334}}
|
2023-10-23T11:26:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "label2"
More Information needed
|
[
"# Dataset Card for \"label2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"label2\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"label2\"\n\nMore Information needed"
] |
95c41d06f907e3bff232b6f6be5b4d7b103402e3
|
# Dataset Card for "artistic_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/artistic_prompts
|
[
"region:us"
] |
2023-10-21T09:03:57+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7555056, "num_examples": 10000}], "download_size": 958517, "dataset_size": 7555056}}
|
2023-10-21T09:03:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "artistic_prompts"
More Information needed
|
[
"# Dataset Card for \"artistic_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"artistic_prompts\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"artistic_prompts\"\n\nMore Information needed"
] |
4d7e356f14c6930c5bd7adadbc5c342e3b4df53a
|
Dataset still being collected. Raw pictures of the karambit knife object are available on Google Drive: [Link](https://drive.google.com/file/d/1fFRSxeTt9Tvj6d7PFdJWJ56LXR5shoH4/view?usp=share_link)
|
faizalnf1800/karambit-knife-object
|
[
"license:apache-2.0",
"region:us"
] |
2023-10-21T09:09:44+00:00
|
{"license": "apache-2.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 791480.0, "num_examples": 27}], "download_size": 767049, "dataset_size": 791480.0}}
|
2023-10-21T09:42:37+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
Dataset still being collected. Raw pictures of the karambit knife object are available on Google Drive: Link
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
033bb5e4972cf1deff48cde16e2e6e636f85ad04
|
# Dataset Card for "commonsense-dialogues3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chrisgru/commonsense-dialogues3
|
[
"region:us"
] |
2023-10-21T09:32:31+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 8749177.011517944, "num_examples": 12597}, {"name": "test", "num_bytes": 509362.23957367934, "num_examples": 1159}], "download_size": 6452260, "dataset_size": 9258539.251091624}}
|
2023-10-21T09:32:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "commonsense-dialogues3"
More Information needed
|
[
"# Dataset Card for \"commonsense-dialogues3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"commonsense-dialogues3\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"commonsense-dialogues3\"\n\nMore Information needed"
] |
feb35e809deb69eeffeb1ef8b5cbcd8b5148fb3b
|
# Dataset Card for "covidQA_training_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
goodcoffee/covidQA_training_v2
|
[
"region:us"
] |
2023-10-21T09:50:59+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "start_positions", "dtype": "int64"}, {"name": "end_positions", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3651192, "num_examples": 1413}], "download_size": 0, "dataset_size": 3651192}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-11-01T13:28:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "covidQA_training_v2"
More Information needed
|
[
"# Dataset Card for \"covidQA_training_v2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"covidQA_training_v2\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"covidQA_training_v2\"\n\nMore Information needed"
] |
8ac6f88d883df8894a5422cd73cb28fb81fbafbb
|
# Dataset Card for "guanaco-spanish-dataset"
**CLEANING AND CURATION OF THE DATASET HAS BEEN PERFORMED. NOW IT IS FULLY IN SPANISH (Date:12/01/2024)**
This dataset is a subset of the original timdettmers/openassistant-guanaco, which is itself a subset of the Open Assistant dataset. You can find it here: https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main/
This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 2,369 samples, translated with the help of GPT-3.5 Turbo.
It represents 40% and 41% of the train and test splits of timdettmers/openassistant-guanaco, respectively.
You can find the github repository for the code used here: https://github.com/Hector1993prog/guanaco_translation
For further information, please see the original dataset.
**CLEANING AND CURATION OF THE DATASET HAS BEEN PERFORMED. NOW IT IS FULLY IN SPANISH**
License: Apache 2.0
Dataset Details
Dataset Sources [Open Assistant](https://huggingface.co/datasets/OpenAssistant/oasst1)
Repository: [Link to Repository](https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main)
# Uses
## Direct Use
The dataset is suitable for training and evaluating models in the context of Open Assistant applications, focusing on the highest-rated paths in conversation trees.
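As a rough sketch, loading it for such use might look like this; the `text` and `partition` columns follow the dataset's metadata, and everything else is illustrative:

```python
from datasets import load_dataset

ds = load_dataset("hlhdatscience/guanaco-spanish-dataset")
print(ds["train"][0]["text"])  # one translated conversation path
```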
## Out-of-Scope Use
Usage outside the scope of Open Assistant applications may not yield optimal results.
# Dataset Structure
The dataset is organized into conversation paths, each containing the highest-rated samples. Samples are translated versions generated with the assistance of GPT 3.5 turbo.
# Dataset Creation
Curation Rationale
This subset was created to provide a focused collection of the highest-rated conversation paths from the original Open Assistant dataset, with translations performed using GPT 3.5 turbo.
# Source Data
## Data Collection and Processing
The source data is a subset of the timdettmers/openassistant-guanaco dataset, itself a subset of the Open Assistant dataset. The translation process involved GPT 3.5 turbo.
# Who are the source data producers?
The original data producers include contributors to the Open Assistant dataset, and the translation process involved the use of GPT 3.5 turbo.
# Annotations [optional]
## Annotation process
The dataset includes translated samples, and annotations were generated through the translation process.
## Who are the annotators?
Annotations were generated through the translation process using GPT-3.5 Turbo. The dataset has not yet been fully curated.
# Personal and Sensitive Information
The dataset does not contain personal or sensitive information.
# Bias, Risks, and Limitations
Users should be aware of potential biases introduced during the translation process. Limitations include the focus on the highest-rated conversation paths.
# Recommendations
Users are encouraged to consider potential biases and limitations when utilizing the dataset for model training and applications.
[Contact information for dataset inquiries](https://www.linkedin.com/in/hlh-generative-ai/)
|
hlhdatscience/guanaco-spanish-dataset
|
[
"language:es",
"license:apache-2.0",
"region:us"
] |
2023-10-21T09:53:04+00:00
|
{"language": ["es"], "license": "apache-2.0", "pretty_name": "d", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "partition", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4071580, "num_examples": 2173}, {"name": "test", "num_bytes": 333135, "num_examples": 196}], "download_size": 2267485, "dataset_size": 4404715}}
|
2024-01-12T09:35:16+00:00
|
[] |
[
"es"
] |
TAGS
#language-Spanish #license-apache-2.0 #region-us
|
# Dataset Card for "guanaco-spanish-dataset"
CLEANING AND CURATION OF THE DATASET HAS BEEN PERFORMED. NOW IT IS FULLY IN SPANISH (Date:12/01/2024)
This dataset is a subset of the original timdettmers/openassistant-guanaco, which is itself a subset of the Open Assistant dataset. You can find it here: URL
This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 2,369 samples, translated with the help of GPT-3.5 Turbo.
It represents 40% and 41% of the train and test splits of timdettmers/openassistant-guanaco, respectively.
You can find the github repository for the code used here: URL
For further information, please see the original dataset.
CLEANING AND CURATION OF THE DATASET HAS BEEN PERFORMED. NOW IT IS FULLY IN SPANISH
License: Apache 2.0
Dataset Details
Dataset Sources Open Assistant
Repository: Link to Repository
# Uses
## Direct Use
The dataset is suitable for training and evaluating models in the context of Open Assistant applications, focusing on the highest-rated paths in conversation trees.
## Out-of-Scope Use
Usage outside the scope of Open Assistant applications may not yield optimal results.
# Dataset Structure
The dataset is organized into conversation paths, each containing the highest-rated samples. Samples are translated versions generated with the assistance of GPT 3.5 turbo.
# Dataset Creation
Curation Rationale
This subset was created to provide a focused collection of the highest-rated conversation paths from the original Open Assistant dataset, with translations performed using GPT 3.5 turbo.
# Source Data
## Data Collection and Processing
The source data is a subset of the timdettmers/openassistant-guanaco dataset, itself a subset of the Open Assistant dataset. The translation process involved GPT 3.5 turbo.
# Who are the source data producers?
The original data producers include contributors to the Open Assistant dataset, and the translation process involved the use of GPT 3.5 turbo.
# Annotations [optional]
## Annotation process
The dataset includes translated samples, and annotations were generated through the translation process.
## Who are the annotators?
Annotations were generated through the translation process using GPT-3.5 Turbo. The dataset has not yet been fully curated.
# Personal and Sensitive Information
The dataset does not contain personal or sensitive information.
# Bias, Risks, and Limitations
Users should be aware of potential biases introduced during the translation process. Limitations include the focus on the highest-rated conversation paths.
# Recommendations
Users are encouraged to consider potential biases and limitations when utilizing the dataset for model training and applications.
Contact information for dataset inquiries
|
[
"# Dataset Card for \"guanaco-spanish-dataset\"\n\nCLEANING AND CURATION OF THE DATASET HAS BEEN PERFORMED. NOW IT IS FULLY IN SPANISH (Date:12/01/2024)\nThis dataset is a subset of original timdettmers/openassistant-guanaco,which is also a subset o/f the Open Assistant dataset .You can find here: URL\n\n\n\nThis subset of the data only contains the highest-rated paths in the conversation tree, with a total of 2,369 samples, translated with the help of GPT 3.5. turbo.\n\nIt represents the 40% and 41% of train and test from timdettmers/openassistant-guanaco respectively.\n\nYou can find the github repository for the code used here: URL\n\nFor further information, please see the original dataset.\n\nCLEANING AND CURATION OF THE DATASET HAS BEEN PERFORMED. NOW IT IS FULLY IN SPANISH\n\nLicense: Apache 2.0\n\nDataset Details\nDataset Sources Open Assistant\nRepository: Link to Repository",
"# Uses",
"## Direct Use\nThe dataset is suitable for training and evaluating models in the context of Open Assistant applications, focusing on the highest-rated paths in conversation trees.",
"## Out-of-Scope Use\nUsage outside the scope of Open Assistant applications may not yield optimal results.",
"# Dataset Structure\nThe dataset is organized into conversation paths, each containing the highest-rated samples. Samples are translated versions generated with the assistance of GPT 3.5 turbo.",
"# Dataset Creation\nCuration Rationale\nThis subset was created to provide a focused collection of the highest-rated conversation paths from the original Open Assistant dataset, with translations performed using GPT 3.5 turbo.",
"# Dataset Creation\nCuration Rationale\nThis subset was created to provide a focused collection of the highest-rated conversation paths from the original Open Assistant dataset, with translations performed using GPT 3.5 turbo.",
"# Source Data",
"## Data Collection and Processing\nThe source data is a subset of the timdettmers/openassistant-guanaco dataset, itself a subset of the Open Assistant dataset. The translation process involved GPT 3.5 turbo.",
"# Who are the source data producers?\nThe original data producers include contributors to the Open Assistant dataset, and the translation process involved the use of GPT 3.5 turbo.",
"# Annotations [optional]",
"## Annotation process\nThe dataset includes translated samples, and annotations were generated through the translation process.",
"## Who are the annotators?\nAnnotations were generated through the translation process using GPT 3.5 turbo. Dataset needs to be curated yet.",
"# Personal and Sensitive Information\nThe dataset does not contain personal or sensitive information.",
"# Bias, Risks, and Limitations\nUsers should be aware of potential biases introduced during the translation process. Limitations include the focus on the highest-rated conversation paths.",
"# Recommendations\nUsers are encouraged to consider potential biases and limitations when utilizing the dataset for model training and applications.\n\nContact information for dataset inquiries"
] |
[
"TAGS\n#language-Spanish #license-apache-2.0 #region-us \n",
"# Dataset Card for \"guanaco-spanish-dataset\"\n\nCLEANING AND CURATION OF THE DATASET HAS BEEN PERFORMED. NOW IT IS FULLY IN SPANISH (Date:12/01/2024)\nThis dataset is a subset of original timdettmers/openassistant-guanaco,which is also a subset o/f the Open Assistant dataset .You can find here: URL\n\n\n\nThis subset of the data only contains the highest-rated paths in the conversation tree, with a total of 2,369 samples, translated with the help of GPT 3.5. turbo.\n\nIt represents the 40% and 41% of train and test from timdettmers/openassistant-guanaco respectively.\n\nYou can find the github repository for the code used here: URL\n\nFor further information, please see the original dataset.\n\nCLEANING AND CURATION OF THE DATASET HAS BEEN PERFORMED. NOW IT IS FULLY IN SPANISH\n\nLicense: Apache 2.0\n\nDataset Details\nDataset Sources Open Assistant\nRepository: Link to Repository",
"# Uses",
"## Direct Use\nThe dataset is suitable for training and evaluating models in the context of Open Assistant applications, focusing on the highest-rated paths in conversation trees.",
"## Out-of-Scope Use\nUsage outside the scope of Open Assistant applications may not yield optimal results.",
"# Dataset Structure\nThe dataset is organized into conversation paths, each containing the highest-rated samples. Samples are translated versions generated with the assistance of GPT 3.5 turbo.",
"# Dataset Creation\nCuration Rationale\nThis subset was created to provide a focused collection of the highest-rated conversation paths from the original Open Assistant dataset, with translations performed using GPT 3.5 turbo.",
"# Dataset Creation\nCuration Rationale\nThis subset was created to provide a focused collection of the highest-rated conversation paths from the original Open Assistant dataset, with translations performed using GPT 3.5 turbo.",
"# Source Data",
"## Data Collection and Processing\nThe source data is a subset of the timdettmers/openassistant-guanaco dataset, itself a subset of the Open Assistant dataset. The translation process involved GPT 3.5 turbo.",
"# Who are the source data producers?\nThe original data producers include contributors to the Open Assistant dataset, and the translation process involved the use of GPT 3.5 turbo.",
"# Annotations [optional]",
"## Annotation process\nThe dataset includes translated samples, and annotations were generated through the translation process.",
"## Who are the annotators?\nAnnotations were generated through the translation process using GPT 3.5 turbo. Dataset needs to be curated yet.",
"# Personal and Sensitive Information\nThe dataset does not contain personal or sensitive information.",
"# Bias, Risks, and Limitations\nUsers should be aware of potential biases introduced during the translation process. Limitations include the focus on the highest-rated conversation paths.",
"# Recommendations\nUsers are encouraged to consider potential biases and limitations when utilizing the dataset for model training and applications.\n\nContact information for dataset inquiries"
] |
[
19,
248,
3,
37,
25,
47,
48,
48,
3,
50,
37,
8,
26,
33,
18,
42,
39
] |
[
"passage: TAGS\n#language-Spanish #license-apache-2.0 #region-us \n# Dataset Card for \"guanaco-spanish-dataset\"\n\nCLEANING AND CURATION OF THE DATASET HAS BEEN PERFORMED. NOW IT IS FULLY IN SPANISH (Date:12/01/2024)\nThis dataset is a subset of original timdettmers/openassistant-guanaco,which is also a subset o/f the Open Assistant dataset .You can find here: URL\n\n\n\nThis subset of the data only contains the highest-rated paths in the conversation tree, with a total of 2,369 samples, translated with the help of GPT 3.5. turbo.\n\nIt represents the 40% and 41% of train and test from timdettmers/openassistant-guanaco respectively.\n\nYou can find the github repository for the code used here: URL\n\nFor further information, please see the original dataset.\n\nCLEANING AND CURATION OF THE DATASET HAS BEEN PERFORMED. NOW IT IS FULLY IN SPANISH\n\nLicense: Apache 2.0\n\nDataset Details\nDataset Sources Open Assistant\nRepository: Link to Repository# Uses## Direct Use\nThe dataset is suitable for training and evaluating models in the context of Open Assistant applications, focusing on the highest-rated paths in conversation trees.## Out-of-Scope Use\nUsage outside the scope of Open Assistant applications may not yield optimal results.# Dataset Structure\nThe dataset is organized into conversation paths, each containing the highest-rated samples. Samples are translated versions generated with the assistance of GPT 3.5 turbo.# Dataset Creation\nCuration Rationale\nThis subset was created to provide a focused collection of the highest-rated conversation paths from the original Open Assistant dataset, with translations performed using GPT 3.5 turbo.# Dataset Creation\nCuration Rationale\nThis subset was created to provide a focused collection of the highest-rated conversation paths from the original Open Assistant dataset, with translations performed using GPT 3.5 turbo.# Source Data"
] |
04abd2ee34c4a6f879354b6a8a74a39f842ceaad
|
# Dataset Card for "my_voice_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aoome123/myvoice
|
[
"region:us"
] |
2023-10-21T10:13:01+00:00
|
{"dataset_info": {"config_name": "aoome123/voice", "features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 101968152, "num_examples": 104}, {"name": "test", "num_bytes": 12717600, "num_examples": 13}, {"name": "valid", "num_bytes": 12730056, "num_examples": 13}], "download_size": 126599658, "dataset_size": 127415808}, "configs": [{"config_name": "aoome123/voice", "data_files": [{"split": "train", "path": "aoome123/voice/train-*"}, {"split": "test", "path": "aoome123/voice/test-*"}, {"split": "valid", "path": "aoome123/voice/valid-*"}]}]}
|
2023-10-21T11:03:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "my_voice_dataset"
More Information needed
|
[
"# Dataset Card for \"my_voice_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"my_voice_dataset\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"my_voice_dataset\"\n\nMore Information needed"
] |
7e5ef9ac156baf804cdf2f5294731004accf6971
|
# Dataset Card for "kolizo-designs-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Tochi2023/kolizo-designs-dataset
|
[
"region:us"
] |
2023-10-21T10:35:40+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": " text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 200160.0, "num_examples": 10}], "download_size": 197786, "dataset_size": 200160.0}}
|
2023-10-21T10:35:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "kolizo-designs-dataset"
More Information needed
|
[
"# Dataset Card for \"kolizo-designs-dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"kolizo-designs-dataset\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"kolizo-designs-dataset\"\n\nMore Information needed"
] |
854a13a1cd5d95c564634a12776115d3a2e51e56
|
<div align="center">
<img width="640" alt="lipi17/building-cracks" src="https://huggingface.co/datasets/lipi17/building-cracks/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['crack']
```
### Number of Images
```json
{'valid': 433, 'test': 211, 'train': 1490}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("lipi17/building-cracks", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/antonio-raimundo/crack-detection-y5kyg/dataset/2](https://universe.roboflow.com/antonio-raimundo/crack-detection-y5kyg/dataset/2?ref=roboflow2huggingface)
### Citation
```
@misc{ crack-detection-y5kyg_dataset,
title = { Crack Detection Dataset },
type = { Open Source Dataset },
author = { António Raimundo },
howpublished = { \\url{ https://universe.roboflow.com/antonio-raimundo/crack-detection-y5kyg } },
url = { https://universe.roboflow.com/antonio-raimundo/crack-detection-y5kyg },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { feb },
note = { visited on 2023-10-21 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on February 10, 2023 at 3:51 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 2134 images.
Cracks are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
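
For a quick visual sanity check, a sketch along these lines may help; the `objects`/`bbox` schema is an assumption based on typical roboflow2huggingface exports, so verify it against `ds.features` first:

```python
from datasets import load_dataset
from PIL import ImageDraw

ds = load_dataset("lipi17/building-cracks", name="full", split="train")
example = ds[0]

# Assumed schema: an "image" column plus an "objects" dict holding COCO-style
# [x, y, width, height] boxes; confirm with ds.features before relying on it.
image = example["image"].copy()
draw = ImageDraw.Draw(image)
for x, y, w, h in example["objects"]["bbox"]:
    draw.rectangle([x, y, x + w, y + h], outline="red", width=3)
image.show()
```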
|
lipi17/building-cracks
|
[
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"region:us"
] |
2023-10-21T10:46:19+00:00
|
{"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface"]}
|
2023-10-21T10:53:45+00:00
|
[] |
[] |
TAGS
#task_categories-object-detection #roboflow #roboflow2huggingface #region-us
|
<div align="center">
<img width="640" alt="lipi17/building-cracks" src="URL
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via URL on February 10, 2023 at 3:51 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit URL
To find over 100k other datasets and pre-trained models, visit URL
The dataset includes 2134 images.
Cracks are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
|
[
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on February 10, 2023 at 3:51 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 2134 images.\nSoil are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied."
] |
[
"TAGS\n#task_categories-object-detection #roboflow #roboflow2huggingface #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on February 10, 2023 at 3:51 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 2134 images.\nSoil are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied."
] |
[
27,
5,
5,
18,
8,
6,
201
] |
[
"passage: TAGS\n#task_categories-object-detection #roboflow #roboflow2huggingface #region-us \n### Dataset Labels### Number of Images### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:### Roboflow Dataset Page\nURL### License\nCC BY 4.0### Dataset Summary\nThis dataset was exported via URL on February 10, 2023 at 3:51 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 2134 images.\nSoil are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied."
] |
1c2646ad7163b0debe5ef54d45a9cbce5b1348b4
|
# Dataset Card for "train_1000_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
adityarra07/train_1000_2
|
[
"region:us"
] |
2023-10-21T10:59:03+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 133278697.26379317, "num_examples": 1000}, {"name": "test", "num_bytes": 26655739.452758636, "num_examples": 200}], "download_size": 164191192, "dataset_size": 159934436.7165518}}
|
2023-10-21T10:59:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "train_1000_2"
More Information needed
|
[
"# Dataset Card for \"train_1000_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"train_1000_2\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"train_1000_2\"\n\nMore Information needed"
] |
b9e108335994fca6f69b7586feb3d2a9aa4d6906
|
# Dataset Card for "train_10000_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
adityarra07/train_10000_2
|
[
"region:us"
] |
2023-10-21T10:59:16+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1332786972.6379318, "num_examples": 10000}, {"name": "test", "num_bytes": 26655739.452758636, "num_examples": 200}], "download_size": 1340054284, "dataset_size": 1359442712.0906904}}
|
2023-10-21T11:00:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "train_10000_2"
More Information needed
|
[
"# Dataset Card for \"train_10000_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"train_10000_2\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"train_10000_2\"\n\nMore Information needed"
] |
792e791e800134168ed10027b71bb58b3124c9af
|
**Supplement Name:** [Nature's Remedy Fungi Remover](https://sites.google.com/view/naturesremedyfungiremover/home) ([Fungi Remover](https://sites.google.com/view/fungi-remover/home))

**Category:** Nail Health Formula

**Ingredients:** Tea Tree Oil, Clove Bud Oil, Almond Oil, Flaxseed Oil

**Customer Rating:** 4.5/5

**Base Price:** $39.95 Per Bottle ([Discounts Are Available on Bulk Orders](https://www.healthsupplement24x7.com/get-natures-remedy-fungi-remover))

**Formulation:** Pills

**Benefits:** The #1 Solution To Supporting a Healthy Toenail!

**Dosage:** Read On Label

**Side Effects:** There are no significant side effects

**Official Website:** [Click Here to Visit Official Website](https://www.healthsupplement24x7.com/get-natures-remedy-fungi-remover)

**Nature's Remedy Fungi Remover (Medical Experts Reviews):** Toenail fungus is a common problem that millions of people worldwide face. It causes discoloration, brittle and smelly nails, and sometimes loss of nails. Forget about homemade remedies and use [Nature's Remedy Fungi Remover](https://natures-remedy-fungi-remover.clubeo.com/page/natures-remedy-fungi-remover-medical-experts-reviews-1-nail-health-spray.html), an effective nail spray that eliminates the most aggressive toenail fungus.

The spray protects your skin and nails, strengthens the nails, improves growth, and promotes overall health. The following [Nature's Remedy Fungi Remover](https://natures-remedy-fungi-remover.clubeo.com/page/natures-remedy-fungi-remover-1-unique-fungal-eliminator-spray-promotes-nails-and-foot-health.html) review will reveal every aspect of the product.

## What is Nature's Remedy Fungi Remover?

[Nature's Remedy Fungi Remover](https://groups.google.com/g/naturesremedyfungiremover/c/jFax3KkarYY) is a unique fungal eliminator spray that promotes nail and foot health. It improves nail growth and nourishes your nails and skin. The spray contains 20 natural ingredients that fight fungal infection and prevent future recurrence.

The nail spray is rich in essential vitamins, minerals, and nutrients, giving your nails and feet a clean and beautiful look. It improves the appearance of your nails by eliminating discoloration and dullness. [Nature's Remedy Fungi Remover](https://natures-remedy-fungi-remover.clubeo.com/calendar/2023/10/20/natures-remedy-fungi-remover-au-nz-reviews-1-spray-for-permanent-solution?_ga=2.135930616.810308565.1697864054-1222946029.1697864051) protects the nails from damage, reduces brittleness, and gives long-lasting results.

The 20-in-1 formula eliminates the embarrassment of smelly and ugly feet. It helps nourish your nails and skin while enhancing nail growth. [Nature's Remedy Fungi Remover](https://groups.google.com/g/natures-remedy-fungi-remover/c/Xkfr6Rq76v4) works on all nail types without causing sensitivity or allergies. It contains 100% natural and clinically proven ingredients that support overall nail and skin health.

The advanced nail spray is pure and free from GMOs, gluten, chemicals, additives, fillers, and toxins. It is the best healthy choice for your nails and skin, giving you a noticeable difference within weeks. [Nature's Remedy Fungi Remover](https://grabcad.com/library/nature-s-remedy-fungi-remover-things-you-need-to-know-about-shocking-price-where-to-buy-1) is a premium formula produced in an FDA-compliant and GMP-certified facility that ensures quality and safety. The manufacturer ensures 100% secure encrypted transactions and provides a 60-day satisfaction guarantee on each package.

[BULK SAVINGS - BUY NATURE'S REMEDY FUNGI REMOVER BEFORE STOCK RUNS OUT](https://www.healthsupplement24x7.com/get-natures-remedy-fungi-remover)

## Nature's Remedy Fungi Remover Working Mechanism

[Nature's Remedy Fungi Remover](https://nature-remedy-fungi-remover.unicornplatform.page/) works by addressing the root cause of toenail fungus. According to the website, the type of fungus that causes toenail fungus is T. rubrum. The fungus thrives in moist environments and penetrates your toes. If left untreated, it can affect your lungs, eyes, brain, and mouth.

The advanced spray has hydrating properties that help moisturize your nails and skin, preventing brittleness, breakage, and dryness. It contains Vitamin C, which is crucial in collagen production. Collagen protein promotes healthy hair, skin, and nail growth and improves bone structure.

The nail spray has antioxidants, including green tea, aloe vera, and Vitamin C, strengthening the skin barrier function and preventing future fungal attacks. Aloe vera is rich in anti-inflammatory properties that reduce inflammation and soothe skin.

## Nature's Remedy Fungi Remover Ingredients

[Nature's Remedy Fungi Remover](https://doogeemall.com/community/xenforum/topic/111811/nature-s-remedy-fungi-remover-aunz-reviews-1-spray-for-permanent-solution) contains pure, science-backed ingredients that fight toenail fungus and improve skin barrier function. The compounds in the spray are 100% natural and do not cause any risk of side effects.

**Tea Tree Oil**
- Fights existing fungal infection
- Prevents re-infection
- Reduces yellow discoloration

**Clove Bud Oil**
- Fights nail fungus
- Stimulates nail regrowth
- Reduces burning & itching

**Almond Oil**
- Nail & skin superfood
- Anti-microbial properties
- Nail-soothing benefits

**Flaxseed Oil**
- Effective anti-fungal oil
- Reduces dermal inflammation
- Supports cuticle strengthening

**Lemongrass Oil**
- Treats dermatophytes
- Reduces nail discoloration
- Prevents nail cracking

## The Benefits of Nature's Remedy Fungi Remover

- **Strengthens your nails -** The spray contains keratin and biotin, which support the formation of strong nails. The ingredients work by increasing the nail plate's thickness and reducing the risk of breakage and brittleness.
- **Improves skin and nail appearance -** Fungal attacks cause your nails to look unpleasant. Nature's Remedy Fungi Remover gives your nails a new and healthy appearance, reducing discoloration and dullness.
- **Enhances nail growth -** Nature's Remedy Fungi Remover contains all the essential compounds that support nail growth. It improves the production of new nail cells and uses nutrients to nourish and repair damaged cells.
- **Nourishes and moisturizes nails and cuticles -** Nature's Remedy Fungi Remover contains powerful glycerin and hyaluronic acid that help hydrate your cuticles, making them softer. It is also rich in nutrients that nourish your nails and skin.
- **Antioxidant support -** The nail spray contains antioxidant-rich ingredients that help protect your nails from damage caused by environmental stressors. The compounds prevent premature aging and promote overall skin health. The antioxidants strengthen the skin barrier function, preventing fungal attacks.
- **Improves nail and foot health -** Nature's Remedy Fungi Remover contains 20 natural elements that protect your nails, reduce dryness, support nail growth, and nourish your nails and skin, thus contributing to overall nail and foot health.

## How to use Nature's Remedy Fungi Remover

[Nature's Remedy Fungi Remover](http://nature-remedy-fungi-remover.tilda.ws/) comes in a gel-like spray, allowing for easy application. Using the spray is straightforward: apply the spray on the affected area. Ensure your skin or nail is clean before using the product. You can use the product twice daily to improve its effectiveness.

The nail spray suits individuals suffering from toenail fungus or skin fungus, and those with unhealthy or brittle nails. It also benefits people with discolored, fragile, dry nails and damaged cuticles.

Nature's Remedy Fungi Remover is a natural formula containing all-natural and science-backed ingredients free from gluten, stimulants, toxins, and chemicals. Use Nature's Remedy Fungi Remover externally. Discontinue the fungus eliminator if you notice any redness or itching.

## Pricing and Money-Back Guarantee

[Nature's Remedy Fungi Remover](https://pdfhost.io/v/HM0pM~QMU_Natures_Remedy_Fungi_Remover_AUNZ_Reviews_1_Spray_For_Permanent_Solution_) is exclusively available online on the official website at discounted prices. The special pricing is only for a limited period. The product's price options are as follows:

- One Nature's Remedy Fungi Remover at $69.95 + small shipping fee;
- Two Bottles + One Free Bottle at $49.95, free US shipping;
- Three Bottles + Two Free Bottles at $39.95, free US shipping.

A 60-day money-back guarantee covers each [Nature's Remedy Fungi Remover](https://www.ivoox.com/nature-s-remedy-fungi-remover-audios-mp3_rf_118141375_1.html) package option to ensure customer satisfaction. If the spray doesn't improve your nails' health, the manufacturer will be more than willing to give a 100% refund.

## Conclusion

[Nature's Remedy Fungi Remover](https://gamma.app/public/Natures-Remedy-Fungi-Remover-AUNZ-Reviews-1-Spray-For-Permanent-S-5w7jphrj9ew5aaf?mode=doc) is a natural solution that helps eliminate toenail fungus. It uses plant-based ingredients that are clinically proven to promote healthy skin and nails. The spray helps deal with brittle nails, dry cuticles, weak nails, and damaged nails.

The revolutionary nail spray improves the skin barrier function, thus preventing fungus from penetrating your skin and nails. It promotes the growth of stronger and healthier nails and improves collagen production for healthy skin and hair.

[Nature's Remedy Fungi Remover](https://sketchfab.com/3d-models/natures-remedy-fungi-remover-aunz-reviews-2d45d22c02ab426ea1d0c945f99e5fd7) is a comprehensive nail spray that promises long-term results without the risk of side effects. The fungus remover is safe and free from toxins, chemicals, or additives. Additionally, the manufacturer offers a 60-day satisfaction guarantee and free bonuses.

### READ MORE ON OFFICIAL WEBSITE:

- https://natures-remedy-fungi-remover.clubeo.com/calendar/2023/10/20/natures-remedy-fungi-remover-au-nz-reviews-1-spray-for-permanent-solution
- https://sites.google.com/view/naturesremedyfungiremover/home
- https://groups.google.com/g/natures-remedy-fungi-remover/c/Xkfr6Rq76v4
- https://natures-remedy-fungi-remover.clubeo.com/page/natures-remedy-fungi-remover-medical-experts-reviews-1-nail-health-spray.html
- https://sites.google.com/view/fungi-remover/home
- https://natures-remedy-fungi-remover.clubeo.com/page/natures-remedy-fungi-remover-1-unique-fungal-eliminator-spray-promotes-nails-and-foot-health.html
- https://groups.google.com/g/naturesremedyfungiremover/c/jFax3KkarYY
- https://grabcad.com/library/nature-s-remedy-fungi-remover-things-you-need-to-know-about-shocking-price-where-to-buy-1
- https://nature-remedy-fungi-remover.unicornplatform.page/
- https://doogeemall.com/community/xenforum/topic/111811/nature-s-remedy-fungi-remover-aunz-reviews-1-spray-for-permanent-solution
- https://www.ivoox.com/nature-s-remedy-fungi-remover-audios-mp3_rf_118141375_1.html
- https://gamma.app/public/Natures-Remedy-Fungi-Remover-AUNZ-Reviews-1-Spray-For-Permanent-S-5w7jphrj9ew5aaf
<p><a href="https://pdfhost.io/v/HM0pM~QMU_Natures_Remedy_Fungi_Remover_AUNZ_Reviews_1_Spray_For_Permanent_Solution_">https://pdfhost.io/v/HM0pM~QMU_Natures_Remedy_Fungi_Remover_AUNZ_Reviews_1_Spray_For_Permanent_Solution_</a></p>
<p><a href="https://sketchfab.com/3d-models/natures-remedy-fungi-remover-aunz-reviews-2d45d22c02ab426ea1d0c945f99e5fd7">https://sketchfab.com/3d-models/natures-remedy-fungi-remover-aunz-reviews-2d45d22c02ab426ea1d0c945f99e5fd7</a></p>
<p><a href="http://nature-remedy-fungi-remover.tilda.ws/">http://nature-remedy-fungi-remover.tilda.ws/</a></p>
|
NaturesRemedyFungiRemover/Natures-Remedy-Fungi-Remover
|
[
"region:us"
] |
2023-10-21T11:19:05+00:00
|
{}
|
2023-10-21T11:21:02+00:00
|
[] |
[] |
TAGS
#region-us
|
<p><strong>Supplement Name : <a href="URL target="_blank"><span style="color: red;">Nature's Remedy Fungi Remover</span></a> (</strong><strong><a href="URL target="_blank"><span style="color: red;">Fungi Remover</span></a></strong><strong>)</strong></p>
<p><strong>Category : </strong>Nail Health Formula<strong><br /></strong></p>
<p><strong>Ingredients:</strong> Tea Tree Oil, Clove Bud Oil, Almond Oil, Flaxseed Oil, Lemongrass Oil</p>
<p><strong>Customer Rating : </strong>4.5/5</p>
<p><strong>Base Price : <span style="color: red;">$39.95 Per Bottle </span></strong> (<a href="URL target="_blank"><span style="color: red;"><strong>Discounts Are Available on Bulk Order</strong></span></a>)</p>
<p><strong>Formulation: Pills</strong></p>
<p><strong>Benefits: </strong>The #1 Solution To Support the Healthy <strong>Toenail!<br /></strong></p>
<p><strong>Dosage: </strong>Read On Label</p>
<p><strong>Side Effects: </strong>There are no significant side effects</p>
<p><strong>Official Website:- <a href="URL target="_blank"><span style="color: red;">Click Here to Visit Official Website</span></a></strong></p>
<p><strong>Nature's Remedy Fungi Remover (Medical Experts Reviews):</strong> Toenail fungus is a common problem that millions of people worldwide face. It causes discoloration, brittle and smelly nails, and sometimes loss of nails. Forget about homemade remedies and use <a href="URL Remedy Fungi Remover</a>, an effective nail spray that eliminates the most aggressive toenail fungus.</p>
<p>The spray protects your skin and nails, strengthens the nails, improves growth, and promotes overall health.The following <a href="URL Remedy Fungi Remover</a> review will reveal every aspect of the product.</p>
<p style="text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL target="_blank"><img src="URL alt="" width="640" height="360" border="0" data-original-height="617" data-original-width="1095" /></a></p>
<h2>What is Nature's Remedy Fungi Remover?</h2>
<p><a href="URL Remedy Fungi Remover</a> is a unique fungal eliminator spray that promotes nails and foot health. It improves nail growth and nourishes your nails and skin. The spray contains 20 natural ingredients that fight fungal infection and prevent future recurrence.</p>
<p>The nail spray is rich in essential vitamins, minerals, and nutrients, giving your nails and feet a clean and beautiful look. It improves the appearance of your nails by eliminating discoloration and dullness. <a href="URL Remedy Fungi Remover</a> protects the nails from damage, reduces brittleness, and gives long-lasting results.</p>
<p>The 20-in-1 formula eliminates the embarrassment of smelly and ugly feet. It helps nourish your nails and skin while enhancing nail growth. <a href="URL Remedy Fungi Remover</a> works on all nail types without causing sensitivity or allergies. It contains 100% natural and clinically proven ingredients that support overall nail and skin health.</p>
<p>The advanced nail spray is pure and free from GMOs, gluten, chemicals, additives, fillers, and toxins. It is the best healthy choice for your nails and skin, giving you a noticeable difference within weeks. <a href="URL Remedy Fungi Remover</a> is a premium formula produced in an FDA-compliant and GMP-certified facility that ensures quality and safety. The manufacturer ensures 100% secure encrypted transactions and provides a 60-day satisfaction guarantee on each package.</p>
<h2 style="text-align: center;"><a href="URL style="color: #ff0000;">BULK SAVINGS - BUY NATURE"S REMEDY FUNGI REMOVER BEFORE STOCK RUNS OUT</span></a></h2>
<h2>Nature's Remedy Fungi Remover Working Mechanism</h2>
<p><a href="URL Remedy Fungi Remover</a> works by addressing the root cause of toenail fungus. According to the website, the type of fungus that causes toenail fungus is T.Rubrum. The fungus grows in moist environments that penetrate your toes. If left untreated, it can affect your lungs, eyes, brain, and mouth.</p>
<p>The advanced spray has hydrating properties that help moisturize your nails and skin, preventing brittleness, breakage, and dryness. It contains Vitamin C, which is crucial in collagen production. Collagen protein promotes healthy hair, skin, and nail growth and improves bone structure.</p>
<p>The nail spray has antioxidants, including green tea, aloe vera, and Vitamin C, strengthening the skin barrier function and preventing future fungal attacks. Aloe vera is rich in anti-inflammatory properties that reduce inflammation and soothe skin.</p>
<h2>Nature's Remedy Fungi Remover Ingredients</h2>
<p><a href="URL Remedy Fungi Remover</a> contains pure, science-backed ingredients that fight toenail fungus and improve skin barrier function. The compounds in the spray are 100% natural and do not cause any risk of side effects.</p>
<div class="bullseye_container-0-2-308"><strong>Tea Tree Oil</strong></div>
<ul>
<li>Fights existing fungal infection</li>
<li>Prevents re-infection</li>
<li>Reduces yellow discoloration</li>
</ul>
<p><strong>Clove Bud Oil</strong></p>
<ul>
<li>Fights nail fungus</li>
<li>Stimulates nail regrowth</li>
<li>Reduces burning & itching</li>
</ul>
<p><strong>Almond Oil</strong></p>
<ul>
<li>Nail & skin superfood</li>
<li>Anti-microbial properties</li>
<li>Nail-soothing benefits</li>
</ul>
<p><strong>Flaxseed Oil</strong></p>
<ul>
<li>Effective anti-fungal oil</li>
<li>Reduces dermal inflammation</li>
<li>Supports cuticle strengthening</li>
</ul>
<p><strong>Lemongrass Oil</strong></p>
<ul>
<li>Treats dermatophyte</li>
<li>Reduces nail discoloration</li>
<li>Prevents nail cracking</li>
</ul>
<h2 style="text-align: center;"><a href="URL style="color: #ff0000;">BULK SAVINGS - BUY NATURE"S REMEDY FUNGI REMOVER BEFORE STOCK RUNS OUT</span></a></h2>
<p style="text-align: center;"> <a style="margin-left: 1em; margin-right: 1em;" href="URL target="_blank"><img src="URL alt="" width="640" height="288" border="0" data-original-height="802" data-original-width="1778" /></a></p>
<h2>The Benefits of Nature's Remedy Fungi Remover</h2>
<ul>
<li><strong>Strengthens your nails-</strong> The spray contains keratin and biotin, which support the formation of strong nails. The ingredients work by increasing the nail plate’s thickness and reducing the risk of breakage and brittleness.</li>
<li><strong>Improve skin and nail appearance-</strong> fungus attacks cause your nails to look unpleasant. Nature's Remedy Fungi Remover gives your nails a new and healthy appearance, reducing discoloration and dullness.</li>
<li><strong>Enhance nail growth-</strong> Nature's Remedy Fungi Remover contains all the essential compounds that support nail growth. It improves the production of new nail cells and uses nutrients to nourish and repair damaged cells.</li>
<li><strong>Nourish and moisturize nails and cuticles-</strong> Nature's Remedy Fungi Remover contains powerful glycerin and hyaluronic acid that help hydrate your cuticles, making them softer. It is also rich in nutrients that nourish your nails and skin.</li>
<li><strong>Antioxidant support-</strong> The nail spray contains antioxidant-rich ingredients that help protect your nails from damage caused by environmental stressors. The compounds prevent premature aging and promote overall skin health. The antioxidants strengthen the skin barrier function, preventing fungal attacks.</li>
<li><strong>Improve nail and foot health-</strong> Nature's Remedy Fungi Remover contains 20 natural elements that protect your nails, reduce dryness, support nail growth, and nourish your nails and skin, thus contributing to overall nail and foot health.</li>
</ul>
<h2>How to use Nature's Remedy Fungi Remover</h2>
<p><a href="URL Remedy Fungi Remover</a> comes in a gel-like spray, allowing for easy application. Using the spray is straightforward. Apply Sray on the affected area. Ensure your skin or nail is clean before using the product. You can use the product twice daily to improve its effectiveness.</p>
<p>The nail spray suits individuals suffering from toenails, skin fungus, and those with unhealthy or brittle nails. It also benefits people with discolored, fragile, dry nails and damaged cuticles.</p>
<p>Nature's Remedy Fungi Remover is a natural formula containing all-natural and science-backed ingredients free from gluten, stimulants, toxins, and chemicals.Use Nature's Remedy Fungi Remover externally. Discontinue the fungus eliminator if you notice any redness or itching.</p>
<h2 style="text-align: center;"><a href="URL style="color: #ff0000;">BULK SAVINGS - BUY NATURE"S REMEDY FUNGI REMOVER BEFORE STOCK RUNS OUT</span></a></h2>
<p style="text-align: center;"> <a style="margin-left: 1em; margin-right: 1em;" href="URL target="_blank"><img src="URL alt="" width="640" height="422" border="0" data-original-height="696" data-original-width="1056" /></a></p>
<h2>Pricing and Money-Back Guarantee</h2>
<p><a href="URL Remedy Fungi Remover</a> is exclusively available online on the official website at discounted prices. The special pricing is only for a limited period. The product’s price options are as follows:</p>
<ul>
<li>One Nature's Remedy Fungi Remover at $69.95 + small shipping fee;</li>
<li>Two Bottles + One Free Bottle at $49.95 free US shipping;</li>
<li>Three Bottles + Two Free Bottles at $39.95 free US shipping.</li>
</ul>
<p>A 60-day money-back guarantee covers each <a href="URL Remedy Fungi Remover</a> package option to ensure customer satisfaction. If the spray doesn’t improve your nail’s health, the manufacturer will be more than willing to give a 100% refund.</p>
<h2>Conclusion</h2>
<p><a href="URL Remedy Fungi Remover</a> is a natural solution that helps eliminate toenail fungus. It uses plant-based ingredients that are clinically proven to promote healthy skin and nails. The spray helps deal with brittle nails, dry cuticles, weak nails, and damaged nails.</p>
<p>The revolutionary nail spray improves the skin barrier function, thus preventing fungus from penetrating your skin and nails. It promotes the growth of stronger and healthier nails and improves collagen production for healthy skin and hair.</p>
<p><a href="URL Remedy Fungi Remover</a> is a comprehensive nail spray that promises long-term results without the risk of side effects. The fungus remover is safe and free from toxins, chemicals, or additives. Additionally, the manufacturer offers a 60-day satisfaction guarantee and free bonuses.</p>
<h2 style="text-align: center;"><a href="URL style="color: #ff0000;">BULK SAVINGS - BUY NATURE"S REMEDY FUNGI REMOVER BEFORE STOCK RUNS OUT</span></a></h2>
<h3>READ MORE ON OFFICIAL WEBSITE:</h3>
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
e1b9a8760eb6c89c26386aef2ac02b697ca924d7
|
<div align="center">
<img width="640" alt="lipi17/building-cracks-merged" src="https://huggingface.co/datasets/lipi17/building-cracks-merged/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['crack', 'stairstep_crack']
```
### Number of Images
```json
{'test': 11, 'valid': 433, 'train': 947}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("lipi17/building-cracks-merged", name="full")
example = ds['train'][0]
```
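If you want to go one step further and inspect the COCO-style annotations, a minimal sketch follows. Note that the feature names used below (`image`, `objects`, `bbox`, `category`) are assumptions based on typical roboflow2huggingface exports, so verify them against the printed schema before relying on them:

```python
# Inspect one annotated sample. The feature names are assumptions; check
# the printed schema first before relying on them.
from datasets import load_dataset

ds = load_dataset("lipi17/building-cracks-merged", name="full")
print(ds["train"].features)               # confirm the actual schema

example = ds["train"][0]
image = example["image"]                  # PIL image, 640x640 after resizing
for bbox, category in zip(example["objects"]["bbox"],
                          example["objects"]["category"]):
    # COCO boxes are [x_min, y_min, width, height] in pixels
    print(f"class={category}, bbox={bbox}")
```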
### Roboflow Dataset Page
[https://universe.roboflow.com/lipi-deepaakshi-patnaik-ktyz8/merged-building-cracks/dataset/1](https://universe.roboflow.com/lipi-deepaakshi-patnaik-ktyz8/merged-building-cracks/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ merged-building-cracks_dataset,
title = { Merged-Building-Cracks Dataset },
type = { Open Source Dataset },
author = { Lipi Deepaakshi Patnaik },
howpublished = { \url{ https://universe.roboflow.com/lipi-deepaakshi-patnaik-ktyz8/merged-building-cracks } },
url = { https://universe.roboflow.com/lipi-deepaakshi-patnaik-ktyz8/merged-building-cracks },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { oct },
note = { visited on 2023-10-21 },
}
```
### License
MIT
### Dataset Summary
This dataset was exported via roboflow.com on October 21, 2023 at 12:21 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 1391 images.
Cracks are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
|
lipi17/Building-Cracks-Merged
|
[
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"region:us"
] |
2023-10-21T11:20:50+00:00
|
{"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface"]}
|
2023-10-21T11:21:54+00:00
|
[] |
[] |
TAGS
#task_categories-object-detection #roboflow #roboflow2huggingface #region-us
|
<div align="center">
<img width="640" alt="lipi17/building-cracks-merged" src="URL
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
MIT
### Dataset Summary
This dataset was exported via URL on October 21, 2023 at 12:21 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit URL
To find over 100k other datasets and pre-trained models, visit URL
The dataset includes 1391 images.
Cracks are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
|
[
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nMIT",
"### Dataset Summary\nThis dataset was exported via URL on October 21, 2023 at 12:21 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 1391 images.\nCracks are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied."
] |
[
"TAGS\n#task_categories-object-detection #roboflow #roboflow2huggingface #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nMIT",
"### Dataset Summary\nThis dataset was exported via URL on October 21, 2023 at 12:21 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 1391 images.\nCracks are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied."
] |
[
27,
5,
5,
18,
8,
4,
200
] |
[
"passage: TAGS\n#task_categories-object-detection #roboflow #roboflow2huggingface #region-us \n### Dataset Labels### Number of Images### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:### Roboflow Dataset Page\nURL### License\nMIT### Dataset Summary\nThis dataset was exported via URL on October 21, 2023 at 12:21 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 1391 images.\nCracks are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied."
] |
cca354825e3d04f0e742565f6fe68214dd91c2ef
|
# Dataset Card for "salmon-asr-smj"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
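Until the card is filled in, a minimal loading sketch may help; the feature names (`audio`, `transcription`, `duration`) are taken from this repository's metadata rather than from documentation:

```python
# Load the dataset and inspect one sample. Feature names come from the
# repo metadata; the audio column decodes to an array plus sampling rate.
from datasets import load_dataset

ds = load_dataset("NbAiLab/salmon-asr-smj")
sample = ds["train"][0]

waveform = sample["audio"]["array"]
rate = sample["audio"]["sampling_rate"]
print(sample["transcription"], sample["duration"], rate)
```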
|
NbAiLab/salmon-asr-smj
|
[
"region:us"
] |
2023-10-21T11:40:19+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}, {"name": "duration", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 3425289656.938, "num_examples": 18657}, {"name": "validation", "num_bytes": 20146487.0, "num_examples": 100}, {"name": "test", "num_bytes": 19303449.0, "num_examples": 100}], "download_size": 3896709446, "dataset_size": 3464739592.938}}
|
2023-10-21T11:43:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "salmon-asr-smj"
More Information needed
|
[
"# Dataset Card for \"salmon-asr-smj\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"salmon-asr-smj\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"salmon-asr-smj\"\n\nMore Information needed"
] |
b602db5da66b37f78032838b7aeedfee18e46492
|
# Internal training dataset
This is an internal training dataset.
|
davidprinz/calltaker-dataset
|
[
"region:us"
] |
2023-10-21T12:39:51+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 52}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train-*"}]}]}
|
2023-10-28T15:31:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Internal training dataset
This is an internal training dataset.
|
[
"# Internat training dataset\nThis is an internal training dataset"
] |
[
"TAGS\n#region-us \n",
"# Internat training dataset\nThis is an internal training dataset"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Internat training dataset\nThis is an internal training dataset"
] |
c0f33a480b141a1585671b9c93901a726909625f
|
# Dataset Card for Evaluation run of WizardLM/WizardLM-30B-V1.0
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/WizardLM/WizardLM-30B-V1.0
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [WizardLM/WizardLM-30B-V1.0](https://huggingface.co/WizardLM/WizardLM-30B-V1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 8 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_WizardLM__WizardLM-30B-V1.0",
"harness_gsm8k_5",
split="train")
```
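As a possible follow-up (an addition, not part of the original card), the returned split behaves like a regular `datasets.Dataset`, so it can be inspected with pandas; the exact columns vary per task, so discover them at runtime rather than assuming names:

```python
# Continues the snippet above: `data` is the "harness_gsm8k_5" train split.
# Column names differ across tasks, so print them instead of assuming.
print(data.column_names)
df = data.to_pandas()   # requires pandas to be installed
print(df.head())
```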
## Latest results
These are the [latest results from run 2023-12-03T06:39:48.508245](https://huggingface.co/datasets/open-llm-leaderboard/details_WizardLM__WizardLM-30B-V1.0/blob/main/results_2023-12-03T06-39-48.508245.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_WizardLM__WizardLM-30B-V1.0
|
[
"region:us"
] |
2023-10-21T13:56:59+00:00
|
{"pretty_name": "Evaluation run of WizardLM/WizardLM-30B-V1.0", "dataset_summary": "Dataset automatically created during the evaluation run of model [WizardLM/WizardLM-30B-V1.0](https://huggingface.co/WizardLM/WizardLM-30B-V1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 8 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_WizardLM__WizardLM-30B-V1.0\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T06:39:48.508245](https://huggingface.co/datasets/open-llm-leaderboard/details_WizardLM__WizardLM-30B-V1.0/blob/main/results_2023-12-03T06-39-48.508245.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/WizardLM/WizardLM-30B-V1.0", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_21T14_56_55.024771", "path": ["**/details_harness|drop|3_2023-10-21T14-56-55.024771.parquet"]}, {"split": "2023_10_21T17_46_17.809151", "path": ["**/details_harness|drop|3_2023-10-21T17-46-17.809151.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-21T17-46-17.809151.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_21T14_56_55.024771", "path": ["**/details_harness|gsm8k|5_2023-10-21T14-56-55.024771.parquet"]}, {"split": "2023_10_21T17_46_17.809151", "path": ["**/details_harness|gsm8k|5_2023-10-21T17-46-17.809151.parquet"]}, {"split": "2023_12_03T04_35_31.048043", "path": ["**/details_harness|gsm8k|5_2023-12-03T04-35-31.048043.parquet"]}, {"split": "2023_12_03T04_36_04.759843", "path": ["**/details_harness|gsm8k|5_2023-12-03T04-36-04.759843.parquet"]}, {"split": "2023_12_03T04_42_48.050570", "path": ["**/details_harness|gsm8k|5_2023-12-03T04-42-48.050570.parquet"]}, {"split": "2023_12_03T04_43_48.387414", "path": ["**/details_harness|gsm8k|5_2023-12-03T04-43-48.387414.parquet"]}, {"split": "2023_12_03T06_38_27.690404", "path": ["**/details_harness|gsm8k|5_2023-12-03T06-38-27.690404.parquet"]}, {"split": "2023_12_03T06_39_48.508245", "path": ["**/details_harness|gsm8k|5_2023-12-03T06-39-48.508245.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T06-39-48.508245.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_21T14_56_55.024771", "path": ["**/details_harness|winogrande|5_2023-10-21T14-56-55.024771.parquet"]}, {"split": 
"2023_10_21T17_46_17.809151", "path": ["**/details_harness|winogrande|5_2023-10-21T17-46-17.809151.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-21T17-46-17.809151.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_21T14_56_55.024771", "path": ["results_2023-10-21T14-56-55.024771.parquet"]}, {"split": "2023_10_21T17_46_17.809151", "path": ["results_2023-10-21T17-46-17.809151.parquet"]}, {"split": "2023_12_03T04_35_31.048043", "path": ["results_2023-12-03T04-35-31.048043.parquet"]}, {"split": "2023_12_03T04_36_04.759843", "path": ["results_2023-12-03T04-36-04.759843.parquet"]}, {"split": "2023_12_03T04_42_48.050570", "path": ["results_2023-12-03T04-42-48.050570.parquet"]}, {"split": "2023_12_03T04_43_48.387414", "path": ["results_2023-12-03T04-43-48.387414.parquet"]}, {"split": "2023_12_03T06_38_27.690404", "path": ["results_2023-12-03T06-38-27.690404.parquet"]}, {"split": "2023_12_03T06_39_48.508245", "path": ["results_2023-12-03T06-39-48.508245.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T06-39-48.508245.parquet"]}]}]}
|
2023-12-03T06:39:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of WizardLM/WizardLM-30B-V1.0
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model WizardLM/WizardLM-30B-V1.0 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 8 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T06:39:48.508245 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of WizardLM/WizardLM-30B-V1.0",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model WizardLM/WizardLM-30B-V1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 8 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T06:39:48.508245(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of WizardLM/WizardLM-30B-V1.0",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model WizardLM/WizardLM-30B-V1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 8 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T06:39:48.508245(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
21,
31,
170,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of WizardLM/WizardLM-30B-V1.0## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model WizardLM/WizardLM-30B-V1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 8 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T06:39:48.508245(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
909a173ac3706b1c3581aaa87d5015c1e52acfc8
|
# Dataset Card for "b0d16951"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/b0d16951
|
[
"region:us"
] |
2023-10-21T14:08:00+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 168, "num_examples": 10}], "download_size": 1367, "dataset_size": 168}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-21T14:08:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "b0d16951"
More Information needed
|
[
"# Dataset Card for \"b0d16951\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"b0d16951\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"b0d16951\"\n\nMore Information needed"
] |
b55c358eb2a66b7ff198a2893b97237401ba62de
|
# Simple Zundamon Dataset

## Introduction
This is a simple dataset packed with Zundamon's character settings.
The author created it from information looked up on the internet and data received from the official team.
Please use it for sanity checks when building a character LLM.
That said, even for sanity checks, please read the license carefully whenever possible.
For any other use, read the license carefully.
## Formats
- LLM-jp: [zmnjp.jsonl](zmnjp.jsonl)
- ChatGPT: [zmn.jsonl](zmn.jsonl)
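For a quick look at the records, a minimal sketch like the one below can help; it assumes each line of `zmn.jsonl` is a standalone JSON object (standard JSON Lines), which is an assumption rather than a documented guarantee:

```python
# Peek at the first record of the ChatGPT-format file. Assumes standard
# JSON Lines (one JSON object per line); the record schema itself is not
# documented here.
import json

with open("zmn.jsonl", encoding="utf-8") as f:
    first_record = json.loads(next(f))
print(first_record)
```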
## License
- [(ず・ω・きょ)](https://zunko.jp/guideline.html)
|
alfredplpl/simple-zundamon
|
[
"language:ja",
"license:other",
"region:us"
] |
2023-10-21T14:16:58+00:00
|
{"language": ["ja"], "license": "other", "license_name": "view-read-more", "license_link": "https://zunko.jp/guideline.html"}
|
2023-10-21T15:10:17+00:00
|
[] |
[
"ja"
] |
TAGS
#language-Japanese #license-other #region-us
|
# Simple Zundamon Dataset
!ずっきょ
## Introduction
This is a simple dataset packed with Zundamon's character settings.
The author created it from information looked up on the internet and data received from the official team.
Please use it for sanity checks when building a character LLM.
That said, even for sanity checks, please read the license carefully whenever possible.
For any other use, read the license carefully.
## Formats
- LLM-jp: URL
- ChatGPT: URL
## License
- (ず・ω・きょ)
|
[
"# シンプルずんだもんデータセット\n\n!ずっきょ",
"## はじめに\nずんだもんの設定が詰まったシンプルなデータセットです。\n作者がインターネットで調べたり、運営の人からもらったデータから作成しました。\nキャラクターLLMを作るための動作確認にお使いください。\nただし、可能な限り動作確認でもライセンスをよく読んでください。\n他の用途はライセンスをよく読んでください。",
"## 各種フォーマット\n- LLM-jp: URL\n- ChatGPT: URL",
"## ライセンス\n- (ず・ω・きょ)"
] |
[
"TAGS\n#language-Japanese #license-other #region-us \n",
"# シンプルずんだもんデータセット\n\n!ずっきょ",
"## はじめに\nずんだもんの設定が詰まったシンプルなデータセットです。\n作者がインターネットで調べたり、運営の人からもらったデータから作成しました。\nキャラクターLLMを作るための動作確認にお使いください。\nただし、可能な限り動作確認でもライセンスをよく読んでください。\n他の用途はライセンスをよく読んでください。",
"## 各種フォーマット\n- LLM-jp: URL\n- ChatGPT: URL",
"## ライセンス\n- (ず・ω・きょ)"
] |
[
17,
13,
76,
19,
14
] |
[
"passage: TAGS\n#language-Japanese #license-other #region-us \n# シンプルずんだもんデータセット\n\n!ずっきょ## はじめに\nずんだもんの設定が詰まったシンプルなデータセットです。\n作者がインターネットで調べたり、運営の人からもらったデータから作成しました。\nキャラクターLLMを作るための動作確認にお使いください。\nただし、可能な限り動作確認でもライセンスをよく読んでください。\n他の用途はライセンスをよく読んでください。## 各種フォーマット\n- LLM-jp: URL\n- ChatGPT: URL## ライセンス\n- (ず・ω・きょ)"
] |
17806c0ee4bc72f5b5bd4439c4a7eef2f18f8ea4
|
# Dataset Card for Evaluation run of TheTravellingEngineer/llama2-7b-chat-hf-dpo
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheTravellingEngineer/llama2-7b-chat-hf-dpo
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheTravellingEngineer/llama2-7b-chat-hf-dpo](https://huggingface.co/TheTravellingEngineer/llama2-7b-chat-hf-dpo) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheTravellingEngineer__llama2-7b-chat-hf-dpo",
"harness_winogrande_5",
split="train")
```
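A further sketch (an addition, not part of the original card): according to this repo's configuration list, the aggregated metrics live in a separate "results" configuration that exposes a "latest" split:

```python
# Load the aggregated metrics; the "results" config and "latest" split
# names are taken from this repo's configuration list.
results = load_dataset(
    "open-llm-leaderboard/details_TheTravellingEngineer__llama2-7b-chat-hf-dpo",
    "results",
    split="latest",
)
print(results[0])
```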
## Latest results
These are the [latest results from run 2023-10-21T15:24:24.824403](https://huggingface.co/datasets/open-llm-leaderboard/details_TheTravellingEngineer__llama2-7b-chat-hf-dpo/blob/main/results_2023-10-21T15-24-24.824403.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.06763842281879194,
"em_stderr": 0.0025717489509556085,
"f1": 0.13085570469798627,
"f1_stderr": 0.0028825856446422905,
"acc": 0.39549166962367155,
"acc_stderr": 0.009921949302668327
},
"harness|drop|3": {
"em": 0.06763842281879194,
"em_stderr": 0.0025717489509556085,
"f1": 0.13085570469798627,
"f1_stderr": 0.0028825856446422905
},
"harness|gsm8k|5": {
"acc": 0.07354056103108415,
"acc_stderr": 0.0071898357543652685
},
"harness|winogrande|5": {
"acc": 0.7174427782162589,
"acc_stderr": 0.012654062850971384
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_TheTravellingEngineer__llama2-7b-chat-hf-dpo
|
[
"region:us"
] |
2023-10-21T14:24:28+00:00
|
{"pretty_name": "Evaluation run of TheTravellingEngineer/llama2-7b-chat-hf-dpo", "dataset_summary": "Dataset automatically created during the evaluation run of model [TheTravellingEngineer/llama2-7b-chat-hf-dpo](https://huggingface.co/TheTravellingEngineer/llama2-7b-chat-hf-dpo) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheTravellingEngineer__llama2-7b-chat-hf-dpo\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-21T15:24:24.824403](https://huggingface.co/datasets/open-llm-leaderboard/details_TheTravellingEngineer__llama2-7b-chat-hf-dpo/blob/main/results_2023-10-21T15-24-24.824403.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.06763842281879194,\n \"em_stderr\": 0.0025717489509556085,\n \"f1\": 0.13085570469798627,\n \"f1_stderr\": 0.0028825856446422905,\n \"acc\": 0.39549166962367155,\n \"acc_stderr\": 0.009921949302668327\n },\n \"harness|drop|3\": {\n \"em\": 0.06763842281879194,\n \"em_stderr\": 0.0025717489509556085,\n \"f1\": 0.13085570469798627,\n \"f1_stderr\": 0.0028825856446422905\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07354056103108415,\n \"acc_stderr\": 0.0071898357543652685\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7174427782162589,\n \"acc_stderr\": 0.012654062850971384\n }\n}\n```", "repo_url": "https://huggingface.co/TheTravellingEngineer/llama2-7b-chat-hf-dpo", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_21T15_24_24.824403", "path": ["**/details_harness|drop|3_2023-10-21T15-24-24.824403.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-21T15-24-24.824403.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_21T15_24_24.824403", "path": ["**/details_harness|gsm8k|5_2023-10-21T15-24-24.824403.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-21T15-24-24.824403.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_21T15_24_24.824403", "path": ["**/details_harness|winogrande|5_2023-10-21T15-24-24.824403.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-21T15-24-24.824403.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_21T15_24_24.824403", "path": ["results_2023-10-21T15-24-24.824403.parquet"]}, {"split": "latest", "path": ["results_2023-10-21T15-24-24.824403.parquet"]}]}]}
|
2023-10-21T14:24:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of TheTravellingEngineer/llama2-7b-chat-hf-dpo
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model TheTravellingEngineer/llama2-7b-chat-hf-dpo on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-21T15:24:24.824403 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of TheTravellingEngineer/llama2-7b-chat-hf-dpo",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheTravellingEngineer/llama2-7b-chat-hf-dpo on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-21T15:24:24.824403(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of TheTravellingEngineer/llama2-7b-chat-hf-dpo",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheTravellingEngineer/llama2-7b-chat-hf-dpo on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-21T15:24:24.824403(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
29,
31,
177,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TheTravellingEngineer/llama2-7b-chat-hf-dpo## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheTravellingEngineer/llama2-7b-chat-hf-dpo on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-21T15:24:24.824403(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
7660e4175d9a0f0ffe3d1440829cbf4b32527bd9
|
# 2D-ATOMS: 2D Abilities in Theory of Mind Space dataset
Official dataset for [**Towards A Holistic Landscape of Situated Theory of Mind in Large Language Models**](https://arxiv.org/abs/2310.19619). Ziqiao Ma, Jacob Sansom, Run Peng, Joyce Chai. EMNLP Findings, 2023.
## Overview

We introduce the **2D-ATOMS** dataset, a novel text-based dataset that evaluates a machine's reasoning process under a situated theory-of-mind setting.
Our dataset includes 9 different ToM evaluation tasks for each mental state under ATOMS[1], and 1 reality-checking task to test LLMs’ understanding of the world. It is important to acknowledge that our experiment serves as a proof of concept and does not aim to cover the entire spectrum of machine ToM, as our case studies are far from being exhaustive or systematic. Here we release the zero-shot version of our dataset, which is used in our paper.
If you find our work useful, please give us credit by citing:
```bibtex
@inproceedings{ma2023towards,
title={Towards A Holistic Landscape of Situated Theory of Mind in Large Language Models},
author={Ma, Ziqiao and Sansom, Jacob and Peng, Run and Chai, Joyce},
booktitle={Findings of the Association for Computational Linguistics: EMNLP 2023},
year={2023}
}
```
## Download
```python
from datasets import load_dataset
dataset = load_dataset("sled-umich/2D-ATOMS")
```
## Reference
[1] C. Beaudoin, É. Leblanc, C. Gagner, and M. H. Beauchamp, ‘Systematic review and inventory of theory of mind measures for young children’, Frontiers in psychology, vol. 10, p. 2905, 2020.
|
sled-umich/2D-ATOMS
|
[
"task_categories:zero-shot-classification",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"LLM",
"Theory-Of-Mind",
"arxiv:2310.19619",
"region:us"
] |
2023-10-21T15:04:58+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["zero-shot-classification"], "tags": ["LLM", "Theory-Of-Mind"]}
|
2023-10-31T17:15:19+00:00
|
[
"2310.19619"
] |
[
"en"
] |
TAGS
#task_categories-zero-shot-classification #size_categories-1K<n<10K #language-English #license-mit #LLM #Theory-Of-Mind #arxiv-2310.19619 #region-us
|
# 2D-ATOMS: 2D Abilities in Theory of Mind Space dataset
Official dataset for Towards A Holistic Landscape of Situated Theory of Mind in Large Language Models. Ziqiao Ma, Jacob Sansom, Run Peng, Joyce Chai. EMNLP Findings, 2023.
## Overview
!image
We introduce the 2D-ATOMS dataset, a novel text-based dataset that evaluates a machine's reasoning process under a situated theory-of-mind setting.
Our dataset includes 9 different ToM evaluation tasks for each mental state under ATOMS[1], and 1 reality-checking task to test LLMs’ understanding of the world. It is important to acknowledge that our experiment serves as a proof of concept and does not aim to cover the entire spectrum of machine ToM, as our case studies are far from being exhaustive or systematic. Here we release the zero-shot version of our dataset, which is used in our paper.
If you find our work useful, please give us credit by citing:
## Download
## Reference
[1] C. Beaudoin, É. Leblanc, C. Gagner, and M. H. Beauchamp, ‘Systematic review and inventory of theory of mind measures for young children’, Frontiers in psychology, vol. 10, p. 2905, 2020.
|
[
"# 2D-ATOMS: 2D Abilities in Theory of Mind Space dataset\n\nOfficial dataset for Towards A Holistic Landscape of Situated Theory of Mind in Large Language Models. Ziqiao Ma, Jacob Sansom, Run Peng, Joyce Chai. EMNLP Findings, 2023.",
"## Overview\n\n!image\n\nWe introduce 2D-ATOMS dataset, a novel text-based dataset that evaluates a machine's reasoning process under a situated theory-of-mind setting.\n\nOur dataset includes 9 different ToM evaluation tasks for each mental state under ATOMS[1], and 1 reality-checking task to test LLMs’ understanding of the world. It is important to acknowledge that our experiment serves as a proof of concept and does not aim to cover the entire spectrum of machine ToM, as our case studies are far from being exhaustive or systematic. Here we release the zero-shot version of our dataset, which is used in our paper.\n\nIf you find our work useful, please give us credit by citing:",
"## Download",
"## Reference\n\n\n[1] C. Beaudoin, É. Leblanc, C. Gagner, and M. H. Beauchamp, ‘Systematic review and inventory of theory of mind measures for young children’, Frontiers in psychology, vol. 10, p. 2905, 2020."
] |
[
"TAGS\n#task_categories-zero-shot-classification #size_categories-1K<n<10K #language-English #license-mit #LLM #Theory-Of-Mind #arxiv-2310.19619 #region-us \n",
"# 2D-ATOMS: 2D Abilities in Theory of Mind Space dataset\n\nOfficial dataset for Towards A Holistic Landscape of Situated Theory of Mind in Large Language Models. Ziqiao Ma, Jacob Sansom, Run Peng, Joyce Chai. EMNLP Findings, 2023.",
"## Overview\n\n!image\n\nWe introduce 2D-ATOMS dataset, a novel text-based dataset that evaluates a machine's reasoning process under a situated theory-of-mind setting.\n\nOur dataset includes 9 different ToM evaluation tasks for each mental state under ATOMS[1], and 1 reality-checking task to test LLMs’ understanding of the world. It is important to acknowledge that our experiment serves as a proof of concept and does not aim to cover the entire spectrum of machine ToM, as our case studies are far from being exhaustive or systematic. Here we release the zero-shot version of our dataset, which is used in our paper.\n\nIf you find our work useful, please give us credit by citing:",
"## Download",
"## Reference\n\n\n[1] C. Beaudoin, É. Leblanc, C. Gagner, and M. H. Beauchamp, ‘Systematic review and inventory of theory of mind measures for young children’, Frontiers in psychology, vol. 10, p. 2905, 2020."
] |
[
59,
70,
160,
2,
62
] |
[
"passage: TAGS\n#task_categories-zero-shot-classification #size_categories-1K<n<10K #language-English #license-mit #LLM #Theory-Of-Mind #arxiv-2310.19619 #region-us \n# 2D-ATOMS: 2D Abilities in Theory of Mind Space dataset\n\nOfficial dataset for Towards A Holistic Landscape of Situated Theory of Mind in Large Language Models. Ziqiao Ma, Jacob Sansom, Run Peng, Joyce Chai. EMNLP Findings, 2023.## Overview\n\n!image\n\nWe introduce 2D-ATOMS dataset, a novel text-based dataset that evaluates a machine's reasoning process under a situated theory-of-mind setting.\n\nOur dataset includes 9 different ToM evaluation tasks for each mental state under ATOMS[1], and 1 reality-checking task to test LLMs’ understanding of the world. It is important to acknowledge that our experiment serves as a proof of concept and does not aim to cover the entire spectrum of machine ToM, as our case studies are far from being exhaustive or systematic. Here we release the zero-shot version of our dataset, which is used in our paper.\n\nIf you find our work useful, please give us credit by citing:## Download## Reference\n\n\n[1] C. Beaudoin, É. Leblanc, C. Gagner, and M. H. Beauchamp, ‘Systematic review and inventory of theory of mind measures for young children’, Frontiers in psychology, vol. 10, p. 2905, 2020."
] |
c6fc1043dfbf61db6a178ab07baf67dad93552cc
|
## Kazakh Paraphrasing Dataset
This dataset is specifically designed for the paraphrasing task in the Kazakh language. It offers a unique resource for natural language processing applications, focusing on the development and evaluation of paraphrasing models.
### Source and Translation Process
Originally sourced from [humarin/chatgpt-paraphrases](https://huggingface.co/datasets/humarin/chatgpt-paraphrases), this dataset has been carefully translated using Google Translate, followed by a meticulous review by human experts to ensure accuracy and contextual relevance in the Kazakh language.
### Dataset Content and Structure
The dataset comprises 130k phrases or sentence pairs, each consisting of an original sentence and its paraphrased counterpart in Kazakh. This structure is particularly beneficial for training algorithms to understand and generate paraphrased content while maintaining the original sentence's meaning.
### Usage and Application
Ideal for researchers and developers in the field of computational linguistics, this dataset serves as a robust tool for training and evaluating paraphrasing models in the Kazakh language. It can significantly contribute to advancements in language technologies for Kazakh.
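A minimal loading sketch with the `datasets` library is shown below; the `train` split name and the record field names are assumptions, so check the repository's data files for the actual layout.

```python
from datasets import load_dataset

# Load the Kazakh paraphrasing corpus (~130k sentence pairs).
# NOTE: the "train" split name is an assumption about the default layout.
dataset = load_dataset("CCRss/small-chatgpt-paraphrases-kz", split="train")

# Inspect one original/paraphrase pair (field names depend on the repo schema).
print(dataset[0])
```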
### Acknowledgments and References
We extend our gratitude to the original dataset providers at [humarin/chatgpt-paraphrases](https://huggingface.co/datasets/humarin/chatgpt-paraphrases) and the team of linguists and translators who contributed to the adaptation of this dataset for the Kazakh language.
|
CCRss/small-chatgpt-paraphrases-kz
|
[
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:kk",
"license:mit",
"region:us"
] |
2023-10-21T15:08:42+00:00
|
{"language": ["kk"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text2text-generation"]}
|
2023-12-21T08:37:48+00:00
|
[] |
[
"kk"
] |
TAGS
#task_categories-text2text-generation #size_categories-100K<n<1M #language-Kazakh #license-mit #region-us
|
## Kazakh Paraphrasing Dataset
This dataset is specifically designed for the paraphrasing task in the Kazakh language. It offers a unique resource for natural language processing applications, focusing on the development and evaluation of paraphrasing models.
### Source and Translation Process
Originally sourced from humarin/chatgpt-paraphrases, this dataset has been carefully translated using Google Translate, followed by a meticulous review by human experts to ensure accuracy and contextual relevance in the Kazakh language.
### Dataset Content and Structure
The dataset comprises 130k phrases or sentence pairs, each consisting of an original sentence and its paraphrased counterpart in Kazakh. This structure is particularly beneficial for training algorithms to understand and generate paraphrased content while maintaining the original sentence's meaning.
### Usage and Application
Ideal for researchers and developers in the field of computational linguistics, this dataset serves as a robust tool for training and evaluating paraphrasing models in the Kazakh language. It can significantly contribute to advancements in language technologies for Kazakh.
### Acknowledgments and References
We extend our gratitude to the original dataset providers at humarin/chatgpt-paraphrases and the team of linguists and translators who contributed to the adaptation of this dataset for the Kazakh language.
|
[
"## Kazakh Paraphrasing Dataset\n\nThis dataset is specifically designed for the paraphrasing task in the Kazakh language. It offers a unique resource for natural language processing applications, focusing on the development and evaluation of paraphrasing models.",
"### Source and Translation Process\n\nOriginally sourced from humarin/chatgpt-paraphrases, this dataset has been carefully translated using Google Translate, followed by a meticulous review by human experts to ensure accuracy and contextual relevance in the Kazakh language.",
"### Dataset Content and Structure\n\nThe dataset comprises 130k of phrases or sentence pairs, each consisting of an original sentence and its paraphrased counterpart in Kazakh. This structure is particularly beneficial for training algorithms to understand and generate paraphrased content while maintaining the original sentence's meaning.",
"### Usage and Application\n\nIdeal for researchers and developers in the field of computational linguistics, this dataset serves as a robust tool for training and evaluating paraphrasing models in the Kazakh language. It can significantly contribute to advancements in language technologies for Kazakh.",
"### Acknowledgments and References\n\nWe extend our gratitude to the original dataset providers at humarin/chatgpt-paraphrases and the team of linguists and translators who contributed to the adaptation of this dataset for the Kazakh language."
] |
[
"TAGS\n#task_categories-text2text-generation #size_categories-100K<n<1M #language-Kazakh #license-mit #region-us \n",
"## Kazakh Paraphrasing Dataset\n\nThis dataset is specifically designed for the paraphrasing task in the Kazakh language. It offers a unique resource for natural language processing applications, focusing on the development and evaluation of paraphrasing models.",
"### Source and Translation Process\n\nOriginally sourced from humarin/chatgpt-paraphrases, this dataset has been carefully translated using Google Translate, followed by a meticulous review by human experts to ensure accuracy and contextual relevance in the Kazakh language.",
"### Dataset Content and Structure\n\nThe dataset comprises 130k of phrases or sentence pairs, each consisting of an original sentence and its paraphrased counterpart in Kazakh. This structure is particularly beneficial for training algorithms to understand and generate paraphrased content while maintaining the original sentence's meaning.",
"### Usage and Application\n\nIdeal for researchers and developers in the field of computational linguistics, this dataset serves as a robust tool for training and evaluating paraphrasing models in the Kazakh language. It can significantly contribute to advancements in language technologies for Kazakh.",
"### Acknowledgments and References\n\nWe extend our gratitude to the original dataset providers at humarin/chatgpt-paraphrases and the team of linguists and translators who contributed to the adaptation of this dataset for the Kazakh language."
] |
[
42,
51,
62,
70,
60,
60
] |
[
"passage: TAGS\n#task_categories-text2text-generation #size_categories-100K<n<1M #language-Kazakh #license-mit #region-us \n## Kazakh Paraphrasing Dataset\n\nThis dataset is specifically designed for the paraphrasing task in the Kazakh language. It offers a unique resource for natural language processing applications, focusing on the development and evaluation of paraphrasing models.### Source and Translation Process\n\nOriginally sourced from humarin/chatgpt-paraphrases, this dataset has been carefully translated using Google Translate, followed by a meticulous review by human experts to ensure accuracy and contextual relevance in the Kazakh language.### Dataset Content and Structure\n\nThe dataset comprises 130k of phrases or sentence pairs, each consisting of an original sentence and its paraphrased counterpart in Kazakh. This structure is particularly beneficial for training algorithms to understand and generate paraphrased content while maintaining the original sentence's meaning.### Usage and Application\n\nIdeal for researchers and developers in the field of computational linguistics, this dataset serves as a robust tool for training and evaluating paraphrasing models in the Kazakh language. It can significantly contribute to advancements in language technologies for Kazakh.### Acknowledgments and References\n\nWe extend our gratitude to the original dataset providers at humarin/chatgpt-paraphrases and the team of linguists and translators who contributed to the adaptation of this dataset for the Kazakh language."
] |
bc87e8865ac75515f93a6922afb1428ed7d256cd
|
CS Test Data
|
dsatya6/cstestdata
|
[
"region:us"
] |
2023-10-21T15:16:10+00:00
|
{}
|
2023-10-27T22:43:56+00:00
|
[] |
[] |
TAGS
#region-us
|
CS Test Data
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
38bfc428012d79ce1a81d639a8166137e5027478
|
# Dataset Card for Evaluation run of KoboldAI/OPT-350M-Nerys-v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/KoboldAI/OPT-350M-Nerys-v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [KoboldAI/OPT-350M-Nerys-v2](https://huggingface.co/KoboldAI/OPT-350M-Nerys-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_KoboldAI__OPT-350M-Nerys-v2",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-21T16:22:23.406290](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__OPT-350M-Nerys-v2/blob/main/results_2023-10-21T16-22-23.406290.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0014681208053691276,
"em_stderr": 0.0003921042190298392,
"f1": 0.041601300335570565,
"f1_stderr": 0.001164099674986064,
"acc": 0.26150165183377183,
"acc_stderr": 0.008156331616616547
},
"harness|drop|3": {
"em": 0.0014681208053691276,
"em_stderr": 0.0003921042190298392,
"f1": 0.041601300335570565,
"f1_stderr": 0.001164099674986064
},
"harness|gsm8k|5": {
"acc": 0.006823351023502654,
"acc_stderr": 0.0022675371022544935
},
"harness|winogrande|5": {
"acc": 0.516179952644041,
"acc_stderr": 0.0140451261309786
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_KoboldAI__OPT-350M-Nerys-v2
|
[
"region:us"
] |
2023-10-21T15:22:26+00:00
|
{"pretty_name": "Evaluation run of KoboldAI/OPT-350M-Nerys-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [KoboldAI/OPT-350M-Nerys-v2](https://huggingface.co/KoboldAI/OPT-350M-Nerys-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_KoboldAI__OPT-350M-Nerys-v2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-21T16:22:23.406290](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__OPT-350M-Nerys-v2/blob/main/results_2023-10-21T16-22-23.406290.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.0003921042190298392,\n \"f1\": 0.041601300335570565,\n \"f1_stderr\": 0.001164099674986064,\n \"acc\": 0.26150165183377183,\n \"acc_stderr\": 0.008156331616616547\n },\n \"harness|drop|3\": {\n \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.0003921042190298392,\n \"f1\": 0.041601300335570565,\n \"f1_stderr\": 0.001164099674986064\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.006823351023502654,\n \"acc_stderr\": 0.0022675371022544935\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.516179952644041,\n \"acc_stderr\": 0.0140451261309786\n }\n}\n```", "repo_url": "https://huggingface.co/KoboldAI/OPT-350M-Nerys-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_21T16_22_23.406290", "path": ["**/details_harness|drop|3_2023-10-21T16-22-23.406290.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-21T16-22-23.406290.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_21T16_22_23.406290", "path": ["**/details_harness|gsm8k|5_2023-10-21T16-22-23.406290.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-21T16-22-23.406290.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_21T16_22_23.406290", "path": ["**/details_harness|winogrande|5_2023-10-21T16-22-23.406290.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-21T16-22-23.406290.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_21T16_22_23.406290", "path": ["results_2023-10-21T16-22-23.406290.parquet"]}, {"split": "latest", "path": ["results_2023-10-21T16-22-23.406290.parquet"]}]}]}
|
2023-10-21T15:22:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of KoboldAI/OPT-350M-Nerys-v2
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model KoboldAI/OPT-350M-Nerys-v2 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-21T16:22:23.406290 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of KoboldAI/OPT-350M-Nerys-v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model KoboldAI/OPT-350M-Nerys-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-21T16:22:23.406290(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of KoboldAI/OPT-350M-Nerys-v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model KoboldAI/OPT-350M-Nerys-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-21T16:22:23.406290(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
24,
31,
172,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of KoboldAI/OPT-350M-Nerys-v2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model KoboldAI/OPT-350M-Nerys-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-21T16:22:23.406290(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
a2e32778b06d78febff1e9421c6584f3b3436abe
|
# Dataset Card for "impressionist_paintings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chashaotm/impressionist_paintings
|
[
"region:us"
] |
2023-10-21T15:30:22+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 963894728.236, "num_examples": 2018}], "download_size": 972521426, "dataset_size": 963894728.236}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-21T15:36:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "impressionist_paintings"
More Information needed
|
[
"# Dataset Card for \"impressionist_paintings\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"impressionist_paintings\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"impressionist_paintings\"\n\nMore Information needed"
] |
b3d932453034c8e64a3d9a565e2ff7735ac38f69
|
# Dataset Card for "Img2Spec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Woleek/Img2Spec
|
[
"region:us"
] |
2023-10-21T16:05:29+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "spec", "dtype": "image"}, {"name": "sample_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3537373012.5, "num_examples": 10738}], "download_size": 2171045369, "dataset_size": 3537373012.5}}
|
2023-10-21T18:55:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Img2Spec"
More Information needed
|
[
"# Dataset Card for \"Img2Spec\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Img2Spec\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Img2Spec\"\n\nMore Information needed"
] |
7ee6cb7fa82a3ce9867e6a46f3c04393058a3b04
|
# Dataset Card for CORE: A Few-Shot Company Relation Classification Dataset for Robust Domain Adaptation.
CORE includes 4,708 instances of 12 relation types with corresponding textual evidence extracted from company Wikipedia pages. It contains an annotated NOTA (none-of-the-above) category.
## Dataset Details
### Dataset Description
We introduce CORE, a dataset for few-shot relation classification (RC) focused on company relations and business entities. CORE includes 4,708 instances of 12 relation types with corresponding textual evidence extracted from company Wikipedia pages. Company names and business entities pose a challenge for few-shot RC models due to the rich and diverse information associated with them. For example, a company name may represent the legal entity, products, people, or business divisions depending on the context. Therefore, deriving the relation type between entities is highly dependent on textual context. To evaluate the performance of state-of-the-art RC models on the CORE dataset, we conduct experiments in the few-shot domain adaptation setting. Our results reveal substantial performance gaps, confirming that models trained on different domains struggle to adapt to CORE. Interestingly, we find that models trained on CORE showcase improved out-of-domain performance, which highlights the importance of high-quality data for robust domain adaptation. Specifically, the information richness embedded in business entities allows models to focus on contextual nuances, reducing their reliance on superficial clues such as relation-specific verbs. In addition to the dataset, we provide relevant code snippets to facilitate reproducibility and encourage further research in the field.
### Dataset Sources
- **Repository:** https://github.com/pnborchert/CORE
- **Paper:** https://arxiv.org/abs/2310.12024
## Dataset Structure
The dataset is split into training and test instances with **overlapping relation types**. Relation types included in the test set should be excluded from the training set in the episode sampling procedure [sample_configuration.py](https://github.com/pnborchert/CORE/blob/master/benchmark/fs/sample_configuration.py). A minimal loading sketch is given after the list below.
- `train`: Contains 4000 training instances and 12 relation types.
- `test`: Contains 708 instances and 12 relation types.
- `relation_description`: Textual descriptions of the relation types.
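As a rough sketch, the splits can be pulled with the `datasets` library and filtered for episode sampling as described above; the split names follow this card, while the relation-label field name is an assumption about the schema.

```python
from datasets import load_dataset

# Load the CORE splits; split names follow the dataset card above.
train = load_dataset("pborchert/CORE", split="train")  # 4,000 instances
test = load_dataset("pborchert/CORE", split="test")    # 708 instances

# Relation types overlap across splits, so few-shot episode sampling must
# exclude test-set relation types from the training pool (see
# sample_configuration.py in the linked repository).
test_relations = set(test["relation"])  # "relation" field name is an assumption
train_pool = train.filter(lambda ex: ex["relation"] not in test_relations)
```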
## Citation
```bibtex
@misc{borchert2023core,
title={CORE: A Few-Shot Company Relation Classification Dataset for Robust Domain Adaptation},
author={Philipp Borchert and Jochen De Weerdt and Kristof Coussement and Arno De Caigny and Marie-Francine Moens},
year={2023},
eprint={2310.12024},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
pborchert/CORE
|
[
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"relation-classification",
"relation-extraction",
"few-shot",
"domain-adaptation",
"business",
"finance",
"arxiv:2310.12024",
"region:us"
] |
2023-10-21T16:45:21+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification", "zero-shot-classification"], "tags": ["relation-classification", "relation-extraction", "few-shot", "domain-adaptation", "business", "finance"]}
|
2023-10-21T17:12:18+00:00
|
[
"2310.12024"
] |
[
"en"
] |
TAGS
#task_categories-text-classification #task_categories-zero-shot-classification #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #relation-classification #relation-extraction #few-shot #domain-adaptation #business #finance #arxiv-2310.12024 #region-us
|
# Dataset Card for CORE: A Few-Shot Company Relation Classification Dataset for Robust Domain Adaptation.
CORE includes 4,708 instances of 12 relation types with corresponding textual evidence extracted from company Wikipedia pages. It contains an annotated NOTA (none-of-the-above) category.
## Dataset Details
### Dataset Description
We introduce CORE, a dataset for few-shot relation classification (RC) focused on company relations and business entities. CORE includes 4,708 instances of 12 relation types with corresponding textual evidence extracted from company Wikipedia pages. Company names and business entities pose a challenge for few-shot RC models due to the rich and diverse information associated with them. For example, a company name may represent the legal entity, products, people, or business divisions depending on the context. Therefore, deriving the relation type between entities is highly dependent on textual context. To evaluate the performance of state-of-the-art RC models on the CORE dataset, we conduct experiments in the few-shot domain adaptation setting. Our results reveal substantial performance gaps, confirming that models trained on different domains struggle to adapt to CORE. Interestingly, we find that models trained on CORE showcase improved out-of-domain performance, which highlights the importance of high-quality data for robust domain adaptation. Specifically, the information richness embedded in business entities allows models to focus on contextual nuances, reducing their reliance on superficial clues such as relation-specific verbs. In addition to the dataset, we provide relevant code snippets to facilitate reproducibility and encourage further research in the field.
### Dataset Sources
- Repository: URL
- Paper: URL
## Dataset Structure
The dataset is split into training and test instances with overlapping relation types. Relation types included in the test set should be excluded from the training set in the episode sampling procedure sample_configuration.py.
- 'train': Contains 4000 training instances and 12 relation types.
- 'test': Contains 708 instances and 12 relation types.
- 'relation_description': Textual descriptions of the relation types.
|
[
"# Dataset Card for CORE: A Few-Shot Company Relation Classification Dataset for Robust Domain Adaptation.\n\n\n\nCORE includes 4,708 instances of 12 relation types with corresponding textual evidence extracted from company Wikipedia pages. It contains an annotated NOTA (none-of-the-above) category.",
"## Dataset Details",
"### Dataset Description\n\n\nWe introduce CORE, a dataset for few-shot relation classification (RC) focused on company relations and business entities. CORE includes 4,708 instances of 12 relation types with corresponding textual evidence extracted from company Wikipedia pages. Company names and business entities pose a challenge for few-shot RC models due to the rich and diverse information associated with them. For example, a company name may represent the legal entity, products, people, or business divisions depending on the context. Therefore, deriving the relation type between entities is highly dependent on textual context. To evaluate the performance of state-of-the-art RC models on the CORE dataset, we conduct experiments in the few-shot domain adaptation setting. Our results reveal substantial performance gaps, confirming that models trained on different domains struggle to adapt to CORE. Interestingly, we find that models trained on CORE showcase improved out-of-domain performance, which highlights the importance of high-quality data for robust domain adaptation. Specifically, the information richness embedded in business entities allows models to focus on contextual nuances, reducing their reliance on superficial clues such as relation-specific verbs. In addition to the dataset, we provide relevant code snippets to facilitate reproducibility and encourage further research in the field.",
"### Dataset Sources [optional]\n\n\n\n- Repository: URL\n- Paper: URL",
"## Dataset Structure\n\nThe dataset is split in training and test instances with overlapping relation types. Relation types inlcuded in the test set should be excluded from the training set in the episode sampling procedure sample_configuration.py.\n\n\n- 'train': Contains 4000 training instances and 12 relation types.\n- 'test': Contains 708 instances and 12 relation types.\n- 'relation_description': Textual descriptions of the relation types."
] |
[
"TAGS\n#task_categories-text-classification #task_categories-zero-shot-classification #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #relation-classification #relation-extraction #few-shot #domain-adaptation #business #finance #arxiv-2310.12024 #region-us \n",
"# Dataset Card for CORE: A Few-Shot Company Relation Classification Dataset for Robust Domain Adaptation.\n\n\n\nCORE includes 4,708 instances of 12 relation types with corresponding textual evidence extracted from company Wikipedia pages. It contains an annotated NOTA (none-of-the-above) category.",
"## Dataset Details",
"### Dataset Description\n\n\nWe introduce CORE, a dataset for few-shot relation classification (RC) focused on company relations and business entities. CORE includes 4,708 instances of 12 relation types with corresponding textual evidence extracted from company Wikipedia pages. Company names and business entities pose a challenge for few-shot RC models due to the rich and diverse information associated with them. For example, a company name may represent the legal entity, products, people, or business divisions depending on the context. Therefore, deriving the relation type between entities is highly dependent on textual context. To evaluate the performance of state-of-the-art RC models on the CORE dataset, we conduct experiments in the few-shot domain adaptation setting. Our results reveal substantial performance gaps, confirming that models trained on different domains struggle to adapt to CORE. Interestingly, we find that models trained on CORE showcase improved out-of-domain performance, which highlights the importance of high-quality data for robust domain adaptation. Specifically, the information richness embedded in business entities allows models to focus on contextual nuances, reducing their reliance on superficial clues such as relation-specific verbs. In addition to the dataset, we provide relevant code snippets to facilitate reproducibility and encourage further research in the field.",
"### Dataset Sources [optional]\n\n\n\n- Repository: URL\n- Paper: URL",
"## Dataset Structure\n\nThe dataset is split in training and test instances with overlapping relation types. Relation types inlcuded in the test set should be excluded from the training set in the episode sampling procedure sample_configuration.py.\n\n\n- 'train': Contains 4000 training instances and 12 relation types.\n- 'test': Contains 708 instances and 12 relation types.\n- 'relation_description': Textual descriptions of the relation types."
] |
[
91,
73,
4,
301,
20,
108
] |
[
"passage: TAGS\n#task_categories-text-classification #task_categories-zero-shot-classification #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #relation-classification #relation-extraction #few-shot #domain-adaptation #business #finance #arxiv-2310.12024 #region-us \n# Dataset Card for CORE: A Few-Shot Company Relation Classification Dataset for Robust Domain Adaptation.\n\n\n\nCORE includes 4,708 instances of 12 relation types with corresponding textual evidence extracted from company Wikipedia pages. It contains an annotated NOTA (none-of-the-above) category.## Dataset Details### Dataset Description\n\n\nWe introduce CORE, a dataset for few-shot relation classification (RC) focused on company relations and business entities. CORE includes 4,708 instances of 12 relation types with corresponding textual evidence extracted from company Wikipedia pages. Company names and business entities pose a challenge for few-shot RC models due to the rich and diverse information associated with them. For example, a company name may represent the legal entity, products, people, or business divisions depending on the context. Therefore, deriving the relation type between entities is highly dependent on textual context. To evaluate the performance of state-of-the-art RC models on the CORE dataset, we conduct experiments in the few-shot domain adaptation setting. Our results reveal substantial performance gaps, confirming that models trained on different domains struggle to adapt to CORE. Interestingly, we find that models trained on CORE showcase improved out-of-domain performance, which highlights the importance of high-quality data for robust domain adaptation. Specifically, the information richness embedded in business entities allows models to focus on contextual nuances, reducing their reliance on superficial clues such as relation-specific verbs. In addition to the dataset, we provide relevant code snippets to facilitate reproducibility and encourage further research in the field.### Dataset Sources [optional]\n\n\n\n- Repository: URL\n- Paper: URL"
] |
344a681e6e453ae5c306d0b3cdb6166c52f329fe
|
# Dataset Card for "drawbench-kandinsky-v22"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sayakpaul/drawbench-kandinsky-v22
|
[
"region:us"
] |
2023-10-21T16:59:05+00:00
|
{"dataset_info": {"features": [{"name": "Prompt", "dtype": "string"}, {"name": "Image", "dtype": "image"}, {"name": "Upsampled_Prompt", "dtype": "string"}, {"name": "Image_With_Upsampled_Prompt", "dtype": "image"}, {"name": "model_name", "dtype": "string"}, {"name": "seed", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 157049386.0, "num_examples": 200}], "download_size": 157025231, "dataset_size": 157049386.0}}
|
2023-10-21T16:59:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "drawbench-kandinsky-v22"
More Information needed
|
[
"# Dataset Card for \"drawbench-kandinsky-v22\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"drawbench-kandinsky-v22\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"drawbench-kandinsky-v22\"\n\nMore Information needed"
] |
35ecfdcc8e8893a3deeb5588a2fc7865bc30563e
|
# Dataset Card for "drawbench-sdxl-refiner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sayakpaul/drawbench-sdxl-refiner
|
[
"region:us"
] |
2023-10-21T17:39:20+00:00
|
{"dataset_info": {"features": [{"name": "Prompt", "dtype": "string"}, {"name": "Image", "dtype": "image"}, {"name": "Upsampled_Prompt", "dtype": "string"}, {"name": "Image_With_Upsampled_Prompt", "dtype": "image"}, {"name": "model_name", "dtype": "string"}, {"name": "seed", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 619027012.0, "num_examples": 200}], "download_size": 619026117, "dataset_size": 619027012.0}}
|
2023-10-21T17:39:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "drawbench-sdxl-refiner"
More Information needed
|
[
"# Dataset Card for \"drawbench-sdxl-refiner\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"drawbench-sdxl-refiner\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"drawbench-sdxl-refiner\"\n\nMore Information needed"
] |
af6e89cff1c97593252a60a0604754a3ad1f20d1
|
# Dataset Card for "58bc4cd4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/58bc4cd4
|
[
"region:us"
] |
2023-10-21T17:49:33+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 168, "num_examples": 10}], "download_size": 1342, "dataset_size": 168}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-21T17:49:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "58bc4cd4"
More Information needed
|
[
"# Dataset Card for \"58bc4cd4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"58bc4cd4\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"58bc4cd4\"\n\nMore Information needed"
] |
84e93875529332e93a9799ea813abb7d0533eced
|
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
abfhad/guanaco-llama2-1k
|
[
"region:us"
] |
2023-10-21T17:53:36+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1654448, "num_examples": 1000}], "download_size": 966693, "dataset_size": 1654448}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-21T17:53:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "guanaco-llama2-1k"
More Information needed
|
[
"# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
831a7241a430ff2b210485de6afb863e9e513051
|
# Dataset Card for "488ac4b8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/488ac4b8
|
[
"region:us"
] |
2023-10-21T17:57:30+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 159, "num_examples": 10}], "download_size": 1330, "dataset_size": 159}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-21T17:57:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "488ac4b8"
More Information needed
|
[
"# Dataset Card for \"488ac4b8\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"488ac4b8\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"488ac4b8\"\n\nMore Information needed"
] |
afb8d054f7853a8487850645120ea47efec5376f
|
# Dataset Card for "c4-subset-for-humaneval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
crumb/c4-subset-for-humaneval
|
[
"region:us"
] |
2023-10-21T18:06:56+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 411199548, "num_examples": 302361}], "download_size": 245218649, "dataset_size": 411199548}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-21T23:27:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "c4-subset-for-humaneval"
More Information needed
|
[
"# Dataset Card for \"c4-subset-for-humaneval\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"c4-subset-for-humaneval\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"c4-subset-for-humaneval\"\n\nMore Information needed"
] |
0b7340fcb3de7b2814311f81b9cfb3fcac9b5825
|
# Dataset Card for Evaluation run of dvruette/oasst-pythia-12b-reference
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/dvruette/oasst-pythia-12b-reference
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [dvruette/oasst-pythia-12b-reference](https://huggingface.co/dvruette/oasst-pythia-12b-reference) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_dvruette__oasst-pythia-12b-reference",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-21T19:14:07.226959](https://huggingface.co/datasets/open-llm-leaderboard/details_dvruette__oasst-pythia-12b-reference/blob/main/results_2023-10-21T19-14-07.226959.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.001363255033557047,
"em_stderr": 0.00037786091964608703,
"f1": 0.05910759228187943,
"f1_stderr": 0.0013983745600314773,
"acc": 0.3308481527645552,
"acc_stderr": 0.008212170959780564
},
"harness|drop|3": {
"em": 0.001363255033557047,
"em_stderr": 0.00037786091964608703,
"f1": 0.05910759228187943,
"f1_stderr": 0.0013983745600314773
},
"harness|gsm8k|5": {
"acc": 0.012130401819560273,
"acc_stderr": 0.0030152942428909465
},
"harness|winogrande|5": {
"acc": 0.6495659037095501,
"acc_stderr": 0.013409047676670182
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_dvruette__oasst-pythia-12b-reference
|
[
"region:us"
] |
2023-10-21T18:14:10+00:00
|
{"pretty_name": "Evaluation run of dvruette/oasst-pythia-12b-reference", "dataset_summary": "Dataset automatically created during the evaluation run of model [dvruette/oasst-pythia-12b-reference](https://huggingface.co/dvruette/oasst-pythia-12b-reference) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_dvruette__oasst-pythia-12b-reference\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-21T19:14:07.226959](https://huggingface.co/datasets/open-llm-leaderboard/details_dvruette__oasst-pythia-12b-reference/blob/main/results_2023-10-21T19-14-07.226959.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.001363255033557047,\n \"em_stderr\": 0.00037786091964608703,\n \"f1\": 0.05910759228187943,\n \"f1_stderr\": 0.0013983745600314773,\n \"acc\": 0.3308481527645552,\n \"acc_stderr\": 0.008212170959780564\n },\n \"harness|drop|3\": {\n \"em\": 0.001363255033557047,\n \"em_stderr\": 0.00037786091964608703,\n \"f1\": 0.05910759228187943,\n \"f1_stderr\": 0.0013983745600314773\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.012130401819560273,\n \"acc_stderr\": 0.0030152942428909465\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6495659037095501,\n \"acc_stderr\": 0.013409047676670182\n }\n}\n```", "repo_url": "https://huggingface.co/dvruette/oasst-pythia-12b-reference", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_21T19_14_07.226959", "path": ["**/details_harness|drop|3_2023-10-21T19-14-07.226959.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-21T19-14-07.226959.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_21T19_14_07.226959", "path": ["**/details_harness|gsm8k|5_2023-10-21T19-14-07.226959.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-21T19-14-07.226959.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_21T19_14_07.226959", "path": ["**/details_harness|winogrande|5_2023-10-21T19-14-07.226959.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-21T19-14-07.226959.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_21T19_14_07.226959", "path": ["results_2023-10-21T19-14-07.226959.parquet"]}, {"split": "latest", "path": ["results_2023-10-21T19-14-07.226959.parquet"]}]}]}
|
2023-10-21T18:14:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of dvruette/oasst-pythia-12b-reference
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model dvruette/oasst-pythia-12b-reference on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-21T19:14:07.226959 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of dvruette/oasst-pythia-12b-reference",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model dvruette/oasst-pythia-12b-reference on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-21T19:14:07.226959(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of dvruette/oasst-pythia-12b-reference",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model dvruette/oasst-pythia-12b-reference on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-21T19:14:07.226959(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
24,
31,
172,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of dvruette/oasst-pythia-12b-reference## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model dvruette/oasst-pythia-12b-reference on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-21T19:14:07.226959(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
8912c9b722137a88810cda24f05717495532ca44
|
# Dataset Card for "ubuntu_question_answer_jsonl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mugithi/ubuntu_question_answer_jsonl
|
[
"region:us"
] |
2023-10-21T18:23:03+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2073677, "num_examples": 12100}, {"name": "test", "num_bytes": 882250, "num_examples": 5186}], "download_size": 0, "dataset_size": 2955927}}
|
2023-10-21T18:29:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ubuntu_question_answer_jsonl"
More Information needed
|
[
"# Dataset Card for \"ubuntu_question_answer_jsonl\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ubuntu_question_answer_jsonl\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ubuntu_question_answer_jsonl\"\n\nMore Information needed"
] |
691a821eaa8ddfe7fb95b943cedbc4044b3c90bf
|
# Dataset Card for "c4-subset-for-truthfulqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
crumb/c4-subset-for-truthfulqa
|
[
"region:us"
] |
2023-10-21T18:23:23+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 577836714, "num_examples": 321153}], "download_size": 352256147, "dataset_size": 577836714}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-21T23:27:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "c4-subset-for-truthfulqa"
More Information needed
|
[
"# Dataset Card for \"c4-subset-for-truthfulqa\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"c4-subset-for-truthfulqa\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"c4-subset-for-truthfulqa\"\n\nMore Information needed"
] |
c223b336cc0a9a05312b46264e147ba69aea1685
|
# Dataset Card for "processed_librispeech_pt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pedropauletti/processed_librispeech_pt
|
[
"region:us"
] |
2023-10-21T19:58:21+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "labels", "sequence": {"sequence": "float32"}}, {"name": "speaker_embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 1448647649.3426037, "num_examples": 4648}, {"name": "test", "num_bytes": 161134000.58307362, "num_examples": 517}], "download_size": 1435028022, "dataset_size": 1609781649.9256773}}
|
2023-10-22T00:28:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "processed_librispeech_pt"
More Information needed
|
[
"# Dataset Card for \"processed_librispeech_pt\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_librispeech_pt\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"processed_librispeech_pt\"\n\nMore Information needed"
] |
eec024966ce53ec339bd54545dc9a2c9e1bbc096
|
# Dataset Card for "covidQA_eval_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
goodcoffee/covidQA_eval_v2
|
[
"region:us"
] |
2023-10-21T19:59:24+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "start_positions", "dtype": "int64"}, {"name": "end_positions", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 782952, "num_examples": 303}], "download_size": 0, "dataset_size": 782952}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-11-01T13:28:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "covidQA_eval_v2"
More Information needed
|
[
"# Dataset Card for \"covidQA_eval_v2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"covidQA_eval_v2\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"covidQA_eval_v2\"\n\nMore Information needed"
] |
266545ac57b5fc504a8a40d2e891ba4b158ef7a6
|
# Dataset Card for Evaluation run of TheBloke/WizardLM-7B-uncensored-GPTQ
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheBloke/WizardLM-7B-uncensored-GPTQ](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__WizardLM-7B-uncensored-GPTQ",
"harness_gsm8k_5",
split="train")
```
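
The aggregated metrics can be loaded the same way from the "results" configuration (a minimal sketch; the config and split names below come from this repo's configuration, where "latest" always aliases the newest timestamped run):

```python
from datasets import load_dataset

# "results" holds the aggregated metrics; "latest" aliases the newest
# timestamped run (here: 2023-12-02T12-59-15.195874).
results = load_dataset(
    "open-llm-leaderboard/details_TheBloke__WizardLM-7B-uncensored-GPTQ",
    "results",
    split="latest",
)
```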
## Latest results
These are the [latest results from run 2023-12-02T12:59:15.195874](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__WizardLM-7B-uncensored-GPTQ/blob/main/results_2023-12-02T12-59-15.195874.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_TheBloke__WizardLM-7B-uncensored-GPTQ
|
[
"region:us"
] |
2023-10-21T20:04:30+00:00
|
{"pretty_name": "Evaluation run of TheBloke/WizardLM-7B-uncensored-GPTQ", "dataset_summary": "Dataset automatically created during the evaluation run of model [TheBloke/WizardLM-7B-uncensored-GPTQ](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__WizardLM-7B-uncensored-GPTQ\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T12:59:15.195874](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__WizardLM-7B-uncensored-GPTQ/blob/main/results_2023-12-02T12-59-15.195874.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_21T21_04_26.590858", "path": ["**/details_harness|drop|3_2023-10-21T21-04-26.590858.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-21T21-04-26.590858.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_21T21_04_26.590858", "path": ["**/details_harness|gsm8k|5_2023-10-21T21-04-26.590858.parquet"]}, {"split": "2023_12_02T12_59_15.195874", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-59-15.195874.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T12-59-15.195874.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_21T21_04_26.590858", "path": ["**/details_harness|winogrande|5_2023-10-21T21-04-26.590858.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-21T21-04-26.590858.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_21T21_04_26.590858", "path": ["results_2023-10-21T21-04-26.590858.parquet"]}, {"split": "2023_12_02T12_59_15.195874", "path": ["results_2023-12-02T12-59-15.195874.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T12-59-15.195874.parquet"]}]}]}
|
2023-12-02T12:59:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of TheBloke/WizardLM-7B-uncensored-GPTQ
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model TheBloke/WizardLM-7B-uncensored-GPTQ on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-02T12:59:15.195874 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of TheBloke/WizardLM-7B-uncensored-GPTQ",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/WizardLM-7B-uncensored-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T12:59:15.195874(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of TheBloke/WizardLM-7B-uncensored-GPTQ",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/WizardLM-7B-uncensored-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T12:59:15.195874(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
27,
31,
176,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TheBloke/WizardLM-7B-uncensored-GPTQ## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/WizardLM-7B-uncensored-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T12:59:15.195874(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
760548fa9c45f1712138be21f2ca60d34fa3852c
|
# Dataset Card for MuLMS
<p>
<img src="teaser.png">
<em>Example annotation in the Multi-Layer Materials Science Corpus (image source: <a href="https://arxiv.org/abs/2310.15569"> MuLMS: A Multi-Layer Annotated Text Corpus for Information Extraction in the Materials Science Domain</a>)</em>
</p>
### Dataset Description
The Multi-Layer Materials Science corpus (MuLMS) consists of 50 documents (licensed CC BY) from the materials science domain, spanning across the following 7 subareas:
"Electrolysis", "Graphene", "Polymer Electrolyte Fuel Cell (PEMFC)", "Solid Oxide Fuel Cell (SOFC)", "Polymers", "Semiconductors" and "Steel".
It was exhaustively annotated by domain experts. There are annotations on sentence-level and token-level for the following NLP tasks:
* __Measurement Frames__: Measurement annotations are treated in a frame-like fashion, using the span type MEASUREMENT to mark the triggers (e.g.,
was measured, is plotted) that introduce the Measurement frame to the discourse. Deciding whether a sentence contains a measurement trigger is treated as a
sentence-level task; determining the span that triggers the measurement frame is treated as named entity recognition.
* __Named Entities__: There are 12 token-level named entities (+ Measurement trigger) available in MuLMS. Named entities can span across multiple tokens.
* __Relations__: MuLMS provides relations between pairs of entities. There are two types of relations: measurement-related relations and further relations.
The first type always starts at Measurement trigger spans; the second type does not start at a specific Measurement annotation.
* __Argumentative Zones__: Each sentence in MuLMS is assigned a rhetorical function in the discourse (e.g., _Background_ or _Experiment_Preparation_). There are 12 argumentative
zones in MuLMS, which leads to a sentence-level classification task.
You can find all experiment code files and further information in the [MuLMS-AZ Repo](https://github.com/boschresearch/mulms-az-codi2023) and [MuLMS Repo](https://github.com/boschresearch/mulms-wiesp2023).
For dataset statistics, please refer to both papers listed below. There you can also find detailed explanations of all parts of MuLMS.
- **Curated by:** [Bosch Center for AI](https://www.bosch-ai.com/) and [Bosch Research](https://www.bosch.com/research/)
- **Funded by**: [Robert Bosch GmbH](https://www.bosch.de/)
- **Language(s) (NLP):** English
- **License:** [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode.txt)
## Dataset Details
MuLMS provides all annotated files in UIMA CAS XMI format that can be used with annotation tools that can read these files such as [INCEpTION](https://inception-project.github.io/).
__Important:__ To use the dataset reader, please install the UIMA CAS Python reader _puima_ using the following command: `pip install git+https://github.com/annefried/puima.git`.
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/boschresearch/mulms-az-codi2023, https://github.com/boschresearch/mulms-wiesp2023
- **Paper:** https://aclanthology.org/2023.codi-1.1/, https://arxiv.org/abs/2310.15569
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
This dataset aims at information extraction from materials science documents. It enables the training of (neural) classifiers that can be used for downstream tasks such as NER and relation extraction.
Please refer to both repos linked above for training BERT-like models on all NLP tasks provided in MuLMS.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
MuLMS offers two configs: _MuLMS_Corpus_, which loads the entire MuLMS dataset, and _NER_Dependencies_, which loads only Named Entities in CONLL format in order
to train models in the _NER_as_dependency_parsing_ setting.
MuLMS is divided into three splits: _train_, _validation_, and _test_. Furthermore, _train_ is divided into five sub-splits, namely _tune1_,...,_tune5_.
This allows for model training on four sub-splits, early stopping on the fifth remaining sub-split, model picking on validation, and evaluating only once on test.
HuggingFace datasets do not support these sub-splits, hence they must be loaded as _train_ and post-processed and filtered afterward in a custom dataset loader.
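
As a minimal sketch of such a loader (assuming the _MuLMS_Corpus_ config and the `data_split` column documented below), the tune sub-splits can be recovered by filtering the train split:

```python
from datasets import load_dataset

# Load the full corpus config; the tune sub-splits are encoded in the
# "data_split" column (depending on your datasets version,
# trust_remote_code=True may be required for script-based datasets).
mulms = load_dataset("timo-pierre-schrader/MuLMS", "MuLMS_Corpus")
train = mulms["train"]

# One possible rotation: fit on tune1-tune4, early-stop on tune5.
fit_set = train.filter(lambda ex: ex["data_split"] in {"tune1", "tune2", "tune3", "tune4"})
stop_set = train.filter(lambda ex: ex["data_split"] == "tune5")
```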
### Dataset Config _MuLMS_Corpus_
- `doc_id`: ID of the source document that can be used to lookup the metadata of the paper in [MuLMS_Corpus_Metadata.csv](MuLMS_Corpus_Metadata.csv).
- `sentence`: Each instance in the dataset corresponds to one sentence extracted from scientific papers. These sentences are listed in this field.
- `tokens`: Pre-tokenized sentences. Each instance is a list of tokens.
- `begin_offset`: Offset of the beginning of each sentence within the full text of the document.
- `end_offset`: Offset of the end of each sentence within the full text of the document.
- `AZ_labels`: The argumentative zone (= rhetorical function) of each sentence in the discourse of a materials science publication.
- `Measurement_label`: Indicates whether each sentence contains a measurement description, i.e., a measurement frame evoking trigger word, or not.
- `NER_labels`: Contains lists with named entities (NEs) per instance. The lists are parallel: each named entity occupies one index across all of them, i.e., all 0-th elements describe the same entity, all 1st elements the next one, and so on (see the sketch after this list).
- `text`: List of tokens that are contained in the current sentence instance.
- `id`: Unique ID for each named entity
- `value`: The named entity class
- `begin`: Character offsets of the begin tokens of each NE
- `end`: Character offsets of the end tokens of each NE
- `tokenIndices`: Token index in the list of tokens
- `NER_labels_BILOU`: BILOU tag sequence per token in the sentence (B = begin, I = inside, L = last, O = none, U = unit).
- `relations`: Lists of relations between pair-wise entities. As with the named entities, each relation corresponds to the same index in all three lists (_ne_id_gov_, _ne_id_dep_, _label_)
- `ne_id_gov`: List of NE entity IDs that act as head of the relation
- `ne_id_dep`: List of NE entity IDs that are the tail of the relation
- `label`: Relation label between both entities
- `docFileName`: Name of the source document in the corpus
- `data_split`: Indicates the split which a document belongs to (tune1/2/3/4/5, dev, test)
- `category`: One of 7 materials science sub-domains in MuLMS (SOFC, graphene, electrolysis, PEMFC, polymers, semiconductors, steel)
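
As a small illustration of the parallel-list layout (a hedged sketch: field names as documented above, applied to one loaded example `ex`; the exact nesting may differ slightly in practice), relation triples can be resolved via the NE ids:

```python
def relation_triples(ex):
    """Resolve each relation's head/tail NE id to its entity class."""
    ner = ex["NER_labels"]
    # Parallel lists: index i across "id"/"value" describes one named entity.
    ne_class = dict(zip(ner["id"], ner["value"]))
    rel = ex["relations"]
    return [
        (ne_class[gov], label, ne_class[dep])
        for gov, dep, label in zip(rel["ne_id_gov"], rel["ne_id_dep"], rel["label"])
    ]
```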
### Dataset Config _NER_Dependencies_
Each instance in this config refers to one token and carries a copy of the entire sentence, i.e., for _n_ tokens in a sentence, the text of the sentence is given _n_ times.
- `index`: Unique instance ID for each token.
- `ID`: Sentence ID. As opposed to the other config, the sentences here are not sorted by document and are repeated in full for every token they contain.
- `Sentence`: Sentence string
- `Token_ID`: Unique ID for each token within each sentence. The ID is reset for each new sentence.
- `Token`: Token string
- `NE_Dependencies`: The named entity tag of form _k:LABEL_ where _k_ refers to the ID of the begin token and _LABEL_ to the named entity. The entity ends at the token holding this label (see the sketch below).
- `data_split`: Indicates the split which a document belongs to (tune1/2/3/4/5, dev, test)
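
Decoding the head-style tag is straightforward (a minimal sketch, assuming the _k:LABEL_ format described above; the example values are hypothetical):

```python
def decode_tag(end_token_id: int, tag: str):
    """Decode 'k:LABEL': the entity spans begin token k through this token."""
    begin, label = tag.split(":", 1)
    return int(begin), end_token_id, label

# e.g. token 7 tagged "5:MAT" -> entity of class MAT covering tokens 5..7
print(decode_tag(7, "5:MAT"))  # (5, 7, 'MAT')
```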
### Labels
For the different layers, the following labels are available:
* __Measurement Frames__:
* `Measurement`
* `Qual_Measurement`
* __Named Entities__:
* `MAT`
* `NUM`
* `VALUE`
* `UNIT`
* `PROPERTY`
* `FORM`
* `MEASUREMENT` (measurement frame-evoking trigger)
* `CITE`
* `SAMPLE`
* `TECHNIQUE`
* `DEV`
* `RANGE`
* `INSTRUMENT`
* __Relations__:
* `hasForm`
* `measuresProperty`
* `usedAs`
* `propertyValue`
* `conditionProperty`
* `conditionSample`
* `conditionPropertyValue`
* `usesTechnique`
* `measuresPropertyValue`
* `usedTogether`
* `conditionEnv`
* `usedIn`
* `conditionInstrument`
* `takenFrom`
* `dopedBy`
* __Argumentative Zones__:
* `Motivation`
* `Background`
* `PriorWork`
* `Experiment`
* `Preparation`
* `Characterization`
* `Explanation`
* `Results`
* `Conclusion`
* `Heading`
* `Caption`
* `Metadata`
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
Keeping track of all relevant recent publications and experimental results for a research area is a challenging task. MuLMS addresses this problem by
providing a large set of annotated documents that allow for training models that can be used for automated information extraction and answering search queries
in materials science documents.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
You can find all the details for every document in this corpus in [MuLMS_Corpus_Metadata.csv](MuLMS_Corpus_Metadata.csv).
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
You can find all the authors for every document in this corpus in [MuLMS_Corpus_Metadata.csv](MuLMS_Corpus_Metadata.csv).
#### Annotation process
The annotation process included guideline design in dedicated discussion sessions. Afterward, the text files were annotated
using [INCEpTION](https://inception-project.github.io/).
#### Who are the annotators?
The annotators worked collaboratively to annotate the dataset in the best possible way. All people in this project have a background in either materials science or computer
science. This synergy incorporates both views: the materials scientist's deep knowledge of the topics themselves and the computer scientist's focus on processing text data
automatically in a structured fashion.
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
This dataset does not contain any personal, sensitive or private data. MuLMS builds upon publicly available scientific publications and all authors are credited accordingly.
## Citation
If you use our software or dataset in your scientific work, please cite both papers:
**BibTeX:**
```
@misc{schrader2023mulms,
title={MuLMS: A Multi-Layer Annotated Text Corpus for Information Extraction in the Materials Science Domain},
author={Timo Pierre Schrader and Matteo Finco and Stefan Grünewald and Felix Hildebrand and Annemarie Friedrich},
year={2023},
eprint={2310.15569},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{schrader-etal-2023-mulms,
title = "{M}u{LMS}-{AZ}: An Argumentative Zoning Dataset for the Materials Science Domain",
author = {Schrader, Timo and
B{\"u}rkle, Teresa and
Henning, Sophie and
Tan, Sherry and
Finco, Matteo and
Gr{\"u}newald, Stefan and
Indrikova, Maira and
Hildebrand, Felix and
Friedrich, Annemarie},
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.1",
doi = "10.18653/v1/2023.codi-1.1",
pages = "1--15",
}
```
## Changes
Changes to the source code from the original repo are listed in the [CHANGELOG](CHANGELOG) file.
## Copyright
```
Experiment resources related to the MuLMS corpus.
Copyright (c) 2023 Robert Bosch GmbH
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
```
## License
This software is open-sourced under the AGPL-3.0 license. See the
[LICENSE_CODE](LICENSE_CODE) file for details.
The MuLMS corpus is released under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode.txt) license. See the [LICENSE_CORPUS](LICENSE_CORPUS) file for details.
## Dataset Card Authors
* Timo Pierre Schrader (Bosch Center for AI, University of Augsburg)
* Matteo Finco (Bosch Research)
* Stefan Grünewald (Bosch Center for AI, University of Stuttgart)
* Felix Hildebrand (Bosch Research)
* Annemarie Friedrich (University of Augsburg)
## Dataset Card Contact
For all questions, please contact [Timo Schrader](mailto:[email protected]).
|
timo-pierre-schrader/MuLMS
|
[
"task_categories:fill-mask",
"task_categories:token-classification",
"task_categories:text-classification",
"task_ids:named-entity-recognition",
"task_ids:slot-filling",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:2310.15569",
"region:us"
] |
2023-10-21T20:12:11+00:00
|
{"language": ["en"], "license": "cc-by-sa-4.0", "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["fill-mask", "token-classification", "text-classification"], "task_ids": ["named-entity-recognition", "slot-filling"], "pretty_name": "Multi-Layer Materials Science Corpus"}
|
2023-11-01T13:41:32+00:00
|
[
"2310.15569"
] |
[
"en"
] |
TAGS
#task_categories-fill-mask #task_categories-token-classification #task_categories-text-classification #task_ids-named-entity-recognition #task_ids-slot-filling #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #arxiv-2310.15569 #region-us
|
# Dataset Card for MuLMS
<p>
<img src="URL">
<em>Example annotation in the Multi-Layer Materials Science Corpus (image source: <a href="URL MuLMS: A Multi-Layer Annotated Text Corpus for Information Extraction in the Materials Science Domain</a>)</em>
</p>
### Dataset Description
The Multi-Layer Materials Science corpus (MuLMS) consists of 50 documents (licensed CC BY) from the materials science domain, spanning across the following 7 subareas:
"Electrolysis", "Graphene", "Polymer Electrolyte Fuel Cell (PEMFC)", "Solid Oxide Fuel Cell (SOFC)", "Polymers", "Semiconductors" and "Steel".
It was exhaustively annotated by domain experts. There are annotations on sentence-level and token-level for the following NLP tasks:
* __Measurement Frames__: Measurement annotations are treated in a frame-like fashion, using the span type MEASUREMENT to mark the triggers (e.g.,
was measured, is plotted) that introduce the Measurement frame to the discourse. Deciding whether a sentence contains a measurement trigger is treated as a
sentence-level task; determining the span that triggers the measurement frame is treated as named entity recognition.
* __Named Entities__: There are 12 token-level named entities (+ Measurement trigger) available in MuLMS. Named entities can span across multiple tokens.
* __Relations__: MuLMS provides relations between pairs of entities. There are two types of relations: measurement-related relations and further relations.
The first type always starts at Measurement trigger spans; the second type does not start at a specific Measurement annotation.
* __Argumentative Zones__: Each sentence in MuLMS is assigned a rhetorical function in the discourse (e.g., _Background_ or _Experiment_Preparation_). There are 12 argumentative
zones in MuLMS, which leads to a sentence-level classification task.
You can find all experiment code files and further information in the MuLMS-AZ Repo and MuLMS Repo.
For dataset statistics, please refer to both papers listed below. There you can also find detailed explanations of all parts of MuLMS.
- Curated by: Bosch Center for AI and Bosch Research
- Funded by: Robert Bosch GmbH
- Language(s) (NLP): English
- License: CC BY-SA 4.0
## Dataset Details
MuLMS provides all annotated files in UIMA CAS XMI format that can be used with annotation tools that can read these files such as INCEpTION.
__Important:__ To use the dataset reader, please install the UIMA CAS Python reader _puima_ using the following command: 'pip install git+URL'
### Dataset Sources
- Repository: URL URL
- Paper: URL URL
## Uses
### Direct Use
This dataset aims at information extraction from materials science documents. It enables the training of (neural) classifiers that can be used for downstream tasks such as NER and relation extraction.
Please refer to both repos linked above for training BERT-like models on all NLP tasks provided in MuLMS.
## Dataset Structure
MuLMS offers two configs: _MuLMS_Corpus_, which loads the entire MuLMS dataset, and _NER_Dependencies_, which loads only Named Entities in CONLL format in order
to train models in the _NER_as_dependency_parsing_ setting.
MuLMS is divided into three splits: _train_, _validation_, and _test_. Furthermore, _train_ is divided into five sub-splits, namely _tune1_,...,_tune5_.
This allows for model training on four sub-splits, early stopping on the fifth remaining sub-split, model picking on validation, and evaluating only once on test.
HuggingFace datasets do not support these sub-splits, hence they must be loaded as _train_ and post-processed and filtered afterward in a custom dataset loader.
### Dataset Config _MuLMS_Corpus_
- 'doc_id': ID of the source document that can be used to lookup the metadata of the paper in MuLMS_Corpus_Metadata.csv.
- 'sentence': Each instance in the dataset corresponds to one sentence extracted from scientific papers. These sentences are listed in this field.
- 'tokens': Pre-tokenized sentences. Each instance is a list of tokens.
- 'begin_offset': Offset of the beginning of each sentence within the full text of the document.
- 'end_offset': Offset of the end of each sentence within the full text of the document.
- 'AZ_labels': The argumentative zone (= rhetorical function) of each sentence in the discourse of a materials science publication.
- 'Measurement_label': Indicates whether each sentence contains a measurement description, i.e., a measurement frame evoking trigger word, or not.
- 'NER_labels': Contains lists with named entities (NEs) per instance. The lists are parallel: each named entity occupies one index across all of them, i.e., all 0-th elements describe the same entity, all 1st elements the next one, and so on.
- 'text': List of tokens that are contained in the current sentence instance.
- 'id': Unique ID for each named entity
- 'value': The named entity class
- 'begin': Character offsets of the begin tokens of each NE
- 'end': Character offsets of the end tokens of each NE
- 'tokenIndices': Token index in the list of tokens
- 'NER_labels_BILOU': BILOU tag sequence per token in the sentence (B = begin, I = inside, L = last, O = none, U = unit).
- 'relations': Lists of relations between pair-wise entities. As with the named entities, each relation corresponds to the same index in all three lists (_ne_id_gov_, _ne_id_dep_, _label_)
- 'ne_id_gov': List of NE entity IDs that act as head of the relation
- 'ne_id_dep': List of NE entity IDs that are the tail of the relation
- 'label': Relation label between both entities
- 'docFileName': Name of the source document in the corpus
- 'data_split': Indicates the split which a document belongs to (tune1/2/3/4/5, dev, test)
- 'category': One of 7 materials science sub-domains in MuLMS (SOFC, graphene, electrolysis, PEMFC, polymers, semiconductors, steel)
### Dataset Config _NER_Dependencies_
Each instance in this config refers to one token and carries a copy of the entire sentence, i.e., for _n_ tokens in a sentence, the text of the sentence is given _n_ times.
- 'index': Unique instance ID for each token.
- 'ID': Sentence ID. As opposed to the other config, the sentences here are not sorted by document and are repeated in full for every token they contain.
- 'Sentence': Sentence string
- 'Token_ID': Unique ID for each token within each sentence. The ID is reset for each new sentence.
- 'Token': Token string
- 'NE_Dependencies': The named entity tag of form _k:LABEL_ where _k_ refers to the ID of the begin token and _LABEL_ to the named entity. The entity ends at the token holding this label.
- 'data_split': Indicates the split which a document belongs to (tune1/2/3/4/5, dev, test)
### Labels
For the different layers, the following labels are available:
* __Measurement Frames__:
* 'Measurement'
* 'Qual_Measurement'
* __Named Entities__:
* 'MAT'
* 'NUM'
* 'VALUE'
* 'UNIT'
* 'PROPERTY'
* 'FORM'
* 'MEASUREMENT' (measurement frame-evoking trigger)
* 'CITE'
* 'SAMPLE'
* 'TECHNIQUE'
* 'DEV'
* 'RANGE'
* 'INSTRUMENT'
* __Relations__:
* 'hasForm'
* 'measuresProperty'
* 'usedAs'
* 'propertyValue'
* 'conditionProperty'
* 'conditionSample'
* 'conditionPropertyValue'
* 'usesTechnique'
* 'measuresPropertyValue'
* 'usedTogether'
* 'conditionEnv'
* 'usedIn'
* 'conditionInstrument'
* 'takenFrom'
* 'dopedBy'
* __Argumentative Zones__:
* 'Motivation'
* 'Background'
* 'PriorWork'
* 'Experiment'
* 'Preparation'
* 'Characterization'
* 'Explanation'
* 'Results'
* 'Conclusion'
* 'Heading'
* 'Caption'
* 'Metadata'
## Dataset Creation
### Curation Rationale
Keeping track of all relevant recent publications and experimental results for a research area is a challenging task. MuLMS addresses this problem by
providing a large set of annotated documents that allow for training models that can be used for automated information extraction and answering search queries
in materials science documents.
### Source Data
You can find all the details for every document in this corpus in MuLMS_Corpus_Metadata.csv.
#### Who are the source data producers?
You can find all the authors for every document in this corpus in MuLMS_Corpus_Metadata.csv.
#### Annotation process
The annotation process included guideline design in dedicated discussion sessions. Afterward, the text files were annotated
using INCEpTION.
#### Who are the annotators?
The annotators worked collaboratively to annotate the dataset in the best possible way. All people in this project have a background in either materials science or computer
science. This synergy incorporates both views: the materials scientist's deep knowledge of the topics themselves and the computer scientist's focus on processing text data
automatically in a structured fashion.
#### Personal and Sensitive Information
This dataset does not contain any personal, sensitive or private data. MuLMS builds upon publicly available scientific publications and all authors are credited accordingly.
If you use our software or dataset in your scientific work, please cite both papers:
BibTeX:
## Changes
Changes to the source code from the original repo are listed in the CHANGELOG file.
## Copyright
## License
This software is open-sourced under the AGPL-3.0 license. See the
LICENSE_CODE file for details.
The MuLMS corpus is released under the CC BY-SA 4.0 license. See the LICENSE_CORPUS file for details.
## Dataset Card Authors
* Timo Pierre Schrader (Bosch Center for AI, University of Augsburg)
* Matteo Finco (Bosch Research)
* Stefan Grünewald (Bosch Center for AI, University of Stuttgart)
* Felix Hildebrand (Bosch Research)
* Annemarie Friedrich (University of Augsburg)
## Dataset Card Contact
For all questions, please contact Timo Schrader.
|
[
"# Dataset Card for MuLMS\n\n<p>\n<img src=\"URL\">\n<em>Example annotation in the Multi-Layer Materials Science Corpus (image source: <a href=\"URL MuLMS: A Multi-Layer Annotated Text Corpus for Information Extraction in the Materials Science Domain</a>)<em>\n</p>",
"### Dataset Description\n\nThe Multi-Layer Materials Science corpus (MuLMS) consists of 50 documents (licensed CC BY) from the materials science domain, spanning across the following 7 subareas: \n\"Electrolysis\", \"Graphene\", \"Polymer Electrolyte Fuel Cell (PEMFC)\", \"Solid Oxide Fuel Cell (SOFC)\", \"Polymers\", \"Semiconductors\" and \"Steel\". \nIt was exhaustively annotated by domain experts. There are annotations on sentence-level and token-level for the following NLP tasks:\n\n* __Measurement Frames__: Measurement annotations are treated in a frame-like fashion, using the span type MEASUREMENT to mark the triggers (e.g.,\nwas measured, is plotted) that introduce the Measurement frame to the discourse. Deciding whether a sentence contains a measurement trigger is treated as a\nsentence-level task, determining the span that triggers the measurement frame is treated as named entity recognition.\n* __Named Entities__: There are 12 token-level named entities (+ Measurement trigger) available in MuLMS. Named entities can span across multiple tokens.\n* __Relations__: MuLMS provides relations between pairs of entities. There are two types of relations: measurement-related relations and further relations.\nThe first type always starts at Measurement trigger spans, the scond type does not start at a specific Measurement annotation.\n* __Argumentative Zones__: Each sentence in MuLMS is assigned a rhetorical function in the discourse (e.g., _Background_ or _Experiment_Preparation_). There are 12 argumentative\n zones in MuLMS, which leads to a sentence-level classification task.\n\nYou can find all experiment code files and further information in the MuLMS-AZ Repo and MuLMS Repo.\nFor dataset statistics, please refer to both papers listed below. There you can also find detailed explanation of all parts of MuLMS in very detail.\n\n- Curated by: Bosch Center for AI and Bosch Research\n- Funded by: Robert Bosch GmbH\n- Language(s) (NLP): English\n- License: CC BY-SA 4.0",
"## Dataset Details\n\nMuLMS provides all annotated files in UIMA CAS XMI format that can be used with annotation tools that can read these files such as INCEpTION.\n\n__Important:__ To use the dataset reader, please install the UIMA CAS Python reader _puima_ using the following command: 'pip install git+URL",
"### Dataset Sources\n\n\n\n- Repository: URL URL\n- Paper: URL URL",
"## Uses",
"### Direct Use\n\nThis dataset aims at information extraction from materials science documents. It enables the training of (neural) classifiers that can be used for downstream tasks such as NER and relation extraction.\nPlease refer to both repos linked above for training BERT-like models on all NLP tasks provided in MuLMS.",
"## Dataset Structure\n\n\n\nMuLMS offers two configs: _MuLMS_Corpus_, which loads the entire MuLMS dataset, and _NER_Dependecies_, which loads only Named Entities in CONLL format in order\nto train models in the _NER_as_dependency_parsing_ setting.\n\nMuLMS is divided into three split: _train_, _validation_, and _test_. Furthermore, _train_ is divided into five sub-splits, namely _tune1_,...,_tune5_.\nThis allows for model training on four splits, early stopping on the fivth and remaining split, model picking on validation and evaluation only once on test.\nHuggingFace datasets do not support these sub-splits, hence they must be loaded as _train_ and post-processed and filtered afterward in a custom dataset loader.",
"### Dataset Config _MuLMS_Corpus_\n\n- 'doc_id': ID of the source document that can be used to lookup the metadata of the paper in MuLMS_Corpus_Metadata.csv.\n- 'sentence': Each instance in the dataset corresponds to one sentence extracted from scientic papers. These sentences are listed in this field.\n- 'tokens': Pre-tokenized sentences. Each instance is a list of tokens.\n- 'begin_offset': Offset of the beginning of each sentence within the full text of the document.\n- 'end_offset': Offset of the end of each sentence within the full text of the document.\n- 'AZ_labels': The argumentative zone (= rhetorical function) of each sentence in the discourse of a materials science publication.\n- 'Measurement_label': Labels each sentence whether it contains a measurement description, i.e., measurement frame evoking trigger word, or not.\n- 'NER_labels': Contains lists with named entities (NEs) per instance. Every named entity uses one of n indices in these lists, i.e., every 0-th element belong to each other, ...\n - 'text': List of tokens that are contained in the current sentence instance.\n - 'id': Unique ID for each named entity\n - 'value': The named entity class\n - 'begin': Character offsets of the begin tokens of each NE\n - 'end': Character offsets of the end tokens of each NE\n - 'tokenIndices': Token index in the list of tokens\n- 'NER_labels_BILOU': BILOU tag sequence per token in the sentence (B = begin, I = inside, L = last, O = none, U = unit).\n- 'relations': Lists of relations between pair-wise entities. As with the named entities, each relation corresponds to the same index in all three lists (_ne_id_gov_, _ne_id_dep_, _label_)\n - 'ne_id_gov': List of NE entity IDs that act as head of the relation\n - 'ne_id_dep': List of NE entity IDs that are the tail of the relation\n - 'label': Relation label between both entities\n- 'docFileName': Name of the source document in the corpus\n- 'data_split': Indicates the split which a document belongs to (tune1/2/3/4/5, dev, test)\n- 'category': One of 7 materials science sub-domains in MuLMS (SOFC, graphene, electrolysis, PEMFC, )",
"### Dataset Config _NER_Dependencies_\n\nEach instance in this config refers to one token and carries a copy of the entire sentence, i.e., for _n_ tokens in a sentence, the text of the sentence is given _n_ times.\n\n- 'index': Unique instance ID for each token. \n- 'ID': Sentence ID. As opposed to the other config, the sentences here are not sorted by document and provided in their full form for every token they belong to.\n- 'Sentence': Sentence string\n- 'Token_ID': Unique ID for each token within each sentence. ID is resetted for each new sentence.\n- 'Token': Token string\n- 'NE_Dependencies': The named entity tag of form _k:LABEL_ where _k_ refers to the ID of the begin token and _LABEL_ to the named entity. The entity ends at the token holding this\n- label.\n- 'data_split': Indicates the split which a document belongs to (tune1/2/3/4/5, dev, test)",
"### Labels\n\nFor the different layers, the following labels are available:\n\n* __Measurement Frames__:\n * 'Measurement'\n * 'Qual_Measurement'\n* __Named Entities__:\n * 'MAT'\n * 'NUM'\n * 'VALUE'\n * 'UNIT'\n * 'PROPERTY'\n * 'FORM'\n * 'MEASUREMENT' (measurement frame-evoking trigger)\n * 'CITE'\n * 'SAMPLE'\n * 'TECHNIQUE'\n * 'DEV'\n * 'RANGE'\n * 'INSTRUMENT'\n* __Relations__:\n * 'hasForm'\n * 'measuresProperty'\n * 'usedAs'\n * 'propertyValue'\n * 'conditionProperty'\n * 'conditionSample'\n * 'conditionPropertyValue'\n * 'usesTechnique'\n * 'measuresPropertyValue'\n * 'usedTogether'\n * 'conditionEnv'\n * 'usedIn'\n * 'conditionInstrument'\n * 'takenFrom'\n * 'dopedBy'\n* __Argumentative Zones__:\n * 'Motivation'\n * 'Background'\n * 'PriorWork'\n * 'Experiment'\n * 'Preparation'\n * 'Characterization'\n * 'Explanation'\n * 'Results'\n * 'Conclusion'\n * 'Heading'\n * 'Caption'\n * 'Metadata'",
"## Dataset Creation",
"### Curation Rationale\n\n\n\nKeeping track of all relevant recent publications and experimental results for a research area is a challenging task. MuLMS addresses this problem by\nproviding a large set of annotated documents that allow for training models that can be used for automated information extraction and answering search queries\nin materials science documents.",
"### Source Data\n\n\n\nYou can find all the details for every document in this corpus in MuLMS_Corpus_Metadata.csv.",
"#### Who are the source data producers?\n\n\n\nYou can find all the authors for every document in this corpus in MuLMS_Corpus_Metadata.csv.",
"#### Annotation process\n\nThe annotation process included guideline design in dedicated discussion sessions. Afterward, the text files were annotated \nusing INCEpTION.",
"#### Who are the annotators?\n\nThe annotators worked collaboratively to annotate the dataset in the best possible way. All people in this project either have background in materials science or computer\nscience. This synergy enables to incorporate both views, the materials scientist view that has a deep knowledge about the topics themselves as well as the CS view that\nalways looks at processing text data automatically in a structured fashion.",
"#### Personal and Sensitive Information\n\n\n\nThis dataset does not contain any personal, sensitive or private data. MuLMS builds upon publicly available scientific publications and all authors are credited accordingly.\n\nIf you use our software or dataset in your scientific work, please cite both papers:\n\nBibTeX:",
"## Changes\n\nChanges to the source code from the original repo are listed in the CHANGELOG file.",
"## Copyright",
"## License\n\nThis software is open-sourced under the AGPL-3.0 license. See the\nLICENSE_CODE file for details.\nThe MuLMS corpus is released under the CC BY-SA 4.0 license. See the LICENSE_CORPUS file for details.",
"## Dataset Card Authors\n\n* Timo Pierre Schrader (Bosch Center for AI, University of Augsburg)\n* Matteo Finco (Bosch Research)\n* Stefan Grünewald (Bosch Center for AI, University of Stuttgart)\n* Felix Hildebrand (Bosch Research)\n* Annemarie Friedrich (University of Augsburg)",
"## Dataset Card Contact\n\nFor all questions, please contact Timo Schrader."
] |
[
"TAGS\n#task_categories-fill-mask #task_categories-token-classification #task_categories-text-classification #task_ids-named-entity-recognition #task_ids-slot-filling #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #arxiv-2310.15569 #region-us \n",
"# Dataset Card for MuLMS\n\n<p>\n<img src=\"URL\">\n<em>Example annotation in the Multi-Layer Materials Science Corpus (image source: <a href=\"URL MuLMS: A Multi-Layer Annotated Text Corpus for Information Extraction in the Materials Science Domain</a>)<em>\n</p>",
"### Dataset Description\n\nThe Multi-Layer Materials Science corpus (MuLMS) consists of 50 documents (licensed CC BY) from the materials science domain, spanning across the following 7 subareas: \n\"Electrolysis\", \"Graphene\", \"Polymer Electrolyte Fuel Cell (PEMFC)\", \"Solid Oxide Fuel Cell (SOFC)\", \"Polymers\", \"Semiconductors\" and \"Steel\". \nIt was exhaustively annotated by domain experts. There are annotations on sentence-level and token-level for the following NLP tasks:\n\n* __Measurement Frames__: Measurement annotations are treated in a frame-like fashion, using the span type MEASUREMENT to mark the triggers (e.g.,\nwas measured, is plotted) that introduce the Measurement frame to the discourse. Deciding whether a sentence contains a measurement trigger is treated as a\nsentence-level task, determining the span that triggers the measurement frame is treated as named entity recognition.\n* __Named Entities__: There are 12 token-level named entities (+ Measurement trigger) available in MuLMS. Named entities can span across multiple tokens.\n* __Relations__: MuLMS provides relations between pairs of entities. There are two types of relations: measurement-related relations and further relations.\nThe first type always starts at Measurement trigger spans, the scond type does not start at a specific Measurement annotation.\n* __Argumentative Zones__: Each sentence in MuLMS is assigned a rhetorical function in the discourse (e.g., _Background_ or _Experiment_Preparation_). There are 12 argumentative\n zones in MuLMS, which leads to a sentence-level classification task.\n\nYou can find all experiment code files and further information in the MuLMS-AZ Repo and MuLMS Repo.\nFor dataset statistics, please refer to both papers listed below. There you can also find detailed explanation of all parts of MuLMS in very detail.\n\n- Curated by: Bosch Center for AI and Bosch Research\n- Funded by: Robert Bosch GmbH\n- Language(s) (NLP): English\n- License: CC BY-SA 4.0",
"## Dataset Details\n\nMuLMS provides all annotated files in UIMA CAS XMI format that can be used with annotation tools that can read these files such as INCEpTION.\n\n__Important:__ To use the dataset reader, please install the UIMA CAS Python reader _puima_ using the following command: 'pip install git+URL",
"### Dataset Sources\n\n\n\n- Repository: URL URL\n- Paper: URL URL",
"## Uses",
"### Direct Use\n\nThis dataset aims at information extraction from materials science documents. It enables the training of (neural) classifiers that can be used for downstream tasks such as NER and relation extraction.\nPlease refer to both repos linked above for training BERT-like models on all NLP tasks provided in MuLMS.",
"## Dataset Structure\n\n\n\nMuLMS offers two configs: _MuLMS_Corpus_, which loads the entire MuLMS dataset, and _NER_Dependecies_, which loads only Named Entities in CONLL format in order\nto train models in the _NER_as_dependency_parsing_ setting.\n\nMuLMS is divided into three split: _train_, _validation_, and _test_. Furthermore, _train_ is divided into five sub-splits, namely _tune1_,...,_tune5_.\nThis allows for model training on four splits, early stopping on the fivth and remaining split, model picking on validation and evaluation only once on test.\nHuggingFace datasets do not support these sub-splits, hence they must be loaded as _train_ and post-processed and filtered afterward in a custom dataset loader.",
"### Dataset Config _MuLMS_Corpus_\n\n- 'doc_id': ID of the source document that can be used to lookup the metadata of the paper in MuLMS_Corpus_Metadata.csv.\n- 'sentence': Each instance in the dataset corresponds to one sentence extracted from scientic papers. These sentences are listed in this field.\n- 'tokens': Pre-tokenized sentences. Each instance is a list of tokens.\n- 'begin_offset': Offset of the beginning of each sentence within the full text of the document.\n- 'end_offset': Offset of the end of each sentence within the full text of the document.\n- 'AZ_labels': The argumentative zone (= rhetorical function) of each sentence in the discourse of a materials science publication.\n- 'Measurement_label': Labels each sentence whether it contains a measurement description, i.e., measurement frame evoking trigger word, or not.\n- 'NER_labels': Contains lists with named entities (NEs) per instance. Every named entity uses one of n indices in these lists, i.e., every 0-th element belong to each other, ...\n - 'text': List of tokens that are contained in the current sentence instance.\n - 'id': Unique ID for each named entity\n - 'value': The named entity class\n - 'begin': Character offsets of the begin tokens of each NE\n - 'end': Character offsets of the end tokens of each NE\n - 'tokenIndices': Token index in the list of tokens\n- 'NER_labels_BILOU': BILOU tag sequence per token in the sentence (B = begin, I = inside, L = last, O = none, U = unit).\n- 'relations': Lists of relations between pair-wise entities. As with the named entities, each relation corresponds to the same index in all three lists (_ne_id_gov_, _ne_id_dep_, _label_)\n - 'ne_id_gov': List of NE entity IDs that act as head of the relation\n - 'ne_id_dep': List of NE entity IDs that are the tail of the relation\n - 'label': Relation label between both entities\n- 'docFileName': Name of the source document in the corpus\n- 'data_split': Indicates the split which a document belongs to (tune1/2/3/4/5, dev, test)\n- 'category': One of 7 materials science sub-domains in MuLMS (SOFC, graphene, electrolysis, PEMFC, )",
"### Dataset Config _NER_Dependencies_\n\nEach instance in this config refers to one token and carries a copy of the entire sentence, i.e., for _n_ tokens in a sentence, the text of the sentence is given _n_ times.\n\n- 'index': Unique instance ID for each token. \n- 'ID': Sentence ID. As opposed to the other config, the sentences here are not sorted by document and provided in their full form for every token they belong to.\n- 'Sentence': Sentence string\n- 'Token_ID': Unique ID for each token within each sentence. ID is resetted for each new sentence.\n- 'Token': Token string\n- 'NE_Dependencies': The named entity tag of form _k:LABEL_ where _k_ refers to the ID of the begin token and _LABEL_ to the named entity. The entity ends at the token holding this\n- label.\n- 'data_split': Indicates the split which a document belongs to (tune1/2/3/4/5, dev, test)",
"### Labels\n\nFor the different layers, the following labels are available:\n\n* __Measurement Frames__:\n * 'Measurement'\n * 'Qual_Measurement'\n* __Named Entities__:\n * 'MAT'\n * 'NUM'\n * 'VALUE'\n * 'UNIT'\n * 'PROPERTY'\n * 'FORM'\n * 'MEASUREMENT' (measurement frame-evoking trigger)\n * 'CITE'\n * 'SAMPLE'\n * 'TECHNIQUE'\n * 'DEV'\n * 'RANGE'\n * 'INSTRUMENT'\n* __Relations__:\n * 'hasForm'\n * 'measuresProperty'\n * 'usedAs'\n * 'propertyValue'\n * 'conditionProperty'\n * 'conditionSample'\n * 'conditionPropertyValue'\n * 'usesTechnique'\n * 'measuresPropertyValue'\n * 'usedTogether'\n * 'conditionEnv'\n * 'usedIn'\n * 'conditionInstrument'\n * 'takenFrom'\n * 'dopedBy'\n* __Argumentative Zones__:\n * 'Motivation'\n * 'Background'\n * 'PriorWork'\n * 'Experiment'\n * 'Preparation'\n * 'Characterization'\n * 'Explanation'\n * 'Results'\n * 'Conclusion'\n * 'Heading'\n * 'Caption'\n * 'Metadata'",
"## Dataset Creation",
"### Curation Rationale\n\n\n\nKeeping track of all relevant recent publications and experimental results for a research area is a challenging task. MuLMS addresses this problem by\nproviding a large set of annotated documents that allow for training models that can be used for automated information extraction and answering search queries\nin materials science documents.",
"### Source Data\n\n\n\nYou can find all the details for every document in this corpus in MuLMS_Corpus_Metadata.csv.",
"#### Who are the source data producers?\n\n\n\nYou can find all the authors for every document in this corpus in MuLMS_Corpus_Metadata.csv.",
"#### Annotation process\n\nThe annotation process included guideline design in dedicated discussion sessions. Afterward, the text files were annotated \nusing INCEpTION.",
"#### Who are the annotators?\n\nThe annotators worked collaboratively to annotate the dataset in the best possible way. All people in this project either have background in materials science or computer\nscience. This synergy enables to incorporate both views, the materials scientist view that has a deep knowledge about the topics themselves as well as the CS view that\nalways looks at processing text data automatically in a structured fashion.",
"#### Personal and Sensitive Information\n\n\n\nThis dataset does not contain any personal, sensitive or private data. MuLMS builds upon publicly available scientific publications and all authors are credited accordingly.\n\nIf you use our software or dataset in your scientific work, please cite both papers:\n\nBibTeX:",
"## Changes\n\nChanges to the source code from the original repo are listed in the CHANGELOG file.",
"## Copyright",
"## License\n\nThis software is open-sourced under the AGPL-3.0 license. See the\nLICENSE_CODE file for details.\nThe MuLMS corpus is released under the CC BY-SA 4.0 license. See the LICENSE_CORPUS file for details.",
"## Dataset Card Authors\n\n* Timo Pierre Schrader (Bosch Center for AI, University of Augsburg)\n* Matteo Finco (Bosch Research)\n* Stefan Grünewald (Bosch Center for AI, University of Stuttgart)\n* Felix Hildebrand (Bosch Research)\n* Annemarie Friedrich (University of Augsburg)",
"## Dataset Card Contact\n\nFor all questions, please contact Timo Schrader."
] |
[
110,
82,
524,
77,
18,
3,
75,
219,
634,
259,
318,
5,
71,
31,
38,
34,
91,
68,
23,
2,
57,
68,
16
] |
[
"passage: TAGS\n#task_categories-fill-mask #task_categories-token-classification #task_categories-text-classification #task_ids-named-entity-recognition #task_ids-slot-filling #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #arxiv-2310.15569 #region-us \n# Dataset Card for MuLMS\n\n<p>\n<img src=\"URL\">\n<em>Example annotation in the Multi-Layer Materials Science Corpus (image source: <a href=\"URL MuLMS: A Multi-Layer Annotated Text Corpus for Information Extraction in the Materials Science Domain</a>)<em>\n</p>",
"passage: ### Dataset Description\n\nThe Multi-Layer Materials Science corpus (MuLMS) consists of 50 documents (licensed CC BY) from the materials science domain, spanning across the following 7 subareas: \n\"Electrolysis\", \"Graphene\", \"Polymer Electrolyte Fuel Cell (PEMFC)\", \"Solid Oxide Fuel Cell (SOFC)\", \"Polymers\", \"Semiconductors\" and \"Steel\". \nIt was exhaustively annotated by domain experts. There are annotations on sentence-level and token-level for the following NLP tasks:\n\n* __Measurement Frames__: Measurement annotations are treated in a frame-like fashion, using the span type MEASUREMENT to mark the triggers (e.g.,\nwas measured, is plotted) that introduce the Measurement frame to the discourse. Deciding whether a sentence contains a measurement trigger is treated as a\nsentence-level task, determining the span that triggers the measurement frame is treated as named entity recognition.\n* __Named Entities__: There are 12 token-level named entities (+ Measurement trigger) available in MuLMS. Named entities can span across multiple tokens.\n* __Relations__: MuLMS provides relations between pairs of entities. There are two types of relations: measurement-related relations and further relations.\nThe first type always starts at Measurement trigger spans, the scond type does not start at a specific Measurement annotation.\n* __Argumentative Zones__: Each sentence in MuLMS is assigned a rhetorical function in the discourse (e.g., _Background_ or _Experiment_Preparation_). There are 12 argumentative\n zones in MuLMS, which leads to a sentence-level classification task.\n\nYou can find all experiment code files and further information in the MuLMS-AZ Repo and MuLMS Repo.\nFor dataset statistics, please refer to both papers listed below. There you can also find detailed explanation of all parts of MuLMS in very detail.\n\n- Curated by: Bosch Center for AI and Bosch Research\n- Funded by: Robert Bosch GmbH\n- Language(s) (NLP): English\n- License: CC BY-SA 4.0## Dataset Details\n\nMuLMS provides all annotated files in UIMA CAS XMI format that can be used with annotation tools that can read these files such as INCEpTION.\n\n__Important:__ To use the dataset reader, please install the UIMA CAS Python reader _puima_ using the following command: 'pip install git+URL### Dataset Sources\n\n\n\n- Repository: URL URL\n- Paper: URL URL## Uses### Direct Use\n\nThis dataset aims at information extraction from materials science documents. It enables the training of (neural) classifiers that can be used for downstream tasks such as NER and relation extraction.\nPlease refer to both repos linked above for training BERT-like models on all NLP tasks provided in MuLMS.## Dataset Structure\n\n\n\nMuLMS offers two configs: _MuLMS_Corpus_, which loads the entire MuLMS dataset, and _NER_Dependecies_, which loads only Named Entities in CONLL format in order\nto train models in the _NER_as_dependency_parsing_ setting.\n\nMuLMS is divided into three split: _train_, _validation_, and _test_. Furthermore, _train_ is divided into five sub-splits, namely _tune1_,...,_tune5_.\nThis allows for model training on four splits, early stopping on the fivth and remaining split, model picking on validation and evaluation only once on test.\nHuggingFace datasets do not support these sub-splits, hence they must be loaded as _train_ and post-processed and filtered afterward in a custom dataset loader.",
"passage: ### Dataset Config _MuLMS_Corpus_\n\n- 'doc_id': ID of the source document that can be used to lookup the metadata of the paper in MuLMS_Corpus_Metadata.csv.\n- 'sentence': Each instance in the dataset corresponds to one sentence extracted from scientic papers. These sentences are listed in this field.\n- 'tokens': Pre-tokenized sentences. Each instance is a list of tokens.\n- 'begin_offset': Offset of the beginning of each sentence within the full text of the document.\n- 'end_offset': Offset of the end of each sentence within the full text of the document.\n- 'AZ_labels': The argumentative zone (= rhetorical function) of each sentence in the discourse of a materials science publication.\n- 'Measurement_label': Labels each sentence whether it contains a measurement description, i.e., measurement frame evoking trigger word, or not.\n- 'NER_labels': Contains lists with named entities (NEs) per instance. Every named entity uses one of n indices in these lists, i.e., every 0-th element belong to each other, ...\n - 'text': List of tokens that are contained in the current sentence instance.\n - 'id': Unique ID for each named entity\n - 'value': The named entity class\n - 'begin': Character offsets of the begin tokens of each NE\n - 'end': Character offsets of the end tokens of each NE\n - 'tokenIndices': Token index in the list of tokens\n- 'NER_labels_BILOU': BILOU tag sequence per token in the sentence (B = begin, I = inside, L = last, O = none, U = unit).\n- 'relations': Lists of relations between pair-wise entities. As with the named entities, each relation corresponds to the same index in all three lists (_ne_id_gov_, _ne_id_dep_, _label_)\n - 'ne_id_gov': List of NE entity IDs that act as head of the relation\n - 'ne_id_dep': List of NE entity IDs that are the tail of the relation\n - 'label': Relation label between both entities\n- 'docFileName': Name of the source document in the corpus\n- 'data_split': Indicates the split which a document belongs to (tune1/2/3/4/5, dev, test)\n- 'category': One of 7 materials science sub-domains in MuLMS (SOFC, graphene, electrolysis, PEMFC, )### Dataset Config _NER_Dependencies_\n\nEach instance in this config refers to one token and carries a copy of the entire sentence, i.e., for _n_ tokens in a sentence, the text of the sentence is given _n_ times.\n\n- 'index': Unique instance ID for each token. \n- 'ID': Sentence ID. As opposed to the other config, the sentences here are not sorted by document and provided in their full form for every token they belong to.\n- 'Sentence': Sentence string\n- 'Token_ID': Unique ID for each token within each sentence. ID is resetted for each new sentence.\n- 'Token': Token string\n- 'NE_Dependencies': The named entity tag of form _k:LABEL_ where _k_ refers to the ID of the begin token and _LABEL_ to the named entity. The entity ends at the token holding this\n- label.\n- 'data_split': Indicates the split which a document belongs to (tune1/2/3/4/5, dev, test)"
] |
5b94bc9194c44d8414863812837fecded1a979f2
|
# Dataset Card for Evaluation run of lmsys/vicuna-13b-v1.3
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/lmsys/vicuna-13b-v1.3
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [lmsys/vicuna-13b-v1.3](https://huggingface.co/lmsys/vicuna-13b-v1.3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_lmsys__vicuna-13b-v1.3",
"harness_winogrande_5",
split="train")
```
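The aggregated metrics can be loaded the same way from the "results" configuration; a minimal sketch (per this repo's configs, the "latest" split always tracks the most recent run):
```python
from datasets import load_dataset
results = load_dataset("open-llm-leaderboard/details_lmsys__vicuna-13b-v1.3",
	"results",
	split="latest")
print(results[0])  # inspect the aggregated scores of the run
```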
## Latest results
These are the [latest results from run 2023-10-21T21:13:42.887863](https://huggingface.co/datasets/open-llm-leaderboard/details_lmsys__vicuna-13b-v1.3/blob/main/results_2023-10-21T21-13-42.887863.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.016673657718120804,
"em_stderr": 0.0013113056398144408,
"f1": 0.08239932885906037,
"f1_stderr": 0.0019047507167675335,
"acc": 0.427940733843833,
"acc_stderr": 0.010367986551641579
},
"harness|drop|3": {
"em": 0.016673657718120804,
"em_stderr": 0.0013113056398144408,
"f1": 0.08239932885906037,
"f1_stderr": 0.0019047507167675335
},
"harness|gsm8k|5": {
"acc": 0.10765731614859743,
"acc_stderr": 0.008537484003023366
},
"harness|winogrande|5": {
"acc": 0.7482241515390686,
"acc_stderr": 0.012198489100259792
}
}
```
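For this run, the aggregated "all" accuracy is simply the unweighted mean of the two per-task accuracies, which is easy to check:
```python
gsm8k_acc = 0.10765731614859743
winogrande_acc = 0.7482241515390686
print((gsm8k_acc + winogrande_acc) / 2)  # 0.427940733843833, matching "all" -> "acc" above
```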
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_lmsys__vicuna-13b-v1.3
|
[
"region:us"
] |
2023-10-21T20:13:46+00:00
|
{"pretty_name": "Evaluation run of lmsys/vicuna-13b-v1.3", "dataset_summary": "Dataset automatically created during the evaluation run of model [lmsys/vicuna-13b-v1.3](https://huggingface.co/lmsys/vicuna-13b-v1.3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lmsys__vicuna-13b-v1.3\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-21T21:13:42.887863](https://huggingface.co/datasets/open-llm-leaderboard/details_lmsys__vicuna-13b-v1.3/blob/main/results_2023-10-21T21-13-42.887863.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.016673657718120804,\n \"em_stderr\": 0.0013113056398144408,\n \"f1\": 0.08239932885906037,\n \"f1_stderr\": 0.0019047507167675335,\n \"acc\": 0.427940733843833,\n \"acc_stderr\": 0.010367986551641579\n },\n \"harness|drop|3\": {\n \"em\": 0.016673657718120804,\n \"em_stderr\": 0.0013113056398144408,\n \"f1\": 0.08239932885906037,\n \"f1_stderr\": 0.0019047507167675335\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.10765731614859743,\n \"acc_stderr\": 0.008537484003023366\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7482241515390686,\n \"acc_stderr\": 0.012198489100259792\n }\n}\n```", "repo_url": "https://huggingface.co/lmsys/vicuna-13b-v1.3", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_21T21_13_42.887863", "path": ["**/details_harness|drop|3_2023-10-21T21-13-42.887863.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-21T21-13-42.887863.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_21T21_13_42.887863", "path": ["**/details_harness|gsm8k|5_2023-10-21T21-13-42.887863.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-21T21-13-42.887863.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_21T21_13_42.887863", "path": ["**/details_harness|winogrande|5_2023-10-21T21-13-42.887863.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-21T21-13-42.887863.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_21T21_13_42.887863", "path": ["results_2023-10-21T21-13-42.887863.parquet"]}, {"split": "latest", "path": ["results_2023-10-21T21-13-42.887863.parquet"]}]}]}
|
2023-10-21T20:13:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of lmsys/vicuna-13b-v1.3
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model lmsys/vicuna-13b-v1.3 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-21T21:13:42.887863 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of lmsys/vicuna-13b-v1.3",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model lmsys/vicuna-13b-v1.3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-21T21:13:42.887863(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of lmsys/vicuna-13b-v1.3",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model lmsys/vicuna-13b-v1.3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-21T21:13:42.887863(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
20,
31,
168,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of lmsys/vicuna-13b-v1.3## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model lmsys/vicuna-13b-v1.3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-21T21:13:42.887863(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
5f9ad478e63f5b6696c81c867351eca14acd41b5
|
# Dataset Card for "AR-dotted-tokenized-mediumPlus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dot-ammar/AR-dotted-tokenized-mediumPlus
|
[
"region:us"
] |
2023-10-21T20:26:15+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 216608904, "num_examples": 334273}, {"name": "test", "num_bytes": 54145584, "num_examples": 83558}], "download_size": 133446513, "dataset_size": 270754488}}
|
2023-10-22T22:36:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "AR-dotted-tokenized-mediumPlus"
More Information needed
|
[
"# Dataset Card for \"AR-dotted-tokenized-mediumPlus\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"AR-dotted-tokenized-mediumPlus\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"AR-dotted-tokenized-mediumPlus\"\n\nMore Information needed"
] |
fe795295e6f9e336a42374b66f57069777dd3507
|
CleanFID:
```
FID: 18.9796, KID: 0.0145976
```
A second measurement, computing torchmetrics FID and CleanFID simultaneously:
```
torchmetrics FID: 19.3133
CleanFID FID: 19.1283, KID: 0.0147355
```
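For reference, a minimal sketch of how such scores can be computed with the clean-fid package (the directory paths are placeholders; the reference statistics behind the numbers above are not specified here):
```python
from cleanfid import fid
# Compare a folder of generated samples against a folder of reference images.
fid_score = fid.compute_fid('samples_dir', 'reference_dir')
kid_score = fid.compute_kid('samples_dir', 'reference_dir')
print(f'FID: {fid_score:.4f}, KID: {kid_score:.7f}')
```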
Read from the webdataset (after saving it somewhere on your disk) like this:
```python
from webdataset import WebDataset
from typing import TypedDict, Iterable
from PIL import Image
from PIL.PngImagePlugin import PngImageFile
from io import BytesIO
from os import makedirs
# Each sample in a WebDataset tar shard is a dict keyed by file extension,
# alongside the '__key__'/'__url__' bookkeeping fields.
Example = TypedDict('Example', {
  '__key__': str,
  '__url__': str,
  'img.png': bytes,
})
# Brace expansion selects all five tar shards of the 50k-sample set.
dataset = WebDataset('./openai-guided-diffusion-256-classcond-unguided-samples-50k/{00000..00004}.tar')
out_root = 'out'
makedirs(out_root, exist_ok=True)
it: Iterable[Example] = iter(dataset)
for ix, item in enumerate(it):
  # Decode the raw PNG bytes and write each sample to disk.
  with BytesIO(item['img.png']) as stream:
    img: PngImageFile = Image.open(stream)
    img.load()
    img.save(f'{out_root}/{ix}.png')
```
Or from the HF dataset like this:
```python
from datasets import load_dataset
from datasets.dataset_dict import DatasetDict
from datasets.arrow_dataset import Dataset
from PIL.PngImagePlugin import PngImageFile
from typing import TypedDict, Iterable
from os import makedirs
# Shape of one record: the 'img' column is decoded to a PIL image automatically.
class Item(TypedDict):
  index: int
  tar: str
  tar_path: str
  img: PngImageFile
dataset: DatasetDict = load_dataset('Birchlabs/openai-guided-diffusion-256-classcond-unguided-samples-50k')
train: Dataset = dataset['train']
out_root = 'out'
makedirs(out_root, exist_ok=True)
it: Iterable[Item] = iter(train)
for item in it:
  # Save each decoded image to disk under its 'index' field.
  item['img'].save(f'{out_root}/{item["index"]}.png')
```
|
Birchlabs/openai-guided-diffusion-256-classcond-unguided-samples-50k
|
[
"size_categories:10K<n<100K",
"license:apache-2.0",
"region:us"
] |
2023-10-21T20:37:17+00:00
|
{"license": "apache-2.0", "size_categories": ["10K<n<100K"], "pretty_name": "OpenAI guided-diffusion 256px class-conditional unguided samples (50k)"}
|
2023-12-09T22:03:14+00:00
|
[] |
[] |
TAGS
#size_categories-10K<n<100K #license-apache-2.0 #region-us
|
CleanFID:
Then another measurement (torchmetrics FID and CleanFID simultaneously):
Read from the webdataset (after saving it somewhere on your disk) like this:
Or from the HF dataset like this:
|
[] |
[
"TAGS\n#size_categories-10K<n<100K #license-apache-2.0 #region-us \n"
] |
[
26
] |
[
"passage: TAGS\n#size_categories-10K<n<100K #license-apache-2.0 #region-us \n"
] |
b016e12cc9b782f00da7b5ad56f69908a7685e29
|
Read from the webdataset (after saving it somewhere on your disk) like this:
```python
from webdataset import WebDataset
from typing import TypedDict, Iterable
from PIL import Image
from PIL.PngImagePlugin import PngImageFile
from io import BytesIO
from os import makedirs
Example = TypedDict('Example', {
'__key__': str,
'__url__': str,
'img.png': bytes,
})
dataset = WebDataset('./wds-dataset-viewer-test/{00000..00001}.tar')
out_root = 'out'
makedirs(out_root, exist_ok=True)
it: Iterable[Example] = iter(dataset)
for ix, item in enumerate(it):
with BytesIO(item['img.png']) as stream:
img: PngImageFile = Image.open(stream)
img.load()
img.save(f'{out_root}/{ix}.png')
```
Or from the HF dataset like this:
```python
from datasets import load_dataset
from datasets.dataset_dict import DatasetDict
from datasets.arrow_dataset import Dataset
from PIL.PngImagePlugin import PngImageFile
from typing import TypedDict, Iterable
from os import makedirs
class Item(TypedDict):
index: int
tar: str
tar_path: str
img: PngImageFile
dataset: DatasetDict = load_dataset('Birchlabs/wds-dataset-viewer-test')
train: Dataset = dataset['train']
out_root = 'out'
makedirs(out_root, exist_ok=True)
it: Iterable[Item] = iter(train)
for item in it:
item['img'].save(f'{out_root}/{item["index"]}.png')
```
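If the tars are large, the same records can also be streamed rather than downloaded up front; a minimal sketch (streaming returns an `IterableDataset` that fetches shards lazily):
```python
from datasets import load_dataset
stream = load_dataset('Birchlabs/wds-dataset-viewer-test', split='train', streaming=True)
for item in stream:
  # The 'img' column is still decoded to a PIL image on the fly.
  item['img'].save(f"{item['index']}.png")
  break  # demonstrate just the first record
```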
|
Birchlabs/wds-dataset-viewer-test
|
[
"size_categories:n<1K",
"license:apache-2.0",
"region:us"
] |
2023-10-21T21:00:42+00:00
|
{"license": "apache-2.0", "size_categories": ["n<1K"], "pretty_name": "OpenAI guided-diffusion 256px class-conditional unguided samples (20 samples)"}
|
2023-10-22T00:08:42+00:00
|
[] |
[] |
TAGS
#size_categories-n<1K #license-apache-2.0 #region-us
|
Read from the webdataset (after saving it somewhere on your disk) like this:
Or from the HF dataset like this:
|
[] |
[
"TAGS\n#size_categories-n<1K #license-apache-2.0 #region-us \n"
] |
[
24
] |
[
"passage: TAGS\n#size_categories-n<1K #license-apache-2.0 #region-us \n"
] |
83ec2388c7f20fb84ece9d7c01b9dfb1cc3a2ffb
|
# Dataset Card for "ubuntu_question_answer_llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mugithi/ubuntu_question_answer_llama2
|
[
"region:us"
] |
2023-10-21T21:09:17+00:00
|
{"dataset_info": {"features": [{"name": "###question", "dtype": "string"}, {"name": "###answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2047241, "num_examples": 12024}, {"name": "test", "num_bytes": 887478, "num_examples": 5154}], "download_size": 1926803, "dataset_size": 2934719}}
|
2023-10-21T21:09:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ubuntu_question_answer_llama2"
More Information needed
|
[
"# Dataset Card for \"ubuntu_question_answer_llama2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ubuntu_question_answer_llama2\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ubuntu_question_answer_llama2\"\n\nMore Information needed"
] |
192fc4f83391d63aec2dc7910ccbc9f5dde81479
|
# Dataset Card for "AR-dotted-mediumPlus-arrow"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dot-ammar/AR-dotted-mediumPlus
|
[
"region:us"
] |
2023-10-21T21:18:57+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "clean", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 387187864, "num_examples": 1625508}], "download_size": 214233397, "dataset_size": 387187864}}
|
2023-10-24T01:32:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "AR-dotted-mediumPlus-arrow"
More Information needed
|
[
"# Dataset Card for \"AR-dotted-mediumPlus-arrow\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"AR-dotted-mediumPlus-arrow\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"AR-dotted-mediumPlus-arrow\"\n\nMore Information needed"
] |
08da9210bc56df9d407918fc13e937b7e18989f2
|
# Dataset Card for "seizure_eeg_greyscale_224x224_6secWindow"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
JLB-JLB/seizure_eeg_greyscale_224x224_6secWindow
|
[
"region:us"
] |
2023-10-21T21:29:33+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "dev", "path": "data/dev-*"}, {"split": "eval", "path": "data/eval-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "epoch", "dtype": "int64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "seiz", "1": "bckg"}}}}], "splits": [{"name": "train", "num_bytes": 23735631090.792, "num_examples": 814568}, {"name": "dev", "num_bytes": 12051655546.53, "num_examples": 390190}, {"name": "eval", "num_bytes": 3322082528.975, "num_examples": 114035}], "download_size": 39216537180, "dataset_size": 39109369166.297}}
|
2023-10-21T22:42:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "seizure_eeg_greyscale_224x224_6secWindow"
More Information needed
|
[
"# Dataset Card for \"seizure_eeg_greyscale_224x224_6secWindow\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"seizure_eeg_greyscale_224x224_6secWindow\"\n\nMore Information needed"
] |
[
6,
31
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"seizure_eeg_greyscale_224x224_6secWindow\"\n\nMore Information needed"
] |
4595cfa2483abc76330aeb12931a4760ad87b5a9
|
# Dataset Card for "sk-review-dataset-sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
savaskaplan/sk-review-dataset-sample
|
[
"region:us"
] |
2023-10-21T21:30:44+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1336616.5478017514, "num_examples": 3600}, {"name": "validation", "num_bytes": 148512.94975575016, "num_examples": 400}], "download_size": 951377, "dataset_size": 1485129.4975575015}}
|
2023-10-21T21:30:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "sk-review-dataset-sample"
More Information needed
|
[
"# Dataset Card for \"sk-review-dataset-sample\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"sk-review-dataset-sample\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"sk-review-dataset-sample\"\n\nMore Information needed"
] |
6c9b263ee4c682c9727dd4c0c7fc5fc12540d10c
|
# Dataset Card for "sk-review-dataset-full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
savaskaplan/sk-review-dataset-full
|
[
"region:us"
] |
2023-10-21T21:30:46+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 133251016.47410049, "num_examples": 358894}, {"name": "validation", "num_bytes": 14805998.52589951, "num_examples": 39878}], "download_size": 94548374, "dataset_size": 148057015.0}}
|
2023-10-21T21:30:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "sk-review-dataset-full"
More Information needed
|
[
"# Dataset Card for \"sk-review-dataset-full\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"sk-review-dataset-full\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"sk-review-dataset-full\"\n\nMore Information needed"
] |
6816278c37e029a8178fcac15ed157c8b5e08849
|
# Dataset Card for Evaluation run of dvruette/oasst-pythia-12b-flash-attn-5000-steps
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/dvruette/oasst-pythia-12b-flash-attn-5000-steps
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [dvruette/oasst-pythia-12b-flash-attn-5000-steps](https://huggingface.co/dvruette/oasst-pythia-12b-flash-attn-5000-steps) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_dvruette__oasst-pythia-12b-flash-attn-5000-steps",
"harness_winogrande_5",
split="train")
```
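To inspect individual predictions, a per-task detail split can be converted to a dataframe; a minimal sketch (the exact per-example columns depend on the harness export, so the inspection below stays generic):
```python
from datasets import load_dataset
details = load_dataset("open-llm-leaderboard/details_dvruette__oasst-pythia-12b-flash-attn-5000-steps",
	"harness_gsm8k_5",
	split="latest")
df = details.to_pandas()
print(df.shape)
print(df.columns.tolist())  # which per-example fields this run exported
```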
## Latest results
These are the [latest results from run 2023-10-21T23:53:32.630430](https://huggingface.co/datasets/open-llm-leaderboard/details_dvruette__oasst-pythia-12b-flash-attn-5000-steps/blob/main/results_2023-10-21T23-53-32.630430.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.001572986577181208,
"em_stderr": 0.0004058451132417735,
"f1": 0.054836409395973375,
"f1_stderr": 0.001356882457395664,
"acc": 0.3206343687936557,
"acc_stderr": 0.00813976207357049
},
"harness|drop|3": {
"em": 0.001572986577181208,
"em_stderr": 0.0004058451132417735,
"f1": 0.054836409395973375,
"f1_stderr": 0.001356882457395664
},
"harness|gsm8k|5": {
"acc": 0.009855951478392721,
"acc_stderr": 0.002721076577041663
},
"harness|winogrande|5": {
"acc": 0.6314127861089187,
"acc_stderr": 0.013558447570099316
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_dvruette__oasst-pythia-12b-flash-attn-5000-steps
|
[
"region:us"
] |
2023-10-21T22:53:36+00:00
|
{"pretty_name": "Evaluation run of dvruette/oasst-pythia-12b-flash-attn-5000-steps", "dataset_summary": "Dataset automatically created during the evaluation run of model [dvruette/oasst-pythia-12b-flash-attn-5000-steps](https://huggingface.co/dvruette/oasst-pythia-12b-flash-attn-5000-steps) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_dvruette__oasst-pythia-12b-flash-attn-5000-steps\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-21T23:53:32.630430](https://huggingface.co/datasets/open-llm-leaderboard/details_dvruette__oasst-pythia-12b-flash-attn-5000-steps/blob/main/results_2023-10-21T23-53-32.630430.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.001572986577181208,\n \"em_stderr\": 0.0004058451132417735,\n \"f1\": 0.054836409395973375,\n \"f1_stderr\": 0.001356882457395664,\n \"acc\": 0.3206343687936557,\n \"acc_stderr\": 0.00813976207357049\n },\n \"harness|drop|3\": {\n \"em\": 0.001572986577181208,\n \"em_stderr\": 0.0004058451132417735,\n \"f1\": 0.054836409395973375,\n \"f1_stderr\": 0.001356882457395664\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.009855951478392721,\n \"acc_stderr\": 0.002721076577041663\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6314127861089187,\n \"acc_stderr\": 0.013558447570099316\n }\n}\n```", "repo_url": "https://huggingface.co/dvruette/oasst-pythia-12b-flash-attn-5000-steps", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_21T23_53_32.630430", "path": ["**/details_harness|drop|3_2023-10-21T23-53-32.630430.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-21T23-53-32.630430.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_21T23_53_32.630430", "path": ["**/details_harness|gsm8k|5_2023-10-21T23-53-32.630430.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-21T23-53-32.630430.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_21T23_53_32.630430", "path": ["**/details_harness|winogrande|5_2023-10-21T23-53-32.630430.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-21T23-53-32.630430.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_21T23_53_32.630430", "path": ["results_2023-10-21T23-53-32.630430.parquet"]}, {"split": "latest", "path": ["results_2023-10-21T23-53-32.630430.parquet"]}]}]}
|
2023-10-21T22:53:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of dvruette/oasst-pythia-12b-flash-attn-5000-steps
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model dvruette/oasst-pythia-12b-flash-attn-5000-steps on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-21T23:53:32.630430 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of dvruette/oasst-pythia-12b-flash-attn-5000-steps",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model dvruette/oasst-pythia-12b-flash-attn-5000-steps on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-21T23:53:32.630430(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of dvruette/oasst-pythia-12b-flash-attn-5000-steps",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model dvruette/oasst-pythia-12b-flash-attn-5000-steps on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-21T23:53:32.630430(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
33,
31,
181,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of dvruette/oasst-pythia-12b-flash-attn-5000-steps## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model dvruette/oasst-pythia-12b-flash-attn-5000-steps on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-21T23:53:32.630430(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
e43ac9c972b6d2ab99d869512d4163553226ff9a
|
# Dataset Card for Evaluation run of OpenAssistant/llama2-13b-megacode2-oasst
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/OpenAssistant/llama2-13b-megacode2-oasst
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [OpenAssistant/llama2-13b-megacode2-oasst](https://huggingface.co/OpenAssistant/llama2-13b-megacode2-oasst) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_OpenAssistant__llama2-13b-megacode2-oasst",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-22T00:14:46.537259](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenAssistant__llama2-13b-megacode2-oasst/blob/main/results_2023-10-22T00-14-46.537259.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.002936241610738255,
"em_stderr": 0.0005541113054709714,
"f1": 0.07735004194630882,
"f1_stderr": 0.0015929098030113627,
"acc": 0.4585312232784996,
"acc_stderr": 0.010977319038600733
},
"harness|drop|3": {
"em": 0.002936241610738255,
"em_stderr": 0.0005541113054709714,
"f1": 0.07735004194630882,
"f1_stderr": 0.0015929098030113627
},
"harness|gsm8k|5": {
"acc": 0.155420773313116,
"acc_stderr": 0.009979689409499152
},
"harness|winogrande|5": {
"acc": 0.7616416732438832,
"acc_stderr": 0.011974948667702314
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_OpenAssistant__llama2-13b-megacode2-oasst
|
[
"region:us"
] |
2023-10-21T23:14:50+00:00
|
{"pretty_name": "Evaluation run of OpenAssistant/llama2-13b-megacode2-oasst", "dataset_summary": "Dataset automatically created during the evaluation run of model [OpenAssistant/llama2-13b-megacode2-oasst](https://huggingface.co/OpenAssistant/llama2-13b-megacode2-oasst) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_OpenAssistant__llama2-13b-megacode2-oasst\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-22T00:14:46.537259](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenAssistant__llama2-13b-megacode2-oasst/blob/main/results_2023-10-22T00-14-46.537259.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.002936241610738255,\n \"em_stderr\": 0.0005541113054709714,\n \"f1\": 0.07735004194630882,\n \"f1_stderr\": 0.0015929098030113627,\n \"acc\": 0.4585312232784996,\n \"acc_stderr\": 0.010977319038600733\n },\n \"harness|drop|3\": {\n \"em\": 0.002936241610738255,\n \"em_stderr\": 0.0005541113054709714,\n \"f1\": 0.07735004194630882,\n \"f1_stderr\": 0.0015929098030113627\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.155420773313116,\n \"acc_stderr\": 0.009979689409499152\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7616416732438832,\n \"acc_stderr\": 0.011974948667702314\n }\n}\n```", "repo_url": "https://huggingface.co/OpenAssistant/llama2-13b-megacode2-oasst", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_22T00_14_46.537259", "path": ["**/details_harness|drop|3_2023-10-22T00-14-46.537259.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-22T00-14-46.537259.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_22T00_14_46.537259", "path": ["**/details_harness|gsm8k|5_2023-10-22T00-14-46.537259.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-22T00-14-46.537259.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_22T00_14_46.537259", "path": ["**/details_harness|winogrande|5_2023-10-22T00-14-46.537259.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-22T00-14-46.537259.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_22T00_14_46.537259", "path": ["results_2023-10-22T00-14-46.537259.parquet"]}, {"split": "latest", "path": ["results_2023-10-22T00-14-46.537259.parquet"]}]}]}
|
2023-10-21T23:14:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of OpenAssistant/llama2-13b-megacode2-oasst
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model OpenAssistant/llama2-13b-megacode2-oasst on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-22T00:14:46.537259 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of OpenAssistant/llama2-13b-megacode2-oasst",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenAssistant/llama2-13b-megacode2-oasst on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T00:14:46.537259(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of OpenAssistant/llama2-13b-megacode2-oasst",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenAssistant/llama2-13b-megacode2-oasst on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T00:14:46.537259(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
26,
31,
174,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of OpenAssistant/llama2-13b-megacode2-oasst## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenAssistant/llama2-13b-megacode2-oasst on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-22T00:14:46.537259(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
76da64e6160558bcf3c6623758fbddac6a2b4bdc
|
# Dataset Card for "LSC_Acronyms_LDA_topics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tomashs/LSC_Acronyms_LDA_topics
|
[
"region:us"
] |
2023-10-21T23:38:35+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "short_form", "dtype": "string"}, {"name": "long_form", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "topic_vector", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 262502596, "num_examples": 352720}, {"name": "validation", "num_bytes": 56048086, "num_examples": 75339}, {"name": "test", "num_bytes": 56294328, "num_examples": 75540}], "download_size": 117708613, "dataset_size": 374845010}}
|
2023-10-21T23:39:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "LSC_Acronyms_LDA_topics"
More Information needed
|
[
"# Dataset Card for \"LSC_Acronyms_LDA_topics\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"LSC_Acronyms_LDA_topics\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"LSC_Acronyms_LDA_topics\"\n\nMore Information needed"
] |
306ac35f7a7d61808bf392d757d181d991b01f4c
|
# Dataset Card for "c4-subset-for-hellaswag-approx"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
crumb/c4-subset-for-hellaswag-approx
|
[
"region:us"
] |
2023-10-21T23:40:42+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 618206614, "num_examples": 291894}], "download_size": 364064080, "dataset_size": 618206614}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-21T23:42:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "c4-subset-for-hellaswag-approx"
More Information needed
|
[
"# Dataset Card for \"c4-subset-for-hellaswag-approx\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"c4-subset-for-hellaswag-approx\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"c4-subset-for-hellaswag-approx\"\n\nMore Information needed"
] |
4cba49c8a8d97514083acf65137cca292bc7f224
|
# Dataset Card for "goodreads-llama-7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sahityas/goodreads-llama-7b
|
[
"region:us"
] |
2023-10-22T00:19:22+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 27512, "num_examples": 254}], "download_size": 15892, "dataset_size": 27512}}
|
2023-10-25T18:14:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "goodreads-llama-7b"
More Information needed
|
[
"# Dataset Card for \"goodreads-llama-7b\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"goodreads-llama-7b\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"goodreads-llama-7b\"\n\nMore Information needed"
] |
37286dcd2c0d09281946e6817038bdac2223ca78
|
# Dataset Card for "c4-subset-for-mmlu-approx"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
crumb/c4-subset-for-mmlu-approx
|
[
"region:us"
] |
2023-10-22T00:29:29+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 557757084, "num_examples": 262665}], "download_size": 339106702, "dataset_size": 557757084}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T00:31:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "c4-subset-for-mmlu-approx"
More Information needed
|
[
"# Dataset Card for \"c4-subset-for-mmlu-approx\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"c4-subset-for-mmlu-approx\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"c4-subset-for-mmlu-approx\"\n\nMore Information needed"
] |
a46f06a1f12a5a3586d92b10506608f412ab1862
|
# Dataset Card for "test_cvtGS3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fun1021183/test_cvtGS3
|
[
"region:us"
] |
2023-10-22T00:42:01+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15127712.0, "num_examples": 100}], "download_size": 15105334, "dataset_size": 15127712.0}}
|
2023-10-22T00:42:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "test_cvtGS3"
More Information needed
|
[
"# Dataset Card for \"test_cvtGS3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"test_cvtGS3\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"test_cvtGS3\"\n\nMore Information needed"
] |
7a1d4b7f40c99d0f329be1374f93f85e0086a28c
|
# Dataset Card for "autotrain-data-5qi2-42zz-zqmb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
senthil3226w/autotrain-data-5qi2-42zz-zqmb
|
[
"region:us"
] |
2023-10-22T01:02:41+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "A", "dtype": "string"}, {"name": "B", "dtype": "string"}, {"name": "C", "dtype": "string"}, {"name": "D", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}, {"name": "autotrain_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 902477, "num_examples": 975}, {"name": "validation", "num_bytes": 902477, "num_examples": 975}], "download_size": 1118976, "dataset_size": 1804954}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
|
2023-10-22T01:02:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotrain-data-5qi2-42zz-zqmb"
More Information needed
|
[
"# Dataset Card for \"autotrain-data-5qi2-42zz-zqmb\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotrain-data-5qi2-42zz-zqmb\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotrain-data-5qi2-42zz-zqmb\"\n\nMore Information needed"
] |
caec5ff376b0e5463ae67b7489e3a34cb24aa008
|
# Dataset Card for "new_dataset_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fishytorts/new_dataset_test
|
[
"region:us"
] |
2023-10-22T01:12:10+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "audio_names", "dtype": "string"}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 12388426.0, "num_examples": 6}], "download_size": 12391206, "dataset_size": 12388426.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T02:21:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "new_dataset_test"
More Information needed
|
[
"# Dataset Card for \"new_dataset_test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"new_dataset_test\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"new_dataset_test\"\n\nMore Information needed"
] |
e097be63b3a315bfe94706e1a1c7190697e4430a
|
# Dataset Card for "cv1_tGS3_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fun1021183/cvt1_GS3_test
|
[
"region:us"
] |
2023-10-22T01:30:03+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15191009.0, "num_examples": 100}, {"name": "test", "num_bytes": 1715702.0, "num_examples": 10}], "download_size": 174403, "dataset_size": 16906711.0}}
|
2023-10-22T01:31:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cv1_tGS3_test"
More Information needed
|
[
"# Dataset Card for \"cv1_tGS3_test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cv1_tGS3_test\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cv1_tGS3_test\"\n\nMore Information needed"
] |
c16d12e0a25679b190d221aee007b7a539a6c868
|
# Dataset Card for "autotrain-data-dfun-lk90-yhtx"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
senthil3226w/autotrain-data-dfun-lk90-yhtx
|
[
"region:us"
] |
2023-10-22T01:32:52+00:00
|
{"dataset_info": {"features": [{"name": "Context", "dtype": "string"}, {"name": "Answers", "dtype": "string"}, {"name": "Length", "dtype": "int64"}, {"name": "Language", "dtype": "string"}, {"name": "autotrain_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6008875, "num_examples": 200}, {"name": "validation", "num_bytes": 6008875, "num_examples": 200}], "download_size": 5303902, "dataset_size": 12017750}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
|
2023-10-22T01:32:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotrain-data-dfun-lk90-yhtx"
More Information needed
|
[
"# Dataset Card for \"autotrain-data-dfun-lk90-yhtx\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotrain-data-dfun-lk90-yhtx\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotrain-data-dfun-lk90-yhtx\"\n\nMore Information needed"
] |
f48525432e08f199ebe441e90531adfcc82c34ba
|
# Dataset Card for Evaluation run of jondurbin/airoboros-33b-gpt4
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/jondurbin/airoboros-33b-gpt4
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [jondurbin/airoboros-33b-gpt4](https://huggingface.co/jondurbin/airoboros-33b-gpt4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_jondurbin__airoboros-33b-gpt4",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-22T02:33:18.318001](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__airoboros-33b-gpt4/blob/main/results_2023-10-22T02-33-18.318001.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.16977768456375839,
"em_stderr": 0.003844830120605158,
"f1": 0.23901740771812038,
"f1_stderr": 0.003895286466470977,
"acc": 0.44846733402227057,
"acc_stderr": 0.010490680442459668
},
"harness|drop|3": {
"em": 0.16977768456375839,
"em_stderr": 0.003844830120605158,
"f1": 0.23901740771812038,
"f1_stderr": 0.003895286466470977
},
"harness|gsm8k|5": {
"acc": 0.12661106899166036,
"acc_stderr": 0.009159715283081096
},
"harness|winogrande|5": {
"acc": 0.7703235990528808,
"acc_stderr": 0.01182164560183824
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_jondurbin__airoboros-33b-gpt4
|
[
"region:us"
] |
2023-10-22T01:33:22+00:00
|
{"pretty_name": "Evaluation run of jondurbin/airoboros-33b-gpt4", "dataset_summary": "Dataset automatically created during the evaluation run of model [jondurbin/airoboros-33b-gpt4](https://huggingface.co/jondurbin/airoboros-33b-gpt4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jondurbin__airoboros-33b-gpt4\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-22T02:33:18.318001](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__airoboros-33b-gpt4/blob/main/results_2023-10-22T02-33-18.318001.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.16977768456375839,\n \"em_stderr\": 0.003844830120605158,\n \"f1\": 0.23901740771812038,\n \"f1_stderr\": 0.003895286466470977,\n \"acc\": 0.44846733402227057,\n \"acc_stderr\": 0.010490680442459668\n },\n \"harness|drop|3\": {\n \"em\": 0.16977768456375839,\n \"em_stderr\": 0.003844830120605158,\n \"f1\": 0.23901740771812038,\n \"f1_stderr\": 0.003895286466470977\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.12661106899166036,\n \"acc_stderr\": 0.009159715283081096\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7703235990528808,\n \"acc_stderr\": 0.01182164560183824\n }\n}\n```", "repo_url": "https://huggingface.co/jondurbin/airoboros-33b-gpt4", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_22T02_33_18.318001", "path": ["**/details_harness|drop|3_2023-10-22T02-33-18.318001.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-22T02-33-18.318001.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_22T02_33_18.318001", "path": ["**/details_harness|gsm8k|5_2023-10-22T02-33-18.318001.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-22T02-33-18.318001.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_22T02_33_18.318001", "path": ["**/details_harness|winogrande|5_2023-10-22T02-33-18.318001.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-22T02-33-18.318001.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_22T02_33_18.318001", "path": ["results_2023-10-22T02-33-18.318001.parquet"]}, {"split": "latest", "path": ["results_2023-10-22T02-33-18.318001.parquet"]}]}]}
|
2023-10-22T01:33:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of jondurbin/airoboros-33b-gpt4
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model jondurbin/airoboros-33b-gpt4 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-22T02:33:18.318001 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of jondurbin/airoboros-33b-gpt4",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model jondurbin/airoboros-33b-gpt4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T02:33:18.318001(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of jondurbin/airoboros-33b-gpt4",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model jondurbin/airoboros-33b-gpt4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-22T02:33:18.318001(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
22,
31,
170,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of jondurbin/airoboros-33b-gpt4## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model jondurbin/airoboros-33b-gpt4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-22T02:33:18.318001(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
76efa5710ea31f16490457fd654c62e2571e70a0
|
## For TEST
This is a dataset for testing.
It exists just for test purposes.
|
ziqin/for-test
|
[
"task_categories:text-classification",
"language:zh",
"license:apache-2.0",
"code",
"region:us"
] |
2023-10-22T02:07:18+00:00
|
{"language": ["zh"], "license": "apache-2.0", "task_categories": ["text-classification"], "tags": ["code"]}
|
2023-10-22T06:44:31+00:00
|
[] |
[
"zh"
] |
TAGS
#task_categories-text-classification #language-Chinese #license-apache-2.0 #code #region-us
|
## For TEST
This is a dataset for testing.
It exists just for test purposes.
|
[
"## For TEST\nthis is a dataset for test\njust for test..."
] |
[
"TAGS\n#task_categories-text-classification #language-Chinese #license-apache-2.0 #code #region-us \n",
"## For TEST\nthis is a dataset for test\njust for test..."
] |
[
32,
15
] |
[
"passage: TAGS\n#task_categories-text-classification #language-Chinese #license-apache-2.0 #code #region-us \n## For TEST\nthis is a dataset for test\njust for test..."
] |
10f83388340014f57bdac69c303f0d2276d65e21
|
This dataset was curated from ChatGPT with personalized prompts, following [our EMNLP 2023 Findings paper MIRACLE](https://github.com/LZY-the-boys/MIRACLE).
We offer three personality aspects (see the sketch after this list):
- 'a' = attitude (positive/negative)
- 'l' = language style (lyrical/plain)
- 'm' = mental characteristics (critical/emotional)
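As a quick reference, here is a minimal sketch of the aspect codes (only the letter-to-trait mapping comes from the list above; the idea of combining letters into a code string is an illustrative assumption):

```python
# Mapping of aspect letters to their trait poles, as listed in this card.
ASPECTS = {
    "a": ("positive", "negative"),   # attitude
    "l": ("lyrical", "plain"),       # language style
    "m": ("critical", "emotional"),  # mental characteristics
}

def describe(code: str) -> list[str]:
    """Expand a hypothetical aspect code like 'alm' into its trait dimensions."""
    return [f"{key}: {' / '.join(ASPECTS[key])}" for key in code if key in ASPECTS]

print(describe("alm"))
```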
|
lu-vae/Miracle-Conversation
|
[
"task_categories:conversational",
"language:en",
"region:us"
] |
2023-10-22T02:08:45+00:00
|
{"language": ["en"], "task_categories": ["conversational"]}
|
2024-01-21T01:33:30+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-conversational #language-English #region-us
|
This dataset was curated from ChatGPT with personalized prompts, following our EMNLP 2023 Findings paper MIRACLE.
We offer three personality aspects:
- 'a' = attitude (positive/negative)
- 'l' = language style (lyrical/plain)
- 'm' = mental characteristics (critical/emotional)
|
[] |
[
"TAGS\n#task_categories-conversational #language-English #region-us \n"
] |
[
20
] |
[
"passage: TAGS\n#task_categories-conversational #language-English #region-us \n"
] |
be1a57f457082801aa327a6894cf85bf1de54b6c
|
# Dataset Card for "Medical_chat_Llama-chat-50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
antareepdey/Medical_chat_Llama-chat-50k
|
[
"region:us"
] |
2023-10-22T02:15:55+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "Text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 50561249, "num_examples": 50000}], "download_size": 31132221, "dataset_size": 50561249}}
|
2023-10-22T02:16:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Medical_chat_Llama-chat-50k"
More Information needed
|
[
"# Dataset Card for \"Medical_chat_Llama-chat-50k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Medical_chat_Llama-chat-50k\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Medical_chat_Llama-chat-50k\"\n\nMore Information needed"
] |
d67f9c465d6bf09393e96ea3aff08a94247c3b5e
|
# Project Gutenberg top 1000 titles, Sept-Oct 2023
This is the data (title, author, monthly downloads) and [ember-v1](https://huggingface.co/llmrails/ember-v1) embeddings of the top 1000 most downloaded books on [Project Gutenberg](https://www.gutenberg.org).
All data is directly taken from Project Gutenberg's [Top 1000 page](https://www.gutenberg.org/browse/scores/top1000.php).
I am not affiliated with Project Gutenberg: I've just ported this here for convenience.
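As a quick sanity check, here is a minimal sketch of using the embeddings for a nearest-neighbour lookup (the split and the column names `title` and `embedding` are assumptions; check the actual schema first):

```python
import numpy as np
from datasets import load_dataset

# Assumed split and column names -- inspect ds.features before relying on them.
ds = load_dataset("jkeisling/project-gutenberg-top-books-oct-2023", split="train")
emb = np.asarray(ds["embedding"], dtype=np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # unit vectors, so dot = cosine

query = 0                           # index of some book of interest
scores = emb @ emb[query]           # cosine similarity to every other book
for i in np.argsort(-scores)[1:6]:  # top five neighbours, skipping the query
    print(f"{scores[i]:.3f}  {ds['title'][i]}")
```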
|
jkeisling/project-gutenberg-top-books-oct-2023
|
[
"license:other",
"region:us"
] |
2023-10-22T02:33:31+00:00
|
{"license": "other", "license_name": "project-gutenberg-license", "license_link": "https://gutenberg.org/policy/license.html"}
|
2023-10-22T02:50:13+00:00
|
[] |
[] |
TAGS
#license-other #region-us
|
# Project Gutenberg top 1000 titles, Sept-Oct 2023
This is the data (title, author, monthly downloads) and ember-v1 embeddings of the top 1000 most downloaded books on Project Gutenberg.
All data is directly taken from Project Gutenberg's Top 1000 page.
I am not affiliated with Project Gutenberg: I've just ported this here for convenience.
|
[
"# Project Gutenberg top 1000 titles, Sept-Oct 2023\n\n\n\nThis is the data (title, author, monthly downloads) and ember-v1 embeddings of the top 1000 most downloaded books on Project Gutenberg. \nAll data is directly taken from Project Gutenberg's Top 1000 page.\n\nI am not affiliated with Project Gutenberg: I've just ported this here for convenience."
] |
[
"TAGS\n#license-other #region-us \n",
"# Project Gutenberg top 1000 titles, Sept-Oct 2023\n\n\n\nThis is the data (title, author, monthly downloads) and ember-v1 embeddings of the top 1000 most downloaded books on Project Gutenberg. \nAll data is directly taken from Project Gutenberg's Top 1000 page.\n\nI am not affiliated with Project Gutenberg: I've just ported this here for convenience."
] |
[
11,
85
] |
[
"passage: TAGS\n#license-other #region-us \n# Project Gutenberg top 1000 titles, Sept-Oct 2023\n\n\n\nThis is the data (title, author, monthly downloads) and ember-v1 embeddings of the top 1000 most downloaded books on Project Gutenberg. \nAll data is directly taken from Project Gutenberg's Top 1000 page.\n\nI am not affiliated with Project Gutenberg: I've just ported this here for convenience."
] |
dbc800aae25eb50ed673776ca44aa6a048e595e7
|
SODA-A comprises 2,513 high-resolution images of aerial scenes, with 872,069 instances annotated with oriented rectangle boxes across 9 classes.
- [Website](https://shaunyuan22.github.io/SODA/)

|
satellite-image-deep-learning/SODA-A
|
[
"license:mit",
"remote-sensing",
"oriented-bounding-boxes",
"object-detection",
"region:us"
] |
2023-10-22T02:38:59+00:00
|
{"license": "mit", "tags": ["remote-sensing", "oriented-bounding-boxes", "object-detection"]}
|
2023-10-22T04:19:07+00:00
|
[] |
[] |
TAGS
#license-mit #remote-sensing #oriented-bounding-boxes #object-detection #region-us
|
SODA-A comprises 2,513 high-resolution images of aerial scenes, with 872,069 instances annotated with oriented rectangle boxes across 9 classes.
- Website
!SODA Image
|
[] |
[
"TAGS\n#license-mit #remote-sensing #oriented-bounding-boxes #object-detection #region-us \n"
] |
[
31
] |
[
"passage: TAGS\n#license-mit #remote-sensing #oriented-bounding-boxes #object-detection #region-us \n"
] |
b1ace46f58d5fa5ec1b5b0e39adeb0dc4847fbab
|
# Dataset Card for "SlimOrca100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mattma1970/SlimOrca100k
|
[
"region:us"
] |
2023-10-22T03:11:34+00:00
|
{"dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}, {"name": "weight", "dtype": "float64"}]}], "splits": [{"name": "train", "num_bytes": 181795884, "num_examples": 100000}], "download_size": 97226388, "dataset_size": 181795884}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T03:12:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "SlimOrca100k"
More Information needed
|
[
"# Dataset Card for \"SlimOrca100k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"SlimOrca100k\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"SlimOrca100k\"\n\nMore Information needed"
] |
b2ee7506ebc563ccee816188af50f55675fc413b
|
# Dataset Card for "CAQA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nbalepur/CAQA
|
[
"region:us"
] |
2023-10-22T04:30:34+00:00
|
{"dataset_info": {"features": [{"name": "concept", "dtype": "string"}, {"name": "concept_category", "dtype": "string"}, {"name": "regions_data", "struct": [{"name": "Caribbean", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Central Africa", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Central America", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Central Asia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Central Europe", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "East Africa", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "East Asia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "East Europe", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Latin America", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Middle East", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "North Africa", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "North Europe", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "South Asia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "South Europe", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Southeast Asia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Sub-Saharan Africa", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "West Africa", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "West Asia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "West Europe", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}]}, {"name": "continents_data", "struct": [{"name": "Africa", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Asia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Europe", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "North America", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Oceania", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "South America", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}]}, {"name": "countries_data", "struct": [{"name": "Afghanistan", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Albania", "struct": [{"name": 
"assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Algeria", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Angola", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Argentina", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Armenia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Australia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Austria", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Azerbaijan", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Bahamas", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Bangladesh", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Belarus", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Belgium", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Belize", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Benin", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Bhutan", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Bolivia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Bosnia and Herzegovina", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Botswana", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Brazil", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Brunei", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Bulgaria", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Burkina Faso", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Burundi", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Cambodia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Cameroon", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Canada", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Chad", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Chile", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "China", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Colombia", "struct": 
[{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Congo (DRC)", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Costa Rica", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Croatia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Cuba", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Cyprus", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Czechia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Denmark", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Dominican Republic", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Ecuador", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Egypt", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "El Salvador", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "England", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Eritrea", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Estonia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Eswatini", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Ethiopia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Fiji", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Finland", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "France", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Gabon", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Germany", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Ghana", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Greece", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Greenland", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Guatemala", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Guinea", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Guyana", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Haiti", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Honduras", "struct": 
[{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Hong Kong", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Hungary", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Iceland", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "India", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Indonesia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Iran", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Iraq", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Ireland", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Israel", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Italy", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Ivory Coast", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Jamaica", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Japan", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Jordan", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Kazakhstan", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Kenya", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Kuwait", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Kyrgyzstan", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Laos", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Latvia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Lebanon", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Lesotho", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Liberia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Libya", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Lithuania", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Luxembourg", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Macao", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Madagascar", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Malawi", "struct": [{"name": 
"assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Malaysia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Mali", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Mauritania", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Mexico", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Moldova", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Mongolia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Montenegro", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Morocco", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Mozambique", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Myanmar", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Namibia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Nepal", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Netherlands", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "New Caledonia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "New Zealand", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Nicaragua", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Niger", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Nigeria", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "North Korea", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "North Macedonia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Norway", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Oman", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Pakistan", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Palestine", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Panama", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Papua New Guinea", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Paraguay", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Peru", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": 
"Philippines", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Poland", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Portugal", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Puerto Rico", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Qatar", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Republic of South Africa", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Republic of South Sudan", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Republic of the Congo", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Romania", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Russia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Rwanda", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Saudi Arabia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Scotland", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Senegal", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Serbia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Sierra Leone", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Singapore", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Slovakia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Slovenia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Somalia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "South Korea", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Spain", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Sri Lanka", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Sudan", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Suriname", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Sweden", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Switzerland", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Syria", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Taiwan", "struct": [{"name": "assertions", "sequence": "string"}, 
{"name": "scores", "sequence": "float64"}]}, {"name": "Tajikistan", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Tanzania", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Thailand", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "The Gambia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Timor-Leste", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Togo", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Trinidad and Tobago", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Tunisia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Turkey", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Turkmenistan", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Uganda", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Ukraine", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "United Arab Emirates", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "United Kingdom", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "United States", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Uruguay", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Uzbekistan", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Vanuatu", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Venezuela", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Vietnam", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Wales", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Yemen", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Zambia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Zimbabwe", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}]}, {"name": "states_data", "struct": [{"name": "Alabama", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Alaska", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Arizona", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Arkansas", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, 
{"name": "California", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Colorado", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Connecticut", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Delaware", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "District of Columbia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Florida", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Georgia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Hawaii", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Idaho", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Illinois", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Indiana", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Iowa", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Kansas", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Kentucky", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Louisiana", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Maine", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Maryland", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Massachusetts", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Michigan", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Minnesota", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Mississippi", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Missouri", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Montana", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Nebraska", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Nevada", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "New Hampshire", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "New Jersey", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "New Mexico", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "New York", "struct": [{"name": "assertions", "sequence": "string"}, {"name": 
"scores", "sequence": "float64"}]}, {"name": "North Carolina", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "North Dakota", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Ohio", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Oklahoma", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Oregon", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Pennsylvania", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Rhode Island", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "South Carolina", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "South Dakota", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Tennessee", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Texas", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Utah", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Vermont", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Virginia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Washington", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "West Virginia", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Wisconsin", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}, {"name": "Wyoming", "struct": [{"name": "assertions", "sequence": "string"}, {"name": "scores", "sequence": "float64"}]}]}, {"name": "num_regions_data", "dtype": "int64"}, {"name": "num_continents_data", "dtype": "int64"}, {"name": "num_countries_data", "dtype": "int64"}, {"name": "num_states_data", "dtype": "int64"}, {"name": "total_num", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 40802603, "num_examples": 17118}], "download_size": 8234348, "dataset_size": 40802603}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T04:30:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "CAQA"
More Information needed
|
[
"# Dataset Card for \"CAQA\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"CAQA\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"CAQA\"\n\nMore Information needed"
] |
cbefeb7544c1bba687e9df22abb41a414f772aa1
|
# Dataset Card for "expertllama-chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
diwank/expertllama-chatml
|
[
"region:us"
] |
2023-10-22T04:30:51+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "chatml", "list": [{"name": "content", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 100609582, "num_examples": 52002}], "download_size": 50005152, "dataset_size": 100609582}}
|
2023-10-22T04:31:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "expertllama-chatml"
More Information needed
|
[
"# Dataset Card for \"expertllama-chatml\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"expertllama-chatml\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"expertllama-chatml\"\n\nMore Information needed"
] |
0799384b73cdcc6b276bec82c520ce4e1fcec309
|
# Dataset Card for "70fd4f5c"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/70fd4f5c
|
[
"region:us"
] |
2023-10-22T04:34:10+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 204, "num_examples": 10}], "download_size": 1419, "dataset_size": 204}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T04:34:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "70fd4f5c"
More Information needed
|
[
"# Dataset Card for \"70fd4f5c\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"70fd4f5c\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"70fd4f5c\"\n\nMore Information needed"
] |
7d2b7112771a0005000b9ba2b865dbe8352a1a8c
|
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
|
gopikrsmscs/torch-issues
|
[
"task_categories:feature-extraction",
"size_categories:1K<n<10K",
"license:apache-2.0",
"region:us"
] |
2023-10-22T04:37:37+00:00
|
{"license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["feature-extraction"], "pretty_name": "Pytorch Github Issues Metadata"}
|
2023-10-23T16:12:03+00:00
|
[] |
[] |
TAGS
#task_categories-feature-extraction #size_categories-1K<n<10K #license-apache-2.0 #region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
|
[
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
[
"TAGS\n#task_categories-feature-extraction #size_categories-1K<n<10K #license-apache-2.0 #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
[
38,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] |
[
"passage: TAGS\n#task_categories-feature-extraction #size_categories-1K<n<10K #license-apache-2.0 #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
f867dacd9f45031728181b95e2a5faf27334e2e3
|
# SVGen Vector Images Dataset
## Overview
SVGen is a comprehensive dataset containing 300,000 SVG vector codes from a diverse set of sources including SVG-Repo, Noto Emoji, and InstructSVG. The dataset aims to provide a wide range of SVG files suitable for various applications including web development, design, and machine learning research.
## Data Fields
- **input**: The name or label of the SVG item
- **output**: SVG code containing the vector representation
- **description**: Brief description of the SVG item
- **source**: The original source or collection of the SVG
- **license**: Licensing terms for using the SVG
## Data Sources
- [SVG-Repo](https://www.svgrepo.com/)
- [Noto Emoji](https://huggingface.co/datasets/darknoon/noto-emoji-vector-512-svg)
- [InstructSVG](https://huggingface.co/datasets/uwunion/instruct_svg)
## Usage
The dataset is particularly useful for tasks such as icon classification, style transfer, image-to-vector translation, and much more. It serves as a rich resource for machine learning models that require high-quality SVG data.
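As a concrete starting point, the snippet below is a minimal sketch of loading the dataset with the Hugging Face `datasets` library and filtering on the `license` field. The `train` split name and the CC0 filter criterion are assumptions, not guarantees about this repository.

```python
# Minimal sketch: load SVGen and inspect/filter records.
# The "train" split name and the CC0 criterion are assumptions.
from datasets import load_dataset

ds = load_dataset("umuthopeyildirim/svgen-500k", split="train")

# Each record carries the fields described above.
example = ds[0]
print(example["input"], "-", example["license"])

# Keep only rows whose license field mentions CC0 (hypothetical criterion).
cc0 = ds.filter(lambda x: "CC0" in (x["license"] or ""))
print(len(cc0), "permissively licensed SVGs")
```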
## Help Wanted
I wanted to use BLIP to generate a `description` for each SVG, but it's not working well. If you have any ideas, please let me know. Here is the [GitHub repository](https://github.com/umuthopeyildirim/SVGenDataset), which also contains links to Colab notebooks.
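One plausible pipeline, sketched below under stated assumptions, rasterizes each SVG first (here with `cairosvg`) and then captions the raster with a BLIP checkpoint from `transformers`. Both the rasterizer and the checkpoint name are assumptions rather than project tooling, and caption quality on flat, icon-like images is exactly the open problem noted above.

```python
# Hedged sketch: rasterize an SVG, then caption it with BLIP.
# cairosvg and the checkpoint name are assumptions, not project tooling.
import io

import cairosvg
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def caption_svg(svg_code: str) -> str:
    # Render the SVG to an in-memory PNG, then open it as an RGB image.
    png_bytes = cairosvg.svg2png(bytestring=svg_code.encode("utf-8"),
                                 output_width=512, output_height=512)
    image = Image.open(io.BytesIO(png_bytes)).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(out[0], skip_special_tokens=True)
```

If captions come out generic ("a black and white logo"), prompt-conditioned captioning or a heuristic built on the `input` name may work better; treat this strictly as a sketch.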
## License
The dataset incorporates SVG files with varying licenses. Users are advised to consult the `license` field of each record for specific usage rights.
## Contribution Guidelines
Contributions are welcome! If you find any issues or would like to add more SVGs to the dataset, please submit a pull request or open an issue in the repository.
## Acknowledgements
A huge thanks to SVGRepo, Noto Emoji, and InstructSVG for providing the SVG files that make up this dataset.
For more details and to download the dataset, visit the project repository.
|
umuthopeyildirim/svgen-500k
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc",
"SVG",
"vector",
"region:us"
] |
2023-10-22T04:47:23+00:00
|
{"language": ["en"], "license": "cc", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "SVGen Dataset", "tags": ["SVG", "vector"]}
|
2023-10-22T04:50:00+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-cc #SVG #vector #region-us
|
# SVGen Vector Images Dataset
## Overview
SVGen is a comprehensive dataset containing 300,000 SVG vector codes from a diverse set of sources including SVG-Repo, Noto Emoji, and InstructSVG. The dataset aims to provide a wide range of SVG files suitable for various applications including web development, design, and machine learning research.
## Data Fields
- input: The name or label of the SVG item
- output: SVG code containing the vector representation
- description: Brief description of the SVG item
- source: The original source or collection of the SVG
- license: Licensing terms for using the SVG
## Data Sources
- SVG-Repo
- Noto Emoji
- InstructSVG
## Usage
The dataset is particularly useful for tasks such as icon classification, style transfer, image-to-vector translation, and much more. It serves as a rich resource for machine learning models that require high-quality SVG data.
## Help Wanted
I wanted to use BLIP to generate a 'description' for each SVG, but it's not working well. If you have any ideas, please let me know. Here is the GitHub repository, which also contains links to Colab notebooks.
## License
The dataset incorporates SVG files with varying licenses. Users are advised to consult the 'license' field of each record for specific usage rights.
## Contribution Guidelines
Contributions are welcome! If you find any issues or would like to add more SVGs to the dataset, please submit a pull request or open an issue in the repository.
## Acknowledgements
A huge thanks to SVGRepo, Noto Emoji, and InstructSVG for providing the SVG files that make up this dataset.
For more details and to download the dataset, visit the project repository.
|
[
"# SVGen Vector Images Dataset",
"## Overview\n\nSVGen is a comprehensive dataset containing 300,000 SVG vector codes from a diverse set of sources including SVG-Repo, Noto Emoji, and InstructSVG. The dataset aims to provide a wide range of SVG files suitable for various applications including web development, design, and machine learning research.",
"## Data Fields\n\n- input: The name or label of the SVG item\n- output: SVG code containing the vector representation\n- description: Brief description of the SVG item\n- source: The original source or collection of the SVG\n- license: Licensing terms for using the SVG",
"## Data Sources\n\n- SVG-Repo\n- Noto Emoji\n- InstructSVG",
"## Usage\n\nThe dataset is particularly useful for tasks such as icon classification, style transfer, image-to-vector translation, and much more. It serves as a rich resource for machine learning models that require high-quality SVG data.",
"## Help Wanted\n\nI wanted to use BILP to generate 'description''s for each SVG, but It's not working well. If you have any ideas, please let me know. Here is the Github and it also contains Colab notebook links.",
"## License\n\nThe dataset incorporates SVG files with varying licenses. Users are advised to consult the 'license' field of each record for specific usage rights.",
"## Contribution Guidelines\n\nContributions are welcome! If you find any issues or would like to add more SVGs to the dataset, please submit a pull request or open an issue in the repository.",
"## Acknowledgements\n\nA huge thanks to SVGRepo, Noto Emoji, and InstructSVG for providing the SVG files that make up this dataset.\n\nFor more details and to download the dataset, visit the project repository."
] |
[
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-cc #SVG #vector #region-us \n",
"# SVGen Vector Images Dataset",
"## Overview\n\nSVGen is a comprehensive dataset containing 300,000 SVG vector codes from a diverse set of sources including SVG-Repo, Noto Emoji, and InstructSVG. The dataset aims to provide a wide range of SVG files suitable for various applications including web development, design, and machine learning research.",
"## Data Fields\n\n- input: The name or label of the SVG item\n- output: SVG code containing the vector representation\n- description: Brief description of the SVG item\n- source: The original source or collection of the SVG\n- license: Licensing terms for using the SVG",
"## Data Sources\n\n- SVG-Repo\n- Noto Emoji\n- InstructSVG",
"## Usage\n\nThe dataset is particularly useful for tasks such as icon classification, style transfer, image-to-vector translation, and much more. It serves as a rich resource for machine learning models that require high-quality SVG data.",
"## Help Wanted\n\nI wanted to use BILP to generate 'description''s for each SVG, but It's not working well. If you have any ideas, please let me know. Here is the Github and it also contains Colab notebook links.",
"## License\n\nThe dataset incorporates SVG files with varying licenses. Users are advised to consult the 'license' field of each record for specific usage rights.",
"## Contribution Guidelines\n\nContributions are welcome! If you find any issues or would like to add more SVGs to the dataset, please submit a pull request or open an issue in the repository.",
"## Acknowledgements\n\nA huge thanks to SVGRepo, Noto Emoji, and InstructSVG for providing the SVG files that make up this dataset.\n\nFor more details and to download the dataset, visit the project repository."
] |
[
44,
8,
72,
63,
20,
53,
57,
37,
45,
54
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-cc #SVG #vector #region-us \n# SVGen Vector Images Dataset## Overview\n\nSVGen is a comprehensive dataset containing 300,000 SVG vector codes from a diverse set of sources including SVG-Repo, Noto Emoji, and InstructSVG. The dataset aims to provide a wide range of SVG files suitable for various applications including web development, design, and machine learning research.## Data Fields\n\n- input: The name or label of the SVG item\n- output: SVG code containing the vector representation\n- description: Brief description of the SVG item\n- source: The original source or collection of the SVG\n- license: Licensing terms for using the SVG## Data Sources\n\n- SVG-Repo\n- Noto Emoji\n- InstructSVG## Usage\n\nThe dataset is particularly useful for tasks such as icon classification, style transfer, image-to-vector translation, and much more. It serves as a rich resource for machine learning models that require high-quality SVG data.## Help Wanted\n\nI wanted to use BILP to generate 'description''s for each SVG, but It's not working well. If you have any ideas, please let me know. Here is the Github and it also contains Colab notebook links.## License\n\nThe dataset incorporates SVG files with varying licenses. Users are advised to consult the 'license' field of each record for specific usage rights.## Contribution Guidelines\n\nContributions are welcome! If you find any issues or would like to add more SVGs to the dataset, please submit a pull request or open an issue in the repository.## Acknowledgements\n\nA huge thanks to SVGRepo, Noto Emoji, and InstructSVG for providing the SVG files that make up this dataset.\n\nFor more details and to download the dataset, visit the project repository."
] |
51407daab35fc49ae3a2a43f7d4ec05d012114eb
|
# crumb/c4-benchfilter-nano
A 278k-sample derivation of the first 3M samples of the C4 dataset, intended for cheap, short continued pretraining of language models to optimize benchmark scores without sacrificing generalization or generative modelling unrelated to chat or 'instruct' data.
The dataset keeps the estimated top 10% of samples by length-normalized n-gram overlap (the mean of tri-, quad-, and penta-gram overlaps) with each of the selected benchmark datasets (arc, truthful_qa, hellaswag, mmlu, humaneval), estimated from 1k samples of each benchmark, within the first 3M samples of C4. The top-scoring samples for each benchmark are then filtered again to the top 30% of scores, combined, and exact-match de-duplicated. Finally, the top 3% of scores and samples shorter than 200 characters are removed, because such samples likely contain exact large n-token matches by chance, such as exact dates or times that aren't actually relevant to the data.\*
\*Upon further examination, some of these samples are still present throughout the data, albeit at a much lower frequency than before. You might benefit from using `dataset.filter(lambda x: x['score'] > thresh)` for some threshold, but you risk losing high-quality samples as well; this tradeoff should be well examined before training.
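The exact scoring code is not published with this card, so the sketch below is only a plausible reading of the recipe above: `benchmark_ngrams[n]` is assumed to hold the n-grams collected from 1k samples of one benchmark, and whitespace tokenization plus per-token normalization are assumptions.

```python
# Hedged sketch of a length-normalized n-gram overlap score in the spirit
# of the recipe above. Tokenization and normalization are assumptions;
# this is not the code that produced the dataset.
from collections import Counter

def ngram_counts(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def overlap_score(sample, benchmark_ngrams):
    """Mean tri-/quad-/penta-gram overlap with one benchmark, per token."""
    tokens = sample.split()
    if not tokens:
        return 0.0
    per_n = []
    for n in (3, 4, 5):
        grams = ngram_counts(tokens, n)
        hits = sum(count for gram, count in grams.items()
                   if gram in benchmark_ngrams[n])
        per_n.append(hits / len(tokens))  # assumed form of length normalization
    return sum(per_n) / len(per_n)
```

A sample's final `score` could then be, for example, its maximum `overlap_score` across the five benchmarks, after which the top-10%/top-30%/top-3% cuts described above would apply.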
|
crumb/c4-benchfilter-nano
|
[
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"language_creators:found",
"size_categories:100K<n<1M",
"source_datasets:c4",
"language:en",
"license:odc-by",
"region:us"
] |
2023-10-22T05:20:10+00:00
|
{"language_creators": ["found"], "language": ["en"], "license": "odc-by", "size_categories": ["100K<n<1M"], "source_datasets": ["c4"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 373897649.51453334, "num_examples": 278115}], "download_size": 242478448, "dataset_size": 373897649.51453334}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-22T18:22:56+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #language_creators-found #size_categories-100K<n<1M #source_datasets-c4 #language-English #license-odc-by #region-us
|
# crumb/c4-benchfilter-nano
A 278k-sample derivation of the first 3M samples of the C4 dataset, intended for cheap, short continued pretraining of language models to optimize benchmark scores without sacrificing generalization or generative modelling unrelated to chat or 'instruct' data.
The dataset keeps the estimated top 10% of samples by length-normalized n-gram overlap (the mean of tri-, quad-, and penta-gram overlaps) with each of the selected benchmark datasets (arc, truthful_qa, hellaswag, mmlu, humaneval), estimated from 1k samples of each benchmark, within the first 3M samples of C4. The top-scoring samples for each benchmark are then filtered again to the top 30% of scores, combined, and exact-match de-duplicated. Finally, the top 3% of scores and samples shorter than 200 characters are removed, because such samples likely contain exact large n-token matches by chance, such as exact dates or times that aren't actually relevant to the data.\*

\*Upon further examination, some of these samples are still present throughout the data, albeit at a much lower frequency than before. You might benefit from using 'dataset.filter(lambda x: x["score"] > thresh)' for some threshold, but you risk losing high-quality samples as well; this tradeoff should be well examined before training.
|
[
"# crumb/c4-benchfilter-nano\n\nA 278k sample derivation of the first 3M samples from the C4 dataset for a cheap and short continued pretraining for language models to optimize for benchmark scores without sacrificing generalization and generative modelling unrelated to chat or 'instruct' data. \n\nThe estimated top 10% of highest estimated length normalized ngram (mean of tri, quad, and penta-gram) overlaps for each of the \nselected benchmark datasets (arc, truthful_qa, hellaswag, mmlu, humaneval) based \non 1k samples, within the first 3M samples of C4. The top scoring sample \ndatasets for each benchmark are then filtered again for top 30% scores and \ncombined and exact-match de-duplicated. Then the top 3% scores and samples less than 200 characters long are removed\nbecause they likely have exact large n-token matches by chance such as exact \ndates or times that aren't actually relevant to the data.\\* \n\n\\*Upon further examination, some of these samples are still present throughout the data, albeit at much lower frequency than before, you might benefit from using 'URL(x['score'] > thresh)' for some threshold, but you risk losing high quality samples as well, this tradeoff should be well-examined before training."
] |
[
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #language_creators-found #size_categories-100K<n<1M #source_datasets-c4 #language-English #license-odc-by #region-us \n",
"# crumb/c4-benchfilter-nano\n\nA 278k sample derivation of the first 3M samples from the C4 dataset for a cheap and short continued pretraining for language models to optimize for benchmark scores without sacrificing generalization and generative modelling unrelated to chat or 'instruct' data. \n\nThe estimated top 10% of highest estimated length normalized ngram (mean of tri, quad, and penta-gram) overlaps for each of the \nselected benchmark datasets (arc, truthful_qa, hellaswag, mmlu, humaneval) based \non 1k samples, within the first 3M samples of C4. The top scoring sample \ndatasets for each benchmark are then filtered again for top 30% scores and \ncombined and exact-match de-duplicated. Then the top 3% scores and samples less than 200 characters long are removed\nbecause they likely have exact large n-token matches by chance such as exact \ndates or times that aren't actually relevant to the data.\\* \n\n\\*Upon further examination, some of these samples are still present throughout the data, albeit at much lower frequency than before, you might benefit from using 'URL(x['score'] > thresh)' for some threshold, but you risk losing high quality samples as well, this tradeoff should be well-examined before training."
] |
[
92,
315
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #language_creators-found #size_categories-100K<n<1M #source_datasets-c4 #language-English #license-odc-by #region-us \n# crumb/c4-benchfilter-nano\n\nA 278k sample derivation of the first 3M samples from the C4 dataset for a cheap and short continued pretraining for language models to optimize for benchmark scores without sacrificing generalization and generative modelling unrelated to chat or 'instruct' data. \n\nThe estimated top 10% of highest estimated length normalized ngram (mean of tri, quad, and penta-gram) overlaps for each of the \nselected benchmark datasets (arc, truthful_qa, hellaswag, mmlu, humaneval) based \non 1k samples, within the first 3M samples of C4. The top scoring sample \ndatasets for each benchmark are then filtered again for top 30% scores and \ncombined and exact-match de-duplicated. Then the top 3% scores and samples less than 200 characters long are removed\nbecause they likely have exact large n-token matches by chance such as exact \ndates or times that aren't actually relevant to the data.\\* \n\n\\*Upon further examination, some of these samples are still present throughout the data, albeit at much lower frequency than before, you might benefit from using 'URL(x['score'] > thresh)' for some threshold, but you risk losing high quality samples as well, this tradeoff should be well-examined before training."
] |