sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
a6ddcc042f519e63e4007ea52cc887164a8bd8ed | julien-c/label-studio-my-dogs | [
"license:artistic-2.0",
"label-studio",
"region:us"
] | 2022-09-09T17:17:17+00:00 | {"license": "artistic-2.0", "tags": ["label-studio"]} | 2022-09-12T07:11:58+00:00 |
|
c77c333b7fccd5643138b200a02064979a0db135 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-staging-eval-autoevaluate__zero-shot-classification-sample-autoevalu-40d85c-155 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-09T18:03:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-09-22T01:59:16+00:00 |
922eca60e4c424a62beca76ab414ddc4dbeb1039 | # AutoTrain Dataset for project: donut-vs-croissant
## Dataset Description
This dataset has been automatically processed by AutoTrain for project donut-vs-croissant.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<512x512 RGB PIL image>",
"target": 0
},
{
"image": "<512x512 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=2, names=['croissant', 'donut'], id=None)"
}
```
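A minimal sketch of loading these splits and decoding the `target` label (assuming the repository is loadable with its default configuration):
```python
from datasets import load_dataset

dataset = load_dataset("victor/autotrain-data-donut-vs-croissant")
sample = dataset["train"][0]
print(sample["image"].size)  # a PIL image, e.g. (512, 512)
label_names = dataset["train"].features["target"].names
print(label_names[sample["target"]])  # 'croissant' or 'donut'
```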
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 133 |
| valid | 362 |
| victor/autotrain-data-donut-vs-croissant | [
"task_categories:image-classification",
"region:us"
] | 2022-09-09T19:29:58+00:00 | {"task_categories": ["image-classification"]} | 2022-09-09T19:32:23+00:00 |
83f056bddc1d071b67a9eeef8abf768b99802e74 | StankyDanko/testing-kp | [
"license:afl-3.0",
"region:us"
] | 2022-09-10T03:20:26+00:00 | {"license": "afl-3.0"} | 2022-09-10T03:34:01+00:00 |
|
7591ee27200f230a06b1066664860beebd995151 | Altarbeast/opart | [
"license:artistic-2.0",
"region:us"
] | 2022-09-10T03:49:28+00:00 | {"license": "artistic-2.0"} | 2022-09-10T04:44:12+00:00 |
|
667c94a72c056ca935f03871b7ad1e0356cff53b | ankitkupadhyay/mnli_hindi | [
"license:apache-2.0",
"region:us"
] | 2022-09-10T04:04:44+00:00 | {"license": "apache-2.0"} | 2022-09-10T04:47:14+00:00 |
|
77b6a33d91b4e42eb6d75fd4aa30bfb9e3dbd9dc | StankyDanko/testing-kp2 | [
"license:afl-3.0",
"region:us"
] | 2022-09-10T04:05:00+00:00 | {"license": "afl-3.0"} | 2022-09-10T04:05:22+00:00 |
|
802128c3e157ae57972d51c7744de0ebd2334d3f | GantaGoodsAI/Test | [
"license:afl-3.0",
"region:us"
] | 2022-09-10T08:15:33+00:00 | {"license": "afl-3.0"} | 2022-09-10T08:42:47+00:00 |
|
671cdca3749b70e3e3b4f23e36428f1b1890ab70 |
# Cannabis Tests, Curated by Cannlytics
<div style="margin-top:1rem; margin-bottom: 1rem;">
<img width="240px" alt="" src="https://firebasestorage.googleapis.com/v0/b/cannlytics.appspot.com/o/public%2Fimages%2Fdatasets%2Fcannabis_tests%2Fcannabis_tests_curated_by_cannlytics.png?alt=media&token=22e4d1da-6b30-4c3f-9ff7-1954ac2739b2">
</div>
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Data Collection and Normalization](#data-collection-and-normalization)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [License](#license)
- [Citation](#citation)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** <https://github.com/cannlytics/cannlytics>
- **Repository:** <https://huggingface.co/datasets/cannlytics/cannabis_tests>
- **Point of Contact:** <[email protected]>
### Dataset Summary
This dataset is a collection of public cannabis lab test results parsed by [`CoADoc`](https://github.com/cannlytics/cannlytics/tree/main/cannlytics/data/coas), a certificate of analysis (COA) parsing tool.
## Dataset Structure
The dataset is partitioned into the various sources of lab results.
| Subset | Source | Observations |
|--------|--------|--------------|
| `rawgarden` | Raw Garden | 2,667 |
| `mcrlabs` | MCR Labs | Coming soon! |
| `psilabs` | PSI Labs | Coming soon! |
| `sclabs` | SC Labs | Coming soon! |
| `washington` | Washington State | Coming soon! |
### Data Instances
You can load the `details` for each of the dataset files. For example:
```py
from datasets import load_dataset
# Download Raw Garden lab result details.
dataset = load_dataset('cannlytics/cannabis_tests', 'rawgarden')
details = dataset['details']
assert len(details) > 0
print('Downloaded %i observations.' % len(details))
```
> Note: Configurations for `results` and `values` are planned. For now, you can create these data with `CoADoc().save(details, out_file)`.
### Data Fields
Below is a non-exhaustive list of the fields used to standardize the various data that are encountered. You may expect to encounter these fields in the parsed COA data.
| Field | Example| Description |
|-------|-----|-------------|
| `analyses` | ["cannabinoids"] | A list of analyses performed on a given sample. |
| `{analysis}_method` | "HPLC" | The method used for each analysis. |
| `{analysis}_status` | "pass" | The pass, fail, or N/A status for pass / fail analyses. |
| `coa_urls` | [{"url": "", "filename": ""}] | A list of certificate of analysis (CoA) URLs. |
| `date_collected` | 2022-04-20T04:20 | An ISO-formatted time when the sample was collected. |
| `date_tested` | 2022-04-20T16:20 | An ISO-formatted time when the sample was tested. |
| `date_received` | 2022-04-20T12:20 | An ISO-formatted time when the sample was received. |
| `distributor` | "Your Favorite Dispo" | The name of the product distributor, if applicable. |
| `distributor_address` | "Under the Bridge, SF, CA 55555" | The distributor address, if applicable. |
| `distributor_street` | "Under the Bridge" | The distributor street, if applicable. |
| `distributor_city` | "SF" | The distributor city, if applicable. |
| `distributor_state` | "CA" | The distributor state, if applicable. |
| `distributor_zipcode` | "55555" | The distributor zip code, if applicable. |
| `distributor_license_number` | "L2Stat" | The distributor license number, if applicable. |
| `images` | [{"url": "", "filename": ""}] | A list of image URLs for the sample. |
| `lab_results_url` | "https://cannlytics.com/results" | A URL to the sample results online. |
| `producer` | "Grow Tent" | The producer of the sampled product. |
| `producer_address` | "3rd & Army, SF, CA 55555" | The producer's address. |
| `producer_street` | "3rd & Army" | The producer's street. |
| `producer_city` | "SF" | The producer's city. |
| `producer_state` | "CA" | The producer's state. |
| `producer_zipcode` | "55555" | The producer's zipcode. |
| `producer_license_number` | "L2Calc" | The producer's license number. |
| `product_name` | "Blue Rhino Pre-Roll" | The name of the product. |
| `lab_id` | "Sample-0001" | A lab-specific ID for the sample. |
| `product_type` | "flower" | The type of product. |
| `batch_number` | "Order-0001" | A batch number for the sample or product. |
| `metrc_ids` | ["1A4060300002199000003445"] | A list of relevant Metrc IDs. |
| `metrc_lab_id` | "1A4060300002199000003445" | The Metrc ID associated with the lab sample. |
| `metrc_source_id` | "1A4060300002199000003445" | The Metrc ID associated with the sampled product. |
| `product_size` | 2000 | The size of the product in milligrams. |
| `serving_size` | 1000 | An estimated serving size in milligrams. |
| `servings_per_package` | 2 | The number of servings per package. |
| `sample_weight` | 1 | The weight of the product sample in grams. |
| `results` | [{...},...] | A list of results, see below for result-specific fields. |
| `status` | "pass" | The overall pass / fail status for all contaminant screening analyses. |
| `total_cannabinoids` | 14.20 | The analytical total of all cannabinoids measured. |
| `total_thc` | 14.00 | The analytical total of THC and THCA. |
| `total_cbd` | 0.20 | The analytical total of CBD and CBDA. |
| `total_terpenes` | 0.42 | The sum of all terpenes measured. |
| `results_hash` | "{sha256-hash}" | An HMAC of the sample's `results` JSON signed with Cannlytics' public key, `"cannlytics.eth"`. |
| `sample_id` | "{sha256-hash}" | A generated ID to uniquely identify the `producer`, `product_name`, and `results`. |
| `sample_hash` | "{sha256-hash}" | An HMAC of the entire sample JSON signed with Cannlytics' public key, `"cannlytics.eth"`. |
<!-- | `strain_name` | "Blue Rhino" | A strain name, if specified. Otherwise, can be attempted to be parsed from the `product_name`. | -->
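The exact serialization behind `results_hash` and `sample_hash` is not documented here; below is a minimal sketch of how such an HMAC might be reproduced, assuming HMAC-SHA256 over sorted-key JSON with the public key string `"cannlytics.eth"` as the HMAC key (both the serialization and the key handling are assumptions):
```py
import hashlib
import hmac
import json

def hash_sample(sample: dict, key: str = "cannlytics.eth") -> str:
    """Sketch: HMAC-SHA256 of a sample's JSON; serialization is an assumption."""
    message = json.dumps(sample, sort_keys=True, default=str).encode()
    return hmac.new(key.encode(), message, hashlib.sha256).hexdigest()
```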
Each result can contain the following fields.
| Field | Example| Description |
|-------|--------|-------------|
| `analysis` | "pesticides" | The analysis used to obtain the result. |
| `key` | "pyrethrins" | A standardized key for the result analyte. |
| `name` | "Pyrethrins" | The lab's internal name for the result analyte. |
| `value` | 0.42 | The value of the result. |
| `mg_g` | 0.00000042 | The value of the result in milligrams per gram. |
| `units` | "ug/g" | The units for the result `value`, `limit`, `lod`, and `loq`. |
| `limit` | 0.5 | A pass / fail threshold for contaminant screening analyses. |
| `lod` | 0.01 | The limit of detection for the result analyte. Values below the `lod` are typically reported as `ND`. |
| `loq` | 0.1 | The limit of quantification for the result analyte. Values above the `lod` but below the `loq` are typically reported as `<LOQ`. |
| `status` | "pass" | The pass / fail status for contaminant screening analyses. |
### Data Splits
The data is split into `details`, `results`, and `values` data. Configurations for `results` and `values` are planned. For now, you can create these data with:
```py
from cannlytics.data.coas import CoADoc
from datasets import load_dataset
import pandas as pd
# Download Raw Garden lab result details.
repo = 'cannlytics/cannabis_tests'
dataset = load_dataset(repo, 'rawgarden')
details = dataset['details']
# Save the data locally with "Details", "Results", and "Values" worksheets.
outfile = 'details.xlsx'
parser = CoADoc()
parser.save(details.to_pandas(), outfile)
# Read the values.
values = pd.read_excel(outfile, sheet_name='Values')
# Read the results.
results = pd.read_excel(outfile, sheet_name='Results')
```
<!-- Training data is used for training your models. Validation data is used for evaluating your trained models, to help you determine a final model. Test data is used to evaluate your final model. -->
## Dataset Creation
### Curation Rationale
Certificates of analysis (CoAs) are abundant for cannabis cultivators, processors, retailers, and consumers too, but the data is often locked away. Rich, valuable laboratory data so close, yet so far away! CoADoc puts these vital data points in your hands by parsing PDFs and URLs, finding all the data, standardizing the data, and cleanly returning the data to you.
### Source Data
| Data Source | URL |
|-------------|-----|
| MCR Labs Test Results | <https://reports.mcrlabs.com> |
| PSI Labs Test Results | <https://results.psilabs.org/test-results/> |
| Raw Garden Test Results | <https://rawgarden.farm/lab-results/> |
| SC Labs Test Results | <https://client.sclabs.com/> |
| Washington State Lab Test Results | <https://lcb.app.box.com/s/e89t59s0yb558tjoncjsid710oirqbgd> |
#### Data Collection and Normalization
You can recreate the dataset using the open source algorithms in the repository. First clone the repository:
```
git clone https://huggingface.co/datasets/cannlytics/cannabis_tests
```
You can then install the algorithm Python (3.9+) requirements:
```
cd cannabis_tests
pip install -r requirements.txt
```
Then you can run all of the data-collection algorithms:
```
python algorithms/main.py
```
Or you can run each algorithm individually. For example:
```
python algorithms/get_results_mcrlabs.py
```
In the `algorithms` directory, you can find the data collection scripts described in the table below.
| Algorithm | Organization | Description |
|-----------|---------------|-------------|
| `get_results_mcrlabs.py` | MCR Labs | Get lab results published by MCR Labs. |
| `get_results_psilabs.py` | PSI Labs | Get historic lab results published by PSI Labs. |
| `get_results_rawgarden.py` | Raw Garden | Get lab results Raw Garden publishes for their products. |
| `get_results_sclabs.py` | SC Labs | Get lab results published by SC Labs. |
| `get_results_washington.py` | Washington State | Get historic lab results obtained through a FOIA request in Washington State. |
### Personal and Sensitive Information
The dataset includes public addresses and contact information for related cannabis licensees. It is important to take care to use these data points in a legal manner.
## Considerations for Using the Data
### Social Impact of Dataset
Arguably, there is substantial social impact that could result from the study of cannabis; therefore, researchers and data consumers alike should take the utmost care in the use of this dataset.
### Discussion of Biases
Cannlytics is a for-profit data and analytics company that primarily serves cannabis businesses. The data are not randomly collected and thus sampling bias should be taken into consideration.
### Other Known Limitations
The data represents only a subset of the population of cannabis lab results. Non-standard values are coded as follows.
| Actual | Coding |
|--------|--------|
| `'ND'` | `0.000000001` |
| `'No detection in 1 gram'` | `0.000000001` |
| `'Negative/1g'` | `0.000000001` |
| `'PASS'` | `0.000000001` |
| `'<LOD'` | `0.00000001` |
| `'< LOD'` | `0.00000001` |
| `'<LOQ'` | `0.0000001` |
| `'< LOQ'` | `0.0000001` |
| `'<LLOQ'` | `0.0000001` |
| `'≥ LOD'` | `10001` |
| `'NR'` | `None` |
| `'N/A'` | `None` |
| `'na'` | `None` |
| `'NT'` | `None` |
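A minimal sketch of applying this coding when cleaning parsed results (the mapping mirrors the table above; the helper name is illustrative):
```py
# Illustrative mapping of non-standard result values to their numeric codings.
NON_STANDARD_CODINGS = {
    "ND": 0.000000001,
    "No detection in 1 gram": 0.000000001,
    "Negative/1g": 0.000000001,
    "PASS": 0.000000001,
    "<LOD": 0.00000001,
    "< LOD": 0.00000001,
    "<LOQ": 0.0000001,
    "< LOQ": 0.0000001,
    "<LLOQ": 0.0000001,
    "≥ LOD": 10001,
    "NR": None,
    "N/A": None,
    "na": None,
    "NT": None,
}

def decode_result_value(raw):
    """Return the numeric coding for a non-standard value, else parse a float."""
    if isinstance(raw, str) and raw.strip() in NON_STANDARD_CODINGS:
        return NON_STANDARD_CODINGS[raw.strip()]
    try:
        return float(raw)
    except (TypeError, ValueError):
        return None
```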
## Additional Information
### Dataset Curators
Curated by [🔥Cannlytics](https://cannlytics.com)<br>
<[email protected]>
### License
```
Copyright (c) 2022 Cannlytics and the Cannabis Data Science Team
The files associated with this dataset are licensed under a
Creative Commons Attribution 4.0 International license.
You can share, copy and modify this dataset so long as you give
appropriate credit, provide a link to the CC BY license, and
indicate if changes were made, but you may not do so in a way
that suggests the rights holder has endorsed you or your use of
the dataset. Note that further permission may be required for
any content within the dataset that is identified as belonging
to a third party.
```
### Citation
Please cite the following if you use the code examples in your research:
```bibtex
@misc{cannlytics2022,
title={Cannabis Data Science},
author={Skeate, Keegan and O'Sullivan-Sutherland, Candace},
journal={https://github.com/cannlytics/cannabis-data-science},
year={2022}
}
```
### Contributions
Thanks to [🔥Cannlytics](https://cannlytics.com), [@candy-o](https://github.com/candy-o), [@hcadeaux](https://huggingface.co/hcadeaux), [@keeganskeate](https://github.com/keeganskeate), [The CESC](https://thecesc.org), and the entire [Cannabis Data Science Team](https://meetup.com/cannabis-data-science/members) for their contributions.
| cannlytics/cannabis_tests | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:cc-by-4.0",
"cannabis",
"lab results",
"tests",
"region:us"
] | 2022-09-10T15:54:44+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "license": ["cc-by-4.0"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "pretty_name": "cannabis_tests", "tags": ["cannabis", "lab results", "tests"]} | 2023-02-22T15:48:43+00:00 |
2c6c46871b025d47c494f0cfc2235dcf2cadc1fd |
# Dataset Card for Clinical Trials's Reason to Stop
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.opentargets.org
- **Repository:** https://github.com/LesyaR/stopReasons
- **Paper:**
- **Point of Contact:** [email protected]
### Dataset Summary
This dataset contains a curated classification of more than 5000 reasons why a clinical trial has suffered an early stop.
The text has been extracted from clinicaltrials.gov, the largest resource of clinical trial information. The text has been curated by members of the Open Targets organisation, a project aimed at providing data relevant to drug development.
All 17 possible classes have been carefully defined:
- Business_Administrative
- Another_Study
- Negative
- Study_Design
- Invalid_Reason
- Ethical_Reason
- Insufficient_Data
- Insufficient_Enrollment
- Study_Staff_Moved
- Endpoint_Met
- Regulatory
- Logistics_Resources
- Safety_Sideeffects
- No_Context
- Success
- Interim_Analysis
- Covid19
### Supported Tasks and Leaderboards
Multi class classification
### Languages
English
## Dataset Structure
### Data Instances
```json
{'text': 'Due to company decision to focus resources on a larger, controlled study in this patient population."',
'label': 'Another_Study'}
```
### Data Fields
`text`: contains the reason for the CT early stop
`label`: contains one of the 17 defined classes
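A minimal loading sketch with `datasets` (the split name is an assumption):
```python
from datasets import load_dataset

dataset = load_dataset("opentargets/clinical_trial_reason_to_stop", split="train")
example = dataset[0]
print(example["text"], "->", example["label"])
```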
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This dataset has an Apache 2.0 license.
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@ireneisdoomed](https://github.com/ireneisdoomed) for adding this dataset. | opentargets/clinical_trial_reason_to_stop | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"bio",
"research papers",
"clinical trial",
"drug development",
"region:us"
] | 2022-09-10T17:20:47+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "multi-label-classification"], "pretty_name": "clinical_trial_reason_to_stop", "tags": ["bio", "research papers", "clinical trial", "drug development"]} | 2022-12-12T08:57:19+00:00 |
d05e581aa93337daa3728ba7ef4c9882221b1491 | This is a floral dataset to train text inversion in Stable diffusion and being added here for future reference and additional implementation. | jags/floral | [
"license:mit",
"region:us"
] | 2022-09-10T17:41:30+00:00 | {"license": "mit"} | 2022-09-10T18:03:16+00:00 |
eb8605ffaf086f92c9f960c79be3afa91a0c336a | ptr6695/images | [
"region:us"
] | 2022-09-10T17:58:53+00:00 | {} | 2022-09-10T18:01:03+00:00 |
|
0fd633506841e8ac7c5333e199192b3d9013ea66 | Shurius/Public_TRAIN | [
"license:afl-3.0",
"region:us"
] | 2022-09-10T19:19:41+00:00 | {"license": "afl-3.0"} | 2022-09-10T19:32:55+00:00 |
|
0851765c5fb915f2cf1fcee32bcd96094440f83e | Nesboen/Style-Marc-Allante | [
"license:afl-3.0",
"region:us"
] | 2022-09-10T22:22:57+00:00 | {"license": "afl-3.0"} | 2022-09-11T00:24:49+00:00 |
|
698a77b2d5f0a87ed997a78aa71588a5d9c556d3 | ankitkupadhyay/xnli_hindi | [
"license:apache-2.0",
"region:us"
] | 2022-09-11T02:11:44+00:00 | {"license": "apache-2.0"} | 2022-09-11T02:12:15+00:00 |
|
bc32fc5ba6a332b3a3a0c8ad663b91e21a223240 | lmiro/testing | [
"license:afl-3.0",
"region:us"
] | 2022-09-11T02:33:41+00:00 | {"license": "afl-3.0"} | 2022-09-11T02:35:11+00:00 |
|
3fac5b225104ab46a1b74fed72920cc854f7bb75 | bendito999/garfield-plush-pizza-pics | [
"license:mit",
"region:us"
] | 2022-09-11T08:29:22+00:00 | {"license": "mit"} | 2022-09-11T13:34:24+00:00 |
|
93f548596663c5459ad33c179ae74e2d785ffbae | # Controlled Text Reduction
This dataset contains Controlled Text Reduction triplets: a document, a summary, and the spans in the document that cover the summary.
The task input consists of a document with pre-selected spans in it ("highlights"). The output is a text covering all and only the highlighted content.
The script downloads the data from the original [GitHub repository](https://github.com/lovodkin93/Controlled_Text_Reduction).
### Format
The dataset contains the following important features:
* `doc_text` - the input text.
* `summary_text` - the output text.
* `highlight_spans` - the spans in the input text (the doc_text) that lead to the output text (the summary_text).
```json
{'doc_text': 'The motion picture industry\'s most coveted award...with 32.',
'summary_text': 'The Oscar, created 60 years ago by MGM...awarded person (32).',
'highlight_spans':'[[0, 48], [50, 55], [57, 81], [184, 247], ..., [953, 975], [1033, 1081]]'}
```
where for each document-summary pair, we save the spans in the input document that lead to the summary.
Notice that the dataset consists of two subsets:
1. `DUC-2001-2002` - which is further divided into 3 splits (train, validation and test).
2. `CNN-DM` - which has a single split.
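A minimal sketch of loading one subset and reconstructing the highlighted text from `highlight_spans` (the configuration and split names follow the description above; parsing the span string with `ast.literal_eval` is an assumption):
```python
import ast

from datasets import load_dataset

# Load the DUC subset and rebuild the highlighted content of one document.
dataset = load_dataset("biu-nlp/Controlled-Text-Reduction-dataset", "DUC-2001-2002")
example = dataset["train"][0]
spans = ast.literal_eval(example["highlight_spans"])  # spans are stored as a string
highlighted = " ".join(example["doc_text"][start:end] for start, end in spans)
print(highlighted)
```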
### Citation
If you find the Controlled Text Reduction dataset useful in your research, please cite the following paper:
```
@misc{https://doi.org/10.48550/arxiv.2210.13449,
doi = {10.48550/ARXIV.2210.13449},
url = {https://arxiv.org/abs/2210.13449},
author = {Slobodkin, Aviv and Roit, Paul and Hirsch, Eran and Ernst, Ori and Dagan, Ido},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Controlled Text Reduction},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Zero v1.0 Universal}
}
``` | biu-nlp/Controlled-Text-Reduction-dataset | [
"arxiv:2210.13449",
"region:us"
] | 2022-09-11T08:44:55+00:00 | {} | 2022-10-25T12:25:49+00:00 |
f768484b9b80ae209d10ea0224681a45d5436d5c | blacknightbV1/test | [
"license:cc-by-nd-4.0",
"region:us"
] | 2022-09-11T11:47:59+00:00 | {"license": "cc-by-nd-4.0"} | 2022-09-11T11:47:59+00:00 |
|
bd0926e4ef0e4dc290cd6512a170660e15f0e619 | 0xZoki/daedaland-test | [
"license:other",
"region:us"
] | 2022-09-11T12:36:13+00:00 | {"license": "other"} | 2022-09-11T12:50:13+00:00 |
|
568efa79ccdda4c4aeda7f6e48220dc8cd7f3953 | Dataset extracted from https://www.cdc.gov/coronavirus/2019-ncov/hcp/faq.html#Treatment-and-Management.
| CShorten/CDC-COVID-FAQ | [
"license:afl-3.0",
"region:us"
] | 2022-09-11T14:42:18+00:00 | {"license": "afl-3.0"} | 2022-09-11T14:42:46+00:00 |
3a6f97c193ec1e9dc29afda5edd0325e28af9f8d | remyremy/glasssherlock | [
"license:afl-3.0",
"region:us"
] | 2022-09-11T19:24:55+00:00 | {"license": "afl-3.0"} | 2022-09-11T19:24:55+00:00 |
|
3e6692930c656756b2308aea87d4bf9e3832390c | sz4qwe/1 | [
"license:afl-3.0",
"region:us"
] | 2022-09-11T22:45:29+00:00 | {"license": "afl-3.0"} | 2022-09-11T22:45:29+00:00 |
|
b8b8ebb1af25699dab8f6e630051823bc27ff875 |
# Artic Dataset
This dataset was created using artic API, and the descriptions were scraped from the artic.edu website. The scraping code is shared at [github.com/abhisharsinha/gsoc](https://github.com/abhisharsinha/gsoc/)
The images are hosted at this [google cloud bucket](https://storage.googleapis.com/mys-released-models/gsoc/artic-dataset.zip). The image filenames correspond to `image_id` in the tabular dataset.
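A minimal sketch of fetching and unpacking the image archive (the URL is taken from this card; the file extension of the unpacked images is an assumption):
```python
import urllib.request
import zipfile
from pathlib import Path

# Download and unpack the hosted image archive.
url = "https://storage.googleapis.com/mys-released-models/gsoc/artic-dataset.zip"
archive = Path("artic-dataset.zip")
urllib.request.urlretrieve(url, archive)
with zipfile.ZipFile(archive) as zf:
    zf.extractall("artic-images")
# Filenames under artic-images/ correspond to `image_id` in the tabular dataset.
```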
The description was only available for selected artworks. `full_description` is the whole text scraped from the description page. `description` is the first paragraph of the `full_description`. | abhishars/artic-dataset | [
"license:cc",
"region:us"
] | 2022-09-12T02:30:58+00:00 | {"license": "cc"} | 2023-01-05T14:41:46+00:00 |
f0bcb0f64866553125cf79c87621268a6535febd | # AutoTrain Dataset for project: climate-text-classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project climate-text-classification.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "So the way our model has currently been built, we would look to pay down debt with our free cash flow generation that we're planning on generating this year, which is around $20 million to $30 million.",
"target": 0
},
{
"text": "So we don't see any big drama on the long-term FMPs as a result of this.",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['0', '1'], id=None)"
}
```
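A minimal sketch of loading the data and mapping the `target` index back to its class name (assuming the default configuration is loadable):
```python
from datasets import load_dataset

dataset = load_dataset("prathap-reddy/autotrain-data-climate-text-classification")
example = dataset["train"][0]
target_feature = dataset["train"].features["target"]
print(example["text"][:80], "->", target_feature.int2str(example["target"]))
```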
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1919 |
| valid | 481 |
| prathap-reddy/autotrain-data-climate-text-classification | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2022-09-12T04:57:07+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2022-09-12T05:07:52+00:00 |
47075264b1be2a84a27431aa0ffb2728c575c91a | amarjeet-op/mybase | [
"region:us"
] | 2022-09-12T06:16:13+00:00 | {} | 2022-09-12T23:34:14+00:00 |
|
95f9d9aaa25de591641b7351ec6d2cb11820cb86 | BAJIRAO/dataset | [
"license:afl-3.0",
"region:us"
] | 2022-09-12T10:30:45+00:00 | {"license": "afl-3.0"} | 2022-09-12T10:30:45+00:00 |
|
b9860f54ee2427fb647f8950fb02018a485f0c94 | This dataset is not an official one, therefore should not be used without care! | Pakulski/ELI5-test | [
"region:us"
] | 2022-09-12T11:34:06+00:00 | {} | 2022-09-24T13:34:52+00:00 |
ffdf22f42c87f1f9c0dfe9eee88ba29b2ef7122b | edc505/pokemon | [
"license:bsd-3-clause",
"region:us"
] | 2022-09-12T11:55:14+00:00 | {"license": "bsd-3-clause"} | 2022-09-12T13:27:53+00:00 |
|
53b699798eb1ca8aa69f1fdcc8e9d8416ab00d86 |
# Dataset card for Encyclopaedia Britannica Illustrated
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://data.nls.uk/data/digitised-collections/encyclopaedia-britannica/](https://data.nls.uk/data/digitised-collections/encyclopaedia-britannica/)
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| biglam/encyclopaedia_britannica_illustrated | [
"task_categories:image-classification",
"annotations_creators:expert-generated",
"size_categories:1K<n<10K",
"license:cc0-1.0",
"region:us"
] | 2022-09-12T16:40:02+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": [], "language": [], "license": ["cc0-1.0"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["image-classification"], "task_ids": [], "pretty_name": "Encyclopaedia Britannica Illustrated", "tags": []} | 2023-02-22T18:40:02+00:00 |
ad277049fa091a96d26c60b02602b0886c6f976f | pmmucsd/stella | [
"license:mit",
"region:us"
] | 2022-09-12T17:12:13+00:00 | {"license": "mit"} | 2022-09-12T17:24:41+00:00 |
|
06928317703bcfa6099c7fc0f13e11bb295e7769 |
# LAION-Aesthetics :: CLIP → UMAP
This dataset is a CLIP (text) → UMAP embedding of the [LAION-Aesthetics dataset](https://laion.ai/blog/laion-aesthetics/) - specifically the [`improved_aesthetics_6plus` version](https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus), which filters the full dataset to images with scores of > 6 under the "aesthetic" filtering model.
Thanks LAION for this amazing corpus!
---
The dataset here includes coordinates for 3x separate UMAP fits using different values for the `n_neighbors` parameter - `10`, `30`, and `60` - which are broken out as separate columns with different suffixes:
- `n_neighbors=10` → (`x_nn10`, `y_nn10`)
- `n_neighbors=30` → (`x_nn30`, `y_nn30`)
- `n_neighbors=60` → (`x_nn60`, `y_nn60`)
### `nn10`

### `nn30`

### `nn60`
(The version from [Twitter](https://twitter.com/clured/status/1565399157606580224).)

## Pipeline
The script for producing this can be found here:
https://github.com/davidmcclure/loam-viz/blob/laion/laion.py
And is very simple - just using the `openai/clip-vit-base-patch32` model out-of-the-box to encode the text captions:
```python
@app.command()
def clip(
src: str,
dst: str,
text_col: str = 'TEXT',
limit: Optional[int] = typer.Option(None),
batch_size: int = typer.Option(512),
):
"""Embed with CLIP."""
df = pd.read_parquet(src)
if limit:
df = df.head(limit)
tokenizer = CLIPTokenizerFast.from_pretrained('openai/clip-vit-base-patch32')
model = CLIPTextModel.from_pretrained('openai/clip-vit-base-patch32')
model = model.to(device)
texts = df[text_col].tolist()
embeds = []
for batch in chunked_iter(tqdm(texts), batch_size):
enc = tokenizer(
batch,
return_tensors='pt',
padding=True,
truncation=True,
)
enc = enc.to(device)
with torch.no_grad():
res = model(**enc)
embeds.append(res.pooler_output.to('cpu'))
embeds = torch.cat(embeds).numpy()
np.save(dst, embeds)
print(embeds.shape)
```
Then using `cuml.GaussianRandomProjection` to do an initial squeeze to 64d (which gets the embedding tensor small enough to fit onto a single GPU for the UMAP) -
```python
@app.command()
def random_projection(src: str, dst: str, dim: int = 64):
"""Random projection on an embedding matrix."""
rmm.reinitialize(managed_memory=True)
embeds = np.load(src)
rp = cuml.GaussianRandomProjection(n_components=dim)
embeds = rp.fit_transform(embeds)
np.save(dst, embeds)
print(embeds.shape)
```
And then `cuml.UMAP` to get from 64d -> 2d -
```python
@app.command()
def umap(
df_src: str,
embeds_src: str,
dst: str,
n_neighbors: int = typer.Option(30),
n_epochs: int = typer.Option(1000),
negative_sample_rate: int = typer.Option(20),
):
"""UMAP to 2d."""
rmm.reinitialize(managed_memory=True)
df = pd.read_parquet(df_src)
embeds = np.load(embeds_src)
embeds = embeds.astype('float16')
print(embeds.shape)
print(embeds.dtype)
reducer = cuml.UMAP(
n_neighbors=n_neighbors,
n_epochs=n_epochs,
negative_sample_rate=negative_sample_rate,
verbose=True,
)
x = reducer.fit_transform(embeds)
df['x'] = x[:,0]
df['y'] = x[:,1]
df.to_parquet(dst)
print(df)
``` | dclure/laion-aesthetics-12m-umap | [
"language_creators:found",
"multilinguality:monolingual",
"language:en",
"license:mit",
"laion",
"stable-diffuson",
"text2img",
"region:us"
] | 2022-09-12T19:18:45+00:00 | {"annotations_creators": [], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "laion-aesthetics-12m-umap", "tags": ["laion", "stable-diffuson", "text2img"]} | 2022-09-12T20:45:15+00:00 |
f808a5c45e9a7e7dad0865df2fcb74ece47553d5 | George6584/newTest | [
"license:afl-3.0",
"region:us"
] | 2022-09-13T00:02:44+00:00 | {"license": "afl-3.0"} | 2022-09-13T02:19:40+00:00 |
|
8b1525b3fcddc02bdce5907fefab08055ecac419 | George6584/testing | [
"license:afl-3.0",
"region:us"
] | 2022-09-13T02:39:21+00:00 | {"license": "afl-3.0"} | 2022-09-13T02:52:57+00:00 |
|
83dfffd480c1284345d2a1f573276ab2b060adbb | Mokello/samelin | [
"license:afl-3.0",
"region:us"
] | 2022-09-13T03:21:55+00:00 | {"license": "afl-3.0"} | 2022-09-13T03:23:39+00:00 |
|
5c8d08d69a9d54741c252ba8bdd8653ee32f52b6 |
# CARES - A Corpus of Anonymised Radiological Evidences in Spanish 📑🏥
CARES is a high-quality text resource manually labeled with ICD-10 codes and reviewed by radiologists. These types of resources are essential for developing automatic text classification tools as they are necessary for training and fine-tuning our computational systems.
The CARES corpus has been manually annotated using the ICD-10 ontology, which stands for the 10th version of the International Classification of Diseases. For each radiological report, a minimum of one code and a maximum of 9 codes were assigned, while the average number of codes per text is 2.15 with a standard deviation of 1.12.
The corpus was additionally preprocessed in order to make its format coherent with the automatic text classification task. Considering the hierarchical structure of the ICD-10 ontology, each sub-code was mapped to its respective code and chapter, obtaining two new sets of labels for each report. The entire CARES collection contains 6,907 sub-code annotations among the 3,219 radiologic reports. There are 223 unique ICD-10 sub-codes within the annotations, which were mapped to 156 unique ICD-10 codes and 16 unique chapters of the cited ontology. | chizhikchi/CARES_random | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:es",
"license:afl-3.0",
"radiology",
"biomedicine",
"ICD-10",
"region:us"
] | 2022-09-13T09:32:00+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["es"], "license": ["afl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "pretty_name": "CARES", "tags": ["radiology", "biomedicine", "ICD-10"]} | 2022-11-23T09:36:01+00:00 |
8aba365da1ab4195b44d78a9b4fa44f626a9578a | ostello/KaluSarai | [
"license:afl-3.0",
"region:us"
] | 2022-09-13T09:41:16+00:00 | {"license": "afl-3.0"} | 2022-09-13T09:41:16+00:00 |
|
e2e03c91c385e8d1a758389cdb20cf9c024f6cbf |
# Dataset Card for recycling-dataset
### Dataset Summary
This is a recycling dataset that can be used for image classification. It has 11 categories:
- aluminium
- batteries
- cardboard
- disposable plates
- glass
- hard plastic
- paper
- paper towel
- polystyrene
- soft plastics
- takeaway cups
It was scraped from DuckDuckGo using this tool: https://pypi.org/project/jmd-imagescraper/
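A minimal sketch of how such a scrape might look with `jmd_imagescraper` (the labels and counts here are illustrative, not the exact queries used):
```python
from pathlib import Path

from jmd_imagescraper.core import duckduckgo_search

# Download a handful of images per category into ./recycling/<label>/.
root = Path("recycling")
for label in ["aluminium", "batteries", "cardboard", "glass", "paper"]:
    duckduckgo_search(root, label.capitalize(), f"{label} recycling", max_results=50)
```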
| viola77data/recycling-dataset | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"recycling",
"image-classification",
"region:us"
] | 2022-09-13T11:43:15+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "pretty_name": "recycling-dataset", "tags": ["recycling", "image-classification"]} | 2022-09-13T12:17:15+00:00 |
a844ce44c89757a67d4a82f0a090aeae878cddd5 | mrmoor/cti-corpus-raw | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"task_ids:slot-filling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"license:unknown",
"cti",
"cybert threat intelligence",
"it-security",
"apt",
"region:us"
] | 2022-09-13T13:03:55+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": [], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["fill-mask", "text-generation"], "task_ids": ["masked-language-modeling", "slot-filling", "language-modeling"], "pretty_name": "cti-corpus", "tags": ["cti", "cybert threat intelligence", "it-security", "apt"]} | 2022-09-14T17:54:05+00:00 |
|
7cf0f10b5b0de082ef69ed77d4a82d12c64457fe | kenthug/kusakanmuri | [
"license:afl-3.0",
"region:us"
] | 2022-09-13T14:24:53+00:00 | {"license": "afl-3.0"} | 2022-09-13T14:24:53+00:00 |
|
d0062a5f203029c0820c5bfb6fb6c4912688a522 | huynguyen208/test_data | [
"license:unknown",
"region:us"
] | 2022-09-13T14:26:31+00:00 | {"license": "unknown"} | 2022-09-19T14:31:09+00:00 |
|
0443841c9c89d542de4ab68bce7686c988f00a12 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: JNK789/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-emotion-default-42ff1e-1454153801 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-13T17:00:38+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "JNK789/distilbert-base-uncased-finetuned-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-09-13T17:01:07+00:00 |
f6262027a8cd9dabdab1189297b48be98141f397 | vedi/Images | [
"region:us"
] | 2022-09-13T19:17:25+00:00 | {} | 2022-09-13T19:23:19+00:00 |
|
3285a4f2eec94a80b1a1c26aab282fccba42bdb6 | stargaret/noir | [
"license:artistic-2.0",
"region:us"
] | 2022-09-13T19:20:28+00:00 | {"license": "artistic-2.0"} | 2022-09-13T19:22:30+00:00 |
|
553a78b67a0be0b2de4ac6ea2ea91624cf4de5d1 | zachhurst/tiki-mug-1 | [
"license:afl-3.0",
"region:us"
] | 2022-09-13T20:01:06+00:00 | {"license": "afl-3.0"} | 2022-09-13T20:01:51+00:00 |
|
e3054439375c30e9e0cf0308c274efed194a98c6 | # Dataset Card for CUAD
This is a modified version of original [CUAD](https://huggingface.co/datasets/cuad/blob/main/README.md) which trims the question to its label form.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Contract Understanding Atticus Dataset](https://www.atticusprojectai.org/cuad)
- **Repository:** [Contract Understanding Atticus Dataset](https://github.com/TheAtticusProject/cuad/)
- **Paper:** [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
- **Point of Contact:** [Atticus Project Team]([email protected])
### Dataset Summary
Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions.
CUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at https://arxiv.org/abs/2103.06268. Code for replicating the results and the trained model can be found at https://github.com/TheAtticusProject/cuad.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset contains samples in English only.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [44],
"text": ['DISTRIBUTOR AGREEMENT']
},
"context": 'EXHIBIT 10.6\n\n DISTRIBUTOR AGREEMENT\n\n THIS DISTRIBUTOR AGREEMENT (the "Agreement") is made by and between Electric City Corp., a Delaware corporation ("Company") and Electric City of Illinois LLC ("Distributor") this 7th day of September, 1999...',
"id": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT__Document Name_0",
"question": "Highlight the parts (if any) of this contract related to "Document Name" that should be reviewed by a lawyer. Details: The name of the contract",
"title": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT"
}
```
### Data Fields
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
This dataset is split into train and test sets. The number of samples in each set is given below:
| | Train | Test |
| ----- | ------ | ---- |
| CUAD | 22450 | 4182 |
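A minimal sketch of loading this variant and reading one annotation (field names follow the instance shown above):
```python
from datasets import load_dataset

cuad = load_dataset("chenghao/cuad_qa")
row = cuad["train"][0]
print(row["question"])
for text, start in zip(row["answers"]["text"], row["answers"]["answer_start"]):
    print(f"  span @ {start}: {text[:60]}")
```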
## Dataset Creation
### Curation Rationale
A highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring.
Contract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies.
To reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, CUAD is introduced, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack.
### Source Data
#### Initial Data Collection and Normalization
The CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet.
| Type of Contract | # of Docs |
|---|---|
| Affiliate Agreement | 10 |
| Agency Agreement | 13 |
| Collaboration/Cooperation Agreement | 26 |
| Co-Branding Agreement | 22 |
| Consulting Agreement | 11 |
| Development Agreement | 29 |
| Distributor Agreement | 32 |
| Endorsement Agreement | 24 |
| Franchise Agreement | 15 |
| Hosting Agreement | 20 |
| IP Agreement | 17 |
| Joint Venture Agreement | 23 |
| License Agreement | 33 |
| Maintenance Agreement | 34 |
| Manufacturing Agreement | 17 |
| Marketing Agreement | 17 |
| Non-Compete/No-Solicit/Non-Disparagement Agreement | 3 |
| Outsourcing Agreement | 18 |
| Promotion Agreement | 12 |
| Reseller Agreement | 12 |
| Service Agreement | 28 |
| Sponsorship Agreement | 31 |
| Supply Agreement | 18 |
| Strategic Alliance Agreement | 32 |
| Transportation Agreement | 13 |
| **TOTAL** | **510** |
#### Who are the source language producers?
The contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at https://www.sec.gov/edgar. Please read the Datasheet at https://www.atticusprojectai.org/ for information on the intended use and limitations of the CUAD.
### Annotations
#### Annotation process
The labeling process included multiple steps to ensure accuracy:
1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours.
2. Law Student Label: law students conducted manual contract review and labeling in eBrevia.
3. Key Word Search: law students conducted keyword search in eBrevia to capture additional categories that have been missed during the “Student Label” step.
4. Category-by-Category Report Review: law students exported the labeled clauses into reports, review each clause category-by-category and highlight clauses that they believe are mislabeled.
5. Attorney Review: experienced attorneys reviewed the category-by-category report with students comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly.
6. eBrevia Extras Review. Attorneys and students used eBrevia to generate a list of “extras”, which are clauses that eBrevia AI tool identified as responsive to a category but not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process is repeated until all or substantially all of the “extras” are incorrect labels.
7. Final Report: The final report was exported into a CSV file. Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer.
#### Who are the annotators?
Answered in above section.
### Personal and Sensitive Information
Some clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. Such redaction may show up as asterisks (\*\*\*) or underscores (\_\_\_) or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \_\_ 2020” would be “1/[]/2020”).
For any categories that require an answer of “Yes/No”, annotators include full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators select text for the full sentence, under the instruction of “from period to period”.
For the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”.
Some sentences in the files include confidential legends that are not part of the contracts. An example of such confidential legend is as follows:
THIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [* * *] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION.
Some sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category.
To address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol "<omitted>" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol "<omitted>”. Another example is for “Effective Date”, the contract includes a sentence “This Agreement is effective as of the date written above” that appears after the date “January 1, 2010”. The annotation is as follows: “January 1, 2010 <omitted> This Agreement is effective as of the date written above.”
Because the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Attorney Advisors
Wei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu
Law Student Leaders
John Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran
Law Student Contributors
Scott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin
Technical Advisors & Contributors
Dan Hendrycks, Collin Burns, Spencer Ball, Anya Chen
### Licensing Information
CUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use.
The creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR.
Privacy Policy & Disclaimers
The categories or the contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending comments and suggestions to [email protected]. Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved.
The use of CUAD is subject to their privacy policy https://www.atticusprojectai.org/privacy-policy and disclaimer https://www.atticusprojectai.org/disclaimer.
### Citation Information
```
@article{hendrycks2021cuad,
title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
journal={arXiv preprint arXiv:2103.06268},
year={2021}
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding the original CUAD dataset. | chenghao/cuad_qa | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2103.06268",
"region:us"
] | 2022-09-13T23:01:15+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa", "extractive-qa"], "paperswithcode_id": "cuad", "pretty_name": "CUAD", "train-eval-index": [{"config": "default", "task": "question-answering", "task_id": "extractive_question_answering", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"question": "question", "context": "context", "answers": {"text": "text", "answer_start": "answer_start"}}, "metrics": [{"type": "cuad", "name": "CUAD"}]}]} | 2022-09-14T15:15:12+00:00 |
8f10e489090968f4bcef4cff3ff53487cb2e9a01 | chenghao/ledgar_qa | [
"license:mit",
"region:us"
] | 2022-09-13T23:26:36+00:00 | {"license": "mit"} | 2022-09-13T23:26:36+00:00 |
|
dde2c6e5c48757257ad9e4c7db754e29d439c240 | teletubbee/bees | [
"license:cc",
"region:us"
] | 2022-09-14T03:06:01+00:00 | {"license": "cc"} | 2022-09-14T03:27:04+00:00 |
|
5b7594b2d1e6a6a63df63bbc943409112acf0377 |
# CABank Japanese Sakura Corpus
- Susanne Miyata
- Department of Medical Sciences
- Aichi Shukutoku University
- [email protected]
- website: https://ca.talkbank.org/access/Sakura.html
## Important
This data set is a copy from the original one located at https://ca.talkbank.org/access/Sakura.html.
## Details
- Participants: 31
- Type of Study: xxx
- Location: Japan
- Media type: audio
- DOI: doi:10.21415/T5M90R
## Citation information
Some citation here.
In accordance with TalkBank rules, any use of data from this corpus must be accompanied by at least one of the above references.
## Project Description
This corpus of 18 conversations is the product of six graduation theses on gender differences in students' group talk. Each conversation lasted between 12 and 35 minutes (avg. 25 minutes), resulting in an overall time of 7 hours and 30 minutes. 31 students (19 female, 12 male) participated in the study (Table 1). The participants gathered in groups of 4 students, either of the same or the opposite sex (6 conversations with a group of 4 female students, 6 with 4 male students, and 6 conversations with 2 male and 2 female students), according to age (first and third year students) and affiliation (two academic departments). In addition, the participants of each conversation came from the same small-sized class and were well acquainted.
When recruited, the participants were informed that their conversations might be transcribed and video-recorded for possible publication. Additionally, permission was sought once more after transcription in cases where either private information had been disclosed, or a misunderstanding concerning the nature and degree of the publication of the conversations became apparent during the conversation.
The recordings took place in a small conference room at the university between or after lectures. The participants were given a card with a conversation topic to start with, but were free to vary (topic 1 "What do you expect from an opposite sex friend?" [isee ni motomeru koto]; topic 2 "Are you a dog lover or a cat lover?" [inuha ka nekoha ka]; topic 3 "About part-time work" [arubaito ni tsuite]). The investigator was not present during the recording. The combination of participants, the topic, and the duration of the 18 conversations are given in Table 2.
The participants produced 15,449 utterances overall (female: 8,027 utterances, male: 7,422 utterances). All utterances were linked to video and transcribed in regular Japanese orthography and Latin script (Wakachi2002), and provided with morphological tags (JMOR04.1). Proper names were replaced by pseudonyms.
## Acknowledgements
Additional contributors: Banno, Kyoko; Konishi, Saya; Matsui, Ayumi; Matsumoto, Shiori; Oogi, Rie; Takahashi, Akane; Muraki, Kyoko.
| Fhrozen/CABankSakura | [
"task_categories:audio-classification",
"task_categories:automatic-speech-recognition",
"task_ids:speaker-identification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:found",
"language:ja",
"license:cc",
"speech-recognition",
"region:us"
] | 2022-09-14T04:47:24+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["ja"], "license": ["cc"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["found"], "task_categories": ["audio-classification", "automatic-speech-recognition"], "task_ids": ["speaker-identification"], "pretty_name": "banksakura", "tags": ["speech-recognition"]} | 2022-12-03T03:26:50+00:00 |
208ae52187c393a222ee77605d94ec3e033d7e92 |
# CABank Japanese CallHome Corpus
- Participants: 120
- Type of Study: phone call
- Location: United States
- Media type: audio
- DOI: doi:10.21415/T5H59V
- Web: https://ca.talkbank.org/access/CallHome/jpn.html
## Citation information
Some citation here.
In accordance with TalkBank rules, any use of data from this corpus must be accompanied by at least one of the above references.
## Project Description
This is the Japanese portion of CallHome.
Speakers were solicited by the LDC to participate in this telephone speech collection effort via the internet, publications (advertisements), and personal contacts. A total of 200 call originators were found, each of whom placed a telephone call via a toll-free robot operator maintained by the LDC. Access to the robot operator was possible via a unique Personal Identification Number (PIN) issued by the recruiting staff at the LDC when the caller enrolled in the project. The participants were made aware that their telephone call would be recorded, as were the call recipients. The call was allowed only if both parties agreed to being recorded. Each caller was allowed to talk up to 30 minutes. Upon successful completion of the call, the caller was paid $20 (in addition to making a free long-distance telephone call). Each caller was allowed to place only one telephone call.
Although the goal of the call collection effort was to have unique speakers in all calls, a handful of repeat speakers are included in the corpus. In all, 200 calls were transcribed. Of these, 80 have been designated as training calls, 20 as development test calls, and 100 as evaluation test calls. For each of the training and development test calls, a contiguous 10-minute region was selected for transcription; for the evaluation test calls, a 5-minute region was transcribed. For the present publication, only 20 of the evaluation test calls are being released; the remaining 80 test calls are being held in reserve for future LVCSR benchmark tests.
After a successful call was completed, a human audit of each telephone call was conducted to verify that the proper language was spoken, to check the quality of the recording, and to select and describe the region to be transcribed. The description of the transcribed region provides information about channel quality, number of speakers, their gender, and other attributes.
## Acknowledgements
Andrew Yankes reformatted this corpus to accord with current versions of CHAT.
| Fhrozen/CABankSakuraCHJP | [
"task_categories:audio-classification",
"task_categories:automatic-speech-recognition",
"task_ids:speaker-identification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:found",
"language:ja",
"license:cc",
"speech-recognition",
"region:us"
] | 2022-09-14T04:48:24+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["ja"], "license": ["cc"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["found"], "task_categories": ["audio-classification", "automatic-speech-recognition"], "task_ids": ["speaker-identification"], "pretty_name": "banksakura", "tags": ["speech-recognition"]} | 2022-12-03T03:26:43+00:00 |
acac1e8a2f086619a3f86242e3485b3b6069d496 |
# FINN.no Slate Dataset for Recommender Systems
> Data and helper functions for FINN.no slate dataset containing both viewed items and clicks from the FINN.no second hand marketplace.
Note: The dataset is originally hosted at https://github.com/finn-no/recsys_slates_dataset and this is a copy of the readme until this repo is properly created "huggingface-style".
We release the *FINN.no slate dataset* to improve recommender systems research.
The dataset includes both search and recommendation interactions between users and the platform over a 30 day period.
The dataset has logged both exposures and clicks, *including interactions where the user did not click on any of the items in the slate*.
To our knowledge there exists no such large-scale dataset, and we hope this contribution can help researchers constructing improved models and improve offline evaluation metrics.

For each user $u$ and interaction step $t$ we recorded all items in the visible slate $a_t^u(s_t^u)$ (up to the scroll length $s_t^u$), and the user's click response $c_t^u$.
The dataset consists of 37.4 million interactions, $|U| \approx 2.3$ million users and $|I| \approx 1.3$ million items that belong to one of $G = 290$ item groups. For a detailed description of the data please see the [paper](https://arxiv.org/abs/2104.15046).

FINN.no is the leading marketplace in the Norwegian classifieds market and provides users with a platform to buy and sell general merchandise, cars, real estate, as well as house rentals and job offerings.
For questions, email [email protected] or file an issue.
## Install
`pip install recsys_slates_dataset`
## How to use
To download the generic numpy data files:
``` python
from recsys_slates_dataset import data_helper
data_helper.download_data_files(data_dir="data")
```
Download and prepare data into ready-to-use PyTorch dataloaders:
``` python
from recsys_slates_dataset import dataset_torch
ind2val, itemattr, dataloaders = dataset_torch.load_dataloaders(data_dir="data")
```
## Organization
The repository is organized as follows:
- The dataset is placed in `data/` and stored using git-lfs. We also provide an automatic download function in the pip package (preferred usage).
- The code open sourced from the article ["Dynamic Slate Recommendation with Gated Recurrent Units and Thompson Sampling"](https://arxiv.org/abs/2104.15046) is found in (`code_eide_et_al21/`). However, we are in the process of making the data more generally available which makes the code incompatible with the current (newer) version of the data. Please use [the v1.0 release of the repository](https://github.com/finn-no/recsys-slates-dataset/tree/v1.0) for a compatible version of the code and dataset.
## Quickstart dataset [](https://colab.research.google.com/github/finn-no/recsys-slates-dataset/blob/main/examples/quickstart-finn-recsys-slate-data.ipynb)
We provide a quickstart Jupyter notebook that runs on Google Colab (quickstart-finn-recsys-slate-data.ipynb) which includes all necessary steps above.
It gives a quick introduction to how to use the dataset.
## Example training scripts
We provide an example training jupyter notebook that implements a matrix factorization model with categorical loss that can be found in `examples/`.
It is also runnable using Google Colab: [](https://colab.research.google.com/github/finn-no/recsys-slates-dataset/blob/main/examples/matrix_factorization.ipynb)
There is ongoing work to build additional examples and use them as benchmarks for the dataset.
### Dataset files
The dataset `data.npz` contains the following fields:
- userId: The unique identifier of the user.
- click: The items the user clicked on in each of the 20 presented slates.
- click_idx: The index the clicked item was on in each of the 20 presented slates.
- slate_lengths: The length of the 20 presented slates.
- slate: All the items in each of the 20 presented slates.
- interaction_type: The recommendation slate can be the result of a search query (1), a recommendation (2), or undefined (0).
The dataset `itemattr.npz` contains the item categories, ranging from 0 to 290 and corresponding to the 290 unique groups that the items belong to. These groups are constructed using a combination of categorical information and geographical location.
The dataset `ind2val.json` contains the mapping between the indices and the values of the categories (e.g. `"287": "JOB, Rogaland"`) and interaction types (e.g. `"1": "search"`).
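A minimal sketch of inspecting these raw files directly with NumPy (assuming they have been downloaded into `data/` as shown above; the keys follow the field lists here):

``` python
import json
import numpy as np

# data.npz holds the logged interactions described above.
data = np.load("data/data.npz")
print(list(data.keys()))  # userId, click, click_idx, slate_lengths, slate, interaction_type
print(data["slate"].shape, data["click"].shape)

# ind2val.json maps indices back to human-readable values,
# e.g. category names and interaction types.
with open("data/ind2val.json") as f:
    ind2val = json.load(f)
```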
## Citations
This repository accompanies the paper ["Dynamic Slate Recommendation with Gated Recurrent Units and Thompson Sampling"](https://arxiv.org/abs/2104.15046) by Simen Eide, David S. Leslie and Arnoldo Frigessi.
The article is under review, and the preprint can be obtained [here](https://arxiv.org/abs/2104.15046).
If you use either the code, data or paper, please consider citing the paper.
```
Eide, S., Leslie, D.S. & Frigessi, A. Dynamic slate recommendation with gated recurrent units and Thompson sampling. Data Min Knowl Disc (2022). https://doi.org/10.1007/s10618-022-00849-w
```
---
license: apache-2.0
---
| simeneide/recsys_slates_dataset | [
"arxiv:2104.15046",
"region:us"
] | 2022-09-14T06:41:48+00:00 | {} | 2022-09-14T07:51:42+00:00 |
887e21ba999085e25e9cb621ad5118d5edc0439a | mishagrin/shitty_salt | [
"license:unlicense",
"region:us"
] | 2022-09-14T07:30:44+00:00 | {"license": "unlicense"} | 2022-09-14T07:32:30+00:00 |
|
826870d287708d23f6828c7cd2405b715c4f1d29 | # MFQEv2 Dataset
For some video enhancement/restoration tasks, lossless reference videos are necessary.
We open-source the dataset used in our [MFQEv2 paper](https://arxiv.org/abs/1902.09707), which includes 108 lossless YUV videos for training and 18 test videos recommended by [ITU-T](https://ieeexplore.ieee.org/document/6317156).
## 1. Content
- 108 lossless YUV videos for training.
- 18 lossless YUV videos for test, recommended by ITU-T.
- An HEVC compression tool box.
43.1 GB in total.
## 2. Download Raw Videos
[[Dropbox]](https://www.dropbox.com/sh/tphdy1lmlpz7zq3/AABR4Qim-P-3xGtouWk6ohi5a?dl=0)
or [[百度网盘 (key: mfqe)]](https://pan.baidu.com/s/1oBZf75bFGRanLmQQLAg4Ew)
## 3. Compress Videos
We compress both training and test videos with [HM](https://hevc.hhi.fraunhofer.de/) 16.5 in low-delay P (LDP) mode with QP=37. The video compression toolbox is provided in the dataset folder.
We will get:
```text
MFQEv2_dataset/
├── train_108/
│ ├── raw/
│ └── HM16.5_LDP/
│ └── QP37/
├── test_18/
│ ├── raw/
│ └── HM16.5_LDP/
│ └── QP37/
├── video_compression/
│ └── ...
└── README.md
```
### Ubuntu
1. `cd video_compression/`
2. Edit `option.yml`.
3. `chmod +x TAppEncoderStatic`
4. `python unzip_n_compress.py`
### Windows
1. Unzip `train_108.zip` and `test_18.zip` manually!
2. `cd video_compression\`
3. Edit `option.yml` (e.g., `system: windows`).
4. `python unzip_n_compress.py`
## 4. Citation
If you find this helpful, please star and cite:
```tex
@article{2019xing,
doi = {10.1109/tpami.2019.2944806},
url = {https://doi.org/10.1109%2Ftpami.2019.2944806},
year = 2021,
month = {mar},
publisher = {Institute of Electrical and Electronics Engineers ({IEEE})},
volume = {43},
number = {3},
pages = {949--963},
author = {Zhenyu Guan and Qunliang Xing and Mai Xu and Ren Yang and Tie Liu and Zulin Wang},
title = {{MFQE} 2.0: A New Approach for Multi-Frame Quality Enhancement on Compressed Video},
journal = {{IEEE} Transactions on Pattern Analysis and Machine Intelligence}
}
```
| ryanxingql/MFQEv2 | [
"license:apache-2.0",
"arxiv:1902.09707",
"region:us"
] | 2022-09-14T07:46:59+00:00 | {"license": "apache-2.0"} | 2022-09-14T07:48:17+00:00 |
d88018ac299bf2075e1860461d0165ed88e97d99 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Emanuel/twitter-emotion-deberta-v3-base
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-emotion-default-2feb36-1456053837 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-14T08:15:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "Emanuel/twitter-emotion-deberta-v3-base", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-09-14T08:16:38+00:00 |
3de4889cb01d4c83cff36d11aafd915429ac3488 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-fanpage
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ehahaha](https://huggingface.co/ehahaha) for evaluating this model. | autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-8ddaed-1457553860 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-14T08:18:38+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "ARTeLab/it5-summarization-fanpage", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-09-14T12:30:24+00:00 |
8b762e1dac1b31d60e01ee8f08a9d8a232b59e17 |
# Dataset Card for Pokémon BLIP captions
_Dataset used to train [Pokémon text to image model](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning)_
BLIP-generated captions for Pokémon images from the Few Shot Pokémon dataset introduced by _Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis_ (FastGAN). Original images were obtained from [FastGAN-pytorch](https://github.com/odegeasslbc/FastGAN-pytorch) and captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
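As a quick illustration, the dataset can be loaded with the `datasets` library (a minimal sketch):

```python
from datasets import load_dataset

dataset = load_dataset("lambdalabs/pokemon-blip-captions", split="train")
example = dataset[0]
example["image"]  # a varying-size PIL image
example["text"]   # the accompanying BLIP caption
```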
## Examples

> a drawing of a green pokemon with red eyes

> a green and yellow toy with a red nose

> a red and white ball with an angry look on its face
## Citation
If you use this dataset, please cite it as:
```
@misc{pinkney2022pokemon,
author = {Pinkney, Justin N. M.},
title = {Pokemon BLIP captions},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions/}}
}
``` | lambdalabs/pokemon-blip-captions | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:huggan/few-shot-pokemon",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-09-14T11:04:50+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["huggan/few-shot-pokemon"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Pok\u00e9mon BLIP captions", "tags": []} | 2022-09-21T09:38:05+00:00 |
cb3553a29970018ebc7b305acf37b6ec5f66b505 |
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used (a minimal code sketch follows the list):
- __query__: The `target` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==25`
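A minimal sketch of this pipeline with PyTerrier (assumptions: the `corpus` and `query` variables, the index path, and the document ids are placeholders; BM25 is left at its default settings, and k=25 follows the "max" strategy above):

```python
import pyterrier as pt

if not pt.started():
    pt.init()

# corpus: union of all documents across splits; each document is the
# concatenation of its title and abstract (variable names are assumptions).
docs = [{"docno": str(i), "text": d["title"] + " " + d["abstract"]}
        for i, d in enumerate(corpus)]

index_ref = pt.IterDictIndexer("./cochrane_index").index(docs)
bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25")

# query: the `target` field of one example; keep the top k=25 hits.
# Note: raw query text may need punctuation stripped for PyTerrier's parser.
hits = bm25.search(query).head(25)
```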
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7014 | 0.3841 | 0.1698 | 0.5471 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7226 | 0.4023 | 0.1729 | 0.5676 |
Retrieval results on the `test` set:
N/A. Test set is blind so we do not have any queries. | allenai/cochrane_sparse_max | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-09-14T12:15:14+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-MS^2", "extended|other-Cochrane"], "task_categories": ["summarization", "text2text-generation"], "paperswithcode_id": "multi-document-summarization", "pretty_name": "MSLR Shared Task"} | 2022-11-24T14:50:26+00:00 |
72ac00150e537264a866f5136f0a57c4c0e9be00 |
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `target` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==9`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7014 | 0.3841 | 0.2976 | 0.4157 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7226 | 0.4023 | 0.3095 | 0.4443 |
Retrieval results on the `test` set:
N/A. Test set is blind so we do not have any queries. | allenai/cochrane_sparse_mean | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-09-14T12:15:44+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-MS^2", "extended|other-Cochrane"], "task_categories": ["summarization", "text2text-generation"], "paperswithcode_id": "multi-document-summarization", "pretty_name": "MSLR Shared Task"} | 2022-11-24T15:04:01+00:00 |
a12849702d4d495199ba73a295ff3393f600c82e |
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `target` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7014 | 0.3841 | 0.3841 | 0.3841 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7226 | 0.4023 | 0.4023 | 0.4023 |
Retrieval results on the `test` set:
N/A. Test set is blind so we do not have any queries. | allenai/cochrane_sparse_oracle | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-09-14T12:16:16+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-MS^2", "extended|other-Cochrane"], "task_categories": ["summarization", "text2text-generation"], "paperswithcode_id": "multi-document-summarization", "pretty_name": "MSLR Shared Task"} | 2022-11-24T14:54:01+00:00 |
9d84b3ac8da24fbce401b98a178082e54a1bca8f |
This contains the datasets for the Trojan Detection Challenge NeurIPS 2022 competition. To learn more, please see the [competition website](http://trojandetection.ai/).
# **Trojan Detection**
##### Detect and analyze hard-to-detect Trojan attacks on deep neural networks.
### **Overview**
Neural Trojans are a growing concern for the security of ML systems, but little is known about the fundamental offense-defense balance of Trojan detection. Early work suggests that standard Trojan attacks may be easy to detect, but recently it has been shown that in simple cases one can design practically undetectable Trojans.
This repository contains code for the **Trojan Detection Challenge (TDC) NeurIPS 2022** [competition](https://trojandetection.ai/).
There are 3 main tracks for this competition:
- **Trojan Detection Track**: Given a dataset of Trojaned and clean networks spanning multiple data sources, build a Trojan detector that classifies a test set of networks with held-out labels (Trojan, clean). For more information, see here.
- **Trojan Analysis Track**: Given a dataset of Trojaned networks spanning multiple data sources, predict various properties of Trojaned networks on a test set with held-out labels. This track has two subtracks: (1) target label prediction, (2) trigger synthesis. For more information, see here.
- **Evasive Trojans Track**: Given a dataset of clean networks and a list of attack specifications, train a small set of Trojaned networks meeting the specifications and upload them to the evaluation server. The server will verify that the attack specifications are met, then train and evaluate a baseline Trojan detector using held-out clean networks and the submitted Trojaned networks. The task is to create Trojaned networks that are hard to detect. For more information, see here.
The competition has two rounds: In the primary round, participants will compete on the three main tracks. In the final round, the solution of the first-place team in the Evasive Trojans track will be used to train a new set of hard-to-detect Trojans, and participants will compete to detect these networks. For more information on the final round, see here.
### **Contents**
There are four folders corresponding to different tracks and subtracks: 1) Trojan Detection, 2) Trojan Analysis (Target Label Prediction), 3) Trojan Analysis (Trigger Synthesis), and 4) Evasive Trojans. We provide starter code for submitting baselines in ```example_submission.ipynb``` under each folder. The ```tdc_datasets``` folder is expected to be under the same parent directory as ```tdc-starter-kit```. The datasets are available [here](https://zenodo.org/record/6894041). You can download them from the Zenodo website or by running ```download_datasets.py```.
The ```utils.py``` file contains helper functions for loading new models, generating new attack specifications, and training clean/Trojaned networks. This is primarily used for the Evasive Trojans Track starter kit. It also contains the load_data function for loading data sources (CIFAR-10/100, GTSRB, MNIST), which may be of general use. To load GTSRB images, unzip ```gtsrb_preprocessed.zip``` in the data folder (NOTE: This folder is only for storing data sources. The network datasets are stored in tdc_datasets, which must be downloaded from Zenodo). You may need to adjust the paths in the load_data function depending on your working directory. The ```wrn.py``` file contains the definition of the Wide Residual Network class used for CIFAR-10 and CIFAR-100 models. When loading networks from the competition datasets, ```wrn.py``` must be in your path. See the example submission notebooks for details.
### **Data**
Unlike standard machine learning tasks, the datasets consist of neural networks. That is, rather than making predictions on input images, the goal is to identify hidden functionality in neural networks. Networks are trained on four standard data sources: MNIST, CIFAR-10, CIFAR-100, and GTSRB. Variants of two standard Trojan attacks are used, modified to be harder to detect. For the Detection Track, the training, validation, and test sets have 1,000 neural networks each. Networks are split evenly across all four data sources. Half of the networks are Trojaned, and there is a 50/50 split between the two attack types.
## How to Use
**Clone this repository, download the competition [datasets](https://huggingface.co/datasets/n1ghtf4l1/vigilant-fortnight/blob/main/tdc_datasets.zip) from my HuggingFace repository and unzip them adjacent to the repository**. Ensure that your Jupyter version is reasonably up-to-date. To avoid errors with model incompatibility, please use PyTorch version 1.11.0. Run one of the example notebooks or start building your own submission.
### **Additional Information**
#### **Model Architectures and Data Sources**
Networks have been trained on four standard data sources: MNIST, CIFAR-10, CIFAR-100, and GTSRB. GTSRB images are resized to 32x32.
For MNIST, convolutional networks have been used. For CIFAR-10 and CIFAR-100, Wide Residual Networks have been used. For GTSRB, Vision Transformers have been used.
#### **Trojan Attacks**
Trojaned networks have been trained with patch and whole-image attacks. These attacks are variants of the foundational BadNets and blended attacks, modified to be harder to detect. These modified attacks use a simple change to the standard Trojan training procedure: instead of training Trojaned networks from scratch, they are fine-tuned from the starting parameters of clean networks and regularized with various similarity losses so that they resemble the distribution of clean networks. Additionally, the networks have been trained to have high specificity for the particular trigger pattern associated with the attack. In extensive experiments, baseline detectors have been verified to obtain substantially lower performance on these hard-to-detect Trojans.
All patch attacks in the datasets use random trigger patterns sampled from an independent Bernoulli 0/1 distribution for each pixel and color channel (for Trojan detection and target label prediction, patches are black-and-white; for trigger synthesis, patches are colored). Each patch attack uses a different location and size for its trigger mask. All blended attacks in the datasets use random trigger patterns sampled from an independent Uniform(0,1) distribution for each pixel and color channel. All attacks are all-to-one with a random target label. For more details, please see the starter kit.
MNTD, Neural Cleanse, and ABS have been used as baseline Trojan detectors for participants to improve upon. These are well-known Trojan detectors from the academic literature, each with a distinct approach to Trojan detection. A specificity-based detector has also been used as a baseline, since Trojan attacks with low specificity can be highly susceptible to such a detector. The specificity detector applies random triggers to inputs from a given data source, then runs these triggered inputs through the network in question. The negative entropy of the average posterior is used as a detection score. This leverages the fact that Trojan attacks without specificity are activated quite frequently by randomly sampled triggers. | n1ghtf4l1/vigilant-fortnight | [
"license:mit",
"region:us"
] | 2022-09-14T13:01:28+00:00 | {"license": "mit"} | 2022-11-01T06:59:48+00:00 |
6dd53ddc97b18d6fc7c29252712ff261543e0fea |
This dataset contains sentiment annotations for the Indonesian telecommunications industry. The data was sourced from Twitter and manually annotated with Prodigy (spaCy).
| dwisaji/indonesia-telecomunication-sentiment-dataset | [
"license:mit",
"region:us"
] | 2022-09-14T13:25:03+00:00 | {"license": "mit"} | 2022-09-16T10:36:02+00:00 |
c66d38584e94865e84e2295385fd18b39e721d79 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: t5-base
* Dataset: HadiPourmousa/TextSummarization
* Config: HadiPourmousa--TextSummarization
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@marcmaxmeister](https://huggingface.co/marcmaxmeister) for evaluating this model. | autoevaluate/autoeval-eval-HadiPourmousa__TextSummarization-HadiPourmousa__TextSum-31dfb4-1463253931 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-14T15:05:10+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["HadiPourmousa/TextSummarization"], "eval_info": {"task": "summarization", "model": "t5-base", "metrics": [], "dataset_name": "HadiPourmousa/TextSummarization", "dataset_config": "HadiPourmousa--TextSummarization", "dataset_split": "train", "col_mapping": {"text": "Text", "target": "Title"}}} | 2022-09-14T15:06:24+00:00 |
2a8b1b48cf1266ce9417abd61b51e004491e6e5d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: shivaniNK8/t5-small-finetuned-cnn-news
* Dataset: HadiPourmousa/TextSummarization
* Config: HadiPourmousa--TextSummarization
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@marcmaxmeister](https://huggingface.co/marcmaxmeister) for evaluating this model. | autoevaluate/autoeval-eval-HadiPourmousa__TextSummarization-HadiPourmousa__TextSum-31dfb4-1463253932 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-14T15:05:14+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["HadiPourmousa/TextSummarization"], "eval_info": {"task": "summarization", "model": "shivaniNK8/t5-small-finetuned-cnn-news", "metrics": [], "dataset_name": "HadiPourmousa/TextSummarization", "dataset_config": "HadiPourmousa--TextSummarization", "dataset_split": "train", "col_mapping": {"text": "Text", "target": "Title"}}} | 2022-09-14T15:05:51+00:00 |
cceb48e6f88955fa9487e3be0350281d2f80e473 | Izac/image2line | [
"region:us"
] | 2022-09-14T15:11:16+00:00 | {} | 2022-09-14T15:11:51+00:00 |
|
46db0397e01c802cd02a14c954cc3e60a4f929a3 |
# Şalom Ladino articles text corpus
Text corpus compiled from 397 articles from the Judeo-Espanyol section of [Şalom newspaper](https://www.salom.com.tr/haberler/17/judeo-espanyol). Original sentences and articles belong to Şalom.
Size: 176,843 words
[Official link](https://data.sefarad.com.tr/dataset/salom-ladino-articles-text-corpus)
Paper on [ArXiv](https://arxiv.org/abs/2205.15599)
Citation:
```
Preparing an endangered language for the digital age: The Case of Judeo-Spanish. Alp Öktem, Rodolfo Zevallos, Yasmin Moslem, Güneş Öztürk, Karen Şarhon.
Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia (EURALI) @ LREC 2022. Marseille, France. 20 June 2022
```
This dataset is created as part of project "Judeo-Spanish: Connecting the two ends of the Mediterranean" carried out by Col·lectivaT and Sephardic Center of Istanbul within the framework of the “Grant Scheme for Common Cultural Heritage: Preservation and Dialogue between Turkey and the EU–II (CCH-II)” implemented by the Ministry of Culture and Tourism of the Republic of Turkey with the financial support of the European Union. The content of this website is the sole responsibility of Col·lectivaT and does not necessarily reflect the views of the European Union. | collectivat/salom-ladino-articles | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:lad",
"license:cc-by-4.0",
"arxiv:2205.15599",
"region:us"
] | 2022-09-14T15:30:48+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["lad"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"]} | 2022-10-25T10:46:20+00:00 |
a91c62f46e6e69eb7ab019798e5913c135d061f8 |
# Una fraza al diya
Ladino language-learning sentences prepared by Karen Sarhon of the Sephardic Center of Istanbul. Each sentence has translations in Turkish, English, and Spanish, along with audio and an image. 307 sentences in total.
Source: https://sefarad.com.tr/judeo-espanyolladino/frazadeldia/
Images and audio: http://collectivat.cat/share/judeoespanyol_audio_image.zip
[Official link on Ladino Data Hub](https://data.sefarad.com.tr/dataset/una-fraza-al-diya-skad)
Paper on [ArXiv](https://arxiv.org/abs/2205.15599)
Citation:
```
Preparing an endangered language for the digital age: The Case of Judeo-Spanish. Alp Öktem, Rodolfo Zevallos, Yasmin Moslem, Güneş Öztürk, Karen Şarhon.
Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia (EURALI) @ LREC 2022. Marseille, France. 20 June 2022
```
This dataset is created as part of project "Judeo-Spanish: Connecting the two ends of the Mediterranean" carried out by Col·lectivaT and Sephardic Center of Istanbul within the framework of the “Grant Scheme for Common Cultural Heritage: Preservation and Dialogue between Turkey and the EU–II (CCH-II)” implemented by the Ministry of Culture and Tourism of the Republic of Turkey with the financial support of the European Union. The content of this website is the sole responsibility of Col·lectivaT and does not necessarily reflect the views of the European Union.
| collectivat/una-fraza-al-diya | [
"task_categories:text-generation",
"task_categories:translation",
"task_ids:language-modeling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:lad",
"language:es",
"language:tr",
"language:en",
"license:cc-by-4.0",
"arxiv:2205.15599",
"region:us"
] | 2022-09-14T15:46:46+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["lad", "es", "tr", "en"], "license": "cc-by-4.0", "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-generation", "translation"], "task_ids": ["language-modeling"]} | 2022-10-25T10:46:11+00:00 |
fbc749f1c537e5c3834e93b15784302e331debe2 |
## Dataset Description
- **Repository:** https://conala-corpus.github.io/
- **Paper:** [Learning to Mine Aligned Code and Natural Language Pairs from Stack Overflow](https://arxiv.org/pdf/1805.08949.pdf)
### Dataset Summary
[CoNaLa](https://conala-corpus.github.io/) is a benchmark of code and natural language pairs, for the evaluation of code generation tasks. The dataset was crawled from Stack Overflow, automatically filtered, then curated by annotators, split into 2,379 training and 500 test examples. The automatically mined dataset is also available with almost 600k examples.
### Supported Tasks and Leaderboards
This dataset is used to evaluate code generation.
### Languages
English - Python code.
## Dataset Structure
```python
from datasets import load_dataset

dataset_curated = load_dataset("neulab/conala")
DatasetDict({
train: Dataset({
features: ['question_id', 'intent', 'rewritten_intent', 'snippet'],
num_rows: 2379
})
test: Dataset({
features: ['question_id', 'intent', 'rewritten_intent', 'snippet'],
num_rows: 500
})
})
dataset_mined = load_dataset("neulab/conala", "mined")
DatasetDict({
train: Dataset({
features: ['question_id', 'parent_answer_post_id', 'prob', 'snippet', 'intent', 'id'],
num_rows: 593891
})
})
```
### Data Instances
#### CoNaLa - curated
This is the version of the dataset curated by annotators
```
{
'question_id': 41067960,
'intent': 'How to convert a list of multiple integers into a single integer?',
'rewritten_intent': "Concatenate elements of a list 'x' of multiple integers to a single integer",
'snippet': 'sum(d * 10 ** i for i, d in enumerate(x[::-1]))'
}
```
#### CoNaLa - mined
This is the automatically mined dataset before curation
```
{
'question_id': 34705205,
'parent_answer_post_id': 34705233,
'prob': 0.8690001442846342,
'snippet': 'sorted(l, key=lambda x: (-int(x[1]), x[0]))',
'intent': 'Sort a nested list by two elements',
'id': '34705205_34705233_0'
}
```
### Data Fields
Curated:
|Field|Type|Description|
|---|---|---|
|question_id|int64|Id of the Stack Overflow question|
|intent|string|Natural Language intent (i.e., the title of a Stack Overflow question)|
|rewritten_intent|string|Crowdsourced revised intents that try to better reflect the full meaning of the code|
|snippet|string| Code snippet that implements the intent|
Mined:
|Field|Type|Description|
|---|---|---|
|question_id|int64|Id of the Stack Overflow question|
|parent_answer_post_id|int64|Id of the answer post from which the candidate snippet is extracted|
|intent|string|Natural Language intent (i.e., the title of a Stack Overflow question)|
|snippet|string| Code snippet that implements the intent|
|id|string|Unique id for this intent/snippet pair|
|prob|float64|Probability given by the mining model|
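For example, the `prob` field can be used to keep only confidently mined pairs (the 0.5 threshold here is an arbitrary illustration, reusing `load_dataset` from the snippet above):

```python
mined = load_dataset("neulab/conala", "mined", split="train")
confident = mined.filter(lambda ex: ex["prob"] > 0.5)
```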
### Data Splits
There are two versions of the dataset (curated and mined); mined only has a train split, while curated has two splits: train and test.
## Dataset Creation
The dataset was crawled from Stack Overflow, automatically filtered, then curated by annotators. For more details, please refer to the original [paper](https://arxiv.org/pdf/1805.08949.pdf)
### Citation Information
```
@inproceedings{yin2018learning,
title={Learning to mine aligned code and natural language pairs from stack overflow},
author={Yin, Pengcheng and Deng, Bowen and Chen, Edgar and Vasilescu, Bogdan and Neubig, Graham},
booktitle={2018 IEEE/ACM 15th international conference on mining software repositories (MSR)},
pages={476--486},
year={2018},
organization={IEEE}
}
``` | neulab/conala | [
"task_categories:text2text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:mit",
"code-generation",
"arxiv:1805.08949",
"region:us"
] | 2022-09-14T18:31:08+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced", "expert-generated"], "language": ["code"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "CoNaLa", "tags": ["code-generation"]} | 2022-10-20T19:25:00+00:00 |
9c2c99e06171661d6c6659334ee1668c4853e23b | sparanoid/images | [
"license:other",
"region:us"
] | 2022-09-14T18:31:11+00:00 | {"license": "other"} | 2022-09-14T18:31:11+00:00 |
|
d8459224f29c9ab2b783534de3321b23759c47ca | drcostco/hmn-race | [
"license:other",
"region:us"
] | 2022-09-14T18:42:42+00:00 | {"license": "other"} | 2022-09-14T19:03:52+00:00 |
|
0680dc6441ef1c20661cffcdfa88ea40dcd9489c | mrmoor/cyber-threat-intelligence | [
"license:unknown",
"region:us"
] | 2022-09-14T19:13:26+00:00 | {"license": "unknown"} | 2022-10-23T08:12:59+00:00 |
|
e5eaccf06c04cd1fcedf0d73d67d51d7bd23693b |
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==10`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8753 | 0.6443 | 0.5919 | 0.6588 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8706 | 0.6280 | 0.5988 | 0.6346 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8836 | 0.6658 | 0.6296 | 0.6746 | | allenai/wcep_sparse_max | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | 2022-09-14T19:36:21+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": ["news-articles-summarization"], "paperswithcode_id": "wcep", "pretty_name": "WCEP-10", "train-eval-index": [{"config": "default", "task": "summarization", "task_id": "summarization", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"document": "text", "summary": "target"}, "metrics": [{"type": "rouge", "name": "Rouge"}]}]} | 2022-11-24T15:03:54+00:00 |
4099112870faebab587478313df6acecff54008f |
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==9`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8753 | 0.6443 | 0.6196 | 0.6237 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8706 | 0.6280 | 0.6260 | 0.5989 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8836 | 0.6658 | 0.6601 | 0.6388 | | allenai/wcep_sparse_mean | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | 2022-09-14T19:36:44+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": ["news-articles-summarization"], "paperswithcode_id": "wcep", "pretty_name": "WCEP-10", "train-eval-index": [{"config": "default", "task": "summarization", "task_id": "summarization", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"document": "text", "summary": "target"}, "metrics": [{"type": "rouge", "name": "Rouge"}]}]} | 2022-11-24T15:10:48+00:00 |
d21df471d1b06e5d95571001a44995a368c13c19 |
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8753 | 0.6443 | 0.6443 | 0.6443 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8706 | 0.6280 | 0.6280 | 0.6280 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8836 | 0.6658 | 0.6658 | 0.6658 | | allenai/wcep_sparse_oracle | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | 2022-09-14T19:37:12+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": ["news-articles-summarization"], "paperswithcode_id": "wcep", "pretty_name": "WCEP-10", "train-eval-index": [{"config": "default", "task": "summarization", "task_id": "summarization", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"document": "text", "summary": "target"}, "metrics": [{"type": "rouge", "name": "Rouge"}]}]} | 2022-11-24T15:58:43+00:00 |
afc723a840ee4e71596d0c4970dec294f1d4eea8 |
Dataset of titles of the top 1000 posts from the top 250 subreddits scraped using [PRAW](https://praw.readthedocs.io/en/stable/index.html).
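A hedged sketch of the scraping approach (the actual script is linked below; the credentials and subreddit list here are placeholders):

```python
import praw

# Requires a registered Reddit application; credentials are placeholders.
reddit = praw.Reddit(
    client_id="YOUR_ID", client_secret="YOUR_SECRET", user_agent="subreddit-posts-scraper"
)

rows = []
for sub in ["MachineLearning"]:  # the real script iterates the top 250 subreddits
    for post in reddit.subreddit(sub).top(limit=1000):
        rows.append({"subreddit": sub, "title": post.title})
```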
For steps to create the dataset check out the [dataset](https://github.com/daspartho/predict-subreddit/blob/main/dataset.py) script in the GitHub repo. | daspartho/subreddit-posts | [
"license:apache-2.0",
"region:us"
] | 2022-09-14T20:19:16+00:00 | {"license": "apache-2.0"} | 2022-12-23T20:52:04+00:00 |
554b062213e9b94c22c98dea9a72b1c451db1785 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_sum
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelfipps123](https://huggingface.co/samuelfipps123) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-5abc44-1464853958 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-14T20:23:28+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_sum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-14T20:32:43+00:00 |
3ac89d2b8d4eccdabc8afaaec715996921212d31 | MikroKat/Tech | [
"license:afl-3.0",
"region:us"
] | 2022-09-14T21:53:36+00:00 | {"license": "afl-3.0"} | 2022-09-14T22:01:46+00:00 |
|
ca43c11697a25fb353a7781171bca42f8367b785 | Euclid/testing | [
"license:afl-3.0",
"region:us"
] | 2022-09-14T22:19:47+00:00 | {"license": "afl-3.0"} | 2022-09-14T22:24:04+00:00 |
|
b707596946d87b12e0b9c3fdfb92280c73505003 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V12
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-fb0535-1465153964 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-14T22:24:31+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V12", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-09-16T05:49:48+00:00 |
b03bcdf81535a6550ece72d65a15f8a9132a5177 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V12
* Dataset: big_patent
* Config: y
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-big_patent-y-3c6f0a-1465253965 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-14T22:24:36+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["big_patent"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V12", "metrics": [], "dataset_name": "big_patent", "dataset_config": "y", "dataset_split": "test", "col_mapping": {"text": "description", "target": "abstract"}}} | 2022-09-16T08:16:49+00:00 |
7cc95ea515fc325023e94c1a495cd9224efeefd0 | Euclid/chammuu | [
"license:other",
"region:us"
] | 2022-09-14T22:26:34+00:00 | {"license": "other"} | 2022-09-14T22:26:47+00:00 |
|
574d5679836e0858757a0d3a15f6e88d52a8b12d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V12
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-billsum-default-a34c3f-1465353966 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-14T22:51:59+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V12", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}} | 2022-09-15T12:21:49+00:00 |
e802fcbc2e19103618b1e7afd9c0835d85642bc9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V12
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-89ef9c-1465453967 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-14T23:20:56+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V12", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-14T23:39:49+00:00 |
3739d09f05f0116bde477fbc5e9b4c8346db847d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V12
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-launch__gov_report-plain_text-c8c9c8-1465553968 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-14T23:20:59+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V12", "metrics": [], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-09-15T04:53:11+00:00 |
f6b8ab257df3565fbb66b5aa490535371936aa04 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V12
* Dataset: Blaise-g/PubMed_summ
* Config: Blaise-g--PubMed_summ
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-Blaise-g__PubMed_summ-Blaise-g__PubMed_summ-0234b8-1465653969 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-14T23:21:04+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/PubMed_summ"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V12", "metrics": [], "dataset_name": "Blaise-g/PubMed_summ", "dataset_config": "Blaise-g--PubMed_summ", "dataset_split": "test", "col_mapping": {"text": "article", "target": "abstract"}}} | 2022-09-16T05:40:02+00:00 |
ea5404aecf4e9eecb11b8a4e655b959ae298648c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V12
* Dataset: ccdv/arxiv-summarization
* Config: document
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-ccdv__arxiv-summarization-document-47d12e-1465753970 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-14T23:21:10+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ccdv/arxiv-summarization"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V12", "metrics": [], "dataset_name": "ccdv/arxiv-summarization", "dataset_config": "document", "dataset_split": "test", "col_mapping": {"text": "article", "target": "abstract"}}} | 2022-09-16T04:46:07+00:00 |
df25b0c51d06c4aef5f462ac1bcd0d0e37eeac82 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V12
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-228ea1-1466053986 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-15T00:38:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V12", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-09-15T10:16:52+00:00 |
f8322d1772f53552a45d61d20fb69ecc61562e33 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-1006ec-1466153987 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-15T00:38:06+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-09-16T06:13:52+00:00 |
5049442efa4cb3d9d27987be83961addba9d6ea4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-1006ec-1466153988 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-15T00:38:11+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-09-16T05:16:26+00:00 |
d383ce5ed1d6a52e831af930c97d4155902dff5e | AnachronicRodent/MikwaTest | [
"license:cc-by-nc-4.0",
"region:us"
] | 2022-09-15T02:32:23+00:00 | {"license": "cc-by-nc-4.0"} | 2022-09-15T03:19:56+00:00 |
|
570d90ae4f7b64fe4fdd5f42fc9f9279b8c9fd9d |
# Dataset Card for IndicQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | ai4bharat/IndicQA | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:original",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-4.0",
"region:us"
] | 2022-09-15T03:52:16+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa"], "pretty_name": "IndicQA", "tags": []} | 2023-06-20T02:03:32+00:00 |
7d2f6a1445c3337a06a50a82775c613abe7cf508 |
# Dataset Card for Unannotated Spanish 3 Billion Words Corpora
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Source Data](#source-data)
- [Data Subset](#data-subset)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** https://github.com/josecannete/spanish-corpora
- **Paper:** https://users.dcc.uchile.cl/~jperez/papers/pml4dc2020.pdf
### Dataset Summary
* Number of lines: 300904000 (300M)
* Number of tokens: 2996016962 (3B)
* Number of chars: 18431160978 (18.4B)
### Languages
* Spanish
### Source Data
* Available to download here: [Zenodo](https://doi.org/10.5281/zenodo.3247731)
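If a Hugging Face mirror of this corpus exposes the usual single-text-column layout, a streaming sketch like the one below should avoid downloading all 18.4B characters at once. The repo id is this dataset's; the split name, the "text" field, and streaming support are assumptions, not confirmed by this card.
```python
# Hedged sketch: stream a large raw-text corpus instead of materializing it.
from datasets import load_dataset

# Assumed: a "train" split and a "text" column; adjust to the actual schema.
corpus = load_dataset("vialibre/splittedspanish3bwc", split="train", streaming=True)
for i, example in enumerate(corpus):
    print(example)  # e.g. {"text": "..."} if the layout matches the assumption
    if i == 2:
        break
```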
### Data Subset
* Spanish Wikis: which include Wikipedia, Wikinews, Wikiquotes and more. These were first processed with wikiextractor (https://github.com/josecannete/wikiextractorforBERT) using the wiki dumps of 20/04/2019.
* ParaCrawl: Spanish portion of ParaCrawl (http://opus.nlpl.eu/ParaCrawl.php)
* EUBookshop: Spanish portion of EUBookshop (http://opus.nlpl.eu/EUbookshop.php)
* MultiUN: Spanish portion of MultiUN (http://opus.nlpl.eu/MultiUN.php)
* OpenSubtitles: Spanish portion of OpenSubtitles2018 (http://opus.nlpl.eu/OpenSubtitles-v2018.php)
* DGT: Spanish portion of DGT (http://opus.nlpl.eu/DGT.php)
* DOGC: Spanish portion of DOGC (http://opus.nlpl.eu/DOGC.php)
* ECB: Spanish portion of ECB (http://opus.nlpl.eu/ECB.php)
* EMEA: Spanish portion of EMEA (http://opus.nlpl.eu/EMEA.php)
* Europarl: Spanish portion of Europarl (http://opus.nlpl.eu/Europarl.php)
* GlobalVoices: Spanish portion of GlobalVoices (http://opus.nlpl.eu/GlobalVoices.php)
* JRC: Spanish portion of JRC (http://opus.nlpl.eu/JRC-Acquis.php)
* News-Commentary11: Spanish portion of NCv11 (http://opus.nlpl.eu/News-Commentary-v11.php)
* TED: Spanish portion of TED (http://opus.nlpl.eu/TED2013.php)
* UN: Spanish portion of UN (http://opus.nlpl.eu/UN.php)
## Additional Information
### Licensing Information
* [MIT Licence](https://github.com/josecannete/spanish-corpora/blob/master/LICENSE)
### Citation Information
```
@dataset{jose_canete_2019_3247731,
author = {José Cañete},
title = {Compilation of Large Spanish Unannotated Corpora},
month = may,
year = 2019,
publisher = {Zenodo},
doi = {10.5281/zenodo.3247731},
url = {https://doi.org/10.5281/zenodo.3247731}
}
@inproceedings{CaneteCFP2020,
title={Spanish Pre-Trained BERT Model and Evaluation Data},
author={Cañete, José and Chaperon, Gabriel and Fuentes, Rodrigo and Ho, Jou-Hui and Kang, Hojin and Pérez, Jorge},
booktitle={PML4DC at ICLR 2020},
year={2020}
}
``` | vialibre/splittedspanish3bwc | [
"multilinguality:monolingual",
"language:es",
"license:mit",
"region:us"
] | 2022-09-15T04:48:02+00:00 | {"language": ["es"], "license": ["mit"], "multilinguality": ["monolingual"], "pretty_name": "Unannotated Spanish 3 Billion Words Corpora"} | 2023-01-24T18:17:47+00:00 |
ceea7758a71df239a2aec65d28e54c5207f3e5b2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Adrian/distilbert-base-uncased-finetuned-squad-colab
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
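For reference, a minimal sketch of an equivalent SQuAD v2 evaluation is given below. The model id and column mapping come from this card; the small validation slice and the zero no-answer probability are simplifications made for brevity rather than the exact AutoTrain procedure.
```python
# Hedged sketch of extractive QA evaluation on squad_v2; assumed defaults throughout.
from datasets import load_dataset
from transformers import pipeline
import evaluate

data = load_dataset("squad_v2", split="validation[:16]")  # small slice for illustration
qa = pipeline(
    "question-answering",
    model="Adrian/distilbert-base-uncased-finetuned-squad-colab",  # model on this card
)
metric = evaluate.load("squad_v2")

preds, refs = [], []
for ex in data:
    # squad_v2 includes unanswerable questions, so let the pipeline return "no answer".
    out = qa(question=ex["question"], context=ex["context"], handle_impossible_answer=True)
    preds.append({
        "id": ex["id"],
        "prediction_text": out["answer"],
        "no_answer_probability": 0.0,  # simplification; a real run would estimate this
    })
    refs.append({"id": ex["id"], "answers": ex["answers"]})
print(metric.compute(predictions=preds, references=refs))
```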
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-c76793-16626245 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-15T04:51:48+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "Adrian/distilbert-base-uncased-finetuned-squad-colab", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-09-15T04:55:06+00:00 |
cc9a1b600ae3a78649cb2aed244118c15eccadc4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/distilbert-base-uncased-finetuned-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-c76793-16626243 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-15T04:51:48+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "21iridescent/distilbert-base-uncased-finetuned-squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-09-15T04:55:06+00:00 |
15a694a839c2cac55ecb0a6dc6a7ff1dfc395b2c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Akari/albert-base-v2-finetuned-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-c76793-16626246 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-15T04:51:48+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "Akari/albert-base-v2-finetuned-squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-09-15T04:57:19+00:00 |
3ff4b745deb79d6834359d9e3d9d38fbecad9a80 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/distilroberta-base-finetuned-squad2-lwt
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-c76793-16626244 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-15T04:51:49+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "21iridescent/distilroberta-base-finetuned-squad2-lwt", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-09-15T04:55:14+00:00 |
57b74ba8affbdcd36661fcd37b7b315f83c3cb31 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Akihiro2/bert-finetuned-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-c76793-16626247 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-15T04:57:00+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "Akihiro2/bert-finetuned-squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-09-15T05:01:39+00:00 |
307626be4df7c25e14c9e122770bea7b5c4b0a6d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: AyushPJ/test-squad-trained-finetuned-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-c76793-16626248 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-15T04:59:32+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "AyushPJ/test-squad-trained-finetuned-squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-09-15T05:02:49+00:00 |
c036789ee389f8b75efc172316b8153ead77708e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: haritzpuerto/MiniLM-L12-H384-uncased-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@timbmg](https://huggingface.co/timbmg) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-07bda3-16636249 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-15T04:59:33+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "haritzpuerto/MiniLM-L12-H384-uncased-squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-09-15T05:03:24+00:00 |
14c2a7d0daa831f77cf485eda29f3b92bf5a9cb9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mrm8488/longformer-base-4096-finetuned-squadv2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Liam-Scott-Russell](https://huggingface.co/Liam-Scott-Russell) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-972433-16666252 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-15T05:05:57+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "mrm8488/longformer-base-4096-finetuned-squadv2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-09-15T06:07:27+00:00 |
15682493fdc60cef5b127c181d01e86de54decb5 | nelsano77/nelsano0077 | [
"license:cc-by-nd-4.0",
"region:us"
] | 2022-09-15T05:13:51+00:00 | {"license": "cc-by-nd-4.0"} | 2022-09-15T05:16:33+00:00 |