sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
eae636f52231308429ea7b022850ba84f4cfd02b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nlpconnect/roberta-base-squad2-nq
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ankur310794](https://huggingface.co/ankur310794) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-96a02c9c-11975602 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-27T09:24:18+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "nlpconnect/roberta-base-squad2-nq", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-27T09:27:23+00:00 |
201d9a9e3d04b1bc66894808a1699731e3d45c0b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nlpconnect/roberta-base-squad2-nq
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ankur310794](https://huggingface.co/ankur310794) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad-ef91144d-11985603 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-27T09:43:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "nlpconnect/roberta-base-squad2-nq", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-27T09:45:45+00:00 |
e24270fa1657929a060d81dc258fee812b3905f6 |
# Dataset Card for bc2gm_corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/spyysalo/bc2gm-corpus/)
- **Repository:** [Github](https://github.com/spyysalo/bc2gm-corpus/)
- **Paper:** [NCBI](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2559986/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `id`: Sentence identifier.
- `tokens`: Array of tokens composing a sentence.
- `ner_tags`: Array of tags, where `0` indicates no gene mentioned, `1` signals the first token of a gene mention and `2` the subsequent gene-mention tokens (see the loading sketch below).
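To make the tag scheme above concrete, here is a minimal loading sketch. It assumes the Hugging Face `datasets` library and the upstream `bc2gm_corpus` loading script that this repository mirrors; the `O`/`B-GENE`/`I-GENE` names are an assumption based on the usual IOB convention for this corpus.

```python
from datasets import load_dataset

# Older script-based dataset: recent `datasets` versions may require
# trust_remote_code=True (or a Parquet mirror) to load it.
dataset = load_dataset("bc2gm_corpus", split="train")

tag_names = ["O", "B-GENE", "I-GENE"]  # 0, 1, 2 as described above
example = dataset[0]
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{tag_names[tag]}")
```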
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@mahajandiwakar](https://github.com/mahajandiwakar) for adding this dataset.
| chintagunta85/bc2gm_test | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | 2022-07-27T11:20:18+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "Bc2GmCorpus"} | 2022-07-28T13:16:43+00:00 |
059927b91122a6827e7dbb4f296f6da8f5dcee1c | kiddothe2b/contract-nli | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-07-27T11:36:23+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2022-07-27T12:07:52+00:00 |
|
3575c59559542b22c2fdebcbfeac364b9b9e017c | prubach/knotprotSequences | [
"license:apache-2.0",
"region:us"
] | 2022-07-27T11:50:21+00:00 | {"license": "apache-2.0"} | 2022-07-27T13:59:51+00:00 |
|
1bca1af003ec196c15d46b370ee4241b26918666 | moyix/debian_csrc | [
"license:mit",
"region:us"
] | 2022-07-27T15:42:52+00:00 | {"license": "mit"} | 2022-07-27T19:54:47+00:00 |
|
f105b9d763743e20d2f3b8e33f73055ad414e7c5 |
# Dataset Card for Legal Advice Reddit Dataset
## Dataset Description
- **Paper: [Parameter-Efficient Legal Domain Adaptation](https://aclanthology.org/2022.nllp-1.10/)**
- **Point of Contact: [email protected]**
### Dataset Summary
New dataset introduced in [Parameter-Efficient Legal Domain Adaptation](https://aclanthology.org/2022.nllp-1.10) (Li et al., NLLP 2022) from the Legal Advice Reddit community (known as "/r/legaladvice"), sourcing the Reddit posts from the Pushshift
Reddit dataset. The dataset maps the text and title of each legal question posted into one of eleven classes, based on the original Reddit
post's "flair" (i.e., tag). Questions are typically informal and use non-legal-specific language. Per the Legal Advice Reddit rules, posts
must be about actual personal circumstances or situations. We limit the number of labels to the top eleven classes and remove the other
samples from the dataset.
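A minimal loading sketch follows; the repository id and split name come from this card, while the exact column names are not documented here, so the sketch prints them rather than assuming any.

```python
from datasets import load_dataset

# Inspect the schema and one example first, since the card does not
# confirm column names (e.g. whether the flair is stored as a label id).
dset = load_dataset("jonathanli/legal-advice-reddit", split="train")
print(dset.column_names)
print(dset[0])
```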
### Citation Information
```
@inproceedings{li-etal-2022-parameter,
title = "Parameter-Efficient Legal Domain Adaptation",
author = "Li, Jonathan and
Bhambhoria, Rohan and
Zhu, Xiaodan",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.nllp-1.10",
pages = "119--129",
}
``` | jonathanli/legal-advice-reddit | [
"language:en",
"reddit",
"law",
"region:us"
] | 2022-07-27T19:19:25+00:00 | {"language": ["en"], "pretty_name": "Legal Advice Reddit", "tags": ["reddit", "law"]} | 2023-02-23T16:39:28+00:00 |
a125fdedddadfc82908c3000165134876eb6a090 | testing an audio dataset | benfoley/test-dataset | [
"region:us"
] | 2022-07-27T22:39:14+00:00 | {} | 2022-07-27T22:41:15+00:00 |
6af7a842f6fc38d0a5d963fd44deaf1681935819 | oisinoh/tomatos | [
"region:us"
] | 2022-07-27T23:54:05+00:00 | {"viewer": true} | 2022-07-28T00:12:09+00:00 |
|
6d7d0e843d195bae3df7338b261551080ed395f2 | commanderstrife/jnlpba | [
"license:apache-2.0",
"region:us"
] | 2022-07-28T04:04:33+00:00 | {"license": "apache-2.0"} | 2022-07-28T05:46:36+00:00 |
|
4c31442562033cbc26c7f3d86e5236d082ea6799 | hong/zoosdataset | [
"region:us"
] | 2022-07-28T04:20:58+00:00 | {} | 2022-07-28T04:21:23+00:00 |
|
586c8a9acf05865650594e634cb88ef3d4938136 | for trainninf
| Slepp/train | [
"region:us"
] | 2022-07-28T05:56:58+00:00 | {} | 2022-07-28T07:18:50+00:00 |
f6f04d6b8f8df133c3aa570f81b395b0c99b9fe7 | validation set | Slepp/validation | [
"region:us"
] | 2022-07-28T06:53:43+00:00 | {} | 2022-07-28T07:01:43+00:00 |
09013b8be5f523de806f9c21c548d2d6e7d92a02 |
# Dataset Card for RedCaps
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Information](#dataset-information)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [RedCaps homepage](https://redcaps.xyz/)
- **Repository:** [RedCaps repository](https://github.com/redcaps-dataset/redcaps-downloader)
- **Paper:** [RedCaps: web-curated image-text data created by the people, for the people](https://arxiv.org/abs/2111.11431)
- **Leaderboard:**
- **Point of Contact:** [Karan Desai](mailto:[email protected])
### Dataset Summary
RedCaps is a large-scale dataset of 12M image-text pairs collected from Reddit.
Images and captions from Reddit depict and describe a wide variety of objects and scenes.
The data is collected from a manually curated set of subreddits (350 total),
which give coarse image labels and allow steering of the dataset composition
without labeling individual instances. RedCaps data is created *by the people, for the people*: it contains everyday things that users like to share on social media, for example hobbies (r/crafts) and pets (r/shiba). Captions often contain specific and
fine-grained descriptions (northern cardinal, taj mahal). Subreddit names provide relevant image
labels (r/shiba) even when captions may not (mlem!), and sometimes may group many visually
unrelated images through a common semantic meaning (r/perfectfit).
### Dataset Preprocessing
This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib.request

import PIL.Image

from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent

USER_AGENT = get_datasets_user_agent()


def fetch_single_image(image_url, timeout=None, retries=0):
    # Retry the download up to `retries` extra times; return None on failure.
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image


def fetch_images(batch, num_threads, timeout=None, retries=0):
    # Download all images in the batch concurrently.
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
    return batch


num_threads = 20
dset = load_dataset("red_caps", "rabbits_2017")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
Some image links point to more than one image. You can process and download those as follows:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import os
import re
import urllib.request

import PIL.Image

import datasets
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent

USER_AGENT = get_datasets_user_agent()


def fetch_single_image(image_url, timeout=None, retries=0):
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image


def fetch_images(batch, num_threads, timeout=None, retries=0):
    # Each example now holds a *list* of URLs, so fetch every URL per example.
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(
            executor.map(
                lambda image_urls: [fetch_single_image_with_args(image_url) for image_url in image_urls],
                batch["image_url"],
            )
        )
    return batch


def process_image_urls(batch):
    # Split multi-image links (mostly Imgur galleries) into one URL per image.
    processed_batch_image_urls = []
    for image_url in batch["image_url"]:
        processed_example_image_urls = []
        image_url_splits = re.findall(r"http\S+", image_url)
        for image_url_split in image_url_splits:
            if "imgur" in image_url_split and "," in image_url_split:
                for image_url_part in image_url_split.split(","):
                    if not image_url_part:
                        continue
                    image_url_part = image_url_part.strip()
                    root, ext = os.path.splitext(image_url_part)
                    if not root.startswith("http"):
                        root = "http://i.imgur.com/" + root
                    root = root.split("#")[0]
                    if not ext:
                        ext = ".jpg"
                    ext = re.split(r"[?%]", ext)[0]
                    image_url_part = root + ext
                    processed_example_image_urls.append(image_url_part)
            else:
                processed_example_image_urls.append(image_url_split)
        processed_batch_image_urls.append(processed_example_image_urls)
    batch["image_url"] = processed_batch_image_urls
    return batch


dset = load_dataset("red_caps", "rabbits_2017")
dset = dset.map(process_image_urls, batched=True, num_proc=4)
features = dset["train"].features.copy()
features["image"] = datasets.Sequence(datasets.Image())
num_threads = 20
dset = dset.map(fetch_images, batched=True, batch_size=100, features=features, fn_kwargs={"num_threads": num_threads})
```
Note that in the above code, we use the `datasets.Sequence` feature to represent a list of images for the multi-image links.
### Supported Tasks and Leaderboards
From the paper:
> We have used our dataset to train deep neural networks that perform image captioning, and
that learn transferable visual representations for a variety of downstream visual recognition tasks
(image classification, object detection, instance segmentation).
> We anticipate that the dataset could be used for a variety of vision-and-language (V&L) tasks,
such as image or text retrieval or text-to-image synthesis.
### Languages
All of the subreddits in RedCaps use English as their primary language.
## Dataset Structure
### Data Instances
Each instance in RedCaps represents a single Reddit image post:
```
{
  'image_id': 'bpzj7r',
  'author': 'djasz1',
  'image_url': 'https://i.redd.it/ho0wntksivy21.jpg',
  'raw_caption': 'Found on a friend’s property in the Keys FL. She is now happily living in my house.',
  'caption': "found on a friend's property in the keys fl. she is now happily living in my house.",
  'subreddit': 3,
  'score': 72,
  'created_utc': datetime.datetime(2019, 5, 18, 1, 36, 41),
  'permalink': '/r/airplants/comments/bpzj7r/found_on_a_friends_property_in_the_keys_fl_she_is/',
  'crosspost_parents': None
}
```
### Data Fields
- `image_id`: Unique alphanumeric ID of the image post (assigned by Reddit).
- `author`: Reddit username of the image post author.
- `image_url`: Static URL for downloading the image associated with the post.
- `raw_caption`: Textual description of the image, written by the post author.
- `caption`: Cleaned version of "raw_caption" by us (see Q35).
- `subreddit`: Name of subreddit where the post was submitted.
- `score`: Net upvotes (discounting downvotes) received by the image post. This field is equal to `None` if the image post is a crosspost.
- `created_utc`: Integer time epoch (in UTC) when the post was submitted to Reddit.
- `permalink`: Partial URL of the Reddit post (https://reddit.com/<permalink>).
- `crosspost_parents`: List of parent posts. This field is optional.
### Data Splits
All the data is contained in the training set. The training set has nearly 12M (12,011,111) instances.
From the paper:
> We intend our dataset to be primarily used for pre-training with one or more specific downstream task(s) in mind. Hence, all instances in our dataset would be used for training while
the validation split is derived from downstream task(s). If users require a validation split, we
recommend sampling it such that it follows the same subreddit distribution as entire dataset.
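A minimal sketch of that recommendation, assuming `subreddit` is a `ClassLabel` feature (as the integer value in the data instance above suggests), so that `train_test_split` can stratify on it:

```python
from datasets import load_dataset

# Carve out a validation split that preserves the per-subreddit distribution.
# Assumption: "subreddit" is a ClassLabel column, which stratify_by_column requires.
dset = load_dataset("red_caps", "rabbits_2017", split="train")
split = dset.train_test_split(test_size=0.05, stratify_by_column="subreddit", seed=0)
train_set, valid_set = split["train"], split["test"]
print(len(train_set), len(valid_set))
```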
## Dataset Creation
### Curation Rationale
From the paper:
> Large datasets of image-text pairs are widely used for pre-training generic representations
that transfer to a variety of downstream vision and vision-and-language tasks. Existing public
datasets of this kind were curated from search engine results (SBU Captions [1]) or HTML
alt-text from arbitrary web pages (Conceptual Captions [2, 31]). They performed complex
data filtering to deal with noisy web data. Due to aggressive filtering, their data collection is
inefficient and diversity is artificially suppressed. We argue that the quality of data depends on
its source, and the human intent behind its creation. In this work, we explore Reddit, a social
media platform, for curating high quality data. We introduce RedCaps, a large dataset of
12M image-text pairs from Reddit. While we expect the use-cases of RedCaps to be similar to
existing datasets, we discuss how Reddit as a data source leads to fast and lightweight collection,
better data quality, lets us easily steer the data distribution, and facilitates ethically responsible data curation.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> **Data Collection Pipeline**
Reddit's uniform structure allows us to parallelize data collection as independent tasks: each task
involves collecting posts submitted to a single subreddit in one year. Our collection pipeline has three steps: (1) subreddit selection, (2) image post filtering, and (3) caption cleaning.
**Step 1**. Subreddit selection: We collect data from a manually curated set of subreddits. Subreddits
have their own rules, community norms, and moderators so curating subreddits allows us to steer the
dataset's composition without annotating individual instances. We select subreddits with a high volume of image posts, where images tend to be photographs (rather than memes, drawings, screenshots,
etc) and post titles tend to describe image content (rather than making jokes, political commentary,
etc). We do not select any NSFW, banned, or quarantined subreddits. We want to minimize the
number of people that appear in RedCaps, so we omit subreddits whose primary purpose is to share or
comment on images of people (such as celebrity pics or user selfies). We choose subreddits focused on
general photography (r/pics, r/itookapicture), animals (r/axolotls, r/birdsofprey, r/dachshund),
plants (r/roses, r/succulents), objects (r/classiccars, r/trains, r/mechanicalkeyboards), food
(r/steak, r/macarons), scenery (r/cityporn, r/desertporn), or activities (r/carpentry, r/kayaking).
In total we collect data from 350 subreddits; the full list can be found in Appendix A.
**Step 2**. Image post filtering: We use Pushshift [41] and Reddit [42, 43] APIs to download all image
posts submitted to our selected subreddits from 2008–2020. Posts are collected at least six months
after their creation to let upvotes stabilize. We only collect posts with images hosted on three domains:
Reddit (i.redd.it), Imgur (i.imgur.com), and Flickr (staticflickr.com). Some image posts contain
multiple images (gallery posts); in this case we only collect the first image and associate it with
the caption. We discard posts with < 2 upvotes to avoid unappealing content, and we discard posts
marked NSFW (by their authors or subreddit moderators) to avoid pornographic or disturbing content.
**Step 3**. Caption cleaning: We expect Reddit post titles to be less noisy than other large-scale
sources of image captions such as alt-text [2, 31], so we apply minimal text cleaning. We lowercase
captions and use ftfy [44] to remove character accents, emojis, and non-latin characters, following
[29, 35, 36]. Then we apply simple pattern matching to discard all sub-strings enclosed in brackets
((.*), [.*]). These sub-strings usually give non-semantic information: original content tags [oc],
image resolutions (800x600 px), camera specs (shot with iPhone), self-promotion [Instagram:
@user], and other references (link in comments). Finally, like [31] we replace social media
handles (words starting with "@") with a [USR] token to protect user privacy and reduce redundancy.
Due to such filtering, ∼12K (0.1%) captions in our dataset are empty strings. We do not discard them,
as subreddit names alone provide meaningful supervision. Unlike CC-3M or CC-12M that discard
captions without nouns or that don't overlap image tags, we do not discard any instances in this step.
Through this pipeline, we collect 13.4M instances from 350 subreddits. Our collection pipeline is
less resource-intensive than existing datasets: we do not require webpage crawlers, search engines,
or large databases of indexed webpages. RedCaps is easily extensible in the future by selecting more
subreddits and collecting posts from future years. Next, we perform additional filtering to mitigate
user privacy risks and harmful stereotypes in RedCaps, resulting in a final size of 12M instances.
#### Who are the source language producers?
Reddit is the singular data source for RedCaps.
### Annotations
#### Annotation process
The dataset is built using a fully automatic data collection pipeline which doesn't require any human annotators.
#### Who are the annotators?
The annotation process doesn't require any human annotators.
### Personal and Sensitive Information
From the paper:
> **Does the dataset relate to people?**
The dataset pertains to people in that people wrote the captions and posted images to Reddit
that we curate in RedCaps. We made specific design choices while curating RedCaps to avoid
large quantities of images containing people:
(a) We collect data from manually curated subreddits, most of which primarily pertain
to animals, objects, places, or activities. We exclude all subreddits whose primary purpose
is to share and describe images of people (such as celebrity photos or user selfies).
(b) We use an off-the-shelf face detector to find and remove images with potential presence of
human faces. We manually checked 50K random images in RedCaps (Q16) and found 79
images with identifiable human faces; the entire dataset may have ∼19K (0.15%) images
with identifiable people. Refer Section 2.2 in the main paper.
> **Is it possible to identify one or more natural persons, either directly or indirectly (i.e., in
combination with other data) from the dataset?**
Yes, all instances in RedCaps include Reddit usernames of their post authors. This could be
used to look up the Reddit user profile, and some Reddit users may have identifying information
in their profiles. Some images may contain human faces which could be identified by
appearance. However, note that all this information is already public on Reddit, and searching it
in RedCaps is no easier than searching directly on Reddit.
> **Were the individuals in question notified about the data collection?**
No. Reddit users are anonymous by default, and are not required to share their personal contact
information (email, phone numbers, etc.). Hence, the only way to notify the authors of RedCaps
image posts is by sending them private messages on Reddit. This is practically difficult to do
manually, and programmatically sending a templated message to millions of users would be
classified as spam and blocked by Reddit.
> **Did the individuals in question consent to the collection and use of their data?**
Users did not explicitly consent to the use of their data in our dataset. However, by uploading
their data on Reddit, they consent that it would appear on the Reddit platform and will be
accessible via the official Reddit API (which we use to collect RedCaps).
> **If consent was obtained, were the consenting individuals provided with a mechanism to
revoke their consent in the future or for certain uses?**
Users have full control over the presence of their data in our dataset. If users wish to revoke
their consent, they can delete the underlying Reddit post; it will be automatically removed
from RedCaps since we distribute images as URLs. Moreover, we provide an opt-out request
form on our dataset website for anybody to request removal of an individual instance if it is
potentially harmful (e.g. NSFW, violates privacy, harmful stereotypes, etc.).
## Considerations for Using the Data
### Social Impact of Dataset
From the paper:
> **Has an analysis of the potential impact of the dataset and its use on data subjects (e.g.,
a data protection impact analysis) been conducted?**
No.
### Discussion of Biases
From the paper:
> **Harmful Stereotypes**: Another concern with
Reddit data is that images or language may represent harmful stereotypes about gender, race, or other
characteristics of people [48, 49, 51]. We select only non-NSFW subreddits with active moderation
for collecting data. This stands in contrast to less curated uses of Reddit data, such as GPT-2 [35]
whose training data includes at least 63K documents from banned or quarantined subreddits which
may contain toxic language [53]. We attempt to further reduce harmful stereotypes in two ways:
> * **NSFW images**: We use the InceptionV3 [54] model from [55] to filter images detected as porn or hentai with confidence ≥ 0.9. Similar to face filtering, we estimated precision of our filtering and estimated amount of missed detections, shown in Table 1. The model detects 87K images with low
precision (∼1%); most detections are non-NSFW images with pink and beige hues.
> * **Potentially derogatory language**: We filter instances whose captions contain words or phrases from a common blocklist [56]. It is important to note that such coarse filtering might suppress language from marginalized groups reclaiming slurs [51]; however, as RedCaps is not intended to describe people, we believe this is a pragmatic tradeoff to avoid propagating harmful labels.
> **Reddit demographics**: Reddit's user demographics are not representative of the population at large.
Compared to US adults, Reddit users skew male (69% vs 49%), young (58% 18-29 years old vs
22%), college educated (36% vs 28%), and politically liberal (41% vs 25%) [57]. Reddit users
are predominantly white (63%) [57], and 49% of desktop traffic to Reddit comes from the United
States [58]. All of the subreddits in RedCaps use English as their primary language. Taken together,
these demographic biases likely also bias the types of objects and places that appear in images on
Reddit, and the language used to describe these images. We do not offer explicit countermeasures to
these biases, but users of RedCaps should keep in mind that size doesn't guarantee diversity [51].
Subtler issues may also exist, such as imbalanced representation of demographic groups [59] or
gender bias in object co-occurrence [60] or language [61]. These are hard to control in internet
data, so we release RedCaps with explicit instructions on suitable use-cases; specifically requesting models not be trained to identify people, or make decisions that impact people. We document these instructions and other terms-of-use in a datasheet [45], provided in Appendix G.
> **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?**
The scale of RedCaps means that we are unable to verify the contents of all images and
captions. However we have tried to minimize the possibility that RedCaps contains data that
might be offensive, insulting, threatening, or might cause anxiety via the following mitigations:
(a) We manually curate the set of subreddits from which to collect data; we only chose
subreddits that are not marked NSFW and which generally contain non-offensive content.
(b) Within our curated subreddits, we did not include any posts marked NSFW.
(c) We removed all instances whose captions contained any of the 400 potentially offensive
words or phrases. Refer Section 2.2 in the main paper.
(d) We remove all instances whose images were flagged NSFW by an off-the-shelf detector.
We manually checked 50K random images in RedCaps and found one image containing
nudity (exposed buttocks; no identifiable face). Refer Section 2.2 in the main paper
> **Does the dataset identify any subpopulations (e.g., by age, gender)?**
RedCaps does not explicitly identify any subpopulations. Since some images contain people
and captions are free-form natural language written by Reddit users, it is possible that some
captions may identify people appearing in individual images as part of a subpopulation.
> **Were any ethical review processes conducted (e.g., by an institutional review board)?**
We did not conduct a formal ethical review process via institutional review boards. However,
as described in Section 2.2 of the main paper and Q16 we employed several filtering mechanisms
to try and remove instances that could be problematic.
### Other Known Limitations
From the paper:
> **Are there any errors, sources of noise, or redundancies in the dataset?**
RedCaps is noisy by design since image-text pairs on the internet are noisy and unstructured.
Some instances may also have duplicate images and captions; Reddit users may have shared
the same image post in multiple subreddits. Such redundancies constitute a very small fraction
of the dataset, and should have almost no effect in training large-scale models.
> **Does the dataset contain data that might be considered confidential (e.g., data that is
protected by legal privilege or by doctor-patient confidentiality, data that includes the
content of individuals non-public communications)?**
No, the subreddits included in RedCaps do not cover topics that may be considered confidential. All posts were publicly shared on Reddit prior to inclusion in RedCaps.
## Additional Information
### Dataset Curators
From the paper:
> Four researchers at the University of Michigan (affiliated as of 2021) have created RedCaps:
Karan Desai, Gaurav Kaul, Zubin Aysola, and Justin Johnson.
### Licensing Information
The image metadata is licensed under the CC-BY 4.0 license. Additionally, uses of this dataset are subject to the Reddit API terms (https://www.reddit.com/wiki/api-terms) and users must comply with the Reddit User Agreement, Content Policy,
and Privacy Policy, all accessible at https://www.redditinc.com/policies.
From the paper:
> RedCaps should only be used for non-commercial research. RedCaps should not be used for any tasks that involve identifying features related to people (facial recognition, gender, age, ethnicity identification, etc.) or make decisions that impact people (mortgages, job applications, criminal sentences; or moderation decisions about user-uploaded data that could result in bans from a website). Any commercial and for-profit uses of RedCaps are restricted: it should not be used to train models that will be deployed in production systems as part of a product offered by businesses or government agencies.
### Citation Information
```bibtex
@misc{desai2021redcaps,
title={RedCaps: web-curated image-text data created by the people, for the people},
author={Karan Desai and Gaurav Kaul and Zubin Aysola and Justin Johnson},
year={2021},
eprint={2111.11431},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | actdan2016/sample1 | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2111.11431",
"region:us"
] | 2022-07-28T06:58:41+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["image-to-text"], "task_ids": ["image-captioning"], "paperswithcode_id": "redcaps", "pretty_name": "RedCaps"} | 2022-08-29T01:12:39+00:00 |
40cc352405da6da57bd64ba785bd6a38ef3a4871 |
# Dataset Card for Old Book Illustrations
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://www.oldbookillustrations.com/)**
### Dataset Summary
The Old Book Illustrations dataset contains 4172 illustrations scanned from old books. This collection was collected & curated by the team of the website [Old Book Illustrations](https://www.oldbookillustrations.com/).
The webmaster of Old Book Illustrations kindly allowed us to scrape this information in order to create this dataset for the [BigLAM initiative](https://huggingface.co/biglam).
### Languages
The captions and descriptions are mostly in English but can contain some sentences from other languages such as French or German.
For instance you can find this description that contains a French sentence:
>The caption reads in the original French: Vue de l'aqueduc de Salones qui conduisait l'eau à Spalatro.
## Dataset Structure
Each row contains information gathered from the page of an illustration on the website [Old Book Illustrations](https://www.oldbookillustrations.com/). As of July 2022, there are 4172 illustrations in this dataset.
### Data Fields
* `rawscan`: the image as originally scanned from the book, without further processing
* `1600px`: the cleaned image, resized to a width of 1600 pixels (height can vary)
* `info_url`: URL to the illustration page on oldbookillustrations.com
* `info_src`: URL to an icon-sized version of the image
* `info_alt`: short description of the image
* `artist_name`: artist name
* `artist_birth_date`: birth date of the artist
* `artist_death_date`: death date of the artist
* `artist_countries`: list of the countries the artist is from
* `book_title`: original title of the book the illustration is extracted from
* `book_authors`: list of the authors of the book
* `book_publishers`: list of the publishers of the book
* `openlibrary-url`: URL to the openlibrary entry for the book
* `tags`: list of keywords for this illustration on oldbookillustrations.com
* `illustration_source_name`: list of the sources for this illustration
* `illustration_source_url`: list of the URL for these sources
* `illustration_subject`: category of the subject represented in the illustration
* `illustration_format`: category of the format of the illustration
* `image_title`: title of the image
* `image_caption`: caption of the image. Seems to be the caption that appears next to the image in the book, translated to English if in another language
* `image_description`: longer description of the image. If there is one, it also quotes the caption in the original language
* `rawscan_url`: URL to the rawscan image on oldbookillustrations.com
* `1600px_url`: URL to the cleaned image on oldbookillustrations.com (see the loading sketch below)
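A minimal loading sketch, using streaming to avoid downloading all of the images at once; the field names are taken from the list above.

```python
from datasets import load_dataset

# Stream one record and inspect a few of the fields listed above.
dataset = load_dataset("gigant/oldbookillustrations", split="train", streaming=True)
record = next(iter(dataset))
print(record["image_title"], "-", record["artist_name"])
record["1600px"].show()  # the cleaned scan, decoded as a PIL image
```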
## Dataset Creation
### Curation Rationale
This collection was collected & curated by the team of the website [Old Book Illustrations](https://www.oldbookillustrations.com/).
This version contains all the data that was available on the website as of July 2022, but the website is being actively maintained so if you want more old book illustrations, make sure to check [Old Book Illustrations](https://www.oldbookillustrations.com/).
### Source Data
#### Initial Data Collection and Normalization
Initial data is gathered from the website [Old Book Illustrations](https://www.oldbookillustrations.com/). The sources of the illustration scans are specified for each entry in the columns `illustration_source_name` and `illustration_source_url`.
### Personal and Sensitive Information
The Old Book Illustrations' Terms and conditions reads:
>OBI [Old Book Illustrations] explores the art of book illustrations within boundaries defined by time and age, not by subject, treatment, or intent. This means that some illustrations might be deemed offensive, disturbing, misleading, or otherwise objectionable. We do not endorse views or opinions the Illustrations may express, neither do we guarantee that the information conveyed by any Illustration is accurate.
## Considerations for Using the Data
### Discussion of Biases
The Old Book Illustrations' Terms and conditions reads:
>OBI [Old Book Illustrations] explores the art of book illustrations within boundaries defined by time and age, not by subject, treatment, or intent. This means that some illustrations might be deemed offensive, disturbing, misleading, or otherwise objectionable. We do not endorse views or opinions the Illustrations may express, neither do we guarantee that the information conveyed by any Illustration is accurate.
## Additional Information
### Dataset Curators
The Old Book Illustrations collection is curated and maintained by the team of the [Old Book Illustrations website](https://www.oldbookillustrations.com/).
### Licensing Information
[Old Book Illustrations](https://www.oldbookillustrations.com/) website reads:
>We don't limit the use of the illustrations available on our site, but we accept no responsibility regarding any problem, legal or otherwise, which might result from such use. More specifically, we leave it up to users to make sure that their project complies with the copyright laws of their country of residence. Text content (descriptions, translations, etc.) is published under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
The Old Book Illustrations webmaster mentioned that most images are public domain in the US and Europe, but there can be some exceptions. Examples are the illustrations from [*Early poems of William Morris*](https://www.oldbookillustrations.com/titles/early-poems-of-william-morris/), whose illustrator died in 1955, so her work is not public domain in Europe as of 2022, or [*Under the hill*](https://www.oldbookillustrations.com/titles/under-the-hill/), which was published in the US in 1928 and therefore is not public domain there.
### Citation Information
```bibtex
@misc{oldbookillustrations_2007,
url={https://www.oldbookillustrations.com/},
journal={Old Book Illustrations}, year={2007}}
```
### Contributions
Thanks to [@gigant](https://huggingface.co/gigant) ([@giganttheo](https://github.com/giganttheo)) for adding this dataset. | gigant/oldbookillustrations | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_categories:image-to-image",
"task_ids:image-captioning",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"language:fr",
"language:de",
"license:cc-by-nc-4.0",
"lam",
"1800-1900",
"region:us"
] | 2022-07-28T07:31:19+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en", "fr", "de"], "license": ["cc-by-nc-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-to-image", "image-to-text", "image-to-image"], "task_ids": ["image-captioning"], "pretty_name": "Old Book Illustrations", "tags": ["lam", "1800-1900"], "dataset_info": {"features": [{"name": "rawscan", "dtype": "image"}, {"name": "1600px", "dtype": "image"}, {"name": "info_url", "dtype": "string"}, {"name": "info_src", "dtype": "string"}, {"name": "info_alt", "dtype": "string"}, {"name": "artist_name", "dtype": "string"}, {"name": "artist_birth_date", "dtype": "string"}, {"name": "artist_death_date", "dtype": "string"}, {"name": "artist_countries", "sequence": "string"}, {"name": "book_title", "dtype": "string"}, {"name": "book_authors", "sequence": "string"}, {"name": "book_publishers", "sequence": "string"}, {"name": "date_published", "dtype": "string"}, {"name": "openlibrary-url", "dtype": "string"}, {"name": "tags", "sequence": "string"}, {"name": "illustration_source_name", "sequence": "string"}, {"name": "illustration_source_url", "sequence": "string"}, {"name": "illustration_subject", "dtype": "string"}, {"name": "illustration_format", "dtype": "string"}, {"name": "engravers", "sequence": "string"}, {"name": "image_title", "dtype": "string"}, {"name": "image_caption", "dtype": "string"}, {"name": "image_description", "dtype": "string"}, {"name": "rawscan_url", "dtype": "string"}, {"name": "1600px_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6402149401.7, "num_examples": 4154}], "download_size": 5098832185, "dataset_size": 6402149401.7}} | 2023-12-18T13:39:10+00:00 |
2c53f4b94137892d96c3bc4272028c3354c640a7 |
# Dataset Card for news-data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Dataset Curators](#dataset-curators)
### Dataset Summary
The News Dataset is an English-language dataset containing just over 4k unique news articles scraped from AriseTv, one of the most popular news television stations in Nigeria.
### Supported Tasks and Leaderboards
It supports news article classification into different categories.
### Languages
English
## Dataset Structure
### Data Instances
```
{'Title': 'Nigeria: APC Yet to Zone Party Positions Ahead of Convention',
 'Excerpt': 'The leadership of the All Progressives Congress (APC), has denied reports that it had zoned some party positions ahead of',
 'Category': 'politics',
 'labels': 2}
```
### Data Fields
* `Title`: a string containing the title of a news article
* `Excerpt`: a string containing a short extract from the body of the news article
* `Category`: a string naming the category of an example (string label)
* `labels`: an integer giving the class of an example (see the loading sketch below)
### Data Splits
| Dataset Split | Number of instances in split |
| ----------- | ----------- |
| Train | 4,594 |
| Test | 811 |
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The code for the dataset creation is at *https://github.com/chimaobi-okite/NLP-Projects-Competitions/blob/main/NewsCategorization/Data/NewsDataScraping.ipynb*. The examples were scraped from
<https://www.arise.tv/>.
### Annotations
#### Annotation process
The annotation is based on the news category in the [arisetv](https://www.arise.tv) website
#### Who are the annotators?
Journalists at arisetv
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop models that can classify news articles into categories.
This task is useful for efficiently presenting information given a large quantity of text. It should be made clear that any predictions produced by models trained on this dataset reflect the language used in the articles, but are in fact automatically generated.
### Discussion of Biases
This data is biased towards news events in Nigeria, but models built using it can still classify news from other parts of the world
with a slight degradation in performance.
### Dataset Curators
The dataset is created by people at Arise but was scraped by [@github-chimaobi-okite](https://github.com/chimaobi-okite/)
| okite97/news-data | [
"task_categories:text-classification",
"task_ids:topic-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:afl-3.0",
"region:us"
] | 2022-07-28T08:10:22+00:00 | {"annotations_creators": ["other"], "language_creators": ["found"], "language": ["en"], "license": ["afl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["topic-classification", "multi-class-classification"], "pretty_name": "News Dataset", "tags": []} | 2022-08-25T09:36:01+00:00 |
5aff92f9c824061b0781a5ff1bbf1e8246de5840 |
# Dataset Summary
This dataset is an enhanced version of existing offensive-language studies. Existing studies are highly imbalanced, and solving this problem is too costly. To solve this, we proposed a contextual data mining method for dataset augmentation. Our method removes the need to retrieve random tweets and label each one individually: we can directly access almost exactly the hate-related tweets and label them without any further human interaction, which solves the imbalanced-label problem.
In addition, existing studies *(listed in the References section)* were merged to create an even more comprehensive and robust dataset for the Turkish offensive language detection task.
The file train.csv contains 42,398 annotated tweets, test.csv contains 8,851, and valid.csv contains 1,756.
# Dataset Structure
A binary dataset with (0) Not Offensive and (1) Offensive tweets.
### Task and Labels
Offensive language identification:
- (0) Not Offensive - Tweet does not contain offense or profanity.
- (1) Offensive - Tweet contains offensive language or a targeted (veiled or direct) offense (see the loading sketch below).
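A minimal loading sketch, assuming the train/test/valid CSV files described above sit at the repository root; the column names are an assumption, so the sketch prints the header before counting labels.

```python
import pandas as pd

# Inspect each split's columns and count the values of the last column,
# which is assumed (not confirmed by the card) to hold the 0/1 label.
for name in ("train", "test", "valid"):
    df = pd.read_csv(f"{name}.csv")
    print(name, list(df.columns), df.iloc[:, -1].value_counts().to_dict())
```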
### Data Splits
| | train | test | dev |
|------:|:------|:-----|:-----|
| 0 (Not Offensive) | 22,589 | 4,436 | 1,402 |
| 1 (Offensive) | 19,809 | 4,415 | 354 |
### Citation Information
```
T. Tanyel, B. Alkurdi and S. Ayvaz, "Linguistic-based Data Augmentation Approach for Offensive Language Detection," 2022 7th International Conference on Computer Science and Engineering (UBMK), 2022, pp. 1-6, doi: 10.1109/UBMK55850.2022.9919562.
```
### Paper codes
https://github.com/tanyelai/lingda
# References
We merged open-source offensive language dataset studies in Turkish to increase contextuality with existing data even more, before our method is applied.
- https://huggingface.co/datasets/offenseval2020_tr
- https://github.com/imayda/turkish-hate-speech-dataset-2
- https://www.kaggle.com/datasets/kbulutozler/5k-turkish-tweets-with-incivil-content
| Toygar/turkish-offensive-language-detection | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:tr",
"license:cc-by-2.0",
"offensive-language-classification",
"region:us"
] | 2022-07-28T10:45:25+00:00 | {"annotations_creators": ["crowdsourced", "expert-generated"], "language_creators": ["crowdsourced"], "language": ["tr"], "license": ["cc-by-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "Turkish Offensive Language Detection Dataset", "tags": ["offensive-language-classification"]} | 2023-10-31T21:57:24+00:00 |
734a6f81948727f4a41a98aaac68a8dc7cd86cd8 | biglam/archives_parlementaires_revolution_francaise | [
"language:fr",
"license:cc-by-4.0",
"region:us"
] | 2022-07-28T12:39:47+00:00 | {"language": "fr", "license": "cc-by-4.0"} | 2022-09-05T10:53:04+00:00 |
|
15ba2479192e7cf974e4e295a7d721a650c06f03 |
# Dataset Card for "sciarg"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/anlausch/ArguminSci](https://github.com/anlausch/ArguminSci)
- **Repository:** [https://github.com/anlausch/ArguminSci](https://github.com/anlausch/ArguminSci)
- **Paper:** [An argument-annotated corpus of scientific publications](https://aclanthology.org/W18-5206.pdf)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
The SciArg dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing
fine-grained argumentative components and relations. It is the first argument-annotated corpus of scientific
publications (in English), which allows for joint analyses of argumentation and other rhetorical dimensions of
scientific writing.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `document_id`: the base file name, e.g. "A28"
- `text`: the parsed text of the scientific publication in the XML format
- `text_bound_annotations`: span annotations that mark argumentative discourse units (ADUs). Each entry has the following fields: `offsets`, `text`, `type`, and `id`.
- `relations`: binary relation annotations that mark the argumentative relations that hold between a head and a tail ADU. Each entry has the following fields: `id`, `head`, `tail`, and `type` where `head` and `tail` each have the fields: `ref_id` and `role` (see the sketch below).
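A minimal loading sketch; the repository id comes from this card, while `trust_remote_code=True` is an assumption for script-based datasets on recent `datasets` versions.

```python
from datasets import load_dataset

# Load the single train split and print one document's annotations.
sciarg = load_dataset("DFKI-SLT/sciarg", split="train", trust_remote_code=True)
doc = sciarg[0]
print(doc["document_id"])
print(doc["text_bound_annotations"][0])  # one ADU: offsets, text, type, id
print(doc["relations"][0])               # one relation: id, head, tail, type
```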
### Data Splits
The dataset consists of a single `train` split that has 40 documents.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{lauscher2018b,
title = {An argument-annotated corpus of scientific publications},
booktitle = {Proceedings of the 5th Workshop on Argument Mining},
publisher = {Association for Computational Linguistics},
author = {Lauscher, Anne and Glava\v{s}, Goran and Ponzetto, Simone Paolo},
address = {Brussels, Belgium},
year = {2018},
pages = {40--46}
}
```
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| DFKI-SLT/sciarg | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:dr inventor corpus",
"language:en",
"argument mining",
"scientific text",
"relation extraction",
"argumentative discourse unit recognition",
"region:us"
] | 2022-07-28T12:55:00+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["dr inventor corpus"], "task_categories": ["token-classification"], "task_ids": [], "pretty_name": "SciArg", "tags": ["argument mining", "scientific text", "relation extraction", "argumentative discourse unit recognition"]} | 2022-07-28T13:04:31+00:00 |
0af1841a59d37a07091ea69bce12947558fa4d55 | # Emoji Predictor
The dataset consists of raw tweets as text and an emoji as the label.
original dataset: https://huggingface.co/datasets/AlekseyDorkin/extended_tweet_emojis
- Fine-tuned model: https://huggingface.co/vincentclaes/emoji-predictor
- Try the model here: https://huggingface.co/spaces/vincentclaes/emoji-predictor | vincentclaes/emoji-predictor | [
"region:us"
] | 2022-07-28T13:05:10+00:00 | {} | 2022-09-20T13:38:38+00:00 |
e81ff8291dc22db23b272e9a5c393d322e530891 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_finetuned_sumpubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-ce219d86-12025605 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-28T18:53:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led_finetuned_sumpubmed", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-07-28T20:06:06+00:00 |
49bca9d76447b7dbe452b2a8a4426155c28df4ba | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: nbroad/longt5-base-global-mediasum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-ca1f103f-12035606 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-28T18:57:31+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "nbroad/longt5-base-global-mediasum", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-07-28T19:34:23+00:00 |
7b01ec427ea3d0e879e4e26ca3cdfa5ce6526ca9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: nbroad/longt5-base-global-mediasum
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-xsum-20a28003-12045607 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-28T19:00:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "nbroad/longt5-base-global-mediasum", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-07-28T19:27:48+00:00 |
399ed23149edf1be91a18fd8e60e3fea25262dfc |
## Dataset Description
- **Homepage:** the [Gatherer](https://gatherer.wizards.com/Pages/)
- **Repository:** https://github.com/alcazar90/croupier-mtg-dataset
### Dataset Summary
A dataset of card images covering four creature types from the Magic: The Gathering card game: elf, goblin, knight, and zombie.
## Dataset Creation
All card information from the Magic: The Gathering card game is publicly available from the
[Gatherer](https://gatherer.wizards.com/Pages/) website, the official Magic card database. This dataset is
a subset covering four kinds of creatures from the game.
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:found",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:apache-2.0",
"mgt",
"magic-card-game",
"creature-dataset",
"region:us"
] | 2022-07-28T20:18:49+00:00 | {"annotations_creators": ["found"], "language_creators": [], "language": [], "license": ["apache-2.0"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "pretty_name": "Croupier: a Magic the Gathering creatures dataset", "tags": ["mgt", "magic-card-game", "creature-dataset"]} | 2022-08-02T00:41:48+00:00 |
4075aa679683f3071d527283819637f3446ca488 | ## ProteinGym benchmarks overview
ProteinGym is an extensive set of Deep Mutational Scanning (DMS) assays curated to enable thorough comparisons of various mutation effect predictors in different regimes. It comprises two benchmarks: 1) a substitution benchmark consisting of the experimental characterisation of ~1.5M missense variants across 87 DMS assays, and 2) an indel benchmark that includes ~300k mutants across 7 DMS assays.
Each processed file in each benchmark corresponds to a single DMS assay, and contains the following three variables (a short usage sketch follows the list):
1) mutant (str):
- for the substitution benchmark, it describes the set of substitutions to apply to the reference sequence to obtain the mutated sequence (e.g., A1P:D2N means the amino acid 'A' at position 1 should be replaced by 'P', and 'D' at position 2 by 'N')
- for the indel benchmark, it corresponds to the full mutated sequence
2) DMS_score (float): corresponds to the experimental measurement in the DMS assay. Across all assays, the higher the DMS_score value, the higher the fitness of the mutated protein
3) DMS_score_bin (int): indicates whether the DMS_score is above the fitness cutoff (1 is fit, 0 is not fit)
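A minimal usage sketch in Python for one processed assay file; the file name below is hypothetical, and the column names follow the description above.

```python
import pandas as pd

# Hypothetical file name: each processed file corresponds to one DMS assay.
df = pd.read_csv("EXAMPLE_ASSAY.csv")

# Split mutants by the binarized fitness label (1 is fit, 0 is not fit).
fit = df[df["DMS_score_bin"] == 1]
unfit = df[df["DMS_score_bin"] == 0]

# Higher DMS_score means higher fitness across all assays.
top = df.sort_values("DMS_score", ascending=False)[["mutant", "DMS_score"]]
print(top.head())
```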
Additionally, we provide two reference files (ProteinGym_reference_file_substitutions.csv and ProteinGym_reference_file_indels.csv) that give further details on each assay and contain in particular:
- The UniProt_ID of the corresponding protein, along with taxon and MSA depth category
- The target sequence (target_seq) used in the assay
- Details on how the DMS_score was created from the raw files and how it was binarized
## Reference
If you use ProteinGym in your work, please cite the following paper:
```
Notin, P., Dias, M., Frazer, J., Marchena-Hurtado, J., Gomez, A., Marks, D.S., Gal, Y. (2022). Tranception: Protein Fitness Prediction with Autoregressive Transformers and Inference-time Retrieval. ICML.
```
## Links
- Pre-print: https://arxiv.org/abs/2205.13760
- Code: https://github.com/OATML-Markslab/Tranception | OATML-Markslab/ProteinGym | [
"arxiv:2205.13760",
"region:us"
] | 2022-07-28T21:55:30+00:00 | {} | 2022-07-28T23:12:02+00:00 |
e936ae69e3c70ff651d47889a389de6f596863b2 | ## ProteinGym benchmarks overview
ProteinGym is an extensive set of Deep Mutational Scanning (DMS) assays curated to enable thorough comparisons of various mutation effect predictors in different regimes. It comprises two benchmarks: 1) a substitution benchmark consisting of the experimental characterisation of ~1.5M missense variants across 87 DMS assays, and 2) an indel benchmark that includes ~300k mutants across 7 DMS assays.
Each processed file in each benchmark corresponds to a single DMS assay, and contains the following three variables:
1) mutant (str):
- for the substitution benchmark, it describes the set of substitutions to apply to the reference sequence to obtain the mutated sequence (e.g., A1P:D2N means the amino acid 'A' at position 1 should be replaced by 'P', and 'D' at position 2 by 'N')
- for the indel benchmark, it corresponds to the full mutated sequence
2) DMS_score (float): corresponds to the experimental measurement in the DMS assay. Across all assays, the higher the DMS_score value, the higher the fitness of the mutated protein
3) DMS_score_bin (int): indicates whether the DMS_score is above the fitness cutoff (1 is fit, 0 is not fit)
Additionally, we provide two reference files (ProteinGym_reference_file_substitutions.csv and ProteinGym_reference_file_indels.csv) that give further details on each assay and contain in particular:
- The UniProt_ID of the corresponding protein, along with taxon and MSA depth category
- The target sequence (target_seq) used in the assay
- Details on how the DMS_score was created from the raw files and how it was binarized
## Reference
If you use ProteinGym in your work, please cite the following paper:
```
Notin, P., Dias, M., Frazer, J., Marchena-Hurtado, J., Gomez, A., Marks, D.S., Gal, Y. (2022). Tranception: Protein Fitness Prediction with Autoregressive Transformers and Inference-time Retrieval. ICML.
```
## Links
- Pre-print: https://arxiv.org/abs/2205.13760
- Code: https://github.com/OATML-Markslab/Tranception
| ICML2022/ProteinGym | [
"arxiv:2205.13760",
"region:us"
] | 2022-07-28T22:16:18+00:00 | {} | 2022-07-28T23:19:31+00:00 |
65d7baf884b0ca8c02ad1f678b83904ccc1d2062 |
# YALTAi Tabular Dataset
## Table of Contents
- [YALTAi Tabular Dataset](#YALTAi-Tabular-Dataset)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://doi.org/10.5281/zenodo.6827706](https://doi.org/10.5281/zenodo.6827706)
- **Paper:** [https://arxiv.org/abs/2207.11230](https://arxiv.org/abs/2207.11230)
### Dataset Summary
This dataset contains a subset of data used in the paper [You Actually Look Twice At it (YALTAi): using an object detection approach instead of region segmentation within the Kraken engine](https://arxiv.org/abs/2207.11230). This paper proposes treating page layout recognition on historical documents as an object detection task (compared to the usual pixel segmentation approach). This dataset covers pages with tabular information with the following objects: "Header", "Col", "Marginal", and "Text".
### Supported Tasks and Leaderboards
- `object-detection`: This dataset can be used to train a model for object-detection on historic document images.
## Dataset Structure
This dataset has two configurations. These configurations both cover the same data and annotations but provide these annotations in different forms to make it easier to integrate the data with existing processing pipelines.
- The first configuration, `YOLO`, uses the data's original format.
- The second configuration converts the YOLO format into a format which is closer to the `COCO` annotation format. This is done to make it easier to work with the `feature_extractor`s from the `Transformers` models for object detection, which expect data to be in a COCO style format.
### Data Instances
An example instance from the COCO config:
``` python
{'height': 2944,
'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=2064x2944 at 0x7FA413CDA210>,
'image_id': 0,
'objects': [{'area': 435956,
'bbox': [0.0, 244.0, 1493.0, 292.0],
'category_id': 0,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 88234,
'bbox': [305.0, 127.0, 562.0, 157.0],
'category_id': 2,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 5244,
'bbox': [1416.0, 196.0, 92.0, 57.0],
'category_id': 2,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 5720,
'bbox': [1681.0, 182.0, 88.0, 65.0],
'category_id': 2,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 374085,
'bbox': [0.0, 540.0, 163.0, 2295.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 577599,
'bbox': [104.0, 537.0, 253.0, 2283.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 598670,
'bbox': [304.0, 533.0, 262.0, 2285.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 56,
'bbox': [284.0, 539.0, 8.0, 7.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 1868412,
'bbox': [498.0, 513.0, 812.0, 2301.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 307800,
'bbox': [1250.0, 512.0, 135.0, 2280.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 494109,
'bbox': [1330.0, 503.0, 217.0, 2277.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 52,
'bbox': [1734.0, 1013.0, 4.0, 13.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 90666,
'bbox': [0.0, 1151.0, 54.0, 1679.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []}],
'width': 2064}
```
An example instance from the YOLO config:
``` python
{'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=2064x2944 at 0x7FAA140F2450>,
'objects': {'bbox': [[747, 390, 1493, 292],
[586, 206, 562, 157],
[1463, 225, 92, 57],
[1725, 215, 88, 65],
[80, 1688, 163, 2295],
[231, 1678, 253, 2283],
[435, 1675, 262, 2285],
[288, 543, 8, 7],
[905, 1663, 812, 2301],
[1318, 1653, 135, 2280],
[1439, 1642, 217, 2277],
[1737, 1019, 4, 13],
[26, 1991, 54, 1679]],
'label': [0, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1]}}
```
### Data Fields
The fields for the YOLO config:
- `image`: the image
- `objects`: the annotations which consist of:
- `bbox`: a list of bounding boxes for the image
- `label`: a list of labels for this image
The fields for the COCO config (a loading sketch follows the list):
- `height`: height of the image
- `width`: width of the image
- `image`: image
- `image_id`: id for the image
- `objects`: annotations in COCO format, consisting of a list containing dictionaries with the following keys:
- `bbox`: bounding boxes for the images
- `category_id`: a label for the image
- `image_id`: id for the image
- `iscrowd`: COCO `iscrowd` flag
- `segmentation`: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)
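A minimal loading sketch for this config; the config name `"COCO"` is an assumption based on the two configurations described above.

```python
from datasets import load_dataset

# Config name assumed from the "COCO" configuration described above.
ds = load_dataset("biglam/yalta_ai_tabular_dataset", "COCO", split="train")

example = ds[0]
print(example["width"], example["height"])
for obj in example["objects"]:
    # COCO-style [x, y, width, height] boxes, as in the example instance above.
    print(obj["category_id"], obj["bbox"])
```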
### Data Splits
The dataset contains a train, validation and test split with the following numbers per split:
| | train | validation | test |
|----------|-------|------------|------|
| examples | 196 | 22 | 135 |
## Dataset Creation
> [this] dataset was produced using a single source, the Lectaurep Repertoires dataset [Rostaing et al., 2021], which served as a basis for only the training and development split. The testset is composed of original data, from various documents, from the 17th century up to the early 20th with a single soldier war report. The test set is voluntarily very different and out of domain with column borders that are not drawn nor printed in certain cases, layout in some kind of masonry layout. (p. 8)
### Curation Rationale
This dataset was created to produce a simplified version of the [Lectaurep Repertoires dataset](https://github.com/HTR-United/lectaurep-repertoires), which was found to contain:
> around 16 different ways to describe columns, from Col1 to Col7, the case-different col1-col7 and finally ColPair and ColOdd, which we all reduced to Col (p. 8)
### Source Data
#### Initial Data Collection and Normalization
The LECTAUREP (LECTure Automatique de REPertoires) project, which began in 2018, is a joint initiative of the Minutier central des notaires de Paris of the National Archives, the [ALMAnaCH (Automatic Language Modeling and Analysis & Computational Humanities)](https://www.inria.fr/en/almanach) team at Inria, and the EPHE (École Pratique des Hautes Études), in partnership with the Ministry of Culture.
> The lectaurep-bronod corpus brings together 100 pages from the repertoire of Maître Louis Bronod (1719-1765), notary in Paris from December 13, 1719 to July 23, 1765. The pages concerned were written during the years 1742 to 1745.
#### Who are the source language producers?
[More information needed]
### Annotations
| Label | Train | Dev | Test | Total | Average area | Median area |
|----------|-------|-----|------|-------|--------------|-------------|
| Col | 724 | 105 | 829 | 1658 | 9.32 | 6.33 |
| Header | 103 | 15 | 42 | 160 | 6.78 | 7.10 |
| Marginal | 60 | 8 | 0 | 68 | 0.70 | 0.71 |
| Text | 13 | 5 | 0 | 18 | 0.01 | 0.00 |
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
This data does not contain information relating to living individuals.
## Considerations for Using the Data
### Social Impact of Dataset
A growing number of datasets are related to page layout for historical documents. This dataset offers a different approach to annotating these datasets (focusing on object detection rather than pixel-level annotations). Improving document layout recognition can have a positive impact on downstream tasks, in particular Optical Character Recognition.
### Discussion of Biases
Historical documents contain a wide variety of page layouts. This means that the ability of models trained on this dataset to transfer to documents with very different layouts is not guaranteed.
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
```
@dataset{clerice_thibault_2022_6827706,
  author = {Clérice, Thibault},
title = {YALTAi: Tabular Dataset},
month = jul,
year = 2022,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.6827706},
url = {https://doi.org/10.5281/zenodo.6827706}
}
```
[](https://doi.org/10.5281/zenodo.6827706)
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
| biglam/yalta_ai_tabular_dataset | [
"task_categories:object-detection",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"size_categories:n<1K",
"license:cc-by-4.0",
"manuscripts",
"LAM",
"arxiv:2207.11230",
"region:us"
] | 2022-07-29T06:02:34+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": [], "license": ["cc-by-4.0"], "multilinguality": [], "size_categories": ["n<1K"], "source_datasets": [], "task_categories": ["object-detection"], "task_ids": [], "pretty_name": "YALTAi Tabular Dataset", "tags": ["manuscripts", "LAM"]} | 2022-10-23T20:56:38+00:00 |
3ab203bc05d2e413b5d7ac87c5329a18bb0539a9 | crazyofapple/CME-Chinese | [
"license:apache-2.0",
"region:us"
] | 2022-07-29T06:22:27+00:00 | {"license": "apache-2.0"} | 2022-07-29T06:39:55+00:00 |
|
2080deae0c89256bb023ad321b453dec5971b61a | PaddlePaddle/duconv | [
"license:apache-2.0",
"region:us"
] | 2022-07-29T09:53:42+00:00 | {"license": "apache-2.0"} | 2022-07-29T10:44:00+00:00 |
|
a50258122840d6603aa487849c3bbc60514998fd | awacke1/DNA-Aaron-C-Wacker-Open-Source-Genome-Project | [
"license:mit",
"region:us"
] | 2022-07-29T15:50:05+00:00 | {"license": "mit"} | 2022-07-29T15:50:05+00:00 |
|
28b5e31855abe0a51c2ebc4d89dfb8d2c20efeed | bambeusz/umie-xs | [
"license:afl-3.0",
"region:us"
] | 2022-07-29T16:28:24+00:00 | {"license": "afl-3.0"} | 2022-08-17T18:16:04+00:00 |
|
17a4a3f0eec731d9559d68707b3ce65bffc4bcf5 | language:
- en
language_creators:
- found
multilinguality:
- monolingual
pretty_name: hello
size_categories:
- '100K<n<1M' | pinecone/dl-doc-search | [
"region:us"
] | 2022-07-29T17:08:48+00:00 | {} | 2022-07-29T17:39:12+00:00 |
56834ba511d9eea394d1441de14c7da21bb23113 | LiptaphX/deneme | [
"license:afl-3.0",
"region:us"
] | 2022-07-29T20:33:01+00:00 | {"license": "afl-3.0"} | 2022-07-29T20:33:01+00:00 |
|
467c261e5016e4eede158b8f6cea7e0cbdb3f1ab | carbon225/lichess-elite | [
"license:cc0-1.0",
"region:us"
] | 2022-07-29T23:51:53+00:00 | {"license": "cc0-1.0"} | 2022-07-31T18:41:07+00:00 |
|
285490f2389cc194eb763409721ef3cf6d8fb075 | thocheat/vlsp | [
"license:other",
"region:us"
] | 2022-07-30T09:11:10+00:00 | {"license": "other"} | 2022-08-01T07:39:05+00:00 |
|
7b83c3f593b55b449e3c7b9bce665d55d5470b53 | fragom/full | [
"license:apache-2.0",
"region:us"
] | 2022-07-30T09:42:04+00:00 | {"license": "apache-2.0"} | 2022-07-30T10:10:05+00:00 |
|
ec4e46722c866c0e0bf1ad561b7bb8a4a5068995 |
This repository contains transcriptions, along with other metadata, for the VOA Ukrainian dataset (~398h).
Usage:
```python
from datasets import load_dataset
ds = load_dataset('Yehor/voa-uk-transcriptions', split='train')
for row in ds:
print(row['text'])
```
| Yehor/voa-uk-transcriptions | [
"language:uk",
"license:cc-by-4.0",
"region:us"
] | 2022-07-30T10:59:07+00:00 | {"language": ["uk"], "license": "cc-by-4.0"} | 2022-09-10T09:07:34+00:00 |
1c0214d65571139d86b310eadb2e6615be0df374 | FUNSD dataset | JetsonEarth/jet_funsd | [
"region:us"
] | 2022-07-30T13:38:48+00:00 | {} | 2022-07-30T13:49:35+00:00 |
50b19f4267f1528ffa926fe0112935d5bdf17597 | FUNSD | JetsonEarth/jetson_funsd | [
"region:us"
] | 2022-07-30T14:25:09+00:00 | {} | 2022-07-30T14:28:55+00:00 |
093085f8558cfd53de8e2c8f4ccc7b9e73dc22ae | # ExeBench: an ML-scale dataset of executable C functions
ExeBench is a dataset of millions of C functions paired with dependencies and metadata such that at least a subset of it can be executed with IO pairs. It is mainly intended for machine learning applications, but it is application-agnostic enough to have other usages.
Please read the paper for more information: https://dl.acm.org/doi/abs/10.1145/3520312.3534867.
Please see `examples/` in https://github.com/jordiae/exebench for examples.
## Usage
### Option 1: Using the helpers in this repo
```
git clone https://github.com/jordiae/exebench.git
cd exebench/
python -m venv venv
source venv/bin/activate
pip install -r requirements_examples.txt
PYTHONPATH="${PYTHONPATH}:$(pwd)" python examples/basic.py
```
### Option 2: Directly using the Hugginface Datasets library
```
!pip install datasets zstandard

from datasets import load_dataset

# Load a dataset split; in this case, the synthetic test split
dataset = load_dataset('jordiae/exebench', split='test_synth')
for e in dataset:
...
```
### Option 3: Directly download the dataset
Take a look at the files at: https://huggingface.co/datasets/jordiae/exebench/tree/main
The dataset consists of directories compressed with TAR. Inside each TAR, there is a series of jsonlines files compressed with zstandard.
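A minimal sketch for reading one of the extracted files; the file name below is hypothetical, and it assumes a zstandard-compressed jsonlines file as described above.

```python
import io
import json
import zstandard as zstd

# Hypothetical file name taken from inside one of the TAR archives.
with open("train_synth_simple_io_0.jsonl.zst", "rb") as fh:
    stream = zstd.ZstdDecompressor().stream_reader(fh)
    for line in io.TextIOWrapper(stream, encoding="utf-8"):
        record = json.loads(line)  # one C function with metadata per line
        print(sorted(record.keys()))
        break
```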
## Statistics and versions
This release corresponds to ExeBench v1.01, a version with some improvements over the original one presented in the paper. The statistics and studies presented in the paper remain consistent with this new version. The final splits of the new version consist of the following functions:
```
train_not_compilable: 2.357M
train_synth_compilable: 2.308373M
train_real_compilable: 0.675074M
train_synth_simple_io: 0.550116M
train_real_simple_io: 0.043769M
train_synth_rich_io: 0.097250M
valid_synth: 5k
valid_real: 2.133k
test_synth: 5k
test_real: 2.134k
```
The original dataset (v1.00) with the exact same data studied in the paper can be accessed on request at: https://huggingface.co/datasets/jordiae/exebench_legacy (please reach out for access)
## License
All C functions keep the original license as per their original Github repository (available in the metadata). All ExeBench contributions (I/O examples, boilerplate to run functions, etc) are released with an MIT license.
## Citation
```
@inproceedings{10.1145/3520312.3534867,
author = {Armengol-Estap\'{e}, Jordi and Woodruff, Jackson and Brauckmann, Alexander and Magalh\~{a}es, Jos\'{e} Wesley de Souza and O'Boyle, Michael F. P.},
title = {ExeBench: An ML-Scale Dataset of Executable C Functions},
year = {2022},
isbn = {9781450392730},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3520312.3534867},
doi = {10.1145/3520312.3534867},
abstract = {Machine-learning promises to transform compilation and software engineering, yet is frequently limited by the scope of available datasets. In particular, there is a lack of runnable, real-world datasets required for a range of tasks ranging from neural program synthesis to machine learning-guided program optimization. We introduce a new dataset, ExeBench, which attempts to address this. It tackles two key issues with real-world code: references to external types and functions and scalable generation of IO examples. ExeBench is the first publicly available dataset that pairs real-world C code taken from GitHub with IO examples that allow these programs to be run. We develop a toolchain that scrapes GitHub, analyzes the code, and generates runnable snippets of code. We analyze our benchmark suite using several metrics, and show it is representative of real-world code. ExeBench contains 4.5M compilable and 700k executable C functions. This scale of executable, real functions will enable the next generation of machine learning-based programming tasks.},
booktitle = {Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming},
pages = {50โ59},
numpages = {10},
keywords = {Code Dataset, Program Synthesis, Mining Software Repositories, C, Machine Learning for Code, Compilers},
location = {San Diego, CA, USA},
series = {MAPS 2022}
}
```
## Credits
We thank the AnghaBench authors for their type-inference-based generation of synthetic dependencies for C functions. This software, Psyche-C, can be found at: https://github.com/ltcmelo/psychec
## Contact
```
jordi.armengol.estape at ed.ac.uk
``` | jordiae/exebench | [
"region:us"
] | 2022-07-30T19:07:06+00:00 | {} | 2023-03-09T16:06:06+00:00 |
d2bde405fafdd53aa4f92ddf03b14a7e7533d660 |
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
 <td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></td>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
 <td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
 <td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
 <td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
 <td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
 <td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"inputs": "Sentence 1: Fue acadรฉmico en literatura metafรญsica, teologรญa y ciencias clรกsicas.\nSentence 2: Fue acadรฉmico en literatura metafรญsica, teologรญa y ciencia clรกsica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
"targets": "Yes"
}
```
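A minimal loading sketch, using streaming since the full dataset is large; the `en` config name is an assumption based on the per-language table below.

```python
from datasets import load_dataset

# Config name "en" is an assumption; see the per-language table below.
ds = load_dataset("bigscience/xP3all", "en", split="train", streaming=True)

for sample in ds:
    print(sample["inputs"])   # prompt fed to the model
    print(sample["targets"])  # target the model should generate
    break
```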
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
### Data Splits
The table below summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Because languages like `tw` consist only of single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.
|Language|Kilobytes|%|Samples|%|
|--------|------:|-:|---:|-:|
|tw|106288|0.11|265071|0.33|
|bm|107056|0.11|265180|0.33|
|ak|108096|0.11|265071|0.33|
|ca|110608|0.11|271191|0.33|
|eu|113008|0.11|281199|0.35|
|fon|113072|0.11|265063|0.33|
|st|114080|0.11|265063|0.33|
|ki|115040|0.12|265180|0.33|
|tum|116032|0.12|265063|0.33|
|wo|122560|0.12|365063|0.45|
|ln|126304|0.13|365060|0.45|
|as|156256|0.16|265063|0.33|
|or|161472|0.16|265063|0.33|
|kn|165456|0.17|265063|0.33|
|ml|175040|0.18|265864|0.33|
|rn|192992|0.19|318189|0.39|
|nso|229712|0.23|915051|1.13|
|tn|235536|0.24|915054|1.13|
|lg|235936|0.24|915021|1.13|
|rw|249360|0.25|915043|1.13|
|ts|250256|0.25|915044|1.13|
|sn|252496|0.25|865056|1.07|
|xh|254672|0.26|915058|1.13|
|zu|263712|0.26|915061|1.13|
|ny|272128|0.27|915063|1.13|
|ig|325232|0.33|950097|1.17|
|yo|352784|0.35|918416|1.13|
|ne|393680|0.39|315754|0.39|
|pa|523248|0.52|339210|0.42|
|gu|560688|0.56|347499|0.43|
|sw|566656|0.57|1130481|1.4|
|mr|666240|0.67|417269|0.52|
|bn|832720|0.83|428843|0.53|
|ta|926912|0.93|415433|0.51|
|te|1343232|1.35|584590|0.72|
|ur|1918272|1.92|855756|1.06|
|vi|3102512|3.11|1672106|2.07|
|code|4330752|4.34|2707724|3.34|
|hi|4403568|4.41|1554667|1.92|
|zh|4599440|4.61|3589234|4.43|
|id|4612256|4.62|2643418|3.27|
|ar|4683456|4.69|2160181|2.67|
|fr|6591120|6.6|5316403|6.57|
|pt|6886800|6.9|3752156|4.63|
|es|8587920|8.6|5413205|6.69|
|en|39252528|39.33|32740750|40.44|
|total|99807184|100.0|80956089|100.0|
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for HumanEval)
- Natural Language Inference
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
#### Additional [xP3all](https://huggingface.co/datasets/bigscience/xP3all) datasets
- Coreference Resolution
- [WSC (Fixed)](https://huggingface.co/datasets/super_glue)
- Sentence Completion
- [HellaSwag](https://huggingface.co/datasets/hellaswag)
- Translation
- [MultiEurlex](https://huggingface.co/datasets/multi_eurlex)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. | bigscience/xP3all | [
"task_categories:other",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100M<n<1B",
"language:ak",
"language:ar",
"language:as",
"language:bm",
"language:bn",
"language:ca",
"language:code",
"language:en",
"language:es",
"language:eu",
"language:fon",
"language:fr",
"language:gu",
"language:hi",
"language:id",
"language:ig",
"language:ki",
"language:kn",
"language:lg",
"language:ln",
"language:ml",
"language:mr",
"language:ne",
"language:nso",
"language:ny",
"language:or",
"language:pa",
"language:pt",
"language:rn",
"language:rw",
"language:sn",
"language:st",
"language:sw",
"language:ta",
"language:te",
"language:tn",
"language:ts",
"language:tum",
"language:tw",
"language:ur",
"language:vi",
"language:wo",
"language:xh",
"language:yo",
"language:zh",
"language:zu",
"license:apache-2.0",
"arxiv:2211.01786",
"region:us"
] | 2022-07-30T20:05:02+00:00 | {"annotations_creators": ["expert-generated", "crowdsourced"], "language": ["ak", "ar", "as", "bm", "bn", "ca", "code", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh", "zu"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": ["100M<n<1B"], "task_categories": ["other"], "pretty_name": "xP3", "programming_language": ["C", "C++", "C#", "Go", "Java", "JavaScript", "Lua", "PHP", "Python", "Ruby", "Rust", "Scala", "TypeScript"]} | 2023-05-30T14:51:40+00:00 |
8eaa388a192aa57a7f0d34a8b3757c6a3d14b712 | alvations/greg-eval | [
"license:cc0-1.0",
"region:us"
] | 2022-07-31T00:46:33+00:00 | {"license": "cc0-1.0"} | 2022-07-31T20:42:32+00:00 |
|
5aa6d7d0c90976162beb9e98f11df3bdae500118 | # Korean Proverb Collection v1.0
A dataset built by cleaning up the proverbs in Urimalsaem (우리말샘), the open dictionary of the National Institute of Korean Language.
- Proverbs containing words no longer used in modern Korean were removed
- Variant forms given in parentheses were removed
- Duplicate entries were merged
## Getting the original data
The original data, including the proverbs and their explanations, can be downloaded from Urimalsaem.
> Proverbs listed in the dictionary can be browsed with the 'Detailed Search' feature. Open 'Detailed Search' in Urimalsaem (the dictionary with the largest collection of proverbs) and select 'proverb' to list every proverb in the dictionary.
https://opendict.korean.go.kr/
According to the Urimalsaem terms of service:
- The 'Creative Commons Attribution-ShareAlike 2.0 Korea license' applies.
- Anyone may use the material freely, including for commercial purposes, with no special permission from the author required.
- To use the work, the following conditions must be met:
1. Attribution: the author must be credited whenever the material is used.
2. ShareAlike: when the material is modified to create a new work, that work must be distributed under the same license. | mansiksohn/opendict-korean-proverb | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:ko",
"license:cc-by-2.0",
"korean",
"proverb",
"region:us"
] | 2022-07-31T02:05:28+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["ko"], "license": ["cc-by-2.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "\ud55c\uad6d\uc5b4 \uc18d\ub2f4 \ubaa8\uc74c v1.0", "tags": ["korean", "proverb"]} | 2022-07-31T02:23:30+00:00 |
f3bbca4f1441cbc73a14973fb769302713d1a298 | beiergo/test | [
"license:apache-2.0",
"region:us"
] | 2022-07-31T04:12:54+00:00 | {"license": "apache-2.0"} | 2022-07-31T04:12:55+00:00 |
|
9ad3dd427c226e588642000394eae8a394c4c845 | Turkish poems scraped from antoloji.com. Features consists of id, poet name, poem rating and the poem.
| okg/turkish-poems | [
"task_categories:text-generation",
"task_categories:text-classification",
"task_ids:language-modeling",
"task_ids:text-scoring",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:tr",
"license:unknown",
"region:us"
] | 2022-07-31T09:09:54+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["tr"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["text-generation", "text-classification"], "task_ids": ["language-modeling", "text-scoring"], "pretty_name": "turkish-poems", "tags": []} | 2022-07-31T09:22:53+00:00 |
4c51ddbf5fdb05d80db8466d2a7eb9253e240dcf | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-a84cddd6-12085614 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-31T11:46:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-07-31T13:34:01+00:00 |
053020686dfa791746f5f3f463e4bc2875ba5ab2 | This dataset contains `<title, encoded_image>` pairs from [Medium](https://medium.com) articles. It was processed from the [Medium Articles Dataset (128k): Metadata + Images](https://www.kaggle.com/datasets/succinctlyai/medium-data) dataset on Kaggle.
The original images were processed in the following way (steps 1 and 2 are sketched in code after the list):
1. Given an image of size `(w, h)`, we cropped a square of size `(n, n)` from the center of the image, where `n = min(w, h)`.
2. The resulting `(n, n)` image was resized to `(256, 256)`.
3. The resulting `(256, 256)` image was encoded into image tokens via the [dalle-mini/vqgan\_imagenet\_f16\_16384](https://huggingface.co/dalle-mini/vqgan_imagenet_f16_16384) model.
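A minimal Pillow sketch of steps 1 and 2; the resampling filter is not stated in the source, so `LANCZOS` here is an assumption.

```python
from PIL import Image

def center_crop_resize(path: str, size: int = 256) -> Image.Image:
    """Steps 1-2: crop the central (n, n) square, then resize to (size, size)."""
    img = Image.open(path)
    w, h = img.size
    n = min(w, h)
    left, top = (w - n) // 2, (h - n) // 2
    square = img.crop((left, top, left + n, top + n))
    return square.resize((size, size), Image.LANCZOS)

processed = center_crop_resize("article_image.jpg")  # hypothetical input path
```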
Note that this dataset contains ~128k entries and is too small for training a text-to-image model end to end; it is more suitable for operations on a pre-trained model
like [dalle-mini](https://huggingface.co/dalle-mini/dalle-mini) (fine-tuning, [prompt tuning](https://arxiv.org/pdf/2104.08691.pdf), etc.). | succinctly/medium-titles-and-images | [
"license:apache-2.0",
"arxiv:2104.08691",
"region:us"
] | 2022-07-31T16:24:50+00:00 | {"license": "apache-2.0"} | 2022-07-31T16:44:16+00:00 |
5057e6245fe9d2d5018f2a6594f5afb8f0048a97 | VSPuzzler/SemevalClickbaitSpoilingTrainingData | [
"region:us"
] | 2022-07-31T18:13:25+00:00 | {} | 2023-01-08T02:31:17+00:00 |
|
bafd9e2c4c9c0f5767641c249b0c10ffab96b781 | gsganden/lpz_2016_2017_processed | [
"license:bsd-3-clause",
"region:us"
] | 2022-07-31T18:29:59+00:00 | {"license": "bsd-3-clause"} | 2022-07-31T20:21:21+00:00 |
|
db3f6f363ae48cd3de82d070906e95719fc48c74 | AI-Growth-Lab/patents_claims_1.5m_traim_test_embeddings | [
"license:other",
"region:us"
] | 2022-07-31T19:22:11+00:00 | {"license": "other"} | 2022-07-31T19:45:39+00:00 |
|
ba1ab3571cae2263de50e79e0325852a4208ff53 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-samsum-0c52930e-12115616 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-31T23:21:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-07-31T23:59:32+00:00 |
96ef0d44f0763412ece4a22244a7dbb75aa4e316 |
DALL-E-Dogs is a dataset meant to produce a synthetic animal dataset. It is a precursor to DALL-E-Cats. DALL-E-Dogs and DALL-E-Cats will be fed into an image classifier to see how it performs. This dataset is released under the [BirdL-AirL License](https://huggingface.co/spaces/BirdL/license/). | BirdL/DALL-E-Dogs | [
"task_categories:image-classification",
"task_categories:unconditional-image-generation",
"size_categories:1K<n<10K",
"license:other",
"region:us"
] | 2022-08-01T02:24:18+00:00 | {"annotations_creators": [], "language_creators": [], "language": [], "license": ["other"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["image-classification", "unconditional-image-generation"], "task_ids": [], "pretty_name": "DALL-E Cats Dataset", "tags": []} | 2022-09-28T20:09:11+00:00 |
f5c77a95e61267d03a9235414f5389e2aa721e30 | Jang-Hyun/EfficientDatasetCondensation | [
"license:mit",
"region:us"
] | 2022-08-01T05:53:14+00:00 | {"license": "mit"} | 2022-08-01T05:53:14+00:00 |
|
773323193e80d60a61ee816e58e24b7564bbb98c |
### Data summary
- This repository contains small synthetic data for image datasets: MNIST, SVHN, and CIFAR-10.
- Each torch file contains the images and corresponding labels, at sizes of 1, 10, or 50 images per class (IPC); a loading sketch follows this list.
- For more details, please refer to our GitHub page and paper below.
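A minimal inspection sketch; the file name and the exact stored layout are assumptions.

```python
import torch

# Hypothetical file name; see the repository files for the actual names.
data = torch.load("cifar10_ipc10.pt", map_location="cpu")

# Assuming the file stores (images, labels); adjust if the layout differs.
images, labels = data
print(images.shape, float(images.min()), float(images.max()))  # 0-1 ranged floats
print(labels.shape)
```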
### Reference
https://github.com/snu-mllab/Efficient-Dataset-Condensation
### Citation
```
@inproceedings{kimICML22,
title = {Dataset Condensation via Efficient Synthetic-Data Parameterization},
author = {Kim, Jang-Hyun and Kim, Jinuk and Oh, Seong Joon and Yun, Sangdoo and Song, Hwanjun and Jeong, Joonhyun and Ha, Jung-Woo and Song, Hyun Oh},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2022}
}
``` | ICML2022/EfficientDatasetCondensation | [
"license:mit",
"region:us"
] | 2022-08-01T05:53:31+00:00 | {"license": "mit", "data_type": "image (0-1 ranged float)"} | 2022-08-01T06:12:52+00:00 |
8de79b42002a6e7ab7e713787f4c427d122a269f |
# Dataset Card for LEXTREME: A Multilingual Legal Benchmark for Natural Language Understanding
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:[email protected])
### Dataset Summary
The dataset consists of 11 diverse multilingual legal NLU datasets. 6 datasets have one single configuration and 5 datasets have two or three configurations. This leads to a total of 18 tasks (8 single-label text classification tasks, 5 multi-label text classification tasks and 5 token-classification tasks).
Use the dataset like this:
```python
from datasets import load_dataset
dataset = load_dataset("joelito/lextreme", "swiss_judgment_prediction")
```
### Supported Tasks and Leaderboards
The dataset supports the tasks of text classification and token classification.
In detail, we support the following tasks and configurations:
| task | task type | configurations | link |
|:---------------------------|--------------------------:|---------------------------------:|-------------------------------------------------------------------------------------------------------:|
| Brazilian Court Decisions | Judgment Prediction | (judgment, unanimity) | [joelito/brazilian_court_decisions](https://huggingface.co/datasets/joelito/brazilian_court_decisions) |
| Swiss Judgment Prediction | Judgment Prediction | default | [joelito/swiss_judgment_prediction](https://huggingface.co/datasets/swiss_judgment_prediction) |
| German Argument Mining | Argument Mining | default | [joelito/german_argument_mining](https://huggingface.co/datasets/joelito/german_argument_mining) |
| Greek Legal Code | Topic Classification | (volume, chapter, subject) | [greek_legal_code](https://huggingface.co/datasets/greek_legal_code) |
| Online Terms of Service | Unfairness Classification | (unfairness level, clause topic) | [online_terms_of_service](https://huggingface.co/datasets/joelito/online_terms_of_service) |
| Covid 19 Emergency Event | Event Classification | default | [covid19_emergency_event](https://huggingface.co/datasets/joelito/covid19_emergency_event) |
| MultiEURLEX | Topic Classification | (level 1, level 2, level 3) | [multi_eurlex](https://huggingface.co/datasets/multi_eurlex) |
| LeNER BR | Named Entity Recognition | default | [lener_br](https://huggingface.co/datasets/lener_br) |
| LegalNERo | Named Entity Recognition | default | [legalnero](https://huggingface.co/datasets/joelito/legalnero) |
| Greek Legal NER | Named Entity Recognition | default | [greek_legal_ner](https://huggingface.co/datasets/joelito/greek_legal_ner) |
| MAPA | Named Entity Recognition | (coarse, fine) | [mapa](https://huggingface.co/datasets/joelito/mapa) |
### Languages
The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
## Dataset Structure
### Data Instances
The file format is jsonl and three data splits are present for each configuration (train, validation and test).
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
How can I contribute a dataset to lextreme?
Please follow these steps:
1. Make sure your dataset is available on the huggingface hub and has a train, validation and test split.
2. Create a pull request to the lextreme repository by adding the following to the lextreme.py file:
- Create a dict _{YOUR_DATASET_NAME} (similar to _BRAZILIAN_COURT_DECISIONS_JUDGMENT) containing all the necessary information about your dataset (task_type, input_col, label_col, etc.); a sketch follows this list
- Add your dataset to the BUILDER_CONFIGS list: `LextremeConfig(name="{your_dataset_name}", **_{YOUR_DATASET_NAME})`
- Test that it works correctly by loading your subset with `load_dataset("lextreme", "{your_dataset_name}")` and inspecting a few examples.
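A hedged sketch of the two additions to `lextreme.py`; the dict keys beyond `task_type`, `input_col`, and `label_col` (and their exact expected values) are assumptions.

```python
# Dict describing the new subset, mirroring e.g. _BRAZILIAN_COURT_DECISIONS_JUDGMENT.
_MY_NEW_DATASET = {
    "hf_hub_name": "your-org/your_dataset_name",  # hypothetical key and repo id
    "task_type": "text_classification",
    "input_col": "text",
    "label_col": "label",
}

BUILDER_CONFIGS = [
    # ... existing configs ...
    LextremeConfig(name="your_dataset_name", **_MY_NEW_DATASET),
]
```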
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{niklaus2023lextreme,
title={LEXTREME: A Multi-Lingual and Multi-Task Benchmark for the Legal Domain},
author={Joel Niklaus and Veton Matoshi and Pooja Rani and Andrea Galassi and Matthias Stürmer and Ilias Chalkidis},
year={2023},
eprint={2301.13126},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
| joelniklaus/lextreme | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:extended",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:ga",
"language:hr",
"language:hu",
"language:it",
"language:lt",
"language:lv",
"language:mt",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:sk",
"language:sl",
"language:sv",
"license:cc-by-4.0",
"arxiv:2301.13126",
"region:us"
] | 2022-08-01T07:41:55+00:00 | {"annotations_creators": ["other"], "language_creators": ["found"], "language": ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sk", "sl", "sv"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended"], "task_categories": ["text-classification", "token-classification"], "task_ids": ["multi-class-classification", "multi-label-classification", "topic-classification", "named-entity-recognition"], "pretty_name": "LEXTREME: A Multilingual Legal Benchmark for Natural Language Understanding"} | 2023-04-29T06:02:17+00:00 |
6ce1c304556d5f62c1c7ad2378ec3dcbebdd4474 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-samsum-db063b78-12135617 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-01T08:22:11+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-08-01T08:28:59+00:00 |
32fba0b0ee59bc29ea13ff25f7029ca19b48f410 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-xsum-4118bb33-12145618 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-01T08:26:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-08-01T12:41:09+00:00 |
6e28526de611e2cce102546dc19ee2aa5c4d9606 |
# Statistics
- cpp-java: 627 pairs
- python-java: 616 pairs
- cpp-python: 545 pairs
| ziwenyd/transcoder-geeksforgeeks | [
"license:mit",
"region:us"
] | 2022-08-01T08:28:39+00:00 | {"license": "mit"} | 2022-08-03T13:59:08+00:00 |
b48f43ffb8808a1d3797ad2f9c112fc743fc37a9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-b454c496-12155619 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-01T08:30:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-08-01T14:27:24+00:00 |
ddb7e90cba94406060a1ecf502017d244b5b14c2 |
This is a Faroese NER corpus, FoNE, created by annotating the [Sosialurin corpus](https://huggingface.co/datasets/vesteinn/sosialurin-faroese-pos).
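A minimal loading sketch (repository id as hosted on the Hub):

```python
from datasets import load_dataset

dataset = load_dataset("vesteinn/sosialurin-faroese-ner")
print(dataset)  # inspect the available splits and features
```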
If you find this dataset useful, please cite
```
@inproceedings{snaebjarnarson-etal-2023-transfer,
title = "{T}ransfer to a Low-Resource Language via Close Relatives: The Case Study on Faroese",
author = "Snรฆbjarnarson, Vรฉsteinn and
Simonsen, Annika and
Glavaลก, Goran and
Vuliฤ, Ivan",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = "may 22--24",
year = "2023",
address = "Tรณrshavn, Faroe Islands",
publisher = {Link{\"o}ping University Electronic Press, Sweden},
}
``` | vesteinn/sosialurin-faroese-ner | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"size_categories:1K<n<10K",
"language:fo",
"license:cc-by-4.0",
"region:us"
] | 2022-08-01T11:33:34+00:00 | {"language": ["fo"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "FoNE"} | 2024-01-05T12:44:42+00:00 |
cd0823496bbf167f176f6239a9ee8c0985247853 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: yhavinga/t5-v1.1-base-dutch-cnn-test
* Dataset: ml6team/cnn_dailymail_nl
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@yhavinga](https://huggingface.co/yhavinga) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-ml6team__cnn_dailymail_nl-a771a5f9-12165620 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-01T11:37:49+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ml6team/cnn_dailymail_nl"], "eval_info": {"task": "summarization", "model": "yhavinga/t5-v1.1-base-dutch-cnn-test", "metrics": [], "dataset_name": "ml6team/cnn_dailymail_nl", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-08-01T12:47:31+00:00 |
fa6ec90a7beb96d182372f09b04b96797ea6588a | This is a custom dataset created by the author by crawling Naver News (https://news.naver.com) for Korean NLP model hands-on exercises.
- Period: July 1, 2022 - July 10, 2022
- Subject: IT, economics
```
DatasetDict({
train: Dataset({
features: ['date', 'category', 'press', 'title', 'document', 'link', 'summary'],
num_rows: 22194
})
test: Dataset({
features: ['date', 'category', 'press', 'title', 'document', 'link', 'summary'],
num_rows: 2740
})
validation: Dataset({
features: ['date', 'category', 'press', 'title', 'document', 'link', 'summary'],
num_rows: 2466
})
})
```
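A minimal loading sketch (dataset id and field names taken from the structure above):

```python
from datasets import load_dataset

dataset = load_dataset("daekeun-ml/naver-news-summarization-ko")
sample = dataset["train"][0]
print(sample["title"])
print(sample["summary"])
```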
| daekeun-ml/naver-news-summarization-ko | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:ko",
"license:apache-2.0",
"region:us"
] | 2022-08-01T13:54:17+00:00 | {"language": ["ko"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["summarization"]} | 2023-01-10T11:12:44+00:00 |
a2bc8d5de70f89d889c35302656743bd5a00d576 |
# Dataset Card for ZINC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://zinc15.docking.org/)**
- **[Repository](https://www.dropbox.com/s/feo9qle74kg48gy/molecules.zip?dl=1)**
- **Paper:** ZINC 15 – Ligand Discovery for Everyone (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/)
### Dataset Summary
The `ZINC` dataset is a "curated collection of commercially available chemical compounds prepared especially for virtual screening" (Wikipedia).
### Supported Tasks and Leaderboards
`ZINC` should be used for molecular property prediction (aiming to predict the constrained solubility of the molecules), a graph regression task. The score used is the MAE.
The associated leaderboard is here: [Papers with code leaderboard](https://paperswithcode.com/sota/graph-regression-on-zinc).
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/ZINC")
# Build PyG Data objects for the train set (replace by "validation" or "test" as needed);
# field names follow the Data Fields section below.
dataset_pg_list = [
    Data(x=torch.tensor(g["node_feat"]), edge_index=torch.tensor(g["edge_index"]),
         edge_attr=torch.tensor(g["edge_attr"]), y=torch.tensor(g["y"]))
    for g in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | big |
| #graphs | 220011 |
| average #nodes | 23.15 |
| average #edges | 49.81 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): contains the regression target (here a single value per graph: the constrained solubility of the molecule)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset, and follows the provided data splits.
This information can be found back using
```python
from torch_geometric.datasets import ZINC
dataset = ZINC(root = '', split='train') # valid, test
```
## Additional Information
### Licensing Information
The dataset has been released under an unknown license. Please open an issue if you know what the license of this dataset is.
### Citation Information
```bibtex
@article{doi:10.1021/acs.jcim.5b00559,
author = {Sterling, Teague and Irwin, John J.},
title = {ZINC 15 – Ligand Discovery for Everyone},
journal = {Journal of Chemical Information and Modeling},
volume = {55},
number = {11},
pages = {2324-2337},
year = {2015},
doi = {10.1021/acs.jcim.5b00559},
note ={PMID: 26479676},
URL = {
https://doi.org/10.1021/acs.jcim.5b00559
},
eprint = {
https://doi.org/10.1021/acs.jcim.5b00559
}
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. | graphs-datasets/ZINC | [
"task_categories:graph-ml",
"license:unknown",
"region:us"
] | 2022-08-01T14:11:09+00:00 | {"license": "unknown", "task_categories": ["graph-ml"], "dataset_info": {"features": [{"name": "node_feat", "sequence": {"sequence": "int64"}}, {"name": "edge_index", "sequence": {"sequence": "int64"}}, {"name": "edge_attr", "sequence": {"sequence": "int64"}}, {"name": "y", "sequence": "float64"}, {"name": "num_nodes", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 376796456, "num_examples": 220011}, {"name": "test", "num_bytes": 8538528, "num_examples": 5000}, {"name": "validation", "num_bytes": 41819628, "num_examples": 24445}], "download_size": 20636253, "dataset_size": 427154612}} | 2023-02-07T16:37:32+00:00 |
af9c040afaaa5902987bfcb3d4256c09239ec8ed |
# Dataset Card for PROTEINS
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://academic.oup.com/bioinformatics/article/21/suppl_1/i47/202991)**
- **[Repository](https://www.chrsmrrs.com/graphkerneldatasets/PROTEINS.zip)**
- **Paper:** Protein function prediction via graph kernels (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-proteins)
### Dataset Summary
The `PROTEINS` dataset is a medium molecular property prediction dataset.
### Supported Tasks and Leaderboards
`PROTEINS` should be used for molecular property prediction (aiming to predict whether molecules are enzymes or not), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
dataset_hf = load_dataset("graphs-datasets/PROTEINS")
# Build PyG Data objects from the train split (field names as listed in Data Fields below).
dataset_pg_list = [
    Data(x=torch.tensor(g["node_feat"]), edge_index=torch.tensor(g["edge_index"]),
         edge_attr=torch.tensor(g["edge_attr"]), y=torch.tensor(g["y"]))
    for g in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | medium |
| #graphs | 1113 |
| average #nodes | 39.06 |
| average #edges | 72.82 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): contains the label to predict (here a single binary label, equal to zero or one)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset provided by TUDataset.
This information can be found back using
```python
from torch_geometric.datasets import TUDataset
dataset = TUDataset(root='', name = 'PROTEINS')
```
## Additional Information
### Licensing Information
The dataset has been released under an unknown license; please open an issue if you have information about it.
### Citation Information
```
@article{10.1093/bioinformatics/bti1007,
author = {Borgwardt, Karsten M. and Ong, Cheng Soon and Schönauer, Stefan and Vishwanathan, S. V. N. and Smola, Alex J. and Kriegel, Hans-Peter},
title = "{Protein function prediction via graph kernels}",
journal = {Bioinformatics},
volume = {21},
number = {suppl_1},
pages = {i47-i56},
year = {2005},
month = {06},
abstract = "{Motivation: Computational approaches to protein function prediction infer protein function by finding proteins with similar sequence, structure, surface clefts, chemical properties, amino acid motifs, interaction partners or phylogenetic profiles. We present a new approach that combines sequential, structural and chemical information into one graph model of proteins. We predict functional class membership of enzymes and non-enzymes using graph kernels and support vector machine classification on these protein graphs.Results: Our graph model, derivable from protein sequence and structure only, is competitive with vector models that require additional protein information, such as the size of surface pockets. If we include this extra information into our graph model, our classifier yields significantly higher accuracy levels than the vector models. Hyperkernels allow us to select and to optimally combine the most relevant node attributes in our protein graphs. We have laid the foundation for a protein function prediction system that integrates protein information from various sources efficiently and effectively.Availability: More information available via www.dbs.ifi.lmu.de/Mitarbeiter/borgwardt.html.Contact:[email protected]}",
issn = {1367-4803},
doi = {10.1093/bioinformatics/bti1007},
url = {https://doi.org/10.1093/bioinformatics/bti1007},
eprint = {https://academic.oup.com/bioinformatics/article-pdf/21/suppl\_1/i47/524364/bti1007.pdf},
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. | graphs-datasets/PROTEINS | [
"task_categories:graph-ml",
"license:unknown",
"region:us"
] | 2022-08-01T14:50:33+00:00 | {"license": "unknown", "task_categories": ["graph-ml"]} | 2023-02-07T16:39:11+00:00 |
d0d278691a40f1d671294d5f3690a18acf6e0270 |
# Dataset Card for MUTAG
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://pubs.acs.org/doi/abs/10.1021/jm00106a046)**
- **[Repository](https://www.chrsmrrs.com/graphkerneldatasets/MUTAG.zip)**
- **Paper:** Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-mutag)
### Dataset Summary
The `MUTAG` dataset is 'a collection of nitroaromatic compounds and the goal is to predict their mutagenicity on Salmonella typhimurium'.
### Supported Tasks and Leaderboards
`MUTAG` should be used for molecular property prediction (aiming to predict whether molecules have a mutagenic effect on a given bacterium or not), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/MUTAG")
# Build PyG Data objects for the train set (replace by "validation" or "test" as needed);
# field names follow the Data Fields section below.
dataset_pg_list = [
    Data(x=torch.tensor(g["node_feat"]), edge_index=torch.tensor(g["edge_index"]),
         edge_attr=torch.tensor(g["edge_attr"]), y=torch.tensor(g["y"]))
    for g in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | small |
| #graphs | 187 |
| average #nodes | 18.03 |
| average #edges | 39.80 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): contains the label to predict (here a single binary label, equal to zero or one)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset provided by TUDataset.
This information can be found back using
```python
from torch_geometric.datasets import TUDataset
cur_dataset = TUDataset(root="../dataset/loaded/",
name="MUTAG")
```
## Additional Information
### Licensing Information
The dataset has been released under an unknown license; please open an issue if you have information about it.
### Citation Information
```
@article{doi:10.1021/jm00106a046,
author = {Debnath, Asim Kumar and Lopez de Compadre, Rosa L. and Debnath, Gargi and Shusterman, Alan J. and Hansch, Corwin},
title = {Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity},
journal = {Journal of Medicinal Chemistry},
volume = {34},
number = {2},
pages = {786-797},
year = {1991},
doi = {10.1021/jm00106a046},
URL = {
https://doi.org/10.1021/jm00106a046
},
eprint = {
https://doi.org/10.1021/jm00106a046
}
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. | graphs-datasets/MUTAG | [
"task_categories:graph-ml",
"license:unknown",
"region:us"
] | 2022-08-01T14:58:02+00:00 | {"license": "unknown", "task_categories": ["graph-ml"]} | 2023-02-07T16:39:19+00:00 |
412288d7d6a1e6afc381bd89223e0a17c35b4875 |
# Dataset Card for IMDB-BINARY (IMDb-B)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://dl.acm.org/doi/10.1145/2783258.2783417)**
- **[Repository](https://www.chrsmrrs.com/graphkerneldatasets/IMDB-BINARY.zip)**
- **Paper:** Deep Graph Kernels (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-imdb-b)
### Dataset Summary
The `IMDb-B` dataset is "a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB. In each graph, nodes represent actors/actress, and there is an edge between them if they appear in the same movie. These graphs are derived from the Action and Romance genres".
### Supported Tasks and Leaderboards
`IMDb-B` should be used for graph classification (aiming to predict whether a movie graph is an action or romance movie), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/IMDB-BINARY")
# Build PyG Data objects for the train set (field names as listed in Data Fields below).
dataset_pg_list = [
    Data(edge_index=torch.tensor(g["edge_index"]), y=torch.tensor(g["y"]),
         num_nodes=g["num_nodes"])
    for g in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | medium |
| #graphs | 1000 |
| average #nodes | 19.79 |
| average #edges | 193.25 |
### Data Fields
Each row of a given file is a graph, with:
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `y` (list: 1 x #labels): contains the label to predict (here a single binary label, equal to zero or one)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset.
This information can be found back using
```python
from torch_geometric.datasets import TUDataset
cur_dataset = TUDataset(root="../dataset/loaded/",
name="IMDB-BINARY")
```
## Additional Information
### Licensing Information
The dataset has been released under an unknown license; please open an issue if you have this information.
### Citation Information
```
@inproceedings{10.1145/2783258.2783417,
author = {Yanardag, Pinar and Vishwanathan, S.V.N.},
title = {Deep Graph Kernels},
year = {2015},
isbn = {9781450336642},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2783258.2783417},
doi = {10.1145/2783258.2783417},
abstract = {In this paper, we present Deep Graph Kernels, a unified framework to learn latent representations of sub-structures for graphs, inspired by latest advancements in language modeling and deep learning. Our framework leverages the dependency information between sub-structures by learning their latent representations. We demonstrate instances of our framework on three popular graph kernels, namely Graphlet kernels, Weisfeiler-Lehman subtree kernels, and Shortest-Path graph kernels. Our experiments on several benchmark datasets show that Deep Graph Kernels achieve significant improvements in classification accuracy over state-of-the-art graph kernels.},
booktitle = {Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
pages = {1365–1374},
numpages = {10},
keywords = {collaboration networks, bioinformatics, r-convolution kernels, graph kernels, structured data, deep learning, social networks, string kernels},
location = {Sydney, NSW, Australia},
series = {KDD '15}
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. | graphs-datasets/IMDB-BINARY | [
"task_categories:graph-ml",
"license:unknown",
"region:us"
] | 2022-08-01T15:17:25+00:00 | {"license": "unknown", "task_categories": ["graph-ml"]} | 2023-02-07T16:39:00+00:00 |
9e59fee55eef474310846d06a0fab238602a32d8 |
# BigScience BLOOM Evaluation Results
This repository contains evaluation results & original predictions of BLOOM & friends.
## Usage
You can load numeric results via:
```python
from datasets import load_dataset
ds = load_dataset("bigscience/evaluation-results", "bloom")
```
If it takes too long, it may be faster to clone the repository and load the data from disk:
```python
!git clone https://huggingface.co/datasets/bigscience/evaluation-results
ds = load_dataset("evaluation-results", "bloom")
```
For example generations (.jsonl files), you need to manually browse the repository.
## Structure
For the `bigsciencelmevalharness`, `lmevalharness` & `codeeval` evaluation frameworks, the structure is:
`model_name > evaluation_framework > checkpoint_type > dataset_name > data`
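For example, a sketch to list the files and see that layout (assumes a recent `huggingface_hub`):

```python
from huggingface_hub import list_repo_files

# Paths follow model_name/evaluation_framework/checkpoint_type/dataset_name/...
files = list_repo_files("bigscience/evaluation-results", repo_type="dataset")
print(files[:10])
```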
## Evaluation Procedure
- `bigsciencelmevalharness` files were created using the below:
- https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/291
- https://github.com/bigscience-workshop/lm-evaluation-harness
- `lmevalharness` files were created using the below:
- https://github.com/bigscience-workshop/Megatron-DeepSpeed
- https://github.com/EleutherAI/lm-evaluation-harness
- `codeeval` files were created using the HumanEval code dataset with the below:
- https://github.com/loubnabnl/bloom-code-evaluation
| bigscience/evaluation-results | [
"task_categories:other",
"size_categories:100M<n<1B",
"region:us"
] | 2022-08-01T17:35:58+00:00 | {"size_categories": ["100M<n<1B"], "task_categories": ["other"], "pretty_name": "evaluation-results"} | 2023-05-27T23:13:53+00:00 |
00649413018d64c58ab9b9e9008c51c84e3d1919 |
DALL-E-Cats is a dataset of synthetic cat images and the successor to DALL-E-Dogs. Both DALL-E-Dogs and DALL-E-Cats will be fed into an image classifier to see how it performs. This is under the [BirdL-AirL License](https://huggingface.co/spaces/BirdL/license/). | BirdL/DALL-E-Cats | [
"task_categories:image-classification",
"task_categories:unconditional-image-generation",
"size_categories:1K<n<10K",
"license:other",
"region:us"
] | 2022-08-01T19:37:15+00:00 | {"annotations_creators": [], "language_creators": [], "language": [], "license": ["other"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["image-classification", "unconditional-image-generation"], "task_ids": [], "pretty_name": "DALL-E Cats Dataset", "tags": []} | 2022-09-28T20:07:37+00:00 |
a9f7f1ac75934a7c01d3ca02217544251939c881 | **Pexel Videos**
*358,551 video URLs, average length 19.5s, and associated metadata from pexels.com.*
Data was extracted from their video sitemaps (pexels.com/robots.txt) on 01/08/2022.
Data is stored in `PexelVideos.parquet.gzip` as a gzipped parquet file.
To get this data, ensure you have git and git-lfs installed, then run `git lfs clone https://huggingface.co/datasets/Corran/pexelvideos/`.
In Python, the recommended way to read the data is with pandas:

```python
# pip install pandas pyarrow
import pandas as pd

data = pd.read_parquet("PexelVideos.parquet.gzip")
```
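Alternatively, a sketch that reads the parquet directly from the Hub without cloning (assumes `fsspec` and `pyarrow` are installed so pandas can read from a URL, and that the file sits at the repository root):

```python
import pandas as pd

url = "https://huggingface.co/datasets/Corran/pexelvideos/resolve/main/PexelVideos.parquet.gzip"
data = pd.read_parquet(url)
```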
Get a specific URL and its metadata using `data.iloc[i]`; each row can be read like a Python dict. For example, to get the URL at index `i`, run `url = data.iloc[i]["content_loc"]`.
https://pandas.pydata.org/pandas-docs/version/1.1/getting_started/index.html#getting-started
**Explore this dataset using Open-Clip**
https://colab.research.google.com/drive/1m3_KfPKOC_oivqoruaseiNUlP-_MqqyX#scrollTo=bNngcd8UAOma
**License**
According to Pexels licensing, these videos are free to use for personal or commercial purposes; attribution is polite but not required. However:
-Identifiable people may not appear in a bad light or in a way that is offensive. <br>
-Don't sell unaltered copies of a photo or video, e.g. as a poster, print or on a physical product without modifying it first. <br>
-Don't imply endorsement of your product by people or brands on the imagery. <br>
-Don't redistribute or sell the photos and videos on other stock photo or wallpaper platforms. <br>
License: https://www.pexels.com/license/
| Corran/pexelvideos | [
"region:us"
] | 2022-08-02T01:57:25+00:00 | {} | 2022-08-08T12:22:04+00:00 |
6d6899645fe698f33873fb1e5f8f1b4166289715 | Kadarxwoody/artistic-2.0 | [
"license:artistic-2.0",
"region:us"
] | 2022-08-02T03:03:09+00:00 | {"license": "artistic-2.0"} | 2022-08-02T03:03:09+00:00 |
|
08de04b777c94502ac34a514e79652ba0086425b | NX2411/AIhub-korean-speech-data | [
"license:apache-2.0",
"region:us"
] | 2022-08-02T05:25:46+00:00 | {"license": "apache-2.0"} | 2022-08-03T08:13:28+00:00 |
|
923d33d0d849afee9887b1f80e71e686bb5a68af |
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1228646724
- CO2 Emissions (in grams): 1368.8941
## Validation Metrics
- Loss: 2.319
- Rouge1: 43.703
- Rouge2: 16.106
- RougeL: 23.715
- RougeLsum: 38.984
- Gen Len: 141.091
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/vishw2703/autotrain-unisumm_3-1228646724
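# The same request from Python (a sketch using the `requests` library; the endpoint URL
# and placeholder token are taken from the cURL command above):
#
#   import requests
#   API_URL = "https://api-inference.huggingface.co/vishw2703/autotrain-unisumm_3-1228646724"
#   headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}
#   print(requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"}).json())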
``` | ShreySavaliya/TextSummarisation | [
"language:unk",
"autotrain",
"summarization",
"region:us"
] | 2022-08-02T05:27:58+00:00 | {"language": ["unk"], "tags": ["autotrain", "summarization"], "widget": [{"text": "I love AutoTrain \ud83e\udd17"}], "datasets": ["vishw2703/autotrain-data-unisumm_3"], "co2_eq_emissions": {"emissions": 1368.894142563709}} | 2022-08-17T05:03:10+00:00 |
7e7d231c127baf5185b7e25b3086591df61c5b07 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: yhavinga/mt5-base-cnn-nl
* Dataset: ml6team/cnn_dailymail_nl
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@yhavinga](https://huggingface.co/yhavinga) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-ml6team__cnn_dailymail_nl-612d6c13-12185622 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:39:57+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ml6team/cnn_dailymail_nl"], "eval_info": {"task": "summarization", "model": "yhavinga/mt5-base-cnn-nl", "metrics": [], "dataset_name": "ml6team/cnn_dailymail_nl", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-08-02T11:11:44+00:00 |
fbc605ed17bc3f3930bce6489c04f4cf3546cf91 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: yhavinga/mt5-base-mixednews-nl
* Dataset: ml6team/cnn_dailymail_nl
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@yhavinga](https://huggingface.co/yhavinga) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-ml6team__cnn_dailymail_nl-612d6c13-12185623 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:40:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ml6team/cnn_dailymail_nl"], "eval_info": {"task": "summarization", "model": "yhavinga/mt5-base-mixednews-nl", "metrics": [], "dataset_name": "ml6team/cnn_dailymail_nl", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-08-02T11:32:01+00:00 |
19cda222ed39522c3b1b340261a5ba09766d9d4b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-large-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-1cd241d3-12195624 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:40:12+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-large-squad2", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-02T09:42:07+00:00 |
681f907c1bfc909157ce2fb38f101ab336764137 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-large-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205625 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:40:14+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/xlm-roberta-large-squad2", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-02T09:42:37+00:00 |
4c021cc32cf68644cdf094a49154425f1089a8ec | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2-distilled
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205626 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:40:19+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-base-squad2-distilled", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-02T09:41:34+00:00 |
8b13664c3be80d2efe8e51c4d2f9404d854d9872 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-base-squad2-distilled
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205627 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:40:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/xlm-roberta-base-squad2-distilled", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-02T09:41:51+00:00 |
7af19d4b60ccd712521d35090b9a032bda03374c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinybert-6l-768d-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205628 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:40:35+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/tinybert-6l-768d-squad2", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-02T09:41:46+00:00 |
687b60cfba2df04d63b009179832de2e6b5e2db6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-base-uncased-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205629 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:40:38+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-base-uncased-squad2", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-02T09:41:55+00:00 |
39c4d334cad8018816b024476a85c85a11f082c2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: Rhuax/MiniLMv2-L12-H384-distilled-finetuned-spam-detection
* Dataset: sms_spam
* Config: plain_text
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Al-Ip](https://huggingface.co/Al-Ip) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-sms_spam-216c1ded-12215630 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:40:39+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["sms_spam"], "eval_info": {"task": "binary_classification", "model": "Rhuax/MiniLMv2-L12-H384-distilled-finetuned-spam-detection", "metrics": [], "dataset_name": "sms_spam", "dataset_config": "plain_text", "dataset_split": "train", "col_mapping": {"text": "sms", "target": "label"}}} | 2022-08-02T09:41:15+00:00 |
6500ed59d1b0764caa2b526bb72c66f097e95f8d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_1
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-54a73f7a-12235635 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:42:38+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led_pubmed_sumpubmed_1", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T10:31:13+00:00 |
28e036a2c5176b700ef625b46740702b23034dd1 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_2
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-54a73f7a-12235636 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:42:44+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led_pubmed_sumpubmed_2", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T10:29:01+00:00 |
25e614252e9ce89fcf8cc4af6e918711cbb3c528 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/long_t5_global_large_pubmed_explanatory
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-54a73f7a-12235637 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:42:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/long_t5_global_large_pubmed_explanatory", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T12:26:39+00:00 |
61b61341f2e6e3ff845cbb5c2a6a8ecf5f798cc9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_large_baseline_pubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-93d67e8f-12255638 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:43:35+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led_large_baseline_pubmed", "metrics": [], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T11:01:02+00:00 |
18d6acb7b5eb51e83b9c02b70eed7f33c76c8075 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/long_t5_global_large_baseline_pubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-93d67e8f-12255639 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:43:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/long_t5_global_large_baseline_pubmed", "metrics": [], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T18:47:37+00:00 |
4f333c302ff8acf17091c65ea016973bea5b55fd | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/long_t5_global_large_baseline_pubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-3c512f6e-12265641 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:44:42+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/long_t5_global_large_baseline_pubmed", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T18:53:52+00:00 |
4d959d3ddcccbcdc6bd5eb9263a0bfe1ac4c21bf | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_large_baseline_pubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-3c512f6e-12265640 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:44:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led_large_baseline_pubmed", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T11:23:15+00:00 |
47c39cc6f07bdfdb281cfe463ec5fa20b6d51a47 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
* Dataset: cuad
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@halima](https://huggingface.co/halima) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cuad-e5412c0a-12275642 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:45:32+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cuad"], "eval_info": {"task": "extractive_question_answering", "model": "21iridescent/RoBERTa-base-finetuned-squad2-lwt", "metrics": [], "dataset_name": "cuad", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-02T10:21:22+00:00 |
d6c3f2be38076d596dfa083a987c86466634ea8d | NitishKarra/invoice-bills | [
"region:us"
] | 2022-08-02T12:23:14+00:00 | {} | 2022-08-02T12:27:10+00:00 |
|
2dbc0d5727ee0cfa7704021bc39a9480f8ee1a7d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_3
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-c8bf564e-12335643 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T15:46:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led_pubmed_sumpubmed_3", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T16:24:17+00:00 |
691cb00d999c35d401985121f2ee489b2b8f5de6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_4
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-c8bf564e-12335644 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T15:46:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led_pubmed_sumpubmed_4", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T16:43:11+00:00 |
e3fe65be167f5aa4698afaa58d32d3eeaf834c71 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_5
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-c8bf564e-12335645 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T15:46:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led_pubmed_sumpubmed_5", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T16:55:50+00:00 |
42a9884a2e30084417f497d64829ff3d7162492f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-07d54673-12345646 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T17:57:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-08-03T20:34:30+00:00 |
0761f2c5a7799569a8662dcc39a352206225b43d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-xsum-19ae30f1-12355647 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T18:01:48+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-08-04T02:41:57+00:00 |
50e25ed78f4fc72fbfca9fe76a910ce67088667e |
This dataset consists of approximately 50k research articles from the **PubMed** repository. These documents were originally annotated manually by biomedical experts with their MeSH labels, and each article is described by 10-15 MeSH labels. The raw data contains a very large number of distinct MeSH major labels, which raises the issues of an extremely large output space and severe label sparsity. To address this, the labels have been processed and mapped to their root terms, as described in the figure below.

 | owaiskha9654/PubMed_MultiLabel_Text_Classification_Dataset_MeSH | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"size_categories:10K<n<100K",
"source_datasets:BioASQ Task A",
"language:en",
"license:afl-3.0",
"region:us"
] | 2022-08-02T19:13:50+00:00 | {"language": ["en"], "license": "afl-3.0", "size_categories": ["10K<n<100K"], "source_datasets": ["BioASQ Task A"], "task_categories": ["text-classification"], "task_ids": ["multi-label-classification"], "pretty_name": "BioASQ, PUBMED"} | 2023-01-30T09:50:44+00:00 |
ba2fde998044a29968fa13af93c291be5626bff5 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led-large-sumpubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-f53a4404-12415653 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T19:16:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led-large-sumpubmed", "metrics": [], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T21:14:52+00:00 |
609f0b21763fac0105020450bdd279714085c03f | Danitg95/feedback | [
"license:other",
"region:us"
] | 2022-08-02T19:45:40+00:00 | {"license": "other"} | 2022-08-02T19:45:40+00:00 |
|
f651060737f968bb62fe942495da2dde61b9f75f | NitishKarra/dMART_BILL | [
"region:us"
] | 2022-08-03T04:45:55+00:00 | {} | 2022-08-03T06:19:10+00:00 |
|
162574e34bf5cd64881b2689909f43b0aa971a0b | # laion2B-multi-korean-subset
## Dataset Description
- **Homepage:** [laion-5b](https://laion.ai/blog/laion-5b/)
- **Huggingface:** [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi)
## About dataset
A subset of [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi) containing only the Korean data.
### License
CC-BY-4.0
## Data Structure
### Data Instance
```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/laion2B-multi-korean-subset")
>>> dataset
DatasetDict({
train: Dataset({
features: ['SAMPLE_ID', 'URL', 'TEXT', 'HEIGHT', 'WIDTH', 'LICENSE', 'LANGUAGE', 'NSFW', 'similarity'],
num_rows: 11376263
})
})
```
```py
>>> dataset["train"].features
{'SAMPLE_ID': Value(dtype='int64', id=None),
'URL': Value(dtype='string', id=None),
'TEXT': Value(dtype='string', id=None),
'HEIGHT': Value(dtype='int32', id=None),
'WIDTH': Value(dtype='int32', id=None),
'LICENSE': Value(dtype='string', id=None),
'LANGUAGE': Value(dtype='string', id=None),
'NSFW': Value(dtype='string', id=None),
'similarity': Value(dtype='float32', id=None)}
```
### Data Size
download: 1.56 GiB<br>
generated: 2.37 GiB<br>
total: 3.93 GiB
### Data Field
- 'SAMPLE_ID': `int`
- 'URL': `string`
- 'TEXT': `string`
- 'HEIGHT': `int`
- 'WIDTH': `int`
- 'LICENSE': `string`
- 'LANGUAGE': `string`
- 'NSFW': `string`
- 'similarity': `float`
### Data Splits
| | train |
| --------- | -------- |
| # of data | 11376263 |
## Note
### Height, Width
์ด๋ฏธ์ง์ ๊ฐ๋ก๊ฐ `HEIGHT`๋ก, ์ธ๋ก๊ฐ `WIDTH`๋ก ๋์ด์๋ ๊ฒ ๊ฐ์ต๋๋ค.
```pycon
>>> dataset["train"][98]
{'SAMPLE_ID': 2937471001780,
'URL': 'https://image.ajunews.com/content/image/2019/04/12/20190412175643597949.png',
'TEXT': '์ธ์ฒ์๊ต์ก์ฒญ, ์ธ์ฒ ์๊ตฐ๊ตฌ๋ฐ์ ํ์ํ ์์์ง๊ณผ์ ๊ฐ๋ดํ ๊ฐ์ต',
'HEIGHT': 640,
'WIDTH': 321,
'LICENSE': '?',
'LANGUAGE': 'ko',
'NSFW': 'UNLIKELY',
'similarity': 0.33347243070602417}
```

### csv file, pandas
```py
# pip install zstandard
import pandas as pd
from huggingface_hub import hf_hub_url

url = hf_hub_url("Bingsu/laion2B-multi-korean-subset", filename="laion2B-multi-korean-subset.csv.zst", repo_type="dataset")
# url = "https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset/resolve/main/laion2B-multi-korean-subset.csv.zst"

# pandas infers zstd compression from the .zst extension (requires the zstandard package).
df = pd.read_csv(url)
```
<https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset/resolve/main/laion2B-multi-korean-subset.csv.zst>
778 MB
### Code used to generate
```py
import csv
import re

from datasets import load_dataset
from tqdm import tqdm

# Matches any Hangul syllable.
pattern = re.compile(r"[가-힣]")


def quote(s: str) -> str:
    # Remove triple quotes, which would break the CSV quoting.
    return s.replace('"""', "")


def filter_func(example) -> bool:
    # Keep rows tagged as Korean, or whose caption contains Hangul.
    lang = example.get("LANGUAGE")
    text = example.get("TEXT")
    if not isinstance(lang, str) or not isinstance(text, str):
        return False
    return lang == "ko" or pattern.search(text) is not None


file = open("./laion2B-multi_korean_subset.csv", "w", encoding="utf-8", newline="")

# Stream the 2B-row source dataset so it never has to fit in memory.
ds = load_dataset("laion/laion2B-multi", split="train", streaming=True)
dsf = ds.filter(filter_func)

header = [
    "SAMPLE_ID",
    "URL",
    "TEXT",
    "HEIGHT",
    "WIDTH",
    "LICENSE",
    "LANGUAGE",
    "NSFW",
    "similarity",
]
writer = csv.DictWriter(file, fieldnames=header)
writer.writeheader()

try:
    for data in tqdm(dsf):  # total=11378843
        data["TEXT"] = quote(data.get("TEXT", ""))
        if data["TEXT"]:
            writer.writerow(data)
finally:
    file.close()

print("Done!")
```
์คํ์ ์ฝ 8์๊ฐ์ด ์์๋์์ต๋๋ค. ์ดํ์ `HEIGHT`๋ `WIDTH`๊ฐ None์ธ ๋ฐ์ดํฐ๋ฅผ ์ ๊ฑฐํ๊ณ ์
๋ก๋ํ์์ต๋๋ค.
### img2dataset
You can use [img2dataset](https://github.com/rom1504/img2dataset) to download the images from the `URL` column and build them into an image-text dataset, for example as shown below.
| Bingsu/laion2B-multi-korean-subset | [
"task_categories:feature-extraction",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"language:ko",
"license:cc-by-4.0",
"region:us"
] | 2022-08-03T05:57:55+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ko"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "task_categories": ["feature-extraction"], "pretty_name": "laion2B-multi-korean-subset"} | 2022-10-14T04:23:17+00:00 |