sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
c04a984705f236028907c95089aa46b573d00258
|
romeo8080/fuego-20230220-024617-6b4f05
|
[
"fuego",
"region:us"
] |
2023-02-20T01:46:18+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230220-024617-6b4f05", "status": "done", "script": "run_glue.py", "requirements_file": "requirements.txt", "space_id": "romeo8080/fuego-20230220-024617-6b4f05", "space_hardware": "cpu-basic", "github_repo_id": "huggingface/transformers", "github_repo_branch": "main", "github_repo_sha": "7f1cdf18958efef6339040ba91edb32ae7377720"}}
|
2023-02-20T08:34:49+00:00
|
|
9081bd48bbe1d37517fce0dda49a6fb0e45049e1
|
fai/testingdataset
|
[
"license:mit",
"region:us"
] |
2023-02-20T02:35:34+00:00
|
{"license": "mit"}
|
2023-02-20T02:35:34+00:00
|
|
d81e7aea789facdbbe1fd5f9c956b24bb42f8d76
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- https://github.com/mhmaqbool/mobilerec
- **Repository:**
- https://github.com/mhmaqbool/mobilerec
- **Paper:**
- MobileRec: A Large-Scale Dataset for Mobile Apps Recommendation
- **Point of Contact:**
- M.H. Maqbool ([email protected])
- Abubakar Siddique ([email protected])
### Dataset Summary
MobileRec is a large-scale app recommendation dataset. There are 19.3 million user-item interactions. This is a 5-core dataset.
User-item interactions are sorted in ascending chronological order. There are 0.7 million users who each have at least five distinct interactions.
There are 10,173 apps in total.
### Supported Tasks and Leaderboards
Sequential Recommendation
### Languages
English
## How to use the dataset?
```
from datasets import load_dataset
import pandas as pd
# load the dataset and meta_data
mbr_data = load_dataset('recmeapp/mobilerec', data_dir='interactions')
mbr_meta = load_dataset('recmeapp/mobilerec', data_dir='app_meta')
# Save dataset to .csv file for creating pandas dataframe
mbr_data['train'].to_csv('./mbr_data.csv')
# Convert to pandas dataframe
mobilerec_df = pd.read_csv('./mbr_data.csv')
# How many interactions are there in the MobileRec dataset?
print(f'There are {len(mobilerec_df)} interactions in mobilerec dataset.')
# How many unique app_packages (apps or items) are there?
print(f'There are {len(mobilerec_df["app_package"].unique())} unique apps in mobilerec dataset.')
# How many unique users are there in the mobilerec dataset?
print(f'There are {len(mobilerec_df["uid"].unique())} unique users in mobilerec dataset.')
# How many categories are there?
print(f'There are {len(mobilerec_df["app_category"].unique())} unique categories in mobilerec dataset.')
```
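Alternatively, the CSV round-trip can be skipped: a `datasets` split converts directly to a pandas dataframe (a minimal sketch):
```
from datasets import load_dataset

mbr_data = load_dataset('recmeapp/mobilerec', data_dir='interactions')
# Convert the train split directly to a pandas dataframe, no intermediate CSV needed.
mobilerec_df = mbr_data['train'].to_pandas()
```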
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
recmeapp/mobilerec
|
[
"region:us"
] |
2023-02-20T02:40:55+00:00
|
{}
|
2023-02-21T17:06:16+00:00
|
7d29a5d18b7a210547e6fb051eeefc39abf9eb4a
|
**Official website**: https://github.com/lfoppiano/SuperMat
### Reference
The paper discussing this dataset can be found [here](https://doi.org/10.1080/27660400.2021.1918396) or on [arXiv](https://arxiv.org/abs/2101.02455).
For citing:
```
@article{doi:10.1080/27660400.2021.1918396,
author = {Luca Foppiano and Sae Dieb and Akira Suzuki and Pedro Baptista de Castro and Suguru Iwasaki and Azusa Uzuki and Miren Garbine Esparza Echevarria and Yan Meng and Kensei Terashima and Laurent Romary and Yoshihiko Takano and Masashi Ishii},
title = {SuperMat: construction of a linked annotated dataset from superconductors-related publications},
journal = {Science and Technology of Advanced Materials: Methods},
volume = {1},
number = {1},
pages = {34-44},
year = {2021},
publisher = {Taylor \& Francis},
doi = {10.1080/27660400.2021.1918396},
URL = {
https://doi.org/10.1080/27660400.2021.1918396
},
eprint = {
https://doi.org/10.1080/27660400.2021.1918396
}
}
```
|
lfoppiano/SuperMat
|
[
"task_categories:token-classification",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-4.0",
"materials science",
"ner",
"machine learning",
"superconductors",
"arxiv:2101.02455",
"region:us"
] |
2023-02-20T02:49:32+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1M<n<10M"], "task_categories": ["token-classification"], "pretty_name": "supermat", "tags": ["materials science", "ner", "machine learning", "superconductors"]}
|
2023-10-24T22:55:51+00:00
|
a246da01ea1e4f0bea068e226ace5e0224846b6a
|
# Dataset Card for "tinydata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jlbaker361/tinydata
|
[
"region:us"
] |
2023-02-20T06:12:29+00:00
|
{"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "sequence", "sequence": "int64"}, {"name": "occurence", "dtype": "int64"}, {"name": "split", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13421, "num_examples": 10}], "download_size": 5408, "dataset_size": 13421}}
|
2023-02-20T06:12:31+00:00
|
78f635dd423fa9d31ba6cca41d1a5072a6f8e0e1
|
napakan/agoji
|
[
"region:us"
] |
2023-02-20T06:35:04+00:00
|
{"pretty_name": "agoji"}
|
2023-02-20T07:00:34+00:00
|
|
7c375f952d0e1e4509c2db3b37a2e4e7ce1876bf
|
# Dataset Card for "enwiki20230101-pageid-minilml6v2embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lsb/enwiki20230101-pageid-minilml6v2embeddings
|
[
"region:us"
] |
2023-02-20T07:12:51+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "minilml6v2", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 110468184098, "num_examples": 57745806}], "download_size": 137147681757, "dataset_size": 110468184098}}
|
2023-02-20T09:34:53+00:00
|
4793232ab14f30524228ec562ba9e119a6263598
|
napakan/agojiwhite
|
[
"region:us"
] |
2023-02-20T07:25:41+00:00
|
{}
|
2023-02-20T07:29:04+00:00
|
|
ed0e4994f18f86132594a81f1f3a5df84b946d04
|
## To use this dataset for your research, please cite the following preprint. The full paper will be available soon.
[Preprint](https://arxiv.org/abs/2212.02842)
### Citation:
@article{thambawita2022visem,
title={VISEM-Tracking: Human Spermatozoa Tracking Dataset},
author={Thambawita, Vajira and Hicks, Steven A and Stor{\aa}s, Andrea M and Nguyen, Thu and Andersen, Jorunn M and Witczak, Oliwia and Haugen, Trine B and Hammer, Hugo L and Halvorsen, P{\aa}l and Riegler, Michael A},
journal={arXiv preprint arXiv:2212.02842}, year={2022}
}
☝️ ☝️ ☝️
### Motivation and background
Manual evaluation of a sperm sample using a microscope is time-consuming and requires costly experts who have extensive training. In addition, the validity of manual sperm analysis becomes unreliable due to limited reproducibility and high inter-personnel variations due to the complexity of tracking, identifying, and counting sperm in fresh samples. The existing computer-aided sperm analyzer systems are not working well enough for application in a real clinical setting due to unreliability caused by the consistency of the semen sample. Therefore, we need to research new methods for automated sperm analysis.
### Target group
The task is of interest to researchers in the areas of machine learning (classification and detection), visual content analysis, and multimodal fusion. Overall, this task is intended to encourage the multimedia community to help improve the healthcare system through the application of their knowledge and methods to reach the next level of computer and multimedia-assisted diagnosis, detection, and interpretation.
### Class Label Mapping
sperm: 0
cluster: 1
small or pinhead: 2
|
SimulaMet-HOST/VISEM-Tracking
|
[
"task_categories:object-detection",
"size_categories:1B<n<10B",
"license:cc-by-4.0",
"sperm",
"VISEM-Tracking",
"sperm tracking",
"tracking",
"arxiv:2212.02842",
"region:us"
] |
2023-02-20T07:42:59+00:00
|
{"license": "cc-by-4.0", "size_categories": ["1B<n<10B"], "task_categories": ["object-detection"], "pretty_name": "VISEM-Tracking", "tags": ["sperm", "VISEM-Tracking", "sperm tracking", "tracking"]}
|
2023-02-20T08:54:57+00:00
|
338fcd0068661b89ae7ad6a4a96e8703ac919d4b
|
powopowo/1111
|
[
"license:openrail",
"region:us"
] |
2023-02-20T08:28:54+00:00
|
{"license": "openrail"}
|
2023-02-20T08:28:54+00:00
|
|
b75fd0d0eeb8ff7eeea97cb691be8ef631945561
|
# Dataset Card for "CaribbeanScans"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Nadav/CaribbeanScans
|
[
"region:us"
] |
2023-02-20T09:09:33+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "evaluation", "1": "train"}}}}], "splits": [{"name": "train", "num_bytes": 152948913099.784, "num_examples": 1675172}, {"name": "test", "num_bytes": 9056919525.81, "num_examples": 87721}], "download_size": 57344797328, "dataset_size": 162005832625.594}}
|
2023-02-21T01:28:57+00:00
|
ee6f39553b6fe4424f0a52b61a8de2a893390744
|
# Dataset Card for "stackoverflow_python"
### Dataset Summary
This dataset comes originally from [kaggle](https://www.kaggle.com/stackoverflow/pythonquestions).
It was originally split across three CSV files (Questions, Answers, and Tags),
now merged into a single table. Each row corresponds to a question-answer pair and
its associated tags.
The dataset contains all questions asked between August 2, 2008 and October 19, 2016.
### Supported Tasks and Leaderboards
This might be useful for open-domain question-answering tasks.
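For example, the merged table can be loaded directly with the `datasets` library (a minimal sketch using the fields listed in the dataset metadata):
```
from datasets import load_dataset

# The dataset ships as a single "train" split of question-answer pairs.
ds = load_dataset("koutch/stackoverflow_python", split="train")

# Each row holds a question, one of its answers, and the question's tags.
print(ds[0]["title"])
print(ds[0]["tags"])
```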
## Additional Information
### License
All Stack Overflow user contributions are licensed under CC-BY-SA 3.0 with attribution required.
|
koutch/stackoverflow_python
|
[
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] |
2023-02-20T09:44:08+00:00
|
{"language": ["en"], "size_categories": ["100K<n<1M"], "task_categories": ["question-answering"], "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}, {"name": "question_body", "dtype": "string"}, {"name": "question_score", "dtype": "int64"}, {"name": "question_date", "dtype": "string"}, {"name": "answer_id", "dtype": "int64"}, {"name": "answer_body", "dtype": "string"}, {"name": "answer_score", "dtype": "int64"}, {"name": "answer_date", "dtype": "string"}, {"name": "tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 2142466142, "num_examples": 987122}], "download_size": 829547986, "dataset_size": 2142466142}}
|
2023-03-27T14:22:32+00:00
|
fdab8b1143183e51f0f50c6a52e6b16bd06a453f
|
wooden-ufo/MyStorage2
|
[
"license:other",
"region:us"
] |
2023-02-20T09:46:35+00:00
|
{"license": "other"}
|
2023-02-21T01:28:44+00:00
|
|
55ab0408717cf9d2e4f4819079a59a94ca7a5db9
|
# Hyundai Equus 1999
Hyundai Equus first-generation LoRA
Recommended weight: 0.8 to 1
[Download (151MB)](https://huggingface.co/datasets/AIARTCHAN/lora-Hyundai_Equus_1999/resolve/main/Equus_1-000006.safetensors)
|
AIARTCHAN/lora-Hyundai_Equus_1999
|
[
"license:creativeml-openrail-m",
"lora",
"aiartchan",
"stable-diffusion",
"region:us"
] |
2023-02-20T10:05:56+00:00
|
{"license": "creativeml-openrail-m", "pretty_name": "Hyundai Equus 1999", "tags": ["lora", "aiartchan", "stable-diffusion"]}
|
2023-02-20T10:08:16+00:00
|
8975c9562ddeb770b544fbdf1c97acd3b41c89e6
|
dataset_info:
features:
- name: questionId
dtype: int64
- name: question
dtype: string
- name: image
sequence:
sequence:
sequence:
sequence: uint8
- name: docId
dtype: int64
- name: ucsf_document_id
dtype: string
- name: ucsf_document_page_no
dtype: string
- name: answers
sequence: string
- name: data_split
dtype: string
- name: words
sequence: string
- name: boxes
sequence:
sequence: int64
splits:
- name: train
num_bytes: 6387690838
num_examples: 39463
- name: val
num_bytes: 869953677
num_examples: 5349
- name: test
num_examples: 5188
download_size: 2583317804
dataset_size: 7257644515
|
Near-Start/layoutlm_docvqa_demo
|
[
"license:openrail",
"region:us"
] |
2023-02-20T10:51:00+00:00
|
{"license": "openrail"}
|
2023-02-20T11:54:45+00:00
|
4865a8976b10d8f0e35b072d5218637601c8c94f
|
Ruzt/Del
|
[
"region:us"
] |
2023-02-20T11:18:03+00:00
|
{}
|
2023-02-20T11:27:44+00:00
|
|
b4a1f6a7be51b198fb059e10819f7d02dc7f1f86
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: stacked-summaries/flan-t5-large-samsum
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
|
autoevaluate/autoeval-eval-samsum-samsum-8b7a44-3603696533
|
[
"autotrain",
"evaluation",
"region:us"
] |
2023-02-20T12:31:00+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "stacked-summaries/flan-t5-large-samsum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
|
2023-02-20T12:34:03+00:00
|
4cb5e4739d2120ee7eaa9e6c27c5c82aee1ff31a
|
UndyingRageblade/Beatrice
|
[
"license:other",
"region:us"
] |
2023-02-20T12:32:47+00:00
|
{"license": "other"}
|
2023-02-20T12:43:48+00:00
|
|
48617dcbf143bc0021beb1a280e2ec5b4540fc57
|
# Dataset Card for "zambezivoice_lozi_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zambezivoice/zambezivoice_lozi_text
|
[
"region:us"
] |
2023-02-20T12:36:45+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 185840, "num_examples": 2525}], "download_size": 107478, "dataset_size": 185840}}
|
2023-02-20T12:36:49+00:00
|
b439a8a2af3c04c046354da4ad2ef23a04b98e16
|
momensirri/BrickSunsetTest
|
[
"license:afl-3.0",
"region:us"
] |
2023-02-20T13:01:38+00:00
|
{"license": "afl-3.0"}
|
2023-02-20T13:01:38+00:00
|
|
0ffe24908a2f79653e3555baff915aff51e3efd1
|
# Dataset Card for "RO-News-Offense"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/readerbench/news-ro-offense](https://github.com/readerbench/news-ro-offense)
- **Repository:** [https://github.com/readerbench/news-ro-offense](https://github.com/readerbench/news-ro-offense)
- **Paper:** News-RO-Offense - A Romanian Offensive Language Dataset and Baseline Models Centered on News Article Comments
- **Point of Contact:** [Andrei Paraschiv](https://github.com/AndyTheFactory)
### Dataset Summary
A novel Romanian language dataset for offensive message detection, with comments from a local Romanian news website (stiri de cluj) manually annotated into five classes:
* non-offensive
* targeted insults
* racist
* homophobic
* sexist
This results in 4052 annotated messages.
### Languages
Romanian
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
'comment_id': 5,
'reply_to_comment_id':2,
'comment_nr': 1,
'content_id': 23,
'comment_text':'PLACEHOLDER TEXT',
'LABEL': 3
}
```
### Data Fields
- `comment_id`: the unique comment ID
- `reply_to_comment_id`: the parent comment, if part of a conversation tree; otherwise empty
- `comment_nr`: the comment's sequential number within the article
- `content_id`: the article ID
- `comment_text`: the full comment text
- `LABEL`: 0 = Non-offensive, 1 = Targeted insult, 2 = Racist, 3 = Homophobic, 4 = Sexist
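A minimal loading sketch (note the repository is gated, so access must be requested first; the split name is assumed here):
```
from datasets import load_dataset

# Human-readable names for the LABEL values listed above.
label_names = ["non-offensive", "targeted insult", "racist", "homophobic", "sexist"]

ds = load_dataset("readerbench/news-ro-offense", split="train")  # assumed split name
example = ds[0]
print(example["comment_text"], "->", label_names[example["LABEL"]])
```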
### Data Splits
| name |train|test|
|---------|----:|---:|
|ro|x|x|
## Dataset Creation
### Curation Rationale
Collecting data for abusive language classification for the Romanian language.
### Source Data
News Articles comments
#### Initial Data Collection and Normalization
#### Who are the source language producers?
News Article readers
### Annotations
#### Annotation process
#### Who are the annotators?
Native speakers
### Personal and Sensitive Information
The data was public at the time of collection. No PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
The data definitely contains abusive language. The data could be used to develop and propagate offensive language against every target group involved, i.e. ableism, racism, sexism, ageism, and so on.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
This data is available and distributed under the Apache-2.0 license.
### Citation Information
```
@misc{cojocaru2022news,
title = {News-RO-Offense - A Romanian Offensive Language Dataset and Baseline Models Centered on News Article Comments},
author = {Cojocaru, Andreea and Paraschiv, Andrei and Dascălu, Mihai},
year = 2022,
journal = {RoCHI - International Conference on Human-Computer Interaction},
publisher = {MATRIX ROM},
doi = {10.37789/rochi.2022.1.1.12},
url = {http://dx.doi.org/10.37789/rochi.2022.1.1.12}
}
```
### Contributions
|
readerbench/news-ro-offense
|
[
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ro",
"license:apache-2.0",
"hate-speech-detection",
"region:us"
] |
2023-02-20T13:04:34+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["ro"], "license": "apache-2.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "News-RO-Offense", "tags": ["hate-speech-detection"], "extra_gated_prompt": "Warning: this repository contains harmful content (abusive language, hate speech)."}
|
2023-06-13T19:03:39+00:00
|
ca4bd93ea3531eac2269cb8d8d5ff3ff088397e3
|
nanaaaa/emotion_chinese_english
|
[
"task_categories:text-classification",
"language:zh",
"language:en",
"doi:10.57967/hf/1019",
"region:us"
] |
2023-02-20T13:24:36+00:00
|
{"language": ["zh", "en"], "task_categories": ["text-classification"]}
|
2023-03-05T10:36:14+00:00
|
|
5846527512e7f82a419aa73b134f84419c3efb46
|
# Dataset Card for "zambezivoice_toi_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zambezivoice/zambezivoice_toi_text
|
[
"region:us"
] |
2023-02-20T13:39:36+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 777892, "num_examples": 8881}], "download_size": 438920, "dataset_size": 777892}}
|
2023-02-20T13:39:40+00:00
|
34c7e5ed9b1896712cd34d8b285153174b9c7585
|
# MOCKS: Multilingual Open Custom Keyword Spotting Testset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [MOCKS 1.0: Multilingual Open Custom Keyword Spotting Testset](https://www.isca-speech.org/archive/pdfs/interspeech_2023/pudo23_interspeech.pdf)
### Dataset Summary
Multilingual Open Custom Keyword Spotting Testset (MOCKS) is a comprehensive audio testset for evaluating and benchmarking
Open-Vocabulary Keyword Spotting (OV-KWS) models. It supports multiple OV-KWS problems:
both text-based and audio-based keyword spotting, as well as offline and online (streaming) modes.
It is based on the LibriSpeech and Mozilla Common Voice datasets and contains
almost 50,000 keywords, with audio data available in English, French, German, Italian, and Spanish.
The testset was generated from automatically produced alignments, which were used to extract parts of the recordings and split them into keywords and test samples.
MOCKS contains both positive and negative examples selected based on phonetic transcriptions that are challenging and should allow for in-depth OV-KWS model evaluation.
Please refer to our [paper](https://www.isca-speech.org/archive/pdfs/interspeech_2023/pudo23_interspeech.pdf) for further details.
### Supported Tasks and Leaderboards
The MOCKS dataset can be used for the Open-Vocabulary Keyword Spotting (OV-KWS) task. It supports two OV-KWS types:
- Query-by-Text, where the keyword is provided by text and needs to be detected in the audio stream.
- Query-by-Example, where the keyword is provided with enrollment audio for detection in the audio stream.
It also allows for:
- offline keyword detection, where test audio is trimmed to contain only keywords of interest.
- online (streaming) keyword detection, where test audio has past and future context besides keywords of interest.
### Languages
MOCKS incorporates 5 languages:
- English - primary and largest test set,
- German,
- Spanish,
- French,
- Italian.
## Dataset Structure
The MOCKS testset is split by language, source dataset, and OV-KWS type:
```
MOCKS
│
└───de
│ └───MCV
│ │ └───test
│ │ │ └───offline
│ │ │ │ │ all.pair.different.tsv
│ │ │ │ │ all.pair.positive.tsv
│ │ │ │ │ all.pair.similar.tsv
│ │ │ │ │ data.tar.gz
│ │ │ │ │ subset.pair.different.tsv
│ │ │ │ │ subset.pair.positive.tsv
│ │ │ │ │ subset.pair.similar.tsv
│ │ │ │
│ │ │ └───online
│ │ │ │ │ all.pair.different.tsv
│ │ │ │ │ ...
│ │ │ │ data.offline.transcription.tsv
│ │ │ │ data.online.transcription.tsv
│
└───en
│ └───LS-clean
│ │ └───test
│ │ │ └───offline
│ │ │ │ │ all.pair.different.tsv
│ │ │ │ │ ...
│ │ │ │ ...
│ │
│ └───LS-other
│ │ └───test
│ │ │ └───offline
│ │ │ │ │ all.pair.different.tsv
│ │ │ │ │ ...
│ │ │ │ ...
│ │
│ └───MCV
│ │ └───test
│ │ │ └───offline
│ │ │ │ │ all.pair.different.tsv
│ │ │ │ │ ...
│ │ │ │ ...
│
└───...
```
Each split is divided into:
- positive examples (`all.pair.positive.tsv`) - test examples with true keywords, 5000-8000 keywords in each subset,
- similar examples (`all.pair.similar.tsv`) - test examples with similar phrases to the keyword selected based on phonetic transcription distance,
- different examples (`all.pair.different.tsv`) - test examples with completely different phrases.
All these files contain tab-separated columns:
- `keyword_path` - path to the audio containing the keyword phrase,
- `adversary_keyword_path` - path to the test audio,
- `adversary_keyword_timestamp_start` - start time in seconds of the phrase of interest for a given keyword from `keyword_path`; this field is only available in the **offline** split,
- `adversary_keyword_timestamp_end` - end time in seconds of the phrase of interest for a given keyword from `keyword_path`; this field is only available in the **offline** split,
- `label` - whether `adversary_keyword_path` contains the keyword from `keyword_path` (1 - contains the keyword, 0 - doesn't contain the keyword).
Each split also contains a subset of the whole data with the same field structure, to allow faster evaluation (`subset.pair.*.tsv`).
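As a quick way to inspect a pair file, the tab-separated columns described above can be read with pandas (a minimal sketch; the path assumes the directory layout shown earlier):
```
import pandas as pd

# Read one of the tab-separated pair files; the path follows the layout above.
pairs = pd.read_csv("MOCKS/en/LS-clean/test/offline/subset.pair.positive.tsv", sep="\t")
print(pairs.columns.tolist())
print(pairs.head())
```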
Also, transcriptions are provided for each audio in:
- `data_offline_transcription.tsv` - transcriptions for **offline** examples and `keyword_path` from **online** scenario,
- `data_online_transcription.tsv` - transcriptions for the adversary, test examples from **online** scenario,
three columns are present within each file:
- `path_to_keyword`/`path_to_adversary_keyword` - path to the audio file,
- `keyword_transcription`/`adversary_keyword_transcription` - audio transcription,
- `keyword_phonetic_transcription`/`adversary_keyword_phonetic_transcription` - audio phonetic transcription.
## Using the Dataset
The dataset can be used by:
- downloading the archive and constructing all the test cases based on the provided `tsv` files,
- using the `datasets` package.
In the latter case, the following should work:
```
from datasets import load_dataset

load_dataset(path="voiceintelligenceresearch/MOCKS", name="en.LS-clean", split="offline")
```
The allowed values for `name` are:
- `en.LS-{clean,other}`,
- `en.LS-{clean,other}.positive`,
- `en.LS-{clean,other}.similar`,
- `en.LS-{clean,other}.different`,
- `en.LS-{clean,other}.subset`,
- `en.LS-{clean,other}.positive_subset`,
- `en.LS-{clean,other}.similar_subset`,
- `en.LS-{clean,other}.different_subset`,
- `{de,en,es,fr,it}.MCV.positive`,
- `{de,en,es,fr,it}.MCV.positive.similar`,
- `{de,en,es,fr,it}.MCV.positive.different`,
- `{de,en,es,fr,it}.MCV.positive.subset`,
- `{de,en,es,fr,it}.MCV.positive.positive_subset`,
- `{de,en,es,fr,it}.MCV.positive.similar_subset`,
- `{de,en,es,fr,it}.MCV.positive.different_subset`.
The allowed values for `split` are:
- `offline`,
- `online`.
`load_dataset` provides a list of dictionary objects with the following contents:
```
{
"keyword_id": datasets.Value("string"),
"keyword_transcription": datasets.Value("string"),
"test_id": datasets.Value("string"),
"test_transcription": datasets.Value("string"),
"test_audio": datasets.Audio(sampling_rate=16000),
"label": datasets.Value("bool"),
}
```
Each element of this list represents a single test case for the QbyT KWS:
- `keyword_id` - the name of the keyword audio file in `data.tar.gz` (not used in QbyT KWS),
- `keyword_transcription` - transcription of the keyword,
- `test_id` - the name of the test audio file in `data.tar.gz`,
- `test_transcription` - transcription of the test sample,
- `test_audio` - raw data of the test audio,
- `label` - `True` if the test case is positive (`keyword_transcription` is a substring of the `test_transcription`), `False` otherwise (`similar` and `different` subsets).
Note that each test case can be extended to QbyE KWS by reading the proper `keyword_id` file. Unfortunately, there is no easy way to do that in the loading script.
All the test files are provided in 16 kHz, even though `{de,en,es,fr,it}.MCV` files are stored in the original sampling (usually 48 kHz) in the `data.tar.gz` archives.
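A minimal sketch of iterating over the loaded test cases, using the field names from the schema above:
```
from datasets import load_dataset

mocks = load_dataset("voiceintelligenceresearch/MOCKS", name="en.LS-clean", split="offline")

for case in mocks:
    keyword = case["keyword_transcription"]  # text query for QbyT KWS
    audio = case["test_audio"]["array"]      # raw test waveform at 16 kHz
    is_positive = case["label"]              # True if the keyword occurs in the test audio
    # ... run a keyword-spotting model on (keyword, audio) and score against is_positive
```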
## Dataset Creation
The MOCKS testset was created from LibriSpeech and Mozilla Common Voice (MCV) datasets that are publicly available. To create it:
- a [MFA](https://mfa-models.readthedocs.io/en/latest/acoustic/index.html) with publicly available models was used to extract word-level alignments,
- an internally developed, rule-based grapheme-to-phoneme (G2P) algorithm was used to prepare phonetic transcriptions for each sample.
The data is stored in a 16-bit, single-channel WAV format. A 16 kHz sampling rate is used for the LibriSpeech-based testset
and a 48 kHz sampling rate for the MCV-based testset.
The offline testset contains an additional 0.1 seconds at the beginning and end of each extracted audio sample to mitigate the cut-speech effect.
The online version contains approximately 1 additional second at the beginning and end of each extracted audio sample.
The MOCKS testset is gender balanced.
## Citation Information
```bibtex
@inproceedings{pudo23_interspeech,
author={Mikołaj Pudo and Mateusz Wosik and Adam Cieślak and Justyna Krzywdziak and Bożena Łukasiak and Artur Janicki},
title={{MOCKS} 1.0: Multilingual Open Custom Keyword Spotting Testset},
year={2023},
booktitle={Proc. Interspeech 2023},
}
```
|
voiceintelligenceresearch/MOCKS
|
[
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:en",
"language:de",
"language:es",
"language:fr",
"language:it",
"license:cc-by-4.0",
"license:mpl-2.0",
"region:us"
] |
2023-02-20T13:40:22+00:00
|
{"annotations_creators": ["expert-generated"], "language": ["en", "de", "es", "fr", "it"], "license": ["cc-by-4.0", "mpl-2.0"], "multilinguality": ["multilingual"], "dataset_info": [{"config_name": "config", "features": [{"name": "audio_id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}]}]}
|
2023-10-27T14:55:12+00:00
|
12662342b6f1a32ae6790ca8ac27012005268d02
|
Ruramai/zimbabwe_history_and_heritage
|
[
"license:openrail",
"region:us"
] |
2023-02-20T13:52:11+00:00
|
{"license": "openrail"}
|
2023-02-20T13:53:50+00:00
|
|
c972e311567afdd3e4e6be81be124a5421398658
|
# Dataset Card for "zambezivoice_nya_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zambezivoice/zambezivoice_nya_text
|
[
"region:us"
] |
2023-02-20T13:57:06+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 877942, "num_examples": 8739}], "download_size": 461513, "dataset_size": 877942}}
|
2023-02-20T13:57:10+00:00
|
7220bfe8f1a4f02b0d4a61a9e441ac8ea4cb0865
|
# Dataset Card for "VQAv2_sample_validation_facebook_opt_2.7b_mode_VQAv2_visclues_detection_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_2.7b_mode_VQAv2_visclues_detection_ns_1000
|
[
"region:us"
] |
2023-02-20T14:09:43+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_8", "num_bytes": 26699595, "num_examples": 1000}], "download_size": 5515420, "dataset_size": 26699595}}
|
2023-02-20T14:09:46+00:00
|
e693e8bbb195b3b4c2911ba82cae237a91042cdb
|
# Dataset Card for "RO-Offense-Sequences"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
<!--
- **Paper:** News-RO-Offense - A Romanian Offensive Language Dataset and Baseline Models Centered on News Article Comments
-->
- **Homepage:** [https://github.com/readerbench/ro-offense-sequences](https://github.com/readerbench/ro-offense-sequences)
- **Repository:** [https://github.com/readerbench/ro-offense-sequences](https://github.com/readerbench/ro-offense-sequences)
- **Point of Contact:** [Andrei Paraschiv](https://github.com/AndyTheFactory)
### Dataset Summary
A novel Romanian language dataset for offensive language detection, with offensive labels manually
annotated on comments from a local Romanian sports news website (gsp.ro),
resulting in 12,445 annotated messages.
### Languages
Romanian
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
'id': 5,
'text':'PLACEHOLDER TEXT',
'label': 'OTHER'
}
```
### Data Fields
- `id`: The unique comment ID, corresponding to the ID in [RO Offense](https://huggingface.co/datasets/readerbench/ro-offense)
- `text`: full comment text
- `label`: the type of offensive message (OTHER, PROFANITY, INSULT, ABUSE)
### Data Splits
Train | Other | Profanity | Insult | Abuse
:---| :---| :---| :---| :---:
9953 | 3656 | 1293 | 2236 | 2768
Test | Other | Profanity | Insult | Abuse
:---| :---| :---| :---| :---:
2492 | 916 | 324 | 559 | 693
## Dataset Creation
### Curation Rationale
Collecting data for abusive language classification for the Romanian language.
For the labeling of texts, we loosely base our definitions on the GermEval 2019 task for detecting offensive language in German tweets (Struß et al., 2019).
Data source: Comments on articles in Gazeta Sporturilor (gsp.ro) between 2011 and 2020
Selection for annotation: we selected comments from a pool of specific articles, based on the number of comments in each article.
The number of comments per article has the following distribution:
```
mean 183.820923
std 334.707177
min 1.000000
25% 20.000000
50% 58.000000
75% 179.000000
max 2151.000000
```
Based on this, we selected only comments from articles having between 20 and 50 comments. We also removed comments containing URLs or three consecutive asterisks, since these were mostly censored by editors or automatic profanity detection algorithms.
Additionally, in order to have meaningful messages for annotation, we selected only messages between 50 and 500 characters in length.
### Source Data
Sports News Articles comments
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Sports News Article readers
### Annotations
- Andrei Paraschiv
- Irina Maria Sandu
#### Annotation process
##### OTHER
Label used for non offensive texts.
##### PROFANITY
This is the "lighter" form of abusive language. When profane words are used without a direct intend on offending a target, or without ascribing some negative qualities to a target we use this label. Some messages in this class may even have a positive sentiment and uses swearwords as emphasis. Messages containing profane words that are not directed towards a specific group or person, we label as **PROFANITY**
Also, self censored messages with swear words having some letters hidden, or some deceitful misspellings of swearwords that have clear intend on circumventing profanity detectors will be treated as **PROFANITY**.
##### INSULT
The message clearly intends to offend someone, ascribing negatively evaluated qualities or deficiencies, or labeling a person or a group of persons as unworthy or unvalued. Insults imply disrespect and contempt directed towards a target.
##### ABUSE
This label marks messages containing the stronger form of offensive and abusive language. This type of language ascribes to the target a social identity that is judged negatively by the majority of society, or at least is perceived as a mostly negatively judged identity. Shameful, unworthy, or morally unacceptable identities fall into this category. In contrast to insults, instances of abusive language require that the target of judgment is seen as a representative of a group and is ascribed negative qualities that are taken to be universal, omnipresent, and unchangeable characteristics of the group.
Additionally, dehumanizing language targeting a person or group is also classified as ABUSE.
#### Who are the annotators?
Native speakers
### Personal and Sensitive Information
The data was public at the time of collection. PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
The data definitely contains abusive language. The data could be used to develop and propagate offensive language against every target group involved, i.e. ableism, racism, sexism, ageism, and so on.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
This data is available and distributed under the Apache-2.0 license.
### Citation Information
```
tbd
```
### Contributions
|
readerbench/ro-offense
|
[
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:readerbench/ro-offense",
"language:ro",
"license:apache-2.0",
"hate-speech-detection",
"offensive speech",
"romanian",
"nlp",
"region:us"
] |
2023-02-20T14:21:40+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["ro"], "license": "apache-2.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["readerbench/ro-offense"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "RO-Offense-Sequences", "tags": ["hate-speech-detection", "offensive speech", "romanian", "nlp"], "extra_gated_prompt": "Warning: this repository contains harmful content (abusive language, hate speech).", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train.csv"}, {"split": "test", "path": "test.csv"}]}, {"config_name": "ner", "data_files": [{"split": "train", "path": "train_ner.csv"}, {"split": "test", "path": "test_ner.csv"}]}]}
|
2023-08-08T09:48:15+00:00
|
2dd02a1361975908c58d4664200b2cc64bb7bbd1
|
# Dataset Card for "FTRACE-Synth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rasgaard/FTRACE-Synth
|
[
"region:us"
] |
2023-02-20T14:26:55+00:00
|
{"dataset_info": {"features": [{"name": "inputs_pretokenized", "dtype": "string"}, {"name": "targets_pretokenized", "dtype": "string"}, {"name": "uuid", "dtype": "string"}, {"name": "proponents", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 21802634, "num_examples": 10000}, {"name": "train", "num_bytes": 710815844, "num_examples": 3190000}], "download_size": 172358159, "dataset_size": 732618478}}
|
2023-02-20T14:32:23+00:00
|
4bffffb740bc6cdba4d832637d081822a9ce0f21
|
silkski/ENERAD
|
[
"license:wtfpl",
"region:us"
] |
2023-02-20T15:11:16+00:00
|
{"license": "wtfpl"}
|
2023-05-12T08:54:52+00:00
|
|
95e55fbcc26e04e15f79ce37b7c68da621fa0a29
|
# Dataset Card for "processed_oscar_bert_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
5w4n/processed_oscar_bert_dataset
|
[
"region:us"
] |
2023-02-20T15:20:30+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 250351200.0, "num_examples": 69542}], "download_size": 85253912, "dataset_size": 250351200.0}}
|
2023-02-20T15:28:51+00:00
|
d32d3c68ee346a3f342614546bd6f0f29ea3bb1a
|
# Dataset Card for "pubmed-summarization-sample2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LeaBresson/pubmed-summarization-sample2
|
[
"region:us"
] |
2023-02-20T15:24:30+00:00
|
{"dataset_info": {"features": [{"name": "article", "dtype": "string"}, {"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 56479394.15796671, "num_examples": 3000}], "download_size": 26417003, "dataset_size": 56479394.15796671}}
|
2023-02-20T15:24:41+00:00
|
82e8eabcfec8ed72ec1a0deb637a68f769940ec4
|
This dataset contains 67 images of signs with text, taken around Kent. The images have varying levels of cropping.
|
Tom-nerd/English-signs-with-text
|
[
"size_categories:n<1K",
"language:en",
"license:mit",
"region:us"
] |
2023-02-20T17:33:46+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"]}
|
2023-02-20T17:38:10+00:00
|
a8f4bb35a3566430ae26ef40ff9a2606af44cd98
|
641 images (4032×3024) of a stone Buddha in a garden, in JPG format.
|
Tom-nerd/Images-of-stone-buddha
|
[
"size_categories:n<1K",
"language:en",
"license:mit",
"region:us"
] |
2023-02-20T17:45:38+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"]}
|
2023-02-20T18:17:37+00:00
|
c295f30d9dba467d0059ac918ea86a92db97fc3e
|
# Dataset Card for "kaggle-kernels-metadata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
cakiki/kaggle-kernels-metadata
|
[
"region:us"
] |
2023-02-20T20:10:09+00:00
|
{"dataset_info": {"features": [{"name": "Id", "dtype": "int64"}, {"name": "download_link", "dtype": "string"}, {"name": "AuthorUserId", "dtype": "int64"}, {"name": "CurrentKernelVersionId", "dtype": "int64"}, {"name": "ForkParentKernelVersionId", "dtype": "int64"}, {"name": "ForumTopicId", "dtype": "int64"}, {"name": "FirstKernelVersionId", "dtype": "int64"}, {"name": "CreationDate", "dtype": "string"}, {"name": "EvaluationDate", "dtype": "string"}, {"name": "MadePublicDate", "dtype": "string"}, {"name": "IsProjectLanguageTemplate", "dtype": "bool"}, {"name": "CurrentUrlSlug", "dtype": "string"}, {"name": "Medal", "dtype": "int64"}, {"name": "MedalAwardDate", "dtype": "string"}, {"name": "TotalViews", "dtype": "int64"}, {"name": "TotalComments", "dtype": "int64"}, {"name": "TotalVotes", "dtype": "int64"}, {"name": "UserName", "dtype": "string"}, {"name": "DisplayName", "dtype": "string"}, {"name": "RegisterDate", "dtype": "string"}, {"name": "PerformanceTier", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 236631252, "num_examples": 852022}], "download_size": 81797588, "dataset_size": 236631252}}
|
2023-02-21T11:47:24+00:00
|
5d44c4487da86ed43ff170b5ac46981377a7162d
|
bhatvineet/mr_trial
|
[
"license:afl-3.0",
"region:us"
] |
2023-02-20T20:25:18+00:00
|
{"license": "afl-3.0"}
|
2023-02-22T10:45:24+00:00
|
|
9fd85ce3ffbd703a2ae082769903c73ee20f526b
|
prycci/teste
|
[
"license:openrail",
"region:us"
] |
2023-02-20T20:39:12+00:00
|
{"license": "openrail"}
|
2023-02-20T20:39:12+00:00
|
|
1589c29d06d0a18b1d286d31785fdb06e1402ac0
|
prycci/testando
|
[
"license:bigscience-openrail-m",
"region:us"
] |
2023-02-20T20:39:56+00:00
|
{"license": "bigscience-openrail-m"}
|
2023-02-20T20:39:56+00:00
|
|
261619b31399bf93742de03243472fb634c0f753
|
- This dataset has been downloaded from PubMed.
- It contains abstracts and titles related to breast cancer.
- The data has been cleaned before uploading.
- It could be used for any NLP task, such as domain adaptation.
|
Gaborandi/breast_cancer_pubmed_abstracts
|
[
"region:us"
] |
2023-02-20T20:53:57+00:00
|
{}
|
2023-02-21T23:07:39+00:00
|
ed3310d6090c3128e0fc0231b1f867015d6a8232
|
timhigins/crisisbench
|
[
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"language:es",
"language:it",
"language:fr",
"language:pt",
"language:tl",
"license:cc-by-nc-sa-4.0",
"crisis",
"twitter",
"region:us"
] |
2023-02-20T22:14:41+00:00
|
{"language": ["en", "es", "it", "fr", "pt", "tl"], "license": "cc-by-nc-sa-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "tags": ["crisis", "twitter"]}
|
2023-02-20T22:27:29+00:00
|
|
ffcfb5b3587448bf5f5874c97d3e7a891f1639ae
|
ecoue/nordmann2023
|
[
"task_categories:translation",
"multilinguality:translation",
"size_categories:1M<n<10M",
"language:de",
"language:en",
"license:unknown",
"europarl",
"newscommentary",
"wikititles",
"ecb",
"rapid",
"eesc",
"ema",
"europat",
"books",
"ted2020",
"qed",
"eubookshop",
"doi:10.57967/hf/0386",
"region:us"
] |
2023-02-20T22:55:31+00:00
|
{"annotations_creators": [], "language_creators": [], "language": ["de", "en"], "license": ["unknown"], "multilinguality": ["translation"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": ["translation"], "task_ids": [], "pretty_name": "nordmann2023", "tags": ["europarl", "newscommentary", "wikititles", "ecb", "rapid", "eesc", "ema", "europat", "books", "ted2020", "qed", "eubookshop"], "dataset_info": {"features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "en"]}}}], "config_name": "balanced", "splits": [{"name": "train", "num_bytes": 1539472445, "num_examples": 5656659}, {"name": "validation", "num_bytes": 706611, "num_examples": 2754}, {"name": "test", "num_bytes": 411077, "num_examples": 1831}], "download_size": 4076594396, "dataset_size": 1540590133}}
|
2023-02-21T23:11:15+00:00
|
|
10aaa75fc572651bb5b2b59c530f64f5ff8cf225
|
### Dataset Card for SNLI Back Translation
A back-translation of the SNLI dataset; only the test split is used.
|
sagnikrayc/snli-bt
|
[
"license:afl-3.0",
"region:us"
] |
2023-02-20T23:03:02+00:00
|
{"license": "afl-3.0"}
|
2023-02-20T23:11:17+00:00
|
1c686e6d2d8b67da5d9aab2068361f3f479a8b33
|
# flan-t5-onnx
This is an ONNX export of the [Google FLAN T5](https://huggingface.co/google/flan-t5-base) models. It includes every size except xxl.
The export script is included at `./exportt5.py`.
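A minimal sketch of loading one of the exported ONNX files with `onnxruntime` (the file name below is hypothetical; check the repository layout for the actual paths):
```
import onnxruntime as ort

# Hypothetical file name for the base-size encoder export.
session = ort.InferenceSession("flan-t5-base-encoder.onnx")
print([inp.name for inp in session.get_inputs()])
```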
License: apache-2.0
|
bakks/flan-t5-onnx
|
[
"region:us"
] |
2023-02-21T00:41:57+00:00
|
{}
|
2023-02-22T18:40:21+00:00
|
0bf796fc9b8952aaca40b9ca6d18b284fba253e4
|
Molecules in this set
* have a molecular weight of less than 1500 Daltons,
* do not possess counter ions,
* only contain the elements C, H, O, N, P, S, F, Cl, Br, I, Se and B,
* do not contain isotopes of hydrogen (D, T),
* have 3–40 bonds,
* do not contain any charged groups, including zwitterionic forms,
* only contain implicit hydrogens, except in functional groups,
* have fewer than 40 SMILES characters,
* contain no stereochemistry.
The original DECIMER dataset was imported and randomly sampled. 516×516 images were generated using RDKit.
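A minimal sketch of the RDKit rendering step (the exact drawing options used to build this dataset are not specified; the SMILES string is only an example):
```
from rdkit import Chem
from rdkit.Chem import Draw

# Render an example molecule to a 516x516 image, mirroring the image size of this dataset.
mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, for illustration
img = Draw.MolToImage(mol, size=(516, 516))
img.save("molecule.png")
```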
## Reference
> Rajan, Kohulan; Zielesny, Achim; Steinbeck, Christoph (2021): DECIMER 1.0: Deep Learning for Chemical Image Recognition using Transformers. ChemRxiv. Preprint. https://doi.org/10.26434/chemrxiv.14479287.v1
|
navanchauhan/decimer-data-mini
|
[
"task_categories:image-to-text",
"size_categories:10K<n<100K",
"license:openrail",
"region:us"
] |
2023-02-21T01:12:25+00:00
|
{"license": "openrail", "size_categories": ["10K<n<100K"], "task_categories": ["image-to-text"], "pretty_name": "PubChem 68K", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "smiles", "dtype": "string"}, {"name": "selfies", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1185846198.576, "num_examples": 68996}, {"name": "test", "num_bytes": 267097779.576, "num_examples": 15499}, {"name": "validation", "num_bytes": 266912227.912, "num_examples": 15499}], "download_size": 1692942822, "dataset_size": 1719856206.064}}
|
2023-02-21T07:06:36+00:00
|
3d11872f00818e2b30d3dc4a26d9d44119e45701
|
- This dataset has been downloaded from PubMed.
- It contains abstracts and titles related to Alzheimer's disease.
- The data has been cleaned before uploading.
- It could be used for any NLP task, such as domain adaptation.
|
Gaborandi/Alzheimer_pubmed_abstracts
|
[
"region:us"
] |
2023-02-21T01:34:10+00:00
|
{}
|
2023-02-21T23:16:19+00:00
|
3594a6573e07a4f37f050e8b0afba909297e5ef7
|
rraux/testdataset
|
[
"license:mit",
"region:us"
] |
2023-02-21T01:35:47+00:00
|
{"license": "mit"}
|
2023-02-21T01:37:15+00:00
|
|
8b768411eb431053ddcf7c394ff97b7bd2bae04a
|
jungsungmoon/Korean_dialog
|
[
"license:unknown",
"region:us"
] |
2023-02-21T01:46:53+00:00
|
{"license": "unknown"}
|
2023-02-21T02:06:59+00:00
|
|
34dd93726fbdb0f57ee4114a0970578277754a64
|
# Dataset Card for "VQAv2_sample_validation_facebook_opt_6.7b_mode_VQAv2_visclues_detection_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_6.7b_mode_VQAv2_visclues_detection_ns_1000
|
[
"region:us"
] |
2023-02-21T01:58:33+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 26699615, "num_examples": 1000}], "download_size": 5515967, "dataset_size": 26699615}}
|
2023-02-21T01:58:36+00:00
|
794607c13ee73175e7cd0954de327dd5d301ac8b
|
# Dataset Card for "products-second-checkpoint"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
matterr/products-10k-test
|
[
"region:us"
] |
2023-02-21T02:16:06+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1997550994.326, "num_examples": 10001}], "download_size": 2344525315, "dataset_size": 1997550994.326}}
|
2023-02-21T02:17:36+00:00
|
268cfaa48d1018a2b161c626a04566e031e6e958
|
This is a dataset of more than 1 million word tokens, consisting of works by historical Black writers who wrote about Black emancipation. Included in this dataset are:
Collected Articles of Frederick Douglass (8,000 word tokens), Three Addresses by Frederick Douglass (28K word tokens), Why is the Negro Lynched? (15K word tokens), My Bondage and My Freedom (135K word tokens), and Narrative of the Life of Frederick Douglass (40K word tokens), all by Frederick Douglass;
Darkwater (67K word tokens), The Gift of Black Folk (77K word tokens), John Brown (101K word tokens), The Negro Problem (36K word tokens), The Conservation of Races (5K word tokens), The Negro (57K word tokens), The Quest of the Silver Fleece (109K word tokens), and The Suppression of the African Slave-Trade (123K word tokens), all by W. E. Burghardt Du Bois;
and Up From Slavery: An Autobiography by Booker T. Washington (77K word tokens).
The evaluation dataset consists of The Underground Railroad by William Still (400K word tokens).
|
armahlovis/BlackWriterOnFreedom
|
[
"license:mit",
"region:us"
] |
2023-02-21T03:22:21+00:00
|
{"license": "mit"}
|
2023-02-21T03:51:03+00:00
|
dbf712bcbe3ac703178df13b0a6c690fa597c6d7
|
# Dataset Card for "rlhf-qa-comparisons"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kastan/rlhf-qa-comparisons
|
[
"region:us"
] |
2023-02-21T03:27:17+00:00
|
{"dataset_info": {"features": [{"name": "Question", "dtype": "string"}, {"name": "Chosen", "dtype": "string"}, {"name": "Rejected", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 172575, "num_examples": 337}], "download_size": 58298, "dataset_size": 172575}}
|
2023-02-27T19:31:09+00:00
|
b2ca1f2a2316a8fe9ce484af6a242ba75cedb8f8
|
# Dataset Card for "common_voice_10_1_th_sentence"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DylanonWic/common_voice_10_1_th_sentence
|
[
"region:us"
] |
2023-02-21T04:13:57+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 959372, "num_examples": 9904}, {"name": "validation", "num_bytes": 948673, "num_examples": 9775}, {"name": "train", "num_bytes": 2424732, "num_examples": 28024}], "download_size": 2035494, "dataset_size": 4332777}}
|
2023-02-21T04:14:01+00:00
|
6d39f59e84a1136b4b29e4d8570d91210e006924
|
test text
|
Shelldid/1dataset
|
[
"license:openrail",
"region:us"
] |
2023-02-21T04:17:14+00:00
|
{"license": "openrail"}
|
2023-02-21T04:22:33+00:00
|
3e07c34559d4c6b4038345050467633db76175e3
|
# Dataset Card for "enwiki20230101-pageid-minilml6v2embeddingsjson"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lsb/enwiki20230101-pageid-minilml6v2embeddingsjson
|
[
"region:us"
] |
2023-02-21T05:57:16+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "minilml6v2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 185406292691, "num_examples": 57745806}], "download_size": 74786404654, "dataset_size": 185406292691}}
|
2023-02-21T08:29:59+00:00
|
9856e855f5ddbaa1c49ea4b5501dcc22effdfa1e
|
This is the IMDB dataset: https://huggingface.co/datasets/imdb
We've used a reward/sentiment model, https://huggingface.co/lvwerra/distilbert-imdb, to compute the rewards of the offline data,
so that offline RL can be run on the data.
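A minimal sketch of how such a reward can be computed with the sentiment model (the exact scoring convention used for this dataset is not specified here; this only illustrates the idea):
```
from transformers import pipeline

# The sentiment model acting as the reward model.
sentiment = pipeline("text-classification", model="lvwerra/distilbert-imdb")

text = "This movie was an absolute delight from start to finish."
result = sentiment(text)[0]
# One plausible convention: use the positive-class probability as the scalar reward.
reward = result["score"] if result["label"] == "POSITIVE" else 1.0 - result["score"]
print(reward)
```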
|
thejaminator/imdb_rewarded
|
[
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-02-21T06:07:47+00:00
|
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-generation"]}
|
2023-02-21T06:23:19+00:00
|
212ed1b2f7633a8551262379af1272e29501bbb7
|
Joe02/Character_refs
|
[
"license:other",
"region:us"
] |
2023-02-21T06:43:33+00:00
|
{"license": "other"}
|
2023-04-28T06:48:51+00:00
|
|
75f225898f016e8e8d0af54ff84bab3c1877e9bf
|
Understanding cellular architecture is a fundamental problem in various biological studies.
C. elegans is widely used as a model organism in these studies because of its unique fate determinations.
In recent years, researchers have worked extensively on C. elegans to uncover the regulation of genes and proteins in cell mobility and communication.
Although various algorithms have been proposed to analyze the nucleus, cell shape features are not yet well recorded.
This dataset is used for tasks such as segmentation.
|
devoworm-group/EPIC-DATASET
|
[
"license:mit",
"region:us"
] |
2023-02-21T06:55:23+00:00
|
{"license": "mit"}
|
2023-02-24T17:55:26+00:00
|
611dcd2fe24749690421b7e6f2b6d81241d86d5a
|
# Dataset Card for "context-dialogue-generate-ds-zh-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
svjack/context-dialogue-generate-ds-zh-v1
|
[
"region:us"
] |
2023-02-21T07:28:37+00:00
|
{"dataset_info": {"features": [{"name": "sent", "dtype": "string"}, {"name": "dialogue", "sequence": "string"}, {"name": "L_emb", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 74417088, "num_examples": 20000}], "download_size": 82191201, "dataset_size": 74417088}}
|
2023-02-21T07:59:42+00:00
|
1cbbd51f62724fc8861e00bfc078d01157178363
|
# Dataset Card for "nlp244_french_snli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Brendan/nlp244_french_snli
|
[
"region:us"
] |
2023-02-21T07:32:09+00:00
|
{"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "fr_premise", "dtype": "string"}, {"name": "fr_hypothesis", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2298242, "num_examples": 10000}, {"name": "train", "num_bytes": 122710788, "num_examples": 550152}, {"name": "validation", "num_bytes": 2305275, "num_examples": 10000}], "download_size": 40406975, "dataset_size": 127314305}}
|
2023-02-21T07:32:38+00:00
|
8c5aa57b3f3435b374557e27855716a164b6c5fe
|
xxss/landscape
|
[
"region:us"
] |
2023-02-21T07:43:09+00:00
|
{}
|
2023-02-21T07:47:41+00:00
|
|
da4921c6b7cc19242f7d4bb93f387db9ee10974e
|
zydxn77/zydxn77
|
[
"license:mit",
"region:us"
] |
2023-02-21T07:46:02+00:00
|
{"license": "mit"}
|
2023-02-21T07:48:18+00:00
|
|
905f7f4cccaf092148b94da7b911d6710280e76e
|
zydxn77/zydxn
|
[
"license:mit",
"region:us"
] |
2023-02-21T07:55:11+00:00
|
{"license": "mit"}
|
2023-02-21T07:57:36+00:00
|
|
f6458b7d0a1b861be328404cea9ec952a5063e2f
|
# Dataset Card for "generated_ar_en_th_datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Shularp/generated_ar_en_th_datasets
|
[
"region:us"
] |
2023-02-21T07:58:14+00:00
|
{"dataset_info": {"features": [{"name": "ar", "dtype": "string"}, {"name": "en", "dtype": "string"}, {"name": "th", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 168583, "num_examples": 584}, {"name": "validation", "num_bytes": 75552, "num_examples": 251}], "download_size": 106639, "dataset_size": 244135}}
|
2023-02-21T07:58:18+00:00
|
295b95fb4fe9be4ff3f933b73142d142cf6b2c97
|
https://github.com/Yale-LILY/FOLIO
```
@article{han2022folio,
title={FOLIO: Natural Language Reasoning with First-Order Logic},
author = {Han, Simeng and Schoelkopf, Hailey and Zhao, Yilun and Qi, Zhenting and Riddell, Martin and Benson, Luke and Sun, Lucy and Zubova, Ekaterina and Qiao, Yujie and Burtell, Matthew and Peng, David and Fan, Jonathan and Liu, Yixin and Wong, Brian and Sailor, Malcolm and Ni, Ansong and Nan, Linyong and Kasai, Jungo and Yu, Tao and Zhang, Rui and Joty, Shafiq and Fabbri, Alexander R. and Kryscinski, Wojciech and Lin, Xi Victoria and Xiong, Caiming and Radev, Dragomir},
journal={arXiv preprint arXiv:2209.00840},
url = {https://arxiv.org/abs/2209.00840},
year={2022}
}
```
|
tasksource/folio
|
[
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"language:en",
"license:cc",
"arxiv:2209.00840",
"region:us"
] |
2023-02-21T08:15:17+00:00
|
{"language": ["en"], "license": "cc", "task_categories": ["text-classification"], "task_ids": ["natural-language-inference", "multi-input-text-classification"]}
|
2024-01-18T08:34:47+00:00
|
71b580bef5684dc1669270f64e37e8f9ea826df2
|
# Dataset Card for Multipage Document Visual Question Answering (MP-DocVQA)
## Dataset Description
- **Homepage: [Robust Reading Competition Portal](https://rrc.cvc.uab.es/?ch=17&com=introduction)**
- **Repository: [Robust Reading Competition Portal](https://rrc.cvc.uab.es/?ch=17&com=downloads)**
- **Paper: [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/abs/2212.05935)**
- **Leaderboard: [Task 4 of DocVQA on the Robust Reading Competition Portal](https://rrc.cvc.uab.es/?ch=17&com=evaluation&task=4)**
### Dataset Summary
The dataset is aimed at Visual Question Answering on multipage scanned industry documents. The questions and answers are reused from the Single Page DocVQA (SP-DocVQA) dataset. The images correspond to the same documents as in the original dataset, extended with the preceding and following pages, up to a limit of 20 pages per document.
### Download the Dataset
The dataset is not yet integrated with Hugging Face, but you can download it from the [DocVQA Challenge](https://rrc.cvc.uab.es/?ch=17) in the RRC Portal, [Downloads section](https://rrc.cvc.uab.es/?ch=17&com=downloads).
### Leaderboard
You can also check the live leaderboard at the [RRC Portal](https://rrc.cvc.uab.es/?ch=17&com=evaluation&task=4).
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
| | Train | Validation | Test | Total |
|----------|:-----:|:-----------:|:------:|:-------:|
|**Questions** |36230 | 5187 |5019 | 46436 |
|**Documents** |5131 | 927 |959 | 5929 |
|**Pages / Images** |37269 | 6510 |6223 | 47952 |
Note that some documents may appear in both the validation and test sets, but they are never seen during training.
### Citation Information
```tex
@article{tito2022hierarchical,
title={Hierarchical multimodal transformers for Multi-Page DocVQA},
author={Tito, Rub{\`e}n and Karatzas, Dimosthenis and Valveny, Ernest},
journal={arXiv preprint arXiv:2212.05935},
year={2022}
}
```
|
rubentito/mp-docvqa
|
[
"task_categories:question-answering",
"task_categories:document-question-answering",
"multilinguality:monolingual",
"source_datasets:Single Page Document Visual Question Answering",
"language:en",
"license:mit",
"arxiv:2212.05935",
"region:us"
] |
2023-02-21T08:36:46+00:00
|
{"language": ["en"], "license": "mit", "multilinguality": ["monolingual"], "source_datasets": ["Single Page Document Visual Question Answering"], "task_categories": ["question-answering", "document-question-answering", "document-visual-question-answering"], "pretty_name": "MP-DocVQA (Multipage Document Visual Question Answering)"}
|
2023-02-27T16:09:10+00:00
|
4e95e90e6eb902a76c5f545c748510ef90342a22
|
# Dataset Card for "livedoor_news_corpus"
## Dataset Description
- **Homepage:** [ダウンロード - 株式会社ロンウイット](http://www.rondhuit.com/download.html#ldcc)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [RONDHUIT](mailto:[email protected])
### Dataset Summary
The livedoor News Corpus is a collection of 7k human-written Japanese news stories.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language in the dataset is Japanese. The BCP-47 code for Japanese is ja.
## Dataset Structure
### Data Instances
For each instance, there is a string for the URL, a datetime for the date, a string for the title, a string for the text, and an integer for the label.
```
{'url': 'http://news.livedoor.com/article/detail/6601535/',
'date': '2012-05-28T12:55:00+0900',
'title': 'NTTドコモ、2012夏モデル新商品内覧会を東京・名古屋・大阪で開催!DCMXおよびプレミアステージ会員向け',
'text': '2012夏モデル新商品内覧会が開催! \n\nNTTドコモは28日、この夏以降に発売予定の新商品を発売前に体験できる「2012 夏モデル新商品内覧会」を東京や名古屋、大阪にてDCMX会員およびプレミアステージ会員(ドコモプレミアクラブ)を対象に実施することをお知らせしています。\n\n事前お申込みは不要で、当日、入場の際にDCMXカードもしくはドコミプレミアクラブ・サイト画面を提示することで、入場できます。\n\nまた、1人の対象者がいれば、知り合いや友だちを連れていっても大丈夫とのことです。なお、DCMX mini会員は対象外となるということです。\n\n開催日時および開催会場は、以下の通りです。ただし、時間帯によっては混雑のために入場制限をする場合があるとのことですので、ご注意ください。\n\n【開催日】\n・東京会場\n2012年6月8日(金)〜10日(日)\n・名古屋会場\n2012年6月15日(金)〜17日(日)\n・大阪会場\n2012年6月16日(土)〜17日(日)\n\n※時間帯によっては混雑のため、入場制限させていただく場合があります。あらかじめご了承願います。\n※お連れ様は何名でもご来場いただけます。\n※会場までの交通費等はお客様ご負担となります。\n※ご来場の際は、公共交通機関をご利用ください。\n\n【東京会場】\n■会場\n東京ドームシティ プリズムホール 1F\n大好評の各機種のメーカー担当者によるプレゼンテーション、スマートフォン講座の他、20周年の感謝の気持ちを込めて、約60機種の歴代ケータイの展示や、歴代ドコモダケ展示など、特別企画も盛りだくさん!ご家族、お友達をお誘いの上、是非ご来場ください。\n\nステージスケジュールは6月1日(金)公開予定!\n■日時\n2012年6月8日(金)午後5:00〜午後9:00\n※最終入場時間:午後8:30\n2011年6月9日(土)・10日(日)午前10:30〜午後6:00\n※最終入場時間:午後5:30\n\n※途中入場可\n※開場時間にご注意ください。\n※当日の様子を取材しホームページ等に掲載する場合があります。なお、当日取材させていただいた画像、コメントなどの肖像権は弊社に帰属するものとさせていただきます。\n■混雑状況\n当日の混雑状況についてご確認いただけます。\n詳しくはこちら\n■住所\n東京都文京区後楽1-3-61\n東京ドームシティ プリズムホール 1F\n■交通アクセス\n・JR中央線・総武線・都営三田線「水道橋駅」徒歩約1分\n・東京メトロ丸ノ内線・南北線「後楽園駅」徒歩約3分\n・都営大江戸線「春日駅」徒歩約5分\n\n\n【名古屋会場】\n■会場\n栄ガスビル5F ガスホール\nスマートフォンのステージイベントを実施予定!モバイルアスキー・アスキードットPC編集部presentsで定番のアプリからおすすめの人気アプリなどを紹介します。\n\nステージスケジュールは6月1日(金)公開予定!\n\nDCMXのカードをご提示いただいた方に抽選で粗品をプレゼントいたします。DCMX会員の皆様は、是非DCMXのカードをご持参ください。\n※6月15日(金)は内覧会は開催されますが、ステージはございません。\n■日時\n2012年6月15日(金)午後6:00〜午後9:00\n※最終入場時間:午後8:30\n2012年6月16日(土)・17日(日)午前11:00〜午後6:00\n※最終入場時間:午後5:30\n\n※途中入場可\n※開催時間にご注意ください。\n■住所\n愛知県名古屋市中区栄3-15-33\n栄ガスホール 5F 栄ガスホール\n■交通アクセス\n・地下鉄東山線・名城線「栄駅」サカエチカ6番出口より徒歩約5分\n・地下鉄名城線「矢場町駅」6番出口より徒歩約2分\n\n\n【大阪会場】\n■会場\nハービスOSAKA B2F ハービスHALL\nスペシャルステージを実施予定! 各機種のメーカー担当者によるプレゼンテーションの他、メーカー担当者が一堂に会する「スマートフォンサミット」、その他お楽しみ企画もあるよ!\nステージスケジュールは6月1日(金)公開予定!\n\n■日時\n2012年6月16日(土)・17日(日)午前11:00〜午後6:00\n※最終入場時間:午後5:30\n※途中入場可\n※当日の様子を取材しホームページ等に掲載する場合があります。なお、当日取材させていただいた画像、コメントなどの肖像権は弊社に帰属するものとさせていただきます。\n■住所\n大阪府大阪市北区梅田2-5-25\nハービスOSAKA B2F ハービスHALL\n■交通アクセス\n・阪神電車「梅田駅」西改札より徒歩約6分\n・JR線「大阪駅」桜橋口より徒歩約7分\n・地下鉄御堂筋線「梅田駅」南改札より徒歩約10分\n・阪急電車「梅田駅」より徒歩約15分\n\n記事執筆:memn0ck\n\n■関連リンク\n・エスマックス(S-MAX)\n・エスマックス(S-MAX) smaxjp on Twitter\n・DCMX|ドコモのケータイクレジット\n',
'label': 6}
```
### Data Fields
- `url`: a string containing the article URL
- `date`: a string containing the publication datetime
- `title`: a string containing the article title
- `text`: a string containing the article body
- `label`: an integer indicating the category: 0 (Topic News), 1 (Sports Watch), 2 (IT Life Hack), 3 (Appliance Channel), 4 (MOVIE ENTER), 5 (Single Woman Report), 6 (Smax), 7 (livedoor HOMME), 8 (Peachy)
### Data Splits
The livedoor News Corpus has 1 split: *train*.
| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| Train | 7,367 |
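A minimal loading sketch (assuming the corpus loads through the standard `datasets` API; the field names follow the Data Fields section above):
```python
from datasets import load_dataset

# The corpus has a single "train" split.
ds = load_dataset("t0mmy/livedoor_news_corpus", split="train")
example = ds[0]
print(example["title"], example["label"])
```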
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The livedoor News Corpus was developed by [RONDHUIT](https://www.rondhuit.com/en.html).
### Licensing Information
The livedoor News Corpus is licensed under a [Creative Commons Attribution-NoDerivs 2.1 Japan License](https://creativecommons.org/licenses/by-nd/2.1/jp/).
### Citation Information
```
@misc{livedoornewscorpus,
title={livedoor News Corpus},
author={RONDHUIT},
year={2012},
howpublished={\url{http://www.rondhuit.com/download.html#ldcc}}
}
```
### Contributions
Thanks to [@rondhuit](https://github.com/RONDHUIT) for adding this dataset.
|
t0mmy/livedoor_news_corpus
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:ja",
"license:cc",
"region:us"
] |
2023-02-21T09:02:23+00:00
|
{"language": ["ja"], "license": "cc", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "livedoor News Corpus"}
|
2023-03-12T02:25:37+00:00
|
a0f2d9641115a78c40d7bb493823774415528e12
|
summernight66/traintest
|
[
"license:openrail",
"region:us"
] |
2023-02-21T09:26:50+00:00
|
{"license": "openrail"}
|
2023-02-21T09:26:50+00:00
|
|
bcfe384adce83cfc45a64632b9fa045008bc3a87
|
# Dataset Card for "tokenized_generated_ar_en_th_datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Shularp/tokenized_generated_ar_en_th_datasets
|
[
"region:us"
] |
2023-02-21T10:02:25+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 1049412, "num_examples": 2336}, {"name": "validation", "num_bytes": 466947, "num_examples": 1004}], "download_size": 475301, "dataset_size": 1516359}}
|
2023-02-21T10:02:33+00:00
|
fa3dd54c8f3989a60820ab7a41c7e00b1f0ab65e
|
hamtech/tst
|
[
"size_categories:100B<n<1T",
"language:en",
"license:pddl",
"region:us"
] |
2023-02-21T10:20:49+00:00
|
{"language": ["en"], "license": "pddl", "size_categories": ["100B<n<1T"], "pretty_name": "tst"}
|
2023-02-21T10:23:01+00:00
|
|
064e41191cc868da3dcc3e26d045e665ee196a4b
|
Toywanit/bokchar
|
[
"region:us"
] |
2023-02-21T11:49:14+00:00
|
{}
|
2023-02-21T11:50:32+00:00
|
|
169b7499e6674fb92878c33dda63b636275f4a89
|
The images originally come from this [fine-tuned DreamBooth model](https://huggingface.co/jefsnacker/azzy). This dataset was created purely for study purposes, so that the images can be conveniently loaded for further experiments.
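For example, a minimal loading sketch (assuming the images are exposed through the standard `datasets` API with a `train` split; the split name is an assumption):
```python
from datasets import load_dataset

ds = load_dataset("Vincent-luo/dreambooth-cat", split="train")
```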
|
Vincent-luo/dreambooth-cat
|
[
"region:us"
] |
2023-02-21T12:20:23+00:00
|
{}
|
2023-02-21T12:35:44+00:00
|
4d7b487875d143f61a1bcc9d233ac86cda744ebd
|
# Dataset Card for "instructpix2pix-demo"
The dataset was created using [this notebook](https://colab.research.google.com/gist/sayakpaul/f90aa06f8f89c831f798dd5b3939818b/scratchpad.ipynb).
Paper reference: [InstructPix2Pix: Learning to Follow Image Editing Instructions](https://arxiv.org/abs/2211.09800)
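A minimal loading sketch (the field names `input`, `edit`, `output`, and `image` come from this repository's metadata):
```python
from datasets import load_dataset

ds = load_dataset("sayakpaul/instructpix2pix-demo", split="train")
sample = ds[0]
print(sample["input"], "->", sample["edit"])
sample["image"]  # decoded as a PIL image
```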
|
sayakpaul/instructpix2pix-demo
|
[
"arxiv:2211.09800",
"region:us"
] |
2023-02-21T12:21:29+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "edit", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2456199.0, "num_examples": 5}], "download_size": 2460397, "dataset_size": 2456199.0}}
|
2023-02-22T04:38:14+00:00
|
9d03c39aa7d4d39d823f87f64ecf78bd5ef05296
|
# Dataset Card for "reklambox_filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/reklambox_filtered
|
[
"region:us"
] |
2023-02-21T12:23:10+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "label_name", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 480137, "num_examples": 416}, {"name": "train", "num_bytes": 1106131, "num_examples": 968}], "download_size": 947347, "dataset_size": 1586268}}
|
2023-02-21T12:23:22+00:00
|
7654851616cbc04835a916ccbb41e2e541f43ae0
|
An imitation learning dataset for the atari_alien environment, containing samples from the policy atari_2B_atari_alien_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_alien_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T12:25:37+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T12:54:53+00:00
|
77614f0769e497f3e135f7c247b65f91ae22f4b5
|
An imitation learning dataset for the atari_amidar environment, containing samples from the policy atari_2B_atari_amidar_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_amidar_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T13:00:59+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T13:01:52+00:00
|
6d2eb57daf067d206104c97fc0b056496a228917
|
# Dataset Card for "reklambox-filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/reklambox-filtered
|
[
"region:us"
] |
2023-02-21T13:03:49+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "label_name", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}, {"name": "sentence_length", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 281204, "num_examples": 350}, {"name": "train", "num_bytes": 643860, "num_examples": 808}], "download_size": 554464, "dataset_size": 925064}}
|
2023-02-21T13:04:00+00:00
|
986be229c59d478aaed1eb0b9f924598a5ab916c
|
An imitation learning dataset for the atari_assault environment, containing samples from the policy atari_2B_atari_assault_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_assault_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T13:07:08+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T13:08:02+00:00
|
f1c14da633ba86d4c9403cdef0666c4f2b3d44ab
|
An imitation learning dataset for the atari_asterix environment, containing samples from the policy atari_2B_atari_asterix_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_asterix_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T13:13:37+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T13:15:48+00:00
|
59c2f97fcb3413edad21b6566729253afabc6c40
|
An imitation learning dataset for the atari_asteroid environment, containing samples from the policy atari_2B_atari_asteroid_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_asteroid_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T13:21:21+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T13:21:58+00:00
|
8df556754b26a7101cd1d73d9887968f0a18a4a2
|
An imitation learning dataset for the atari_atlantis environment, containing samples from the policy atari_2B_atari_atlantis_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_atlantis_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T13:27:47+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T13:28:40+00:00
|
bc42c7fdab3dadf4d50929037a5caa6e8fc2aa0c
|
An imitation learning dataset for the atari_bankheist environment, containing samples from the policy atari_2B_atari_bankheist_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_bankheist_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T13:34:24+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T13:35:25+00:00
|
84a5366559b4ae34ae316d87596a38227c6b9471
|
An imitation learning dataset for the atari_battlezone environment, containing samples from the policy atari_2B_atari_battlezone_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_battlezone_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T13:41:31+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T13:42:24+00:00
|
06741b0271089f7609058d8ca7fcf33e909f7700
|
An imitation learning dataset for the atari_beamrider environment, containing samples from the policy atari_2B_atari_beamrider_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_beamrider_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T13:48:15+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T13:48:52+00:00
|
81d1e48410a332b774dd530a513f42219349bee5
|
An imitation learning dataset for the atari_berzerk environment, containing samples from the policy atari_2B_atari_berzerk_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_berzerk_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T13:54:19+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T13:54:56+00:00
|
7fdde2afb71dc7d21f683b7ec5d4b8208b9934e9
|
An imitation learning dataset for the atari_bowling environment, containing samples from the policy atari_2B_atari_bowling_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_bowling_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T14:00:15+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T14:01:08+00:00
|
b0aa6fdf6f7d1ce6c6872de977486721aae32898
|
An imitation learning dataset for the atari_boxing environment, containing samples from the policy atari_2B_atari_boxing_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_boxing_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T14:07:48+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T14:08:41+00:00
|
68cdd4d8b1f19b25c8dc959f8bb64a984def77c3
|
An imitation learning dataset for the atari_breakout environment, containing samples from the policy atari_2B_atari_breakout_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_breakout_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T14:14:23+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T14:15:00+00:00
|
d9d4896deb2d833bebe98b04865da24f8d4c6b36
|
An imitation learning dataset for the atari_centipede environment, containing samples from the policy atari_2B_atari_centipede_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_centipede_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T14:21:02+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T14:22:16+00:00
|
e8d06fd1e9734e67f3008ebd567d1939b4fbff32
|
# Dataset Card for mini_raw_diachronic_swe
The Swedish Diachronic Corpus is a project funded by [Swe-Clarin](https://sweclarin.se/eng) and provides a corpus of texts covering the time period from Old Swedish onwards.
### Data Splits
**This will be further extended!**
* Number of instances in the `train` split: 4,760,470
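Given the size of the split, streaming may be the more practical way to inspect it (a sketch assuming the standard `datasets` API; the `text` field follows this repository's metadata):
```python
from datasets import load_dataset

# Stream instead of downloading all ~4.8M rows up front.
ds = load_dataset("Riksarkivet/mini_raw_diachronic_swe", split="train", streaming=True)
for example in ds.take(3):
    print(example["text"][:80])
```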
## Acknowledgements
We gratefully acknowledge [SWE-clarin](https://sweclarin.se/) for the datasets.
## Citation Information
Eva Pettersson and Lars Borin (2022). Swedish Diachronic Corpus. In Darja Fišer & Andreas Witt (eds.), *CLARIN: The Infrastructure for Language Resources*. Berlin: De Gruyter. https://degruyter.com/document/doi/10.1515/9783110767377-022/html
|
Riksarkivet/mini_raw_diachronic_swe
|
[
"size_categories:1M<n<10M",
"language:sv",
"license:mit",
"historical",
"WIP",
"region:us"
] |
2023-02-21T14:21:36+00:00
|
{"language": ["sv"], "license": "mit", "size_categories": ["1M<n<10M"], "pretty_name": "Kbuhist2", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 796312222, "num_examples": 4760470}], "download_size": 475243460}, "tags": ["historical", "WIP"]}
|
2023-03-13T11:39:53+00:00
|
26b03c0151e69c4b555361ec5df29ceee0bb6bc6
|
# AutoTrain Dataset for project: chessbig
## Dataset Description
This dataset has been automatically processed by AutoTrain for project chessbig.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"source": "r1b1k1nr/p6p/2p1p1p1/1p1pPp2/B2P4/2P5/PP2KP1P/RN3R2 b kq - 0 16",
"target": "b5a4"
},
{
"source": "r1b1k2r/ppbp1ppp/2n3q1/8/2B1Pp2/3P1Q2/PPP2PPP/R4RK1 b kq - 1 11",
"target": "c6d4"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"source": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2387 |
| valid | 597 |
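The `source` strings appear to be FEN board positions and the `target` strings moves in UCI notation; a minimal sketch of replaying a sample with the third-party `python-chess` library (the library choice is an assumption, not part of this project):
```python
import chess  # pip install python-chess

fen = "r1b1k1nr/p6p/2p1p1p1/1p1pPp2/B2P4/2P5/PP2KP1P/RN3R2 b kq - 0 16"
move = "b5a4"

board = chess.Board(fen)
assert chess.Move.from_uci(move) in board.legal_moves
board.push_uci(move)
print(board.fen())  # position after the target move
```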
|
lebi376/chess_3000_moves
|
[
"task_categories:translation",
"region:us"
] |
2023-02-21T14:26:47+00:00
|
{"task_categories": ["translation"]}
|
2023-02-21T14:27:40+00:00
|
21d3f3cd209304678ce14a3c3428bada0cd790c2
|
silkski/ENERAD_test
|
[
"license:other",
"region:us"
] |
2023-02-21T14:28:26+00:00
|
{"license": "other"}
|
2023-02-21T14:28:41+00:00
|
|
bab291f167ecb6bfb0eb1c97f05274ef31a91185
|
An imitation learning dataset for the atari_choppercommand environment, containing samples from the policy atari_2B_atari_choppercommand_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_choppercommand_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T14:28:31+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T14:30:09+00:00
|
47fb075adaef1e8c01a41156b573bf8cffe62e1c
|
An imitation learning dataset for the atari_crazyclimber environment, containing samples from the policy atari_2B_atari_crazyclimber_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_crazyclimber_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T14:35:37+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T14:36:17+00:00
|
5e9853543fd3f2c35a62e477184c8adbc4ea6a16
|
tyhuang/ShapeNet_Rendering
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-21T14:39:31+00:00
|
{"license": "apache-2.0"}
|
2023-02-21T19:15:17+00:00
|
|
5801d413c8a8c50a46f1f899ce44c9842e9cb56a
|
An imitation learning dataset for the atari_defender environment, containing samples from the policy atari_2B_atari_defender_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_defender_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T14:41:48+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T14:42:45+00:00
|
b848678d439a537f97aca2864eb198859382a493
|
An imitation learning dataset for the atari_demonattack environment, containing samples from the policy atari_2B_atari_demonattack_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_demonattack_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T14:48:57+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T14:49:52+00:00
|
02491fbc74e3340cdd06b9cdbc849c32c9719276
|
An imitation learning dataset for the atari_doubledunk environment, containing samples from the policy atari_2B_atari_doubledunk_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_doubledunk_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T14:56:39+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T14:57:21+00:00
|
873e34b3b69a44e17a26c041ea3677bc183210b8
|
An imitation learning dataset for the atari_enduro environment, containing samples from the policy atari_2B_atari_enduro_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_enduro_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T15:03:44+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T15:04:31+00:00
|
e203261b5176546d4ebf66eeaf41fb6d586d3e75
|
An imitation learning dataset for the atari_fishingderby environment, containing samples from the policy atari_2B_atari_fishingderby_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_fishingderby_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T15:11:12+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T15:12:05+00:00
|
fda2bcb463a7d7388e8a3d8b7ac3589652d31591
|
An imitation learning dataset for the atari_freeway environment, containing samples from the policy atari_2B_atari_freeway_1111.
This dataset was created as part of the Generally Intelligent Agents project, gia: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_freeway_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-21T15:18:53+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T15:20:12+00:00
|