sha (string, 40 chars) | text (string, 0–13.4M chars) | id (string, 2–117 chars) | tags (list) | created_at (string, 25 chars) | metadata (string, 2–31.7M chars) | last_modified (string, 25 chars)
---|---|---|---|---|---|---|
4959fb5156d5951aff72852f401e8a4b10406c98
|
VLyb/FB15k-237
|
[
"license:unlicense",
"region:us"
] |
2023-02-16T07:56:35+00:00
|
{"license": "unlicense"}
|
2023-02-16T07:59:38+00:00
|
|
39924ccbc52c7e9e5a1b1adab590c62a307483e2
|
VLyb/WN18RR
|
[
"license:unlicense",
"region:us"
] |
2023-02-16T08:02:26+00:00
|
{"license": "unlicense"}
|
2023-02-16T08:07:01+00:00
|
|
ecb04758ade513529e118c5c97bca4252a6bec67
|
VLyb/YAGO3-10
|
[
"license:unlicense",
"region:us"
] |
2023-02-16T08:08:57+00:00
|
{"license": "unlicense"}
|
2023-02-16T08:14:16+00:00
|
|
99c6bf9ddcb252be5dd4511c5818de46177c7e1a
|
VLyb/Nations
|
[
"license:unlicense",
"region:us"
] |
2023-02-16T08:15:55+00:00
|
{"license": "unlicense"}
|
2023-02-16T08:16:08+00:00
|
|
1ba3ef93bac4272eb598b618f21a4e50b42b5848
|
VLyb/DBpedia50
|
[
"license:unlicense",
"region:us"
] |
2023-02-16T08:18:27+00:00
|
{"license": "unlicense"}
|
2023-02-16T08:18:51+00:00
|
|
dc55170d2a33a4c613fe8dacfb9daafef0eaa318
|
VLyb/DBpedia500
|
[
"license:unlicense",
"region:us"
] |
2023-02-16T08:23:11+00:00
|
{"license": "unlicense"}
|
2023-02-16T08:39:36+00:00
|
|
e825221228f24e9a37cf92fd280ad3799c06f650
|
VLyb/Kinship
|
[
"license:unlicense",
"region:us"
] |
2023-02-16T08:45:10+00:00
|
{"license": "unlicense"}
|
2023-02-16T08:46:09+00:00
|
|
dd4668d4cd1a73cabac3f3a62fd0a20d687a1fe9
|
VLyb/UMLS
|
[
"license:unlicense",
"region:us"
] |
2023-02-16T08:49:31+00:00
|
{"license": "unlicense"}
|
2023-02-16T09:13:21+00:00
|
|
1d611b7384148ac997a7c004ef98ba18d215c2ea
|
# Dataset Card for Helpful Instructions
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Lewis Tunstall
### Dataset Summary
Helpful Instructions is a dataset of `(instruction, demonstration)` pairs that are derived from public datasets. As the name suggests, it focuses on instructions that are "helpful", i.e. the kind of questions or tasks a human user might instruct an AI assistant to perform. You can load the dataset as follows:
```python
from datasets import load_dataset
# Load all subsets
helpful_instructions = load_dataset("HuggingFaceH4/helpful_instructions")
# Load a single subset
helpful_instructions_subset = load_dataset("HuggingFaceH4/helpful_instructions", data_dir="data/helpful-anthropic-raw")
```
### Supported Tasks and Leaderboards
This dataset can be used to fine-tune pretrained language models to follow instructions.
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
HuggingFaceH4/helpful-instructions
|
[
"license:apache-2.0",
"human-feedback",
"region:us"
] |
2023-02-16T09:12:16+00:00
|
{"license": "apache-2.0", "pretty_name": "Helpful Instructions", "tags": ["human-feedback"]}
|
2023-02-20T08:58:24+00:00
|
6e58ad4f493bf9e409c56e9a3f3ef42012db7a3f
|
# AutoTrain Dataset for project: bbc-news-classifier
## Dataset Description
This dataset has been automatically processed by AutoTrain for project bbc-news-classifier.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "tv debate urged for party chiefs broadcasters should fix a date for a pre-election televised debate between the three main political leaders according to the hansard society. it would then be up to tony blair michael howard and charles kennedy to decide whether to take part the non-partisan charity said. chairman lord holme argued that prime ministers should not have the right of veto on a matter of public interest . the broadcasters should make the decision to go ahead he said. lord holme s proposal for a televised debate comes just four months after millions of viewers were able to watch us president george w bush slug it out verbally with his democratic challenger john kerry. he said it was a democratically dubious proposition that it was up to the incumbent prime minister to decide whether a similar event takes place here. if mr blair did not want to take part the broadcasters could go ahead with an empty chair or cancel the event and explain their reasons why lord holme said. what makes the present situation even less acceptable is that although mr howard and mr kennedy have said they would welcome a debate no-one has heard directly from the prime minister he said. it has been left to nudges and winks hints and briefings from his aides and campaign managers to imply that mr blair doesn t want one but we haven t heard from the prime minister himself. lord holme who has campaigned for televised debates at previous elections said broadcasters were more than willing to cooperate with the arrangements . opinion polls suggested that the idea had the backing of the public who like comparing the personalities and policies of the contenders in their own homes he said. lord holme argued that as part of their public service obligations broadcasters should make the decision to go ahead as soon as the election is called. an independent third-party body such as the hansard society or electoral commission could work out the ground rules so they were fair to participants and informative to the public he said. it would be up to each party leader to accept or refuse said lord holme. if the prime minister s reported position is true and he does want to take part he would then be obliged to say why publicly. the broadcasters would then have the option of cancelling the event for obvious and well-understood reasons or going ahead with an empty chair. either way would be preferable to the present hidden veto. the hansard society has long campaigned for televised debates and has published reports on the issue in 1997 and 2001. tony blair has already ruled out taking part in a televised debate during the forthcoming election campaign. last month he said: we answer this every election campaign and for the reasons i have given before the answer is no he said at his monthly news conference.",
"target": 2
},
{
"text": "ecb holds rates amid growth fears the european central bank has left its key interest rate unchanged at 2% for the 19th month in succession. borrowing costs have remained on hold amid concerns about the strength of economic growth in the 12 nations sharing the euro analysts said. despite signs of pick-up labour markets and consumer demand remain sluggish while firms are eyeing cost cutting measures such as redundancies. high oil prices meanwhile have put upward pressure on the inflation rate. surveys of economists have shown that the majority expect borrowing costs to stay at 2% in coming months with an increase of a quarter of a percentage point predicted some time in the second half of the year. if anything there may be greater calls for an interest rate cut especially with the euro continuing to strengthen against the dollar. the euro land economy is still struggling with this recovery said economist dirk schumacher. the ecb may sound rather hawkish but once the data allows them to cut again they will. data coming out of germany on thursday underlined the problems facing european policy makers. while germany s economy expanded by 1.7% in 2004 growth was driven by export sales and lost some of its momentum in the last three months of the year. the strength of the euro is threatening to dampen that foreign demand in 2005 and domestic consumption currently is not strong enough to take up the slack. inflation in the eurozone however is estimated at about 2.3% in december above ecb guidelines of 2%. ecb president jean-claude trichet has remained upbeat about prospects for the region and inflation is expected to drop below 2% later in 2005. the ecb has forecast economic growth in the eurozone of 1.9% in 2005.",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['business', 'entertainment', 'politics', 'sport', 'technology'], id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 198 |
| valid | 52 |
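For illustration, a minimal sketch of loading the processed splits and decoding the integer `target` back to its class name; it assumes the splits load directly with `datasets.load_dataset`:
```python
from datasets import load_dataset

# Minimal sketch (assumes the splits load directly from the Hub).
ds = load_dataset("Saripudin/autotrain-data-bbc-news-classifier")
sample = ds["train"][0]
# ClassLabel.int2str maps the integer target back to its class name.
label_name = ds["train"].features["target"].int2str(sample["target"])
print(sample["text"][:80], "->", label_name)
```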
|
Saripudin/autotrain-data-bbc-news-classifier
|
[
"task_categories:text-classification",
"region:us"
] |
2023-02-16T09:50:57+00:00
|
{"task_categories": ["text-classification"]}
|
2023-02-16T09:54:19+00:00
|
aa399ffcba12f6baa52e5bff826b59bf2ab86e51
|
# Dataset Card for "ismus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tiro-is/ismus
|
[
"region:us"
] |
2023-02-16T11:50:58+00:00
|
{"dataset_info": {"features": [{"name": "audio_id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "normalized_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15107936585.61, "num_examples": 109511}, {"name": "test", "num_bytes": 947114213.608, "num_examples": 3184}], "download_size": 16411953840, "dataset_size": 16055050799.218}}
|
2023-02-16T12:11:03+00:00
|
164fbe58548fe426e4fa13afcbf4de34732f68e9
|
# AutoTrain Dataset for project: new_1000_respostas
## Dataset Description
This dataset has been automatically processed by AutoTrain for project new_1000_respostas.
### Languages
The BCP-47 code for the dataset's language is pt.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 0,
"text": " Ol\u00e1, no meu \u00faltimo pedido eu paguei o item errado. Paguei a cerveja long neck, quando o correto \u00e9 a garrafa de 600ml."
},
{
"target": 4,
"text": " Boa tarde!!! Sou moradora do Citt\u00e0 Imbu\u00ed, hoje 15/01 por volta das 11:50, meu filho tentou comprar uma coca cola e n\u00e3o conseguiu, mas o valor do produto foi debitado. Voc\u00eas podem verificar nas imagens e externar o valor? Desde j\u00e1, agrade\u00e7o. Att, Ana Carla"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(names=['Compra Equivocada', 'Cr\u00e9dito n\u00e3o compensado', 'Desativa\u00e7\u00e3o de conta', 'Dificuldade para finalizar a compra', 'Estorno/devolu\u00e7\u00e3o de valor', 'Problemas com destrava', 'Problemas com promo\u00e7\u00f5es', 'Produto danificado/Vencido', 'Produto n\u00e3o encontrado', 'Solicita\u00e7\u00e3o de reposi\u00e7\u00e3o'], id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 715 |
| valid | 182 |
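For illustration, a minimal sketch of selecting one complaint category by name rather than by raw id, assuming the splits load directly with `datasets.load_dataset`:
```python
from datasets import load_dataset

ds = load_dataset("pedro-m4u/autotrain-data-new_1000_respostas", split="train")
# ClassLabel.str2int maps a human-readable class name to its integer id.
target_id = ds.features["target"].str2int("Compra Equivocada")
wrong_purchases = ds.filter(lambda ex: ex["target"] == target_id)
print(len(wrong_purchases))
```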
|
pedro-m4u/autotrain-data-new_1000_respostas
|
[
"task_categories:text-classification",
"language:pt",
"region:us"
] |
2023-02-16T12:15:49+00:00
|
{"language": ["pt"], "task_categories": ["text-classification"]}
|
2023-02-16T12:20:00+00:00
|
1653a2b77349fb0dbf41c677bc74fa3bad7874b2
|
# Dataset Card for Dataset CityLearn
This dataset is used to train a Decision Transformer for the CityLearn 2022 environment: https://www.aicrowd.com/challenges/neurips-2022-citylearn-challenge.
You can load data from this dataset via
`datasets.load_dataset('TobiTob/CityLearn', 'data_name')`.
A short description of all data subsets can be found in the file `CityLearn.py`.
|
TobiTob/CityLearn
|
[
"region:us"
] |
2023-02-16T12:16:52+00:00
|
{}
|
2023-06-27T10:14:53+00:00
|
5878918b23b4c415df7158ec75eee187247b4801
|
# danbooru-metadata
Dump of various portions of Danbooru's metadata, as of February 2023. Everything was taken directly from their JSON API.
The directory structure follows the Danbooru20XX format: each subfolder for a record type is named after the record's ID modulo 1000. Together, the `.zip` files can hold hundreds of thousands of small JSON files, so use caution when extracting.
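As a concrete reading of that convention, a minimal sketch (the exact folder-name formatting, e.g. zero-padding, is an assumption to verify against the archives):
```python
def bucket_for(record_id: int) -> str:
    # Danbooru20XX-style bucketing: a record lands in the subfolder named
    # after its ID modulo 1000, e.g. record 1234567 -> "567".
    # Zero-padding of folder names, if any, is an assumption to check.
    return str(record_id % 1000)

print(bucket_for(1234567))  # "567"
```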
|
stma/danbooru-metadata
|
[
"region:us"
] |
2023-02-16T13:10:54+00:00
|
{}
|
2023-02-17T06:21:00+00:00
|
9791b5eef6ed8c26e9dfaed7de915775d7659b7c
|
DanteKallen/Aqua_Konosuba_Lykon_Lora
|
[
"license:unlicense",
"region:us"
] |
2023-02-16T13:21:11+00:00
|
{"license": "unlicense"}
|
2023-02-16T13:46:57+00:00
|
|
c7cc0096855ca88268882f824569d4b3ce3f48ef
|
# Dataset Card for "diffusiondb_2m_first_5k_canny"
The first 5k images of [diffusiondb 2m](https://huggingface.co/datasets/poloclub/diffusiondb), processed into edge maps with the Canny algorithm.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
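For reference, a minimal sketch of this kind of preprocessing with OpenCV; the `2m_first_5k` config name is inferred from this card's title, and the 100/200 thresholds are illustrative assumptions, not the settings used here:
```python
import numpy as np
import cv2
from PIL import Image
from datasets import load_dataset

# Load the first-5k subset of diffusiondb and derive Canny edge guides.
ds = load_dataset("poloclub/diffusiondb", "2m_first_5k", split="train")

def add_canny_guide(example):
    gray = np.array(example["image"].convert("L"))
    edges = cv2.Canny(gray, 100, 200)  # illustrative thresholds
    example["guide"] = Image.fromarray(edges)
    return example

ds = ds.map(add_canny_guide)
```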
|
HighCWu/diffusiondb_2m_first_5k_canny
|
[
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:openrail",
"region:us"
] |
2023-02-16T14:16:14+00:00
|
{"language": ["en"], "license": "openrail", "size_categories": ["1K<n<10K"], "task_categories": ["text-to-image"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "guide", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3204091410, "num_examples": 5000}], "download_size": 3203076374, "dataset_size": 3204091410}}
|
2023-02-16T14:53:35+00:00
|
450d4826a01ad05624b9a5e0e0de3e062983e479
|
emergentorder/StarTrekMemoryAlpha20230216
|
[
"task_categories:fill-mask",
"task_ids:masked-language-modeling",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-4.0",
"star trek",
"memory alpha",
"region:us"
] |
2023-02-16T15:10:47+00:00
|
{"annotations_creators": [], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["fill-mask"], "task_ids": ["masked-language-modeling"], "pretty_name": "Memory Alpha - The Star Trek Wiki -Full Database Dump as of 20230216", "tags": ["star trek", "memory alpha"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 115575629, "num_examples": 54234}], "download_size": 64791573, "dataset_size": 115575629}}
|
2023-02-16T15:47:01+00:00
|
|
8086bd9dd6f601e50c44eddb7ecbc3a3c345571b
|

|
xJunko/Eden
|
[
"region:us"
] |
2023-02-16T15:34:29+00:00
|
{"license": "other", "pretty_name": "Random LoRA(s)"}
|
2023-05-07T16:31:19+00:00
|
4557380c0675f2ccb443c71c829de7dc7578efd1
|
# Dataset Card for "wikitext103_VALUE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/wikitext103_VALUE
|
[
"region:us"
] |
2023-02-16T15:42:47+00:00
|
{"dataset_info": {"features": [{"name": "sentence-glue", "dtype": "string"}, {"name": "sentence-glue-html", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "sentence-ass", "dtype": "int64"}, {"name": "sentence-been_done", "dtype": "int64"}, {"name": "sentence-dey_it", "dtype": "int64"}, {"name": "sentence-drop_aux", "dtype": "int64"}, {"name": "sentence-got", "dtype": "int64"}, {"name": "sentence-lexical", "dtype": "int64"}, {"name": "sentence-negative_concord", "dtype": "int64"}, {"name": "sentence-negative_inversion", "dtype": "int64"}, {"name": "sentence-null_genetive", "dtype": "int64"}, {"name": "sentence-null_relcl", "dtype": "int64"}, {"name": "sentence-total", "dtype": "int64"}, {"name": "sentence-uninflect", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 4493075, "num_examples": 2891}, {"name": "train", "num_bytes": 1880407626, "num_examples": 1164310}, {"name": "validation", "num_bytes": 3962030, "num_examples": 2411}], "download_size": 988572681, "dataset_size": 1888862731}}
|
2023-02-16T15:43:18+00:00
|
bf85d5046c2cf17ba9c2dbebd52b24ad154e9207
|
polinaeterna/audio_configs_default
|
[
"region:us"
] |
2023-02-16T15:49:43+00:00
|
{"configs_kwargs": {"data_dir": "v1", "drop_labels": true}, "duplicated_from": "polinaeterna/audio_configs2"}
|
2023-02-16T17:01:39+00:00
|
|
833a1ba016f422c65ed6ee990fe1db03f9597386
|
### RTHK News Dataset
[RTHK](https://www.rthk.hk/) is a public broadcasting service under the Hong Kong Government, according to [Wikipedia](https://en.wikipedia.org/wiki/RTHK).
This dataset is currently obtained by exporting messages from their [Telegram channel](https://t.me/rthk_new_c),
which contains news since April 2018.
I will update this dataset with more data in the future.
|
jed351/rthk_news
|
[
"language:zh",
"region:us"
] |
2023-02-16T16:44:01+00:00
|
{"language": ["zh"]}
|
2023-02-16T17:24:50+00:00
|
a01c3fa42e2f2e17804738075cfdc6752f11f93a
|
# AutoTrain Dataset for project: air
## Dataset Description
This dataset has been automatically processed by AutoTrain for project air.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 0.04100000113248825,
"id": 1,
"feat_split": "train"
},
{
"target": 0.04800000041723251,
"id": 2,
"feat_split": "train"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "Value(dtype='float32', id=None)",
"id": "Value(dtype='int64', id=None)",
"feat_split": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1467 |
| valid | 1467 |
|
abhishekDTU/autotrain-data-air
|
[
"region:us"
] |
2023-02-16T16:45:02+00:00
|
{}
|
2023-02-16T17:09:28+00:00
|
3c6edf515625069decca7ec7a21d2e7a2813bfae
|
plogp/MIO_DiffSinger
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-16T17:05:15+00:00
|
{"license": "apache-2.0"}
|
2023-02-16T17:05:15+00:00
|
|
c299be4298b58f71d60f2718273b7c0d64d3aacd
|
# Dataset Summary
This dataset contains all MQM human annotations from previous [WMT Metrics shared tasks](https://wmt-metrics-task.github.io/) and the MQM annotations from [Experts, Errors, and Context](https://aclanthology.org/2021.tacl-1.87/).
The data is organised into 9 columns:
- lp: language pair
- src: input text
- mt: translation
- ref: reference translation
- score: MQM score
- system: MT Engine that produced the translation
- annotators: number of annotators
- domain: domain of the input text (e.g. news)
- year: collection year
You can also find the original data [here](https://github.com/google/wmt-mqm-human-evaluation). We recommend using the original repo if you are interested in annotation spans and not just the final score.
## Python usage:
```python
from datasets import load_dataset
dataset = load_dataset("RicardoRei/wmt-mqm-human-evaluation", split="train")
```
There is no standard train/test split for this dataset, but you can easily split it according to year, language pair or domain, e.g.:
```python
# split by year
data = dataset.filter(lambda example: example["year"] == 2022)
# split by LP
data = dataset.filter(lambda example: example["lp"] == "en-de")
# split by domain
data = dataset.filter(lambda example: example["domain"] == "ted")
```
## Citation Information
If you use this data, please cite the following works:
- [Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation](https://aclanthology.org/2021.tacl-1.87/)
- [Results of the WMT21 Metrics Shared Task: Evaluating Metrics with Expert-based Human Evaluations on TED and News Domain](https://aclanthology.org/2021.wmt-1.73/)
- [Results of WMT22 Metrics Shared Task: Stop Using BLEU – Neural Metrics Are Better and More Robust](https://aclanthology.org/2022.wmt-1.2/)
|
RicardoRei/wmt-mqm-human-evaluation
|
[
"size_categories:100K<n<1M",
"language:en",
"language:de",
"language:ru",
"language:zh",
"license:apache-2.0",
"mt-evaluation",
"WMT",
"region:us"
] |
2023-02-16T17:14:16+00:00
|
{"language": ["en", "de", "ru", "zh"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "tags": ["mt-evaluation", "WMT"]}
|
2023-02-16T18:29:11+00:00
|
f95ffe58983638bf31a5c80d43687fdf1cba56e0
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use the datasets and models in this repository, please cite the following work.
```bibtex
@misc{https://doi.org/10.48550/arxiv.2302.09611,
doi = {10.48550/ARXIV.2302.09611},
url = {https://arxiv.org/abs/2302.09611},
author = {Sartipi, Amir and Fatemi, Afsaneh},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English},
publisher = {arXiv},
year = {2023},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
[More Information Needed]
|
Amir13/conll2003-persian
|
[
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|conll2003",
"language:fa",
"license:other",
"named entity recognition",
"arxiv:2302.09611",
"region:us"
] |
2023-02-16T17:36:24+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["machine-generated"], "language": ["fa"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|conll2003"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "conll2003-persian", "tags": ["named entity recognition"], "train-eval-index": [{"col_mapping": {"ner_tags": "tags", "tokens": "tokens"}, "config": "conll2003", "metrics": [{"name": "seqeval", "type": "seqeval"}], "splits": {"eval_split": "test", "train_split": "train"}, "task": "token-classification", "task_id": "entity_extraction"}]}
|
2023-02-21T06:54:17+00:00
|
97436e50a8873f5b236e1de91bb55465988fa748
|
Come collect LoRAs from CivitAI for all your generating needs!
Explore the SafeDump for SFW LoRAs or dive head-deep into the CumDump for... well, I think you get it.
Disclaimer: Absolutely none of these LoRAs belong to me. I am uploading these files here for my own personal use.
Support their creators by liking their works and following them on civitai.com
Enjoy!
---
license: other
---
|
Ubque/The_LoRA_Dump
|
[
"region:us"
] |
2023-02-16T17:43:24+00:00
|
{}
|
2023-03-01T03:32:57+00:00
|
00b823e0df95a3ad565436da3a9874cb18bf625e
|
# Dataset Card for "miniwobplusplus_episodes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LucasThil/miniwobplusplus_episodes
|
[
"region:us"
] |
2023-02-16T17:59:24+00:00
|
{"dataset_info": {"features": [{"name": "episodes", "dtype": "string"}, {"name": "actions", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3384285009, "num_examples": 16794}], "download_size": 276652178, "dataset_size": 3384285009}}
|
2023-02-16T18:09:20+00:00
|
ab45e601c6c268738bbce1c95974000f96cdf294
|
# Dataset Card for "icdar2023vqabd-small-tables-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joytafty/icdar2023vqabd-small-tables-train
|
[
"region:us"
] |
2023-02-16T18:09:16+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3153797.0, "num_examples": 244}], "download_size": 2872591, "dataset_size": 3153797.0}}
|
2023-02-16T18:09:21+00:00
|
bc3acfc5a0fda60e6705161469e54a204381a1b4
|
# Dataset Card for "icdar2023vqabd-small-tables-val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joytafty/icdar2023vqabd-small-tables-val
|
[
"region:us"
] |
2023-02-16T18:09:22+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "validation", "num_bytes": 305631.0, "num_examples": 19}], "download_size": 274240, "dataset_size": 305631.0}}
|
2023-02-16T18:09:27+00:00
|
bad9ab569d347d88e8e48bc69c15af32c6ce8495
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use the datasets and models in this repository, please cite the following work.
```bibtex
@misc{https://doi.org/10.48550/arxiv.2302.09611,
doi = {10.48550/ARXIV.2302.09611},
url = {https://arxiv.org/abs/2302.09611},
author = {Sartipi, Amir and Fatemi, Afsaneh},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English},
publisher = {arXiv},
year = {2023},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
[More Information Needed]
|
Amir13/ontonotes5-persian
|
[
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|conll2012_ontonotesv5",
"language:fa",
"license:other",
"named entity recognition",
"arxiv:2302.09611",
"region:us"
] |
2023-02-16T18:21:35+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["fa"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|conll2012_ontonotesv5"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "ontonotes5-persian", "tags": ["named entity recognition"]}
|
2023-02-21T06:54:46+00:00
|
7c2a91c95ae7ebaaa8d24092754b1069afdff612
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use the datasets and models in this repository, please cite the following work.
```bibtex
@misc{https://doi.org/10.48550/arxiv.2302.09611,
doi = {10.48550/ARXIV.2302.09611},
url = {https://arxiv.org/abs/2302.09611},
author = {Sartipi, Amir and Fatemi, Afsaneh},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English},
publisher = {arXiv},
year = {2023},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
[More Information Needed]
|
Amir13/wnut2017-persian
|
[
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:fa",
"license:other",
"named entity recognition",
"arxiv:2302.09611",
"region:us"
] |
2023-02-16T18:25:53+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["machine-generated"], "language": ["fa"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "wnut2017-persian", "tags": ["named entity recognition"]}
|
2023-02-21T06:55:18+00:00
|
c337ba66b452b10b7fafc8e4da54302c6f785e2d
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use the datasets and models in this repository, please cite the following work.
```bibtex
@misc{https://doi.org/10.48550/arxiv.2302.09611,
doi = {10.48550/ARXIV.2302.09611},
url = {https://arxiv.org/abs/2302.09611},
author = {Sartipi, Amir and Fatemi, Afsaneh},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English},
publisher = {arXiv},
year = {2023},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
[More Information Needed]
|
Amir13/ncbi-persian
|
[
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|ncbi_disease",
"language:fa",
"license:other",
"named entity recognition",
"arxiv:2302.09611",
"region:us"
] |
2023-02-16T18:31:51+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["fa"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|ncbi_disease"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "ncbi-persian", "tags": ["named entity recognition"], "train-eval-index": [{"col_mapping": {"ner_tags": "target", "tokens": "text"}, "config": "ncbi_disease", "metrics": [{"name": "Accuracy", "type": "accuracy"}, {"args": {"average": "macro"}, "name": "F1 macro", "type": "f1"}, {"args": {"average": "micro"}, "name": "F1 micro", "type": "f1"}, {"args": {"average": "weighted"}, "name": "F1 weighted", "type": "f1"}, {"args": {"average": "macro"}, "name": "Precision macro", "type": "precision"}, {"args": {"average": "micro"}, "name": "Precision micro", "type": "precision"}, {"args": {"average": "weighted"}, "name": "Precision weighted", "type": "precision"}, {"args": {"average": "macro"}, "name": "Recall macro", "type": "recall"}, {"args": {"average": "micro"}, "name": "Recall micro", "type": "recall"}, {"args": {"average": "weighted"}, "name": "Recall weighted", "type": "recall"}], "splits": {"eval_split": "test", "train_split": "train"}, "task": "token-classification"}]}
|
2023-02-21T06:55:44+00:00
|
18f9115288791e4fa26a675c19cf9b19a57b458a
|
PlinStudios/plynkz
|
[
"license:cc",
"region:us"
] |
2023-02-16T18:42:44+00:00
|
{"license": "cc"}
|
2023-02-16T19:20:50+00:00
|
|
301de385bf05b0c00a8f4be74965e186164dd425
|
# Dataset Summary
This dataset contains all DA human annotations from previous WMT News Translation shared tasks.
The data is organised into 9 columns:
- lp: language pair
- src: input text
- mt: translation
- ref: reference translation
- score: z score
- raw: direct assessment
- annotators: number of annotators
- domain: domain of the input text (e.g. news)
- year: collection year
You can also find the original data for each year on the results page `https://www.statmt.org/wmt{YEAR}/results.html`, e.g. for the 2020 data: [https://www.statmt.org/wmt20/results.html](https://www.statmt.org/wmt20/results.html)
## Python usage:
```python
from datasets import load_dataset
dataset = load_dataset("RicardoRei/wmt-da-human-evaluation", split="train")
```
There is no standard train/test split for this dataset, but you can easily split it according to year, language pair or domain, e.g.:
```python
# split by year
data = dataset.filter(lambda example: example["year"] == 2022)
# split by LP
data = dataset.filter(lambda example: example["lp"] == "en-de")
# split by domain
data = dataset.filter(lambda example: example["domain"] == "news")
```
Note that most of the data is from the news domain.
## Citation Information
If you use this data, please cite the WMT findings from previous years:
- [Findings of the 2017 Conference on Machine Translation (WMT17)](https://aclanthology.org/W17-4717.pdf)
- [Findings of the 2018 Conference on Machine Translation (WMT18)](https://aclanthology.org/W18-6401.pdf)
- [Findings of the 2019 Conference on Machine Translation (WMT19)](https://aclanthology.org/W19-5301.pdf)
- [Findings of the 2020 Conference on Machine Translation (WMT20)](https://aclanthology.org/2020.wmt-1.1.pdf)
- [Findings of the 2021 Conference on Machine Translation (WMT21)](https://aclanthology.org/2021.wmt-1.1.pdf)
- [Findings of the 2022 Conference on Machine Translation (WMT22)](https://aclanthology.org/2022.wmt-1.1.pdf)
|
RicardoRei/wmt-da-human-evaluation
|
[
"size_categories:1M<n<10M",
"language:bn",
"language:cs",
"language:de",
"language:en",
"language:et",
"language:fi",
"language:fr",
"language:gu",
"language:ha",
"language:hi",
"language:is",
"language:ja",
"language:kk",
"language:km",
"language:lt",
"language:lv",
"language:pl",
"language:ps",
"language:ru",
"language:ta",
"language:tr",
"language:uk",
"language:xh",
"language:zh",
"language:zu",
"license:apache-2.0",
"mt-evaluation",
"WMT",
"41-lang-pairs",
"region:us"
] |
2023-02-16T18:49:07+00:00
|
{"language": ["bn", "cs", "de", "en", "et", "fi", "fr", "gu", "ha", "hi", "is", "ja", "kk", "km", "lt", "lv", "pl", "ps", "ru", "ta", "tr", "uk", "xh", "zh", "zu"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "tags": ["mt-evaluation", "WMT", "41-lang-pairs"]}
|
2023-02-17T10:41:18+00:00
|
ce9bd9084eb48db58311f5b5dd8f5cbd942d9039
|
# Dataset Card for "miniwobplusplus_ready"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LucasThil/miniwobplusplus_ready
|
[
"region:us"
] |
2023-02-16T19:09:31+00:00
|
{"dataset_info": {"features": [{"name": "episodes", "dtype": "string"}, {"name": "actions", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3387412534, "num_examples": 815482}], "download_size": 288210838, "dataset_size": 3387412534}}
|
2023-02-16T19:25:21+00:00
|
90fdd110608a6eed95977ed0962be8d2804ca1d7
|
nicoco404/AITA_labeled_posts
|
[
"region:us"
] |
2023-02-16T20:10:19+00:00
|
{}
|
2023-02-16T22:35:49+00:00
|
|
e4c0747eef9da8eb14de6036a1396f3e8c9634cf
|
# Dataset Card for "home_depot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
[source](https://www.kaggle.com/competitions/home-depot-product-search-relevance)
## Dataset Description
This dataset contains a number of products and real customer search terms from Home Depot's website. The challenge is to predict a relevance score for the provided combinations of search terms and products. To create the ground-truth labels, Home Depot crowdsourced the search/product pairs to multiple human raters.
The relevance is a number from 1 (not relevant) to 3 (highly relevant). For example, a search for "AA battery" would be considered highly relevant to a pack of size AA batteries (relevance = 3), mildly relevant to a cordless drill battery (relevance = 2), and not relevant to a snow shovel (relevance = 1).
Each pair was evaluated by at least three human raters, and the provided relevance scores are the average of the ratings. There are three additional things to know about the ratings:
- The specific instructions given to the raters are provided in relevance_instructions.docx.
- Raters did not have access to the attributes.
- Raters had access to product images, while the competition does not include images.
Your task is to predict the relevance for each pair listed in the test set. Note that the test set contains both seen and unseen search terms.
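A minimal sketch to load and inspect the copy hosted here (field names taken from this card's dataset info):
```python
from datasets import load_dataset

ds = load_dataset("bstds/home_depot", split="train")
row = ds[0]
# Each row pairs a customer search term ("query") with a product
# ("name", "description") and the crowd-averaged relevance in [1, 3].
print(row["query"], "->", row["name"], f"(relevance={row['relevance']})")
```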
|
bstds/home_depot
|
[
"region:us"
] |
2023-02-16T20:34:34+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "entity_id", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "relevance", "dtype": "float64"}, {"name": "description", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 74803048, "num_examples": 74067}], "download_size": 32449185, "dataset_size": 74803048}}
|
2023-02-16T20:35:36+00:00
|
3f63b31100a09af8e3f2c320f27c8eadaa0e910d
|
# Dataset Card for "large-algae-wirs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
samitizerxu/large-algae-wirs
|
[
"region:us"
] |
2023-02-16T20:49:51+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "1", "1": "2", "2": "3", "3": "4", "4": "5", "5": "test"}}}}, {"name": "uid", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 390000704.53, "num_examples": 17035}, {"name": "test", "num_bytes": 140940912.244, "num_examples": 6494}], "download_size": 520667798, "dataset_size": 530941616.7739999}}
|
2023-02-17T02:44:23+00:00
|
8eb50ecb70db4ac1a7184b5506396cc542b4c664
|
# Dataset Card for "large-algae-rgb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
samitizerxu/large-algae-rgb
|
[
"region:us"
] |
2023-02-16T20:54:05+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "1", "1": "2", "2": "3", "3": "4", "4": "5", "5": "test"}}}}, {"name": "uid", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 344037940.735, "num_examples": 17035}, {"name": "test", "num_bytes": 128411265.258, "num_examples": 6494}], "download_size": 461637680, "dataset_size": 472449205.99300003}}
|
2023-02-17T02:39:49+00:00
|
96beb9a3390e597560c1fedb72b81e244bc00856
|
# Dataset Card for "sq-babi_nli_indefinite-knowledge"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
niv-al/sq-babi_nli_indefinite-knowledge
|
[
"language:sq",
"region:us"
] |
2023-02-16T20:58:43+00:00
|
{"language": ["sq"], "dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "not-entailed", "1": "entailed"}}}}], "splits": [{"name": "train", "num_bytes": 160256, "num_examples": 1000}, {"name": "validation", "num_bytes": 23468, "num_examples": 144}, {"name": "test", "num_bytes": 23128, "num_examples": 144}], "download_size": 41242, "dataset_size": 206852}}
|
2023-02-18T20:00:06+00:00
|
b52e930385cf5ed7f063072c3f7bd17b599a16cf
|
# Dataset Card for AfriSenti Dataset
<p align="center">
<img src="https://raw.githubusercontent.com/afrisenti-semeval/afrisent-semeval-2023/main/images/afrisenti-twitter.png" width="700" height="500">
--------------------------------------------------------------------------------
## Dataset Description
- **Homepage:** https://github.com/afrisenti-semeval/afrisent-semeval-2023
- **Repository:** [GitHub](https://github.com/afrisenti-semeval/afrisent-semeval-2023)
- **Paper:** [AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages](https://arxiv.org/pdf/2302.08956.pdf)
- **Paper:** [SemEval-2023 Task 12: Sentiment Analysis for African Languages (AfriSenti-SemEval)](https://arxiv.org/pdf/2304.06845.pdf)
- **Paper:** [NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis](https://arxiv.org/pdf/2201.08277.pdf)
- **Leaderboard:** N/A
- **Point of Contact:** [Shamsuddeen Muhammad]([email protected])
### Dataset Summary
AfriSenti is the largest sentiment analysis dataset for under-represented African languages, covering 110,000+ annotated tweets in 14 African languages (Amharic, Algerian Arabic, Hausa, Igbo, Kinyarwanda, Moroccan Arabic, Mozambican Portuguese, Nigerian Pidgin, Oromo, Swahili, Tigrinya, Twi, Xitsonga, and Yoruba).
The datasets are used in the first Afrocentric SemEval shared task, SemEval 2023 Task 12: Sentiment analysis for African languages (AfriSenti-SemEval). AfriSenti allows the research community to build sentiment analysis systems for various African languages and enables the study of sentiment and contemporary language use in African languages.
### Supported Tasks and Leaderboards
The AfriSenti can be used for a wide range of sentiment analysis tasks in African languages, such as sentiment classification, sentiment intensity analysis, and emotion detection. This dataset is suitable for training and evaluating machine learning models for various NLP tasks related to sentiment analysis in African languages.
[SemEval 2023 Task 12: Sentiment Analysis for African Languages](https://codalab.lisn.upsaclay.fr/competitions/7320)
### Languages
14 African languages: Amharic (amh), Algerian Arabic (arq), Hausa (hau), Igbo (ibo), Kinyarwanda (kin), Moroccan Arabic/Darija (ary), Mozambican Portuguese (por), Nigerian Pidgin (pcm), Oromo (oro), Swahili (swa), Tigrinya (tir), Twi (twi), Xitsonga (tso), and Yoruba (yor).
## Dataset Structure
### Data Instances
For each instance, there is a string for the tweet and a string for the label. See the AfriSenti [dataset viewer](https://huggingface.co/datasets/shmuhammad/AfriSenti/viewer/shmuhammad--AfriSenti/train) to explore more examples.
```
{
"tweet": "string",
"label": "string"
}
```
### Data Fields
The data fields are:
```
tweet: a string feature.
label: a classification label, with possible values including positive, negative and neutral.
```
### Data Splits
The AfriSenti dataset has 3 splits: train, validation, and test. Below are the statistics for Version 1.0.0 of the dataset.
| | ama | arq | hau | ibo | ary | orm | pcm | pt-MZ | kin | swa | tir | tso | twi | yo |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| train | 5,982 | 1,652 | 14,173 | 10,193 | 5,584| - | 5,122 | 3,064 | 3,303 | 1,811 | - | 805 | 3,482| 8,523 |
| dev | 1,498 | 415 | 2,678 | 1,842 | 1,216 | 397 | 1,282 | 768 | 828 | 454 | 399 | 204 | 389 | 2,091 |
| test | 2,000 | 959 | 5,304 | 3,683 | 2,962 | 2,097 | 4,155 | 3,663 | 1,027 | 749 | 2,001 | 255 | 950 | 4,516 |
| total | 9,483 | 3,062 | 22,155 | 15,718 | 9,762 | 2,494 | 10,559 | 7,495 | 5,158 | 3,014 | 2,400 | 1,264 | 4,821 | 15,130 |
### How to use it
```python
from datasets import load_dataset
# you can load specific languages (e.g., Amharic). This downloads the train, validation and test sets.
ds = load_dataset("shmuhammad/AfriSenti-twitter-sentiment", "amh")
# train set only
ds = load_dataset("shmuhammad/AfriSenti-twitter-sentiment", "amh", split = "train")
# test set only
ds = load_dataset("shmuhammad/AfriSenti-twitter-sentiment", "amh", split = "test")
# validation set only
ds = load_dataset("shmuhammad/AfriSenti-twitter-sentiment", "amh", split = "validation")
```
## Dataset Creation
### Curation Rationale
AfriSenti Version 1.0.0 was created for the first Afrocentric SemEval shared task **[SemEval 2023 Task 12: Sentiment analysis for African languages (AfriSenti-SemEval)](https://afrisenti-semeval.github.io)**.
### Source Data
Twitter
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
We anonymized the tweets by replacing all *@mentions* with *@user* and removed all URLs.
## Considerations for Using the Data
### Social Impact of Dataset
The Afrisenti dataset has the potential to improve sentiment analysis for African languages, which is essential for understanding and analyzing the diverse perspectives of people in the African continent. This dataset can enable researchers and developers to create sentiment analysis models that are specific to African languages, which can be used to gain insights into the social, cultural, and political views of people in African countries. Furthermore, this dataset can help address the issue of underrepresentation of African languages in natural language processing, paving the way for more equitable and inclusive AI technologies.
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
AfriSenti is an extension of NaijaSenti, a dataset consisting of four Nigerian languages: Hausa, Yoruba, Igbo, and Nigerian-Pidgin. This dataset has been expanded to include 10 other African languages, and was curated with the help of the following:
| Language | Dataset Curators |
|---|---|
| Algerian Arabic (arq) | Nedjma Ousidhoum, Meriem Beloucif |
| Amharic (ama) | Abinew Ali Ayele, Seid Muhie Yimam |
| Hausa (hau) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello |
| Igbo (ibo) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello |
| Kinyarwanda (kin)| Samuel Rutunda |
| Moroccan Arabic/Darija (ary) | Oumaima Hourrane |
| Mozambique Portuguese (pt-MZ) | Felermino Dário Mário António Ali |
| Nigerian Pidgin (pcm) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello |
| Oromo (orm) | Abinew Ali Ayele, Seid Muhie Yimam, Hagos Tesfahun Gebremichael, Sisay Adugna Chala, Hailu Beshada Balcha, Wendimu Baye Messell, Tadesse Belay |
| Swahili (swa) | Davis Davis |
| Tigrinya (tir) | Abinew Ali Ayele, Seid Muhie Yimam, Hagos Tesfahun Gebremichael, Sisay Adugna Chala, Hailu Beshada Balcha, Wendimu Baye Messell, Tadesse Belay |
| Twi (twi) | Salomey Osei, Bernard Opoku, Steven Arthur |
| Xitsonga (tso) | Felermino Dário Mário António Ali |
| Yoruba (yor) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello |
### Licensing Information
AfriSenti is licensed under a Creative Commons Attribution 4.0 International License.
### Citation Information
```
@inproceedings{Muhammad2023AfriSentiAT,
title={AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages},
author={Shamsuddeen Hassan Muhammad and Idris Abdulmumin and Abinew Ali Ayele and Nedjma Ousidhoum and David Ifeoluwa Adelani and Seid Muhie Yimam and Ibrahim Sa'id Ahmad and Meriem Beloucif and Saif Mohammad and Sebastian Ruder and Oumaima Hourrane and Pavel Brazdil and Felermino D'ario M'ario Ant'onio Ali and Davis Davis and Salomey Osei and Bello Shehu Bello and Falalu Ibrahim and Tajuddeen Gwadabe and Samuel Rutunda and Tadesse Belay and Wendimu Baye Messelle and Hailu Beshada Balcha and Sisay Adugna Chala and Hagos Tesfahun Gebremichael and Bernard Opoku and Steven Arthur},
year={2023}
}
```
```
@article{muhammad2023semeval,
title={SemEval-2023 Task 12: Sentiment Analysis for African Languages (AfriSenti-SemEval)},
author={Muhammad, Shamsuddeen Hassan and Abdulmumin, Idris and Yimam, Seid Muhie and Adelani, David Ifeoluwa and Ahmad, Ibrahim Sa'id and Ousidhoum, Nedjma and Ayele, Abinew and Mohammad, Saif M and Beloucif, Meriem},
journal={arXiv preprint arXiv:2304.06845},
year={2023}
}
```
### Contributions
[More Information Needed]
|
shmuhammad/AfriSenti-twitter-sentiment
|
[
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"task_ids:semantic-similarity-classification",
"task_ids:semantic-similarity-scoring",
"multilinguality:monolingual",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"language:amh",
"language:ary",
"language:ar",
"language:arq",
"language:hau",
"language:ibo",
"language:kin",
"language:por",
"language:pcm",
"language:eng",
"language:oro",
"language:swa",
"language:tir",
"language:twi",
"language:tso",
"language:yor",
"sentiment analysis, Twitter, tweets",
"sentiment",
"arxiv:2302.08956",
"arxiv:2304.06845",
"arxiv:2201.08277",
"region:us"
] |
2023-02-16T21:02:20+00:00
|
{"language": ["amh", "ary", "ar", "arq", "hau", "ibo", "kin", "por", "pcm", "eng", "oro", "swa", "tir", "twi", "tso", "yor"], "multilinguality": ["monolingual", "multilingual"], "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "task_ids": ["sentiment-analysis", "sentiment-classification", "sentiment-scoring", "semantic-similarity-classification", "semantic-similarity-scoring"], "pretty_name": "AfriSenti", "tags": ["sentiment analysis, Twitter, tweets", "sentiment"]}
|
2023-09-03T08:59:15+00:00
|
f85b9387a7f10190376806d3c7d959e201ef21b2
|
# Dataset Card for "class_dataset_real"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LFBMS/class_dataset_real
|
[
"region:us"
] |
2023-02-16T22:01:29+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "bilanz_h", "1": "bilanz_v", "2": "guv", "3": "kontennachweis_bilanz", "4": "kontennachweis_guv", "5": "other", "6": "text"}}}}], "splits": [{"name": "train", "num_bytes": 330330968.875, "num_examples": 1117}, {"name": "test", "num_bytes": 99656474.0, "num_examples": 280}], "download_size": 400425817, "dataset_size": 429987442.875}}
|
2023-02-16T22:03:16+00:00
|
bb0d3c02dc82d9d5a24be1b92661753634268191
|
# Dataset Card for "class_dataset_real2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LFBMS/class_dataset_real2
|
[
"region:us"
] |
2023-02-16T22:04:23+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "bilanz_h", "1": "bilanz_v", "2": "guv", "3": "kontennachweis_bilanz", "4": "kontennachweis_guv", "5": "other"}}}}], "splits": [{"name": "train", "num_bytes": 345218235.409, "num_examples": 1117}, {"name": "test", "num_bytes": 87105530.0, "num_examples": 280}], "download_size": 400622867, "dataset_size": 432323765.409}}
|
2023-02-16T22:06:16+00:00
|
42c411e39c022f293eaae02d651fa8ec4ad2869f
|
# Dataset Card for "class_dataset_real3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LFBMS/class_dataset_real3
|
[
"region:us"
] |
2023-02-16T22:18:06+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "bilanz", "1": "guv", "2": "kontennachweis_bilanz", "3": "kontennachweis_guv", "4": "other"}}}}], "splits": [{"name": "train", "num_bytes": 328417078.735, "num_examples": 1117}, {"name": "test", "num_bytes": 99582960.0, "num_examples": 280}], "download_size": 400600544, "dataset_size": 428000038.735}}
|
2023-02-16T22:19:27+00:00
|
cb0283cf4334bb144010dbfa18769254c86afecd
|
Eneru2/text-to-svsprites
|
[
"license:wtfpl",
"region:us"
] |
2023-02-16T22:20:54+00:00
|
{"license": "wtfpl"}
|
2023-02-16T22:35:33+00:00
|
|
57d34fad1cf7d4d9e71774ea17c4c1a7f57af8d4
|
# Implicit Hate Speech
_Latent Hatred: A Benchmark for Understanding Implicit Hate Speech_
[[Read the Paper]](https://aclanthology.org/2021.emnlp-main.29/) | [[Take a Survey to Access the Data]](https://forms.gle/QxCpEbVp91Z35hWFA) | [[Download the Data]](https://www.dropbox.com/s/24meryhqi1oo0xk/implicit-hate-corpus.zip?dl=0)
<img src="frontpage.png" alt="frontpage" width="650"/>
## *Why Implicit Hate?*
It is important to consider the subtle tricks that many extremists use to mask their threats and abuse. These more implicit forms of hate speech may easily go undetected by keyword detection systems, and even the most advanced architectures can fail if they have not been trained on implicit hate speech ([Caselli et al. 2020](https://aclanthology.org/2020.lrec-1.760/)).
## *Where can I download the data?*
If you have not already, please first complete a short [survey](https://forms.gle/QxCpEbVp91Z35hWFA). Then follow [this link to download](https://www.dropbox.com/s/p1ctnsg3xlnupwr/implicit-hate-corpus.zip?dl=0) (2 MB, expands to 6 MB).
## *What's 'in the box?'*
This dataset contains **22,056** tweets from the most prominent extremist groups in the United States; **6,346** of these tweets contain *implicit hate speech.* We decompose the implicit hate class using the following taxonomy (distribution shown below).
* (24.2%) **Grievance:** frustration over a minority group's perceived privilege.
* (20.0%) **Incitement:** implicitly promoting known hate groups and ideologies (e.g. by flaunting in-group power).
* (13.6%) **Inferiority:** implying some group or person is of lesser value than another.
* (12.6%) **Irony:** using sarcasm, humor, and satire to demean someone.
* (17.9%) **Stereotypes:** associating a group with a negative attribute using euphemisms, circumlocution, or metaphorical language.
* (10.5%) **Threats:** making an indirect commitment to attack someone's body, well-being, reputation, liberty, etc.
* (1.2%) **Other**
Each of the 6,346 implicit hate tweets also has free-text annotations for *target demographic group* and an *implied statement* to describe the underlying message (see banner image above).
## *What can I do with this data?*
State-of-the-art neural models may be able to learn from our data how to (1) classify this more difficult class of hate speech and (2) explain implicit hate by generating descriptions of both the *target* and the *implied message.* As our [paper baselines](#) show, neural models still have a ways to go, especially with classifying *implicit hate categories*, but overall, the results are promising, especially with *implied statement generation,* an admittedly challenging task.
We hope you can extend our baselines and further our efforts to understand and address some of these most pernicious forms of language that plague the web, especially among extremist groups.
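If you work with the corpus in Python, a minimal exploration sketch might look like the following. The file name and column names here are hypothetical placeholders; check the README inside the downloaded zip for the actual schema.
```python
# Minimal exploration sketch. "stage1.tsv" and the column names are
# hypothetical placeholders; check the README in the downloaded zip
# for the actual file names and schema.
import pandas as pd

df = pd.read_csv("implicit-hate-corpus/stage1.tsv", sep="\t")

print(df.columns.tolist())         # inspect the real schema first
print(df["class"].value_counts())  # assumed label column: "class"
```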
## *How do I cite this work?*
**Citation:**
> ElSherief, M., Ziems, C., Muchlinski, D., Anupindi, V., Seybolt, J., De Choudhury, M., & Yang, D. (2021). Latent Hatred: A Benchmark for Understanding Implicit Hate Speech. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)_.
**BibTeX:**
```tex
@inproceedings{elsherief-etal-2021-latent,
title = "Latent Hatred: A Benchmark for Understanding Implicit Hate Speech",
author = "ElSherief, Mai and
Ziems, Caleb and
Muchlinski, David and
Anupindi, Vaishnavi and
Seybolt, Jordyn and
De Choudhury, Munmun and
Yang, Diyi",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.29",
pages = "345--363"
}
```
|
SALT-NLP/ImplicitHate
|
[
"region:us"
] |
2023-02-16T22:45:19+00:00
|
{}
|
2023-02-16T23:00:38+00:00
|
a3fe78950263236caa5b6d8e94a9936020212cbb
|
# `peptides-functional`
### Dataset Summary
| Dataset | Domain | Task | Node Feat. (dim) | Edge Feat. (dim) | Perf. Metric |
|---|---|---|---|---|---|
| Peptides-func | Chemistry | Graph Classification | Atom Encoder (9) | Bond Encoder (3) | AP |
| Dataset | # Graphs | # Nodes | μ Nodes | μ Deg. | # Edges | μ Edges | μ Short. Path | μ Diameter |
|---|---:|---:|---:|:---:|---:|---:|---:|---:|
| Peptides-func | 15,535 | 2,344,859 | 150.94 | 2.04 | 4,773,974 | 307.30 | 20.89±9.79 | 56.99±28.72 |
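A minimal loading sketch follows; it assumes the repo can be pulled with the `datasets` library, and the split name `train` is an assumption rather than something guaranteed by this card.
```python
# Minimal loading sketch; the repo id is real, but whether it loads this
# way and the split name "train" are assumptions.
from datasets import load_dataset

ds = load_dataset("LRGB/peptides-functional")
print(ds)              # inspect the available splits
print(ds["train"][0])  # one peptide graph with its node/edge features
```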
## Additional Information
### Dataset Curators
* Vijay Prakash Dwivedi ([vijaydwivedi75](https://github.com/vijaydwivedi75))
### Citation Information
```
@article{dwivedi2022LRGB,
title={Long Range Graph Benchmark},
author={Dwivedi, Vijay Prakash and Rampášek, Ladislav and Galkin, Mikhail and Parviz, Ali and Wolf, Guy and Luu, Anh Tuan and Beaini, Dominique},
journal={arXiv:2206.08164},
year={2022}
}
```
|
LRGB/peptides-functional
|
[
"task_categories:graph-ml",
"size_categories:1M<n<10M",
"license:cc-by-nc-4.0",
"lrgb",
"region:us"
] |
2023-02-16T23:28:39+00:00
|
{"license": "cc-by-nc-4.0", "size_categories": ["1M<n<10M"], "task_categories": ["graph-ml"], "tags": ["lrgb"]}
|
2023-02-16T23:32:21+00:00
|
11eeb2144def00a55deb2a4f8fada24ea7b207af
|
# `peptides-structural`
### Dataset Summary
| Dataset | Domain | Task | Node Feat. (dim) | Edge Feat. (dim) | Perf. Metric |
|---|---|---|---|---|---|
| Peptides-struct | Chemistry | Graph Regression | Atom Encoder (9) | Bond Encoder (3) | MAE |
| Dataset | # Graphs | # Nodes | μ Nodes | μ Deg. | # Edges | μ Edges | μ Short. Path | μ Diameter |
|---|---:|---:|---:|:---:|---:|---:|---:|---:|
| Peptides-struct | 15,535 | 2,344,859 | 150.94 | 2.04 | 4,773,974 | 307.30 | 20.89±9.79 | 56.99±28.72 |
## Additional Information
### Dataset Curators
* Vijay Prakash Dwivedi ([vijaydwivedi75](https://github.com/vijaydwivedi75))
### Citation Information
```
@article{dwivedi2022LRGB,
title={Long Range Graph Benchmark},
author={Dwivedi, Vijay Prakash and Rampášek, Ladislav and Galkin, Mikhail and Parviz, Ali and Wolf, Guy and Luu, Anh Tuan and Beaini, Dominique},
journal={arXiv:2206.08164},
year={2022}
}
```
|
LRGB/peptides-structural
|
[
"task_categories:graph-ml",
"size_categories:1M<n<10M",
"license:cc-by-nc-4.0",
"lrgb",
"region:us"
] |
2023-02-16T23:35:22+00:00
|
{"license": "cc-by-nc-4.0", "size_categories": ["1M<n<10M"], "task_categories": ["graph-ml"], "tags": ["lrgb"]}
|
2023-02-16T23:37:39+00:00
|
c15bc4801b8444245a88afc4fd024f3b45f95117
|
# `PCQM-Contact`
### Dataset Summary
| Dataset | Domain | Task | Node Feat. (dim) | Edge Feat. (dim) | Perf. Metric |
|---|---|---|---|---|---|
| PCQM-Contact | Quantum Chemistry | Link Prediction | Atom Encoder (9) | Bond Encoder (3) | Hits@K, MRR |
| Dataset | # Graphs | # Nodes | μ Nodes | μ Deg. | # Edges | μ Edges | μ Short. Path | μ Diameter |
|---|---:|---:|---:|:---:|---:|---:|---:|---:|
| PCQM-Contact | 529,434 | 15,955,687 | 30.14 | 2.03 | 32,341,644 | 61.09 | 4.63±0.63 | 9.86±1.79 |
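For reference, the link-prediction metrics named above can be computed along these lines; this is a sketch, not the official LRGB evaluator.
```python
# Sketch of MRR and Hits@K given the ranks assigned to the true contact
# pairs; not the official LRGB evaluation code.
def mrr(ranks: list[int]) -> float:
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks: list[int], k: int) -> float:
    return sum(r <= k for r in ranks) / len(ranks)

print(mrr([1, 2, 10]))           # 0.5333...
print(hits_at_k([1, 2, 10], 3))  # 0.6666...
```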
## Additional Information
### Dataset Curators
* Vijay Prakash Dwivedi ([vijaydwivedi75](https://github.com/vijaydwivedi75))
### Citation Information
```
@article{dwivedi2022LRGB,
title={Long Range Graph Benchmark},
author={Dwivedi, Vijay Prakash and Rampášek, Ladislav and Galkin, Mikhail and Parviz, Ali and Wolf, Guy and Luu, Anh Tuan and Beaini, Dominique},
journal={arXiv:2206.08164},
year={2022}
}
```
|
LRGB/PCQM-Contact
|
[
"task_categories:graph-ml",
"size_categories:1M<n<10M",
"license:cc-by-4.0",
"lrgb",
"region:us"
] |
2023-02-16T23:38:03+00:00
|
{"license": "cc-by-4.0", "size_categories": ["1M<n<10M"], "task_categories": ["graph-ml"], "tags": ["lrgb"]}
|
2023-02-17T01:55:38+00:00
|
0a5be68bc47be0ffcdbe098cce5f738db81782a7
|
01:00:21 BTW AlDS: Smashing!
01:28:47 BTW AlDS: Smashing!
14:01:13 BTW AlDS: Smashing!
15:04:01 BTW AlDS: Smashing!
21:22:56 CaptainOnLSD: Adaadaaa
21:34:53 CaptainOnLSD: Dadaaaddddw123dd
22:01:19 CaptainOnLSD: Aaaadad
22:01:50 <img=2>BTW AlDS: !Ttm
22:58:19 <img=2>Gym Tool: No jaw?
22:58:24 <img=2>KC Chiefz: 0nope
22:58:30 <img=2>Gym Tool: Y
22:58:38 <img=2>Gym Tool: No cgaunt?!!?
22:58:53 <img=2>KC Chiefz: I've only gotten like 3 basalisk tasks
22:59:15 <img=2>KC Chiefz: I don't like bossing lol
22:59:22 <img=2>Gym Tool: Oof
22:59:38 <img=2>KC Chiefz: Maybe I'll try more after maxing
23:00:40 <img=2>KC Chiefz: Little discouraging not getting a b.P. Til 1840 kc lol
03:26:15 ShivaRio: Ok
03:26:16 SmokeTreesK: I can convert it
03:26:16 ZombieJJB: Zero
03:26:19 ShivaRio: Doubling money max 1m
03:26:20 ZombieJJB: Zer9
03:26:20 Frost S1Q2: Gòrâkpùrê hâs bëën Pãíd: 90K @ 03:26:20
03:26:25 KINDPRINCE: Doubling money
03:26:25 Frost S1Q2: <lt>31<gt> Fròsty Bëts! F2P-drëssëd 'bòts' cán't trådë pàyòùts!
03:26:28 FreeCrackNub: I only have 18k
03:26:28 ofl86k2k1cu: Doubling money
03:26:31 KINDPRINCE: Doubling money
03:26:32 FreeCrackNub: ;/
03:26:32 Frost S1Q2: <lt>32<gt> Fròsty Bëts! Rècëívë mystèry bòxës ëvëry 5 bëts <lt><gt> (10K-500M) <lt><gt>
03:26:34 I Dumbledore: He is scamming
03:26:35 ShivaRio: Doubling money max 1m
03:26:35 humajutt jan: 10m amx
03:26:38 KINDPRINCE: Doubling money
03:26:42 L88r: Show 20m
03:26:43 Returnjosh: Can i have cape
03:26:44 chagmon: Sell full gilded 24m
03:26:46 little zalim: Hi
03:26:50 POV stepsis: Rat papi 50m pls i need i por
03:26:50 Carsuper: Np
03:26:51 Rat Papi: Geoff you look good
03:26:51 Lil Guah: Hi
03:26:51 HumblyThot: R u discord mod??
03:26:52 CraelNutella: ().34 u s d | M ----``" O s r s g d , C O M ``"no verification
03:26:53 Puretheif310: <lt>3
03:26:53 Carsuper: Lol
03:26:55 GeoffKelly11: Posing and everything
03:26:56 Carsuper: <lt>3
03:26:56 QUEEN 1000 S: Hi
03:26:58 Rat Papi: You need 50 m
03:26:58 Puretheif310: Xd
03:27:00 Rat Papi: I got you sec
03:27:01 POV stepsis: Pls
03:27:02 QUEEN 1000 S: Grjytti
03:27:02 GeoffKelly11: Just trying to be like you
03:27:03 POV stepsis: I por.
03:27:04 L88r: How u gonna scam successfully if u cant even show
03:27:10 chagmon: Sell full gilded 24m
03:27:11 little zalim: Bor halp
03:27:11 Carsuper: I make miracles happen
03:27:15 Carsuper: Xd
03:27:16 QUEEN 1000 S: J
03:27:18 Bakaribz: U are nt half my coin
03:27:19 Scream pie: Halfing any gp 1 trade
03:27:19 FreeCrackNub: Anyone so are a bind so I can go to sand crabs ?
03:27:19 GoreQuench: Can anyone please spare me a bond
03:27:20 ItsAramir: Anyone got a rune set?
03:27:23 McBushes: Oooh
03:27:25 humajutt jan: I got 5b
03:27:26 McBushes: Tryana scam
03:27:27 ItsAramir: Please
03:27:29 L88r: No u aint got shit lol
03:27:30 AlmightyYose: A rock
03:27:30 yoink07: Tf u scamming on a 126 for anyways
03:27:30 Bakaribz: Scammer <gt>
03:27:30 ShivaRio: Doubling money max 1m
03:27:31 sourpatch89: All items must go
03:27:31 KINDPRINCE: Doubling money
03:27:34 ItsAramir: Pretty please
03:27:36 L88r: 10k if ur lucky
03:27:37 ZombieJJB: What you mean
03:27:37 KINDPRINCE: Doubling money
03:27:39 FreeCrackNub: Can anyone donate a bond so I can go to sand crabs ?
03:27:39 Coin2p: No
03:27:43 Brentyr: For 250k, which in game pet I hate the most?
03:27:46 Msuomi69: Buy bond 6m
03:27:46 ShivaRio: Doubling money
03:27:47 ZombieJJB: I show you fiwh
03:27:49 KINDPRINCE: Doubling money
03:27:49 Trading4you: Buying burnt food 80 per
03:27:51 Rat Papi: Love you too
03:27:53 Carsuper: U look hot af
03:27:54 GeoffKelly11: Thats what im saying
03:27:59 ggk0kid: Ok i need do this gg guys
03:28:01 ATHENA-M11: Ofc
03:28:04 ggk0kid: Peace
03:28:08 4BetFold: Gllll pce
03:28:45 4BetFold: Or items
03:28:46 ATHENA-M11: Lol he scaming
03:29:15 7 Wise 2719: Se|l g()|d-----``" O s r s g d , C O M ``" No id ~No wait
03:29:31 Brentyr: Yeah, baby mole
03:29:31 Trading4you: Buying burnt food 80 per
03:29:32 KINDPRINCE: Doubling money
03:29:34 GUTS0493: Buy burnt food and flyer
03:29:34 Brentyr: Fk that mole
03:29:34 humajutt jan: 10m for 30m
03:29:35 ggk0kid: Hahah
03:29:35 Trading4you: Buying burnt food 80 per
03:29:36 SmokeTreesK: I have a untradable bond
03:29:36 L88r: Naked mole bis
03:29:37 AlexEdsell: Ayoo
03:29:37 KINDPRINCE: Doubling money
03:29:37 HumblyThot: I see
03:29:38 BreadDorito1: Can someone give me some coins
03:29:38 Trading4you: Buying burnt food 80 per
03:29:39 KINDPRINCE:
03:29:40 L88r: Ok 1 trade
03:29:43 Anya Starr: Need any donations pleasee
03:29:43 KINDPRINCE:
03:29:43 KINDPRINCE:
03:29:44 Msuomi69: Buy bond 6m
03:29:45 Luckyxxday: Enjoy
03:29:45 ZombieJJB: Ye need g9 mining actually
03:29:46 Returnjosh: Ty
03:29:48 GUTS0493: Buy burnt food and flyer
03:29:49 AlexEdsell: Yea frick that mole
03:29:50 humajutt jan: No
03:29:52 humajutt jan: 2 trade
03:29:52 Trading4you: ?
03:29:52 GoreQuench: Nice levels
03:29:53 L88r: Kk
03:29:55 jack skills: Lamb chops
03:29:55 SmokeTreesK: Selling untradable bond
03:29:56 Luckyxxday: Doubling 1m min
03:29:59 Msuomi69: ?
03:29:59 ShortyGemini: Help plz
03:30:00 Lambs Chop: Hi
03:30:00 GoreQuench: Can anyone please spare me a bond
03:30:02 GUTS0493: Buy burnt food and flyer
03:30:02 Dope Deala: Thx g
03:30:04 Returnjosh: Whoa
03:30:05 Dope Deala: Nice gear
03:30:05 GoreQuench: Willing to pay 1m extra in p2p
03:30:07 BreadDorito1: Can someone give me some coins
03:30:07 Anya Starr: Need any donations pleasee
03:30:07 Brentyr: Almost 7k mole kc and no pet
03:30:07 ZombieJJB: S3lling crack
03:30:07 AlmightyYose: Poor man need money to feed family
03:30:07 SmokeTreesK: Woah
03:30:07 Returnjosh: Thats gotta be rare?
03:30:08 ofl86k2k1cu: Doubling money 5k max
03:30:10 Bakaribz: 1 trade ?
03:30:11 L88r: Wtf
03:30:11 GoreQuench: Thnaks man
03:30:13 jack skills: Would you like to join my clan?
03:30:14 AntanasQ: Buy rune items
03:30:15 SmokeTreesK: I cant convert it
03:30:16 GUTS0493: Buy burnt food and flyer
03:30:16 AlexEdsell: Thats unreal lol
03:30:16 Z7z7Zzz: How long?
03:30:17 Luckyxxday: Doubling 1m minn50m max
03:30:17 L88r: On what acc
03:30:18 SmokeTreesK: 500k pls
03:30:19 ShivaRio: ???
03:30:19 Brentyr: Yeah wtf
03:30:19 KINDPRINCE: Doubling money
03:30:20 jakkal_81: Don't trust Kindprince
03:30:20 ZombieJJB: Sellin meth
03:30:21 humajutt jan: 4k accpe
03:30:21 GoreQuench: Can anyone please spare me a bond
03:30:22 worksuks: Get it?
03:30:23 Lambs Chop: I'm good. Thanks
03:30:24 Brentyr: Veratyr
03:30:24 Dope Deala: I would but cant take the risk
03:30:25 GoreQuench: Fml
03:30:26 AntanasQ: Buy rune items
03:30:27 jack skills: Enjoy
03:30:27 Luckyxxday: Doubling 1m min
03:30:27 KINDPRINCE: Doubling money
03:30:28 GoreQuench: No sir
03:30:28 ShivaRio: Doubling money last 500k
03:30:29 GUTS0493: Buy burntnfood and flyer
03:30:29 Trading4you: Buying burnt food 80 per
03:30:30 ZombieJJB: S3lling meth
03:30:32 ofl86k2k1cu: Doubling money 5k max
03:30:32 Anya Starr: Need any donations pleaseee
03:30:33 L88r: Nuts
03:30:34 AlmightyYose: Poor man need money to feed family
03:30:34 FreeCrackNub: I need a bond too
03:30:36 Brentyr: I think it's at 6500 kc actually
03:30:40 humajutt jan: Enjoy
03:30:41 FreeCrackNub: I wanna go to sand crabs
03:30:42 AntanasQ: Buy rune items
03:30:42 Z7z7Zzz: How long?
03:30:43 GUTS0493: Buy burnt food and flyer
03:30:43 Returnjosh: Will u buy me the wolf cloak
03:30:44 L88r: Give it up
03:30:47 L88r: Made bank atleast
03:30:47 AntanasQ: Buy rune items
03:30:47 KINDPRINCE: Dobuling money
03:30:47 ShivaRio: Doubling money last 500k
03:30:48 Z7z7Zzz: How long?
03:30:49 Luckyxxday: Doubking for 5 mins 1 m min
03:30:49 humajutt jan: 10m for 30m
03:30:49 Dope Deala: Ty
03:30:49 L88r: U got diarys?
03:30:50 Charso Beees: Njoy
03:30:50 humajutt jan: 1mf or 3m
03:30:53 Brentyr: Yeah
03:30:53 ZombieJJB: Sellin meth
03:30:53 Trading4you: Buying burnt food 80 per
03:30:53 GoreQuench: 13 bucks is too mcuh man
03:30:53 KINDPRINCE: Doubling money
03:30:55 L88r: Good
03:30:56 Msuomi69: Buy bond 6m
03:30:56 GoreQuench: Idkkk abotu this
03:30:56 Trading4you: Buying burnt food 80 per
03:30:59 FreeCrackNub: Yeah it is
03:30:59 Msuomi69: Buy bond 6m
03:31:00 ofl86k2k1cu: Doubling money 5k max
03:31:02 sourpatch89: $$$$
03:31:02 ZombieJJB: Wat you want
03:31:02 GUTS0493: Buy burntnfood and flyer
03:31:02 humajutt jan: 1m max
03:31:02 KINDPRINCE: Doublig money
03:31:03 Msuomi69: Buy bond 6m
03:31:05 Brentyr: Didn't help at all
03:31:05 Anya Starr: Need donations pleasee
03:31:07 jakkal_81: Don't trust Kikindkindprince
03:31:08 Msuomi69: Buy bond 6m
03:31:09 Brentyr: Fking bs
03:31:10 ZombieJJB: Meth or crack
03:31:10 Returnjosh: Dabb
03:31:11 Luckyxxday: 2 trade sir.
03:31:12 FreeCrackNub: Can anyone spare bond ? I wanna go to snow crabs
03:31:12 GUTS0493: Buy burntnfood wne flyer
03:31:13 GoreQuench: Can anyone please spare me a bond
03:31:13 Msuomi69: Buy bond 6m
03:31:13 KINDPRINCE: Double bonds
03:31:17 AntanasQ: Buy rune items
03:31:17 KINDPRINCE: Double bonds
03:31:18 Bakaribz: 1 trade
03:31:19 L88r: Lol its just good for loot
03:31:20 L88r: Noted shit
03:31:23 KINDPRINCE: Double bondds
03:31:24 jakkal_81: Don't trust kindprince
03:31:25 ZombieJJB: Sell8n meth
03:31:25 ShivaRio: Doubling money last 500k
03:31:26 GUTS0493: Buy bunt good and flyer
03:31:32 AlmightyYose: Lady vixen how much
03:31:32 Brentyr: Kinda rage quitted after I saw plenty of my clans members getting in 100kc or so
03:31:35 ShivaRio: Sorry double trade only
03:31:36 HumblyThot: Rat u look lovely
03:31:40 Bakaribz: Lol
03:31:52 GUTS0493: Buy burnt food snd flyer
03:31:52 HumblyThot: Cya <lt>3
03:31:53 FancyBunny: Cya im heading outt too
03:31:53 GlizzySlurpn: Nice
03:31:53 KINDPRINCE: Doubling money
03:31:56 Brentyr: Farming pet is being a nuisance as well
03:31:56 ShivaRio: I have successfully doubled 5 players today
03:31:57 KINDPRINCE: Doubling money
03:31:58 HumblyThot: Allright gn
03:31:59 ZombieJJB: Meth expensive ?
03:32:01 FancyBunny: Have fun
03:32:01 KINDPRINCE: Doubling money
03:32:01 GUTS0493: Buy burnt good and flyer
03:32:01 L88r: Agree
03:32:02 L88r: !Pets
03:32:03 Heromunch: Bruh why am i so bored rn
03:32:05 KINDPRINCE: Doubling money
03:32:06 POV stepsis: Rat papi what kind of drip is that
03:32:08 POV stepsis: You need bronze.
03:32:08 jakkal_81: Don't trust kindprince
03:32:10 L88r: Oh i got it on here lmfao
03:32:10 humajutt jan: Lol
03:32:11 KINDPRINCE: Doubling money
03:32:12 Msuomi69: Buy bond 6m
03:32:13 humajutt jan: Fake pets
03:32:13 L88r: Been grinding it on my other acc
03:32:14 Msuomi69: Buy bond 6m
03:32:14 ZombieJJB: Not free honey go
03:32:14 GlizzySlurpn: No
03:32:14 Brentyr: ¬¬
03:32:17 GUTS0493: Buy burnt food and Flyer
03:32:17 Msuomi69: Buy bond 6m
03:32:19 ZombieJJB: Get money
03:32:23 Bakaribz: !Pet
03:32:25 Msuomi69: Buy bond 6m
03:32:25 POV stepsis: Selling bond 7m
03:32:25 ZombieJJB: Come back
03:32:26 Gimi: Cool cape and fit
03:32:26 BreadDorito1: Can someone give some coins
03:32:29 Msuomi69: Buy bond 6m
03:32:29 L88r: !Dbstats
03:32:31 AntanasQ: Buy rune items
03:32:31 Puretheif310: Hii
03:32:32 Joge the 3rd: Lookin for donations
03:32:32 Brentyr: When I get time I'll get back to grinding pets
03:32:32 HumblyThot: We look like bushy ballz
03:32:32 L88r: !Bdstats
03:32:35 GlizzySlurpn: Oh, ur trying to lure me
03:32:36 Msuomi69: Buy bond 6m
03:32:37 MagicMark97: Thanks man
03:32:37 Gimi: I want to get 99 craft
03:32:38 Joge the 3rd: Lookin for donations
03:32:38 HumblyThot: Ooop
03:32:38 AntanasQ: Buy rune itens
03:32:38 GUTS0493: Buy burnt food and flyer
03:32:40 Brentyr: Think I got only 7 pets or so
03:32:41 ZombieJJB: Then get free crack
03:32:42 POV stepsis: Get it
03:32:42 AntanasQ: Buy rune items
03:32:42 GlizzySlurpn: Lol
03:32:42 Puretheif310: Hii
03:32:42 McBushes: He just needs time to count money\
03:32:43 Bakaribz: Pet!
03:32:43 L88r: Been doing champion scrolls lately
03:32:45 Bakaribz: Pet!
03:32:45 ShivaRio: Pls spread the word
03:32:46 HumblyThot: Just 1 ball
03:32:46 McBushes: Thats why 2 trades
03:32:46 Joge the 3rd: Lookin for donations please
03:32:47 Brentyr: Dang
03:32:48 MagicMark97: Suh
03:32:49 Brentyr: That's hardcore
03:32:50 Gimi: Need to earn more first
03:32:50 ShivaRio: Enjoy
03:32:51 GUTS0493: Buy burntnfood ane flyer
03:32:51 McBushes: Lmfao
03:32:53 L88r: 3/7 or whatever
03:32:56 HumblyThot: Oop
03:32:57 worksuks: I gotcha
03:32:58 Singlewood: 5 k
03:32:58 ZombieJJB: Buy me5h free crqck
03:32:58 Gimi: Then my next goal:D
03:32:59 L88r: Got jogre in 500 kills last night
03:33:02 HumblyThot: Nice
03:33:04 Brentyr: Dope
03:33:05 POV stepsis: Heromunch said hell lend you some money
03:33:05 HumblyThot: Successs
03:33:06 ShivaRio: Doubling money last 500k
03:33:08 POV stepsis: For your crafting journey
03:33:09 Shunakoma: Trading up
03:33:10 Joge the 3rd: Anybody wanna lend me some dough
03:33:13 Heromunch: :D
03:33:17 SHAHBAZ DAG: Fp200k
03:33:17 Rat Papi: Buying gf
03:33:25 Gimi: Damn thanks
03:33:28 POV stepsis: Selling organic bananas 300k ea.
03:33:28 HumblyThot: Ehm ehm
03:33:40 Nunney43: Do u have to be men to get 99 cape
03:33:40 4BetFold: Selling rune scimitar 1 trillion gp
03:33:51 Joge the 3rd: Lookin for gp loan or gp donations joge
03:33:51 SHAHBAZ DAG: Fp60b
03:33:52 POV stepsis: 1 trillion too much
03:33:53 Rat Papi: Buying gf
03:34:06 Heromunch: Hey yose
03:34:07 AlmightyYose: My man hero!!
03:34:10 worksuks: Rat I gotta wwig
03:34:10 SHAHBAZ DAG: Sty
03:34:10 HumblyThot: U lil stupid ahh bih i aint fukin with uuuu
03:34:14 Joge the 3rd: Lookin for gp loan or donation
03:34:14 SHAHBAZ DAG: Sry
03:34:16 Rat Papi: Hmmm
03:34:16 4BetFold: Buying a bidet
03:34:17 AlmightyYose: Wassup king
03:34:19 Heromunch: Hows things?
03:34:20 Rat Papi: I'll take it
03:34:21 POV stepsis: Does anyone have superbowl bets that'll make me 100k from $10
03:34:24 Nunney43: Do u have to be a member to get a 99 cape
03:34:26 worksuks: Sold
03:34:28 Joge the 3rd: Sup jose
03:34:30 AlmightyYose: Same old hbu
03:34:32 Heromunch: I can make you one
03:34:33 BTW AlDS: Smashing!
03:34:33 worksuks: Yes
03:34:37 Heromunch: Will take time though
03:34:40 Heromunch: Hahahaha
03:34:42 3Daph: ?
03:34:42 Nunney43: Damn okay
03:34:44 POV stepsis: How many legs
03:34:49 Heromunch: Alot
03:34:49 4BetFold: 4
03:35:13 worksuks: Hm are claws now? 80m?
03:35:14 Rat Papi: Hi
03:35:16 Rat Papi: Love you
03:35:20 POV stepsis: Do you watch nfl tho
03:35:20 Rat Papi: 90m rn
03:35:25 worksuks: Jesus
03:35:25 Queen Amelia: Lol
03:35:27 HumblyThot: 10gp from django
03:35:28 Heromunch: Come on maaaaaan
03:35:38 Heromunch: Of course i do
03:35:41 Lil Guah: Ay let me hold those claws
03:35:41 HumblyThot: If u dc about brands
03:35:43 POV stepsis: Brother im fucking listening then
03:35:44 GlizzySlurpn: Selling my ass west varrock basement
03:35:44 POV stepsis: Because i dont
03:35:47 Joge the 3rd: Anybody willin to lend me a few mill?
03:35:50 Heromunch: Hahahahaha
03:35:51 Rat Papi: Lets go
03:35:52 POV stepsis: Im waiting for afl to start
03:35:55 Rat Papi: How much
03:35:55 4BetFold: 3 way
03:36:05 McSwaggertro: Yo Ulooked Awful!
03:36:06 glitterpig: Iknow hahah
03:36:10 GlizzySlurpn: Selling my ass in west varrock basement
03:36:20 Rat Papi: How much
03:36:41 GlizzySlurpn: Hmm
03:36:49 Rat Papi: Go easy on me im poor
03:36:57 GlizzySlurpn: Cum
03:37:14 GlizzySlurpn: Sit
03:37:43 GlizzySlurpn: Give me ur bonds
03:37:44 Rat Papi: Is this enough
03:37:55 GlizzySlurpn: Hmm
03:38:02 Rat Papi: Or do i need to go get more
03:38:09 GlizzySlurpn: I only7 need a dime bag
03:38:20 GlizzySlurpn: Get back intk the real world n shit
03:38:29 Rat Papi: I feel ya
03:39:44 Twxct: I only have gilded axe
03:42:09 Scream pie: Cold milks the R******
03:42:10 Scream pie: Hahahha
03:42:13 Cold Milks: Daa\
03:42:15 Cold Milks: Buy more
03:42:25 Cold Milks: L0l
03:42:27 Cold Milks: Haters
03:42:50 Heromunch: Yep
03:42:50 AlmightyYose: He gave me 5000
03:42:52 ZombieJJB: Ill report you if anything
03:42:55 Frost S1Q9: <lt>13<gt> Fròsty Bëts! Dòësn't shòw fîrst tràdë G.P? Thë fåkè cån't påyòüt
03:42:59 royal 1G5N: Tridding Humajatt Jan
03:42:59 Heromunch: Its a little rough but i like the odds
03:43:00 SmokeTreesK: Fk u
03:43:00 Frost S1Q9: <lt>14<gt> Fròsty Bëts! Yòü wîll gët à vërífîcòín ånd Í.D! <lt><gt> (10K-500M) <lt><gt>
03:43:02 POV stepsis: Odds at 50
03:43:02 AlmightyYose: Smoke go
03:43:02 McBushes: Yo
03:43:05 SmokeTreesK: Jesus is watching
03:43:08 POV stepsis: I like it
03:43:08 Frost S1Q9: <lt>15<gt> Fròsty Bëts! Gâmès: (!H !L !W !F !D !S !C !J) <lt><gt> (10K-500M) <lt><gt>
03:43:09 royal 1G5N: Green:Cashed! 3M
03:43:10 Joge the 3rd: Mmm
03:43:11 AlmightyYose: Cool beans
03:43:11 A5IA: Can anyone spare some lgs
03:43:12 SmokeTreesK: Scammers
03:43:14 Heromunch: I put 50 on
03:43:14 McBushes: \scroll: taking big fat donations cash plz
03:43:17 royal 1G5N: Cashed! 3M
03:43:19 Brentyr: Fail
03:43:20 Heromunch: So fingers crossed
03:43:21 AlmightyYose: Tell Jesus to eat up
03:43:23 McBushes: Lol
03:43:26 S0RCERERKING: Wont back up ur comment ?
03:43:27 A5IA: Anyone spare logs pls?Uwu
03:43:28 Beerus 2017: Lmfao whats going on
03:43:28 POV stepsis: Ill put 50 aswell
03:43:31 POV stepsis: Lets get it baby
03:43:31 Z7z7Zzz: What crased?
03:43:34 royal 1G5N: Chaeds Yes Won 6M
03:43:36 SmokeTreesK: I got scammed
03:43:42 SmokeTreesK: He ssid he would double
03:43:42 AlmightyYose: No smoke didn't
03:43:45 SmokeTreesK: Took my 130k
03:43:47 royal 1G5N: 6M Pid Won
03:43:47 McBushes: Taking fat donations cash plz
03:43:48 Heromunch: I like to make couple roughies
03:43:51 humajutt jan: Thanks
03:43:53 A5IA: Anyone spare logs pls uwj?
03:43:55 AlmightyYose: Smoke just complaining
03:43:59 Brentyr: First to trade gets Thor's Mjolnir for free
03:44:05 Beerus 2017: Its 2023 and people still think doubling happens..
03:44:11 Returnjosh: Much better
03:44:14 Rat Papi: =]
03:44:17 McBushes: That bot zoomed u see that lol
03:44:24 Brentyr: Gratz now you are the new god of thunder
03:44:34 AlmightyYose: Lies
03:44:38 <img=2>IMvorkathlic: Lies
03:44:38 Rat Papi: Drippy
03:44:42 AlmightyYose: You just want attention
03:44:42 McBushes: He just alched it bro
03:44:50 AlmightyYose: No I didn't
03:44:50 A5IA: Anyone spare some logs pls?
03:44:50 S0RCERERKING: Yep
03:44:51 Returnjosh: Yeaaa
03:44:53 KEKBARNIEKEK: Hello there
03:44:55 the f2p tour: Ok i found one
03:44:55 Brentyr: Yeah kratos will get his ass anyway
03:44:55 McBushes: Loooooooooooool
03:44:56 <img=2>IMvorkathlic: 2007 called, it wants its scam back
03:45:01 Rat Papi: Tyty
03:45:02 Gentlyfew365: I need bread
03:45:04 Returnjosh: Looking good now boss, and the flex is still there
03:45:10 McBushes: Here
03:45:16 the f2p tour: To find a item u first need to buy it for higher then normal
03:45:19 ShivaRio: Doubling money last 200k
03:45:22 McBushes: U can be god of swordfish
03:45:25 the f2p tour: Just to see for what price it buys
03:45:28 Gentlyfew365: Dread?
03:45:29 AlmightyYose: Far from it
03:45:31 Brentyr: Dont starve brah
03:45:32 Rat Papi: Im poor D:
03:45:35 infeln4: Si manoo
03:45:35 Gentlyfew365: Bread?
03:45:37 Th0th C0smic: I need help with bond pls
03:45:39 Brentyr: Have these seasong cooked meat
03:45:40 Rat Papi: Lol
03:45:42 AlmightyYose: Smoke you better go smoke weirdo
03:45:43 Brentyr: Seasoned*
03:45:44 the f2p tour: Then u sell for lower then normal to see for how much u can buy it in
03:45:44 Returnjosh: Lol
03:45:46 Rat Papi: Im gonna go annoy people
03:45:48 Dig G: Sell bonds
03:45:49 Gentlyfew365: Thank you
03:45:51 Returnjosh: Same
03:45:53 Returnjosh: Gl
03:45:54 the f2p tour: In this case i found out
03:45:54 <img=2>IMvorkathlic: I dare someone to use a bond on me..
03:45:55 Dig G: Selling bonds
03:45:57 Rat Papi: Can i have a bond
03:45:57 Gentlyfew365: I need bread
03:45:58 Dig G: Sellings bonds
03:45:58 McBushes: Yo
03:45:59 Th0th C0smic: I need help with bond pls
03:46:03 Th0th C0smic: Ill sell item member
03:46:05 Dig G: Selling bods
03:46:05 Rat Papi: For free
03:46:05 Gentlyfew365: For a steak sandwich
03:46:07 the f2p tour: A swordfish is bought at for 241 coins each.
03:46:10 Rat Papi: Oh
03:46:10 Returnjosh: Tiger
03:46:11 Rat Papi: Uh
03:46:12 McBushes: Warning Need Money Plz Donate
03:46:13 Besliuth: Here we go
03:46:13 Returnjosh: What are you cooking
03:46:16 the f2p tour: And i can sell it for 247 each
03:46:16 Rat Papi: I dont have money
03:46:22 Th0th C0smic: Can someone help me with bond pls
03:46:25 Dig G: Selling bonds
03:46:25 Th0th C0smic: Ill sell item and add some more idc
03:46:28 Universe188: A q p
03:46:28 Universe188: W
03:46:29 <img=2>IMvorkathlic: Iidc lol
03:46:31 A5IA: Anyone donate logs pls?
03:46:33 Th0th C0smic: Can someone help me with bond pls
03:46:36 Gentlyfew365: I need bread
03:46:38 Dig G: Selligs bonds
03:46:40 LordOldNSad: Nice
03:46:46 Th0th C0smic: Can someone help me with bond il add more idc
03:46:47 Th0th C0smic: Please
03:46:48 the f2p tour: So the flip here is: buy a cooked swordfish 241, sell 247
03:46:49 gorakpure: !S
03:46:51 Gentlyfew365: Loki g for bread
03:46:51 McBushes: Doing Strange For Change Plz Donate
03:46:58 Returnjosh: Ses
03:47:02 Returnjosh: What are u cooking
03:47:03 Th0th C0smic: Its someone couol dhandle me a bond
03:47:10 the f2p tour: Doesnt seem much but with 1 mill its a 40k profit
03:47:11 Th0th C0smic: Ill sell items p2p and return it ill add more right now no worries
03:47:17 Gentlyfew365: Anyone got bread
03:47:19 Besliuth: Man yesterday was my day
03:47:21 the f2p tour: Without doing anything
03:47:21 AlmightyYose: Let me have a bond
03:47:28 Blackdead16: I do that with gold bars atm
03:47:29 Heromunch: Jalen hurts
03:47:31 Rat Papi: You gonna get it chopped
03:47:34 Heromunch: A.J brown
03:47:35 Joge the 3rd: Yes
03:47:35 Rat Papi: Just for me
03:47:40 Joge the 3rd: Ofc
03:47:44 Rat Papi: How kind of you
03:47:48 AlmightyYose: Hero you like the eagles?!
03:47:49 Joge the 3rd: But he chargeds me a lot more then usual
03:47:53 KaizenOG: Jeez
03:47:54 Rat Papi: Get me a bond and i'll concider it
03:47:55 Joge the 3rd: Charges*
03:47:55 Heromunch: Kansas 1-13
03:47:58 GlizzySlurpn: May a big daddy come push his lil bond inside of me?
03:48:02 AlmightyYose: Devonte smith
03:48:03 Joge the 3rd: Haha i got u on the 28th
03:48:07 worksuks: Kc is going down
03:48:07 Joge the 3rd: If u dont have it by then
03:48:07 Rat Papi: Lol
03:48:07 KaizenOG: This is what coming back to f2p looks like
03:48:18 POV stepsis: Odds wont go any higher
03:48:22 POV stepsis: Capped at 2501 odds for me
03:48:25 POV stepsis: At 6 legs
03:48:31 GlizzySlurpn: Itsw me
03:48:38 GlizzySlurpn: Ur mistress, n i want ur money
03:48:43 Rat Papi: Ohh
03:48:44 GlizzySlurpn: For the 3 mins
03:48:48 Rat Papi: Why didnt you just say so
03:48:50 Heromunch: Bahaha
03:48:51 AlmightyYose: What's your picks hero?
03:48:56 Heromunch: Dang hold up
03:49:11 POV stepsis: Nah i like it im gonna place
03:49:13 POV stepsis: $5
03:49:24 Duhkilluh: Lol
03:49:29 Joge the 3rd: Gg
03:49:34 Joge the 3rd: Can a brotha borrow some coins
03:49:36 ggk0kid: Yo
03:49:43 Heromunch: I would run the cap 6 leg
03:49:47 Heromunch: 5 legs
03:49:47 ggk0kid: I need 1.3 mil for my bond homie lol
03:49:52 Heromunch: Sorry $5
03:49:53 KaizenOG: Whats the best way to make money in f2p l0l
03:49:54 GlizzySlurpn: Shit bro, its like we live on streets man
03:50:07 ggk0kid: Sorry i neeed kll shit]
03:50:09 POV stepsis: $5 paying 12.5k ill take it
03:50:11 cutthroatmom: Hazzah!! Rejoice!!
03:50:16 Nunney43: What does the ring of 3rd age do?
03:50:17 luvhex: Ball sack!!!!!!!!!!!!!!
03:50:18 prodigy pang: Here we are
03:50:19 Heromunch: That is a roughieeeeeee
03:50:24 POV stepsis: I know
03:50:27 Heromunch: Nek minut we both loaded
03:50:28 POV stepsis: This roughie will hit.
03:50:30 POV stepsis: Manifest it
03:50:37 Trusty RNG: Shut up meg
03:50:42 ofl86k2k1cu: Doubling money
03:50:43 ZombieJJB: Hurry
03:51:03 SmokeTreesK: Omgg
03:51:04 cutthroatmom: The land of gold
03:51:08 SmokeTreesK: Now i got wolf cloak
03:51:12 Longdickky: Anyone selling a bond on the lows
03:51:12 Charso Beees: Free
03:51:15 luvhex: Thanks for showing me this place
03:51:24 luvhex: Im about to make so much doubling my money!
03:51:27 Returnjosh: Tyvm!!!
03:51:31 SmokeTreesK: Np
03:51:35 Charso Beees: Enjoy
03:51:37 Returnjosh: <lt>3
03:51:37 SmokeTreesK: I did quit
03:51:39 KaizenOG: Nice
03:51:40 iburyabones: Yum
03:51:40 SmokeTreesK: Now im back
03:51:45 Sloth Def: Horny 4 cock
03:51:48 Charso Beees: Want more?
03:51:48 Returnjosh: Kaizen
03:51:51 Enslisig: Yuuuh
03:51:52 KaizenOG: Yeah
03:51:52 SmokeTreesK: Yes
03:51:52 Returnjosh: Like my cloak?
03:51:54 Enslisig: Woow
03:51:54 Charso Beees: U can have
03:51:56 iburyabones: I full thnx
03:51:57 KaizenOG: Nope
03:52:01 Returnjosh: Tf man
03:52:02 KaizenOG: Now
03:52:03 Charso Beees: Ok
03:52:06 ZombieJJB: Never
03:52:09 KaizenOG: Tell me whats the best way to make money
03:52:13 KaizenOG: On f2p server
03:52:16 SmokeTreesK: Alch
03:52:18 Returnjosh: On f2p?
03:52:19 KaizenOG: So i can get the fuck outta here
03:52:19 ZombieJJB: Bruh
03:52:19 Returnjosh: Idk alch?
03:52:23 Enslisig: Bring me luck!
03:52:25 ofl86k2k1cu: Doubling all money
03:52:28 Charso Beees: Yum in the tum
03:52:28 ofl86k2k1cu: Doubling all moeny
03:52:30 iburyabones: Ty
03:52:30 ZombieJJB: 2ik
03:52:33 ofl86k2k1cu: Doubling all money
03:52:38 iburyabones: Needed tht
03:52:40 S0RCERERKING: Show money
03:52:40 McBushes: Whats this
03:52:42 Charso Beees: This will get me a bond
03:52:42 Sloth Def: I love big cock
03:52:44 Enslisig: May need to take a loan lol, i can't stop
03:52:45 <img=2>IMvorkathlic: Just beg for a lil bit
03:52:46 Longdickky: Buying bond for 5.7mplease
03:52:52 KaizenOG: I rather get cancer
03:52:54 KaizenOG: Than beg
03:52:55 Charso Beees: Free pizzas
03:52:56 ZombieJJB: Bruh buy th3s3 arrows
03:52:56 SmokeTreesK: Fish?
03:52:58 Nunney43: Buying 3rd age ring
03:53:00 Returnjosh: Well u can start by getting a cool wolf cloak
03:53:02 <img=2>IMvorkathlic: Then don't ask for the best gp method ya crank
03:53:03 S0RCERERKING: Show money
03:53:04 chaos t i t: Hi
03:53:07 Nunney43: Buying 3rd age ring
03:53:10 Charso Beees: Free pizzas
03:53:14 KaizenOG: I need to sell my tasset
03:53:16 prodigy pang: Wtf how u hit lvl 126
03:53:18 ggk0kid: 50 ??
03:53:19 Nunney43: Buying 3rd age ring
03:53:19 ofl86k2k1cu: To muhc
03:53:21 ggk0kid: Money bal ;p;
03:53:24 SmokeTreesK: Fk these guys
03:53:24 ofl86k2k1cu: Cant doublke that
03:53:26 ZombieJJB: I need coin
03:53:27 prodigy pang: Pls teach me ur ways
03:53:30 SmokeTreesK: Giving all my money away
03:53:31 2_1z1: Thanks
03:53:31 Charso Beees: Enjoy
03:53:31 Brian4755: Swag swag
03:53:34 Succubus Imp: Buying Rune 2 hander 35k
03:53:34 Returnjosh: Whats up smoke
03:53:34 McBushes: Taking donations
03:53:36 Nunney43: Buying 3rd age ring
03:53:36 Charso Beees: Free pizzas
03:53:36 Sloth Def: Big cocks msg me plsssss
03:53:37 ofl86k2k1cu: Doubling all money
03:53:37 ZombieJJB: K
03:53:38 Returnjosh: Whyyyyyyy
03:53:45 ofl86k2k1cu: Doubling all money
03:53:45 <img=2>IMvorkathlic: Cos it's an attention seeker
03:53:48 ofl86k2k1cu: Doubling all moeny
03:53:49 Charso Beees: Free pizzas
03:53:50 ZombieJJB: Here
03:53:50 <img=2>IMvorkathlic: Quits twice in 2 mins
03:53:53 Returnjosh: Nono he bought me cloak
03:53:54 <img=2>IMvorkathlic: Siiiiiick
03:54:03 whosTK: Hola
03:54:04 Succubus Imp: Buying Rune 2 hander 35k
03:54:10 Blackdead16: 1 trade
03:54:51 abc def fgh: Nice robes
03:54:56 Wokie: Tyty
03:55:16 abc def fgh: No worries
03:55:16 ofl86k2k1cu: Needing money donation
03:55:19 ofl86k2k1cu: Needing money donations please
03:55:23 ofl86k2k1cu: Need money donations please
03:55:29 ofl86k2k1cu: Any spare change will help
03:55:34 abc def fgh: Wuu2
03:55:35 Blackdead16: Ty
03:55:42 ofl86k2k1cu: Needing money donations please
03:55:43 Wokie: Nmnm just hanging
03:55:45 Chaboni: Yw
03:55:49 abc def fgh: Sweet
03:55:54 Wokie: Wby
03:56:02 abc def fgh: I'm going for u0 magic
03:56:08 abc def fgh: 70
03:56:12 quiiiiip: !Price bronze 2h
03:56:14 Wokie: Ooo nice dude
03:56:17 abc def fgh: 7 levels off
03:56:27 abc def fgh: Ran out of nats lol
03:56:28 Wokie: Won't take long mate
03:56:31 Nunney43: Buying 3rd age ring
03:56:39 Succubus Imp: Buying 2 hander 35k
03:56:42 Nunney43: Buying 3rd age ring
03:56:42 Shunakoma: Trading up
03:56:48 Piloten25: Long time ago i played osrs
03:56:55 Nahkanaamari: Same
03:57:00 Besliuth: !J
03:57:07 Shunakoma: Trade me up
03:57:08 Succubus Imp: Buying Rune 2 hander 35k
03:57:13 Shunakoma: Trade me up
03:57:13 Piloten25: 5 yrars ago
03:57:19 Shunakoma: Trade me up need a gilded pic
03:57:21 Longdickky: Hey boss
03:57:27 Longdickky: You tryna bless me 800k
03:57:27 ofl86k2k1cu: Need amulate accusarry
03:57:42 Shunakoma: Nitty
03:57:48 Shunakoma: Trading up
03:57:49 BigHMoe099: Can someone help me get a bind pls
03:57:53 Succubus Imp: Anyone have a rune 2hander??
03:57:54 Shunakoma: Help me trade up plz
03:58:03 SesAvci: Can anyone double my 50k pls?
03:58:06 BigHMoe099: Can someone help me get a bond pls
03:58:11 Shunakoma: Plz help me trade up to a gilded pic axe
03:58:22 Returnjosh: Shun
03:58:25 Shunakoma: Yo bro
03:58:25 BigHMoe099: Can someone help me get an one pls
03:58:30 Returnjosh: Can i have one of those armor sets now
03:58:36 Piloten25: 1 trade?
03:58:36 Returnjosh: Youve been trying all night
03:58:42 Shunakoma: Nah g I need to trade them up
03:58:46 SesAvci: Yes pls
03:58:46 abc def fgh: Can anyone lend me 5k nats
03:58:46 ofl86k2k1cu: 2 trade
03:58:46 Succubus Imp: Wtb Rune 2hander
03:58:48 Shunakoma: Red a gilded pic
03:58:54 BigHMoe099: Someone help me get a bond pls
03:58:55 Shunakoma: Need a gilded pic axe
03:59:01 Returnjosh: I dont think anyone has one
03:59:04 Returnjosh: Too rare
03:59:06 whosTK: 1 i had one
03:59:06 SesAvci: Tysm <lt>3
03:59:09 Shunakoma: I need cash
03:59:11 Piloten25: Np
03:59:12 Shunakoma: And il buy it
03:59:13 whosTK: I had a pic
03:59:18 BigHMoe099: Can anyone help me get a bond pls
03:59:21 ofl86k2k1cu: Accepting donations
03:59:21 maxwestsideg: You want only 20k right?
03:59:23 Returnjosh: Oh they are like 4.5m
03:59:24 ofl86k2k1cu: Acceting donatiosn
03:59:25 whosTK: Got given it for free
03:59:28 Shunakoma: Yh
03:59:30 ofl86k2k1cu: Need donations please
03:59:33 Shunakoma: So I'm 3mill off
03:59:34 Puretheif310: Yes
03:59:36 Returnjosh: Maybe i f u gimme an armor set
03:59:41 whosTK: Got given that and few bonds
03:59:41 Returnjosh: Karma will be good
03:59:43 maxwestsideg: 20k is not mucch so i will give it to you
03:59:43 Puretheif310: O +
03:59:51 ofl86k2k1cu: Need 10k please
03:59:56 ofl86k2k1cu: Need 10k please
03:59:57 BigHMoe099: Someone help me get a bond pls
04:00:00 ofl86k2k1cu: Anyone able to help
04:00:00 Puretheif310: No hablo mucho ingles
04:00:13 maxwestsideg: Here you go
04:00:15 ofl86k2k1cu: Need help 10k please
04:00:16 Puretheif310: Thanks
04:00:19 maxwestsideg: No problem
04:00:22 S0RCERERKING: U need help?
04:00:22 ofl86k2k1cu: Needing help 10k please
04:00:28 Returnjosh: Like it can be one of the inexpensive ones
04:00:28 Puretheif310: I loviu
04:00:28 BigHMoe099: Yes pls lol
04:00:30 Returnjosh: I just want to match
04:00:34 maxwestsideg: I love you too
04:00:36 BigHMoe099: Tired of f2p
04:00:38 Shunakoma: Trading up
04:00:39 whosTK: What u got
04:00:41 ofl86k2k1cu: Need 10k donations please
04:00:54 Shunakoma: Help plz
04:00:57 S0RCERERKING: Cool, go play the game and make some money
04:00:58 Returnjosh: Tk will u buy me an armor set that he has
04:01:00 Shunakoma: Need a gilded pic
04:01:04 Shunakoma: Can you help me get 1?
04:01:09 BigHMoe099: Thanks for advice lol
04:01:19 Returnjosh: Wow thanks
04:01:22 BigHMoe099: Can someone help me get a bind pls
04:01:24 whosTK: <lt>3
04:01:25 Shunakoma: ?
04:01:31 Shunakoma: Any for me plz love
04:01:33 whosTK: He needed armour
04:01:36 whosTK: You have
04:01:39 Shunakoma: I really need a gilded pic axe
04:01:40 McBushes: W0000
04:01:40 whosTK: He wanted what you have
04:01:43 Shunakoma: He had srmour
04:01:44 Blackdead16: How much u got
04:01:47 McBushes: Awhh
04:01:51 BigHMoe099: A mill
04:01:53 whosTK: Now he has more
04:01:58 Returnjosh: :)
04:02:00 whosTK: Cause he asked nicely
04:02:00 Returnjosh: <lt>3 ty
04:02:01 Blackdead16: Mage lvl
04:02:03 whosTK: Manners goes along way
04:02:11 Shunakoma: Did I not say please or something
04:02:19 BigHMoe099: 27
04:02:24 whosTK: Vibes are off
04:02:24 Mzxs1: 8m need me guys
04:02:30 whosTK: Bye
04:02:31 Shunakoma: Yh for real
04:02:31 Mzxs1: 8m ened
04:02:33 McBushes: Awhh
04:02:36 Blackdead16: Train to 55 and go high alch
04:02:38 McBushes: Im out
04:02:40 Shunakoma: Wow
04:02:40 Longdickky: Yo
04:02:43 BigHMoe099: Yea but issue is
04:02:49 whosTK: Welcome back
04:02:50 BigHMoe099: Runes are so expensive
04:02:51 Mzxs1: 8m need me gusy
04:02:55 Shunakoma: Hi
04:02:58 Shunakoma: You ok
04:03:02 whosTK: Always
04:03:08 Shunakoma: Nice to meet you
04:03:08 BigHMoe099: Death and chaos runes so expensive
04:03:10 Blackdead16: Use a staff and a lower sspl
04:03:12 whosTK: Likewise
04:03:16 humajutt jan: !S
04:03:18 Shunakoma: How's your day
04:03:25 whosTK: Swell
04:03:26 Legolas NZ: Will pay 4 people 25k each if they follow me an join my clan
04:03:28 McBushes: Here comes the lure
04:03:31 Blackdead16: Till 55 want take long
04:03:40 Shunakoma: Morning or night for you
04:03:42 BigHMoe099: How much can high alchemy make
04:03:48 Blackdead16: Got like ²20m now
04:03:54 whosTK: Night
04:03:57 BigHMoe099: Wth
04:04:02 humajutt jan: Thanks
04:04:04 Blackdead16: On 200 kites 100k profit
04:04:04 BigHMoe099: How long did that take u
04:04:11 Shunakoma: Ga'day mate
04:04:17 whosTK: Huh?
04:04:27 Shunakoma: Thought you was Australian hahaha
04:04:30 BigHMoe099: 200 rune kites?
04:04:36 whosTK: Why?
04:04:42 Mzxs1: 8m ened
04:04:42 Shunakoma: Time zone
04:04:45 Blackdead16: Yes or ²200 full helms
04:04:50 And goodbye: It is positively popping I here
04:04:51 Calistar99: Anyone have any extra runes i can have ?
04:04:51 BigHMoe099: Gah dayum
04:04:55 BigHMoe099: Wth
04:05:15 S0RCERERKING: Good money
04:05:18 Shunakoma: Can I ask you for a favour my new friend Tk
04:05:20 BigHMoe099: Wait 200 runs kites
04:05:23 And goodbye: Trivia game!! Win Gppp
04:05:25 whosTK: Depends on what it is
04:05:27 BigHMoe099: U make 100k?
04:05:30 And goodbye: Trivia game win gp
04:05:32 Calistar99: Could someone help me with runes please
04:05:34 SPAESATO: I list 250k
04:05:37 Blackdead16: Yes cost 15 min
04:05:40 SPAESATO: Scammer
04:05:45 BigHMoe099: Dang wth
04:05:46 Shunakoma: It would be to help me toward getting a gilded pic axe my good friend
04:05:49 Mzxs1: 7m need
04:06:03 S0RCERERKING: 1200 Alcs/ hr approx 300 go ea.
04:06:03 BigHMoe099: I'm boutta try leveling up magic
04:06:03 whosTK: You should high alch
04:06:08 And goodbye: Trivia game 500k total
04:06:11 Tobi994: For what
04:06:15 Shunakoma: I tried but I can never buy items
04:06:19 Shunakoma: Even with over paying
04:06:28 Shunakoma: And over pay to much it ain't proper ctsboe lol
04:06:33 Shunakoma: Profitable
04:06:45 And goodbye: Trivia game here 500k!!!
04:06:49 whosTK: Profit is profit
04:06:52 humajutt jan: Tyytytyytyt
04:06:56 whosTK: Gotta keep at it
04:07:03 Shunakoma: Yh it don't let me buy the items
04:07:11 whosTK: There's a time limit
04:07:12 Shunakoma: Even if it made 10gp each I would do it
04:07:13 whosTK: Gotta get creative
04:07:18 Shunakoma: What's the time limit?
04:07:24 Duhkilluh: ?
04:07:24 whosTK: Depends on the item
04:07:32 Shunakoma: Rune?
04:07:37 whosTK: Anything
04:07:46 whosTK: Some have time limit
04:07:46 ShivaRio: 500k trivia game
04:07:49 whosTK: Some don't
04:07:49 And goodbye: 500k trivia fall in all poors
04:07:51 Shunakoma: Ok
04:08:07 whosTK: Gotta use rune lite and do research
04:08:16 whosTK: Cause it can change, but there's always profit
04:08:22 Shunakoma: Ok thanks
04:08:26 Shunakoma: Would you help me ?
04:08:27 whosTK: Np
04:08:45 whosTK: There's a clan chat but not sure what its called
04:08:55 whosTK: Im a noob
16:58:03 Mot Ponsta: Gz noob
21:29:31 Cow Father: Gzzz
21:29:37 Cow Father: Wc
21:59:56 BTW AlDS: Smashing!
22:15:24 <img=2>BTW AlDS: .//imagine
23:19:39 <img=41>IronMuffn: G
12:27:19 CitySound: Saving for runite scim anything helps <lt>3
12:27:50 CitySound: Saving for runite scim anything helps
12:28:16 Jade927: Well ashes dude
12:28:19 CitySound: Saving for runite scim anything helps
12:28:22 Jade927: Sell
12:28:42 blikje coki: These fking beggers
12:28:52 CitySound: Saving for rune scim anything helps
12:29:28 CitySound: Saving for runite scim anything helps
12:29:29 mahiyo11: Hey
12:29:52 mahiyo11: Hey dear
12:29:53 CitySound: Saving for runite scim anything helps
12:30:10 mahiyo11: Bro
12:30:10 power mag3r: Yo
12:30:19 mahiyo11: I need help
12:30:23 power mag3r: With
12:30:31 mahiyo11: Gold
12:30:34 CitySound: Saving for runite scim anything helps ty <lt>3
12:30:50 mahiyo11: Just 500k
12:30:56 power mag3r: Lol
12:31:01 elhombreyeso: Xddddd
12:31:05 blikje coki: Stop begging for gold losers
12:31:07 power mag3r: I'm save it for bond
12:31:16 CitySound: Ok
12:31:16 power mag3r: Have some wine
12:31:22 blikje coki: Just earn ur own
12:31:26 CitySound: Ok
12:31:39 CitySound: Saving for runite scim anything helps
12:31:41 mahiyo11: Thanks dear brother
12:31:47 Jade927: A full inventory of ashes is almost 5000 gp
12:31:56 elhombreyeso: No way
12:32:10 Jade927: Yes
12:32:13 CitySound: Saving for runite scim anything helps
12:32:31 blikje coki: This is just a bot spamming though
12:32:35 Jade927: I just made 50k off these guys and their fires
12:32:38 CitySound: No it's not
12:32:43 CitySound: I'm on mobile lol
12:32:45 mahiyo11: But this so thanks dear heart
12:32:47 blikje coki: Lol
12:32:55 mahiyo11: Love you brothar
12:33:05 power mag3r: Np should help
12:33:06 blikje coki: U can pick up ashes on mobile too
12:33:11 CitySound: I'm trying to get a scimmy to train str
12:33:14 CitySound: I am
12:33:32 blikje coki: Ull be there soon
12:44:44 sipwell: Hi
12:44:55 sipwell: <lt>hi scemmys
12:45:01 Scemmys: Hey there
12:45:07 sipwell: How are you ?
12:48:52 sipwell: 12
12:49:01 sipwell: Just got back to it
12:49:11 Detoxy: At 27 u can use lvl 2 enchant
12:49:11 sipwell: Dont got much money
12:49:16 sipwell: Or runes
12:53:55 Pker595: Finally 2277 ttl, hosting a huge party up to 523mil, Y0tube - Zet9158
12:54:35 Roxicotten: Taco please.
12:58:07 Final Hit077: Finally 2277 ttl, hosting a huge party up to 524mil, Y0tube - Zet9158
13:09:58 schwamalam: This is my nightmare lol
13:13:47 Crumby Brad: Ive gotten a few for 100k+
13:13:54 2Cue: Good shit
14:32:16 <img=2>Hrathi: Gzzz
16:09:00 baragouda: Anyone know
16:09:15 baragouda: ?
16:09:40 baragouda: Does anyone know wher i can find rogues
16:09:57 GlizzySlurpn: Go ask chatgpt
16:10:00 GlizzySlurpn: Noob
16:23:23 BTW AlDS: Smashing!
17:24:46 Godss Dangle: Hahah
17:49:25 Hi Anxiety: Well this is a convenient spot
17:56:29 <img=41>TiltedIguana: Lol
17:56:31 <img=41>TiltedIguana: I just noticed
18:12:24 Gatorz24: Lol
18:29:22 <img=2>I M Camry: 11111111111
18:31:57 <img=2>I M Camry: 111
18:39:16 <img=2>I M Camry: 11111111111
18:41:18 <img=2>I M Camry: 1111
18:41:51 <img=2>I M Camry: 111
18:43:25 <img=2>I M Camry: 1111111111111111111
18:44:42 <img=2>I M Camry: 1111111111111111
19:02:56 F33l m3 up: 1111111111111
19:13:39 BTW AlDS: Smashing!
17:11:24 poon fisher: !Lvl mining
17:21:18 SteamAdvent: P
18:21:51 C Engineer: Level up: completed.
18:22:26 <img=2>BTW AlDS: !Ttm
|
Jelloleaf/gotr
|
[
"region:us"
] |
2023-02-16T23:53:11+00:00
|
{}
|
2023-02-16T23:56:38+00:00
|
96dfd7bd75d769f70e7e6ff1b84464fe432d4eda
|
Joe02/satou_kuuki_refs
|
[
"license:other",
"region:us"
] |
2023-02-17T00:19:07+00:00
|
{"license": "other"}
|
2023-02-17T00:19:19+00:00
|
|
ea7c6e29c1c28f8d09674ec22fe11a7dfcbf541c
|
# Birds of Australia
As described on RAWPIXEL:
Considered the “Father of bird study in Australia”, John Gould (1804–1881) produced some of the most celebrated publications on ornithology worldwide. His book "Birds of Australia" (1840–1848), illustrated by his wife, Elizabeth Gould (1804–1841), introduced more than 300 new birds to the world. His work also contributed to Charles Darwin’s much revered book ‘On the Origin of Species’. Available under the Creative Commons 0 license.
Created from CC-0 files on RawPixel.com
Image files can either be downloaded with your own script using the direct URL column, or read from the image data saved directly in the image column.
<https://www.rawpixel.com/search?page=1&sort=curated&tags=%24thebirdsofaustralia&topic=%24thebirdsofaustralia&topic_group=%24publicdomain>
Parquet file created here: <https://github.com/mediocreatmybest/gaslightingeveryone/blob/main/tools/images2parq.py>
File can also be extracted from here: <https://github.com/mediocreatmybest/gaslightingeveryone/blob/main/tools/parq2folder.py>
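For example, a minimal sketch for reading the parquet file with pandas; the file name and exact column names below are assumptions, so inspect `df.columns` first.
```python
# Minimal sketch for reading the parquet file. The file name
# "birds_of_australia.parquet" and the column name "image" are
# assumptions; print df.columns first to confirm the real schema.
import io

import pandas as pd
from PIL import Image

df = pd.read_parquet("birds_of_australia.parquet")
print(df.columns.tolist())

# Assumed: the "image" column holds raw image bytes.
img = Image.open(io.BytesIO(df.iloc[0]["image"]))
img.save("first_plate.jpg")
```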
|
Mediocreatmybest/John_Gould_Birds_of_Australia
|
[
"language:en",
"license:cc0-1.0",
"region:us"
] |
2023-02-17T01:58:01+00:00
|
{"language": ["en"], "license": "cc0-1.0"}
|
2023-02-25T10:50:57+00:00
|
a27dd8c5395c3c899a9f75c1a65fc44f87a26939
|
# Do what you will with the data - these are old photos of crafts I used to make. Just abide by the license above and you're good to go!
|
Capsekai/Badge_crafts
|
[
"task_categories:text-to-image",
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"license:creativeml-openrail-m",
"badges",
"crafts",
"region:us"
] |
2023-02-17T02:57:41+00:00
|
{"language": ["en"], "license": "creativeml-openrail-m", "size_categories": ["1K<n<10K"], "task_categories": ["text-to-image", "image-classification"], "pretty_name": "Badge Craft Dataset", "tags": ["badges", "crafts"]}
|
2023-02-26T10:34:30+00:00
|
665fe0ced5dc3fb2d4be0750edd3f308c9910ba4
|
kn568/ussupremecourt_75cases
|
[
"license:cc-by-nc-4.0",
"region:us"
] |
2023-02-17T03:10:43+00:00
|
{"license": "cc-by-nc-4.0"}
|
2023-02-17T03:11:35+00:00
|
|
1a2c4250e19f042e4e2655d386ee4bb004790c3c
|
A dataset of AI-generated images or images modified from them.
Products using this dataset:
- [empty-eyes-LoRAs](https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs)
|
xenon3134-mc/empty-eyes-dataset
|
[
"size_categories:n<1K",
"license:mit",
"region:us"
] |
2023-02-17T03:17:51+00:00
|
{"license": "mit", "size_categories": ["n<1K"]}
|
2023-02-17T03:46:02+00:00
|
6f2bec3903eea34c0302fe4427b7fe200af5e954
|
# Dataset Card for "pile"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lsb/pile
|
[
"region:us"
] |
2023-02-17T03:26:26+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "meta", "struct": [{"name": "pile_set_name", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1311748175503, "num_examples": 210607728}, {"name": "validation", "num_bytes": 1348824258, "num_examples": 214670}, {"name": "test", "num_bytes": 1317125199, "num_examples": 214584}], "download_size": 539336008819, "dataset_size": 1314414124960}}
|
2023-02-18T10:00:39+00:00
|
3a67e1f11995589f4c00b67eaef4caa12c740ade
|
<div style='background: #ffeec0; border: 1px solid #ffd86d; padding:1em; border-radius:3px;'>
<h3 style='margin:0'>Outdated!</h3>
<p style='margin:0'>This dataset has been superseded by:</p>
<p style='margin:0'><a style="font-size: 2em;" href='https://huggingface.co/datasets/hearmeneigh/e621-rising-v3-curated'>E621 Rising V3 Curated Image Dataset</a></p>
</div>
**Warning: THIS dataset is NOT suitable for use by minors. The dataset contains X-rated/NSFW content.**
# E621 Rising: Curated Image Dataset v2
**285,466** images (~125GB) downloaded from `e621.net` with [tags](https://huggingface.co/datasets/hearmeneigh/e621-rising-v2-curated/raw/main/meta/tag-counts.by-name.json).
This is a curated dataset, picked from the E621 Rising: Raw Image Dataset v2 [available here](https://huggingface.co/datasets/hearmeneigh/e621-rising-v2-raw).
## Image Processing
* Only `jpg` and `png` images were considered
* Image width and height have been clamped to `(0, 4096]px`; larger images have been resized to meet the limit
* Alpha channels have been removed
* All images have been converted to `jpg` format
* All images have been converted to TrueColor `RGB`
* All images have been verified to load with `Pillow`
* Metadata from E621 is [available here](https://huggingface.co/datasets/hearmeneigh/e621-rising-v2-raw/tree/main/meta)
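As a rough illustration, the steps above correspond approximately to the following Pillow sketch; this is an assumption-laden reconstruction, not the script actually used to build the dataset.
```python
# Rough illustration of the preprocessing described above; a sketch,
# not the actual build script.
from PIL import Image

MAX_SIDE = 4096

def process_image(path_in: str, path_out: str) -> None:
    with Image.open(path_in) as im:
        im.load()  # verify the file actually decodes
        # Clamp width/height to (0, 4096]px, preserving aspect ratio.
        if max(im.size) > MAX_SIDE:
            im.thumbnail((MAX_SIDE, MAX_SIDE))
        # Remove any alpha channel and force TrueColor RGB.
        im = im.convert("RGB")
        # Save everything in jpg format.
        im.save(path_out, format="JPEG")

process_image("example.png", "example.jpg")
```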
## Tags
Comprehensive list of tags and counts:
* [By name](https://huggingface.co/datasets/hearmeneigh/e621-rising-v2-curated/raw/main/meta/tag-counts.by-name.json)
* [By count](https://huggingface.co/datasets/hearmeneigh/e621-rising-v2-curated/raw/main/meta/tag-counts.by-count.json)
### Changes From E621
* Tag names have been rewritten to `[a-z0-9_]` or `<category>:[a-z0-9_]`, e.g.
* `digital_media_(artwork)` => `meta:digital_media_artwork`
* `half-closed_eyes` => `halfclosed_eyes`
* Symbols have been prefixed with `symbol:`, e.g. `symbol:<3`
* Aspect ratio has been prefixed with `aspect_ratio:`, e.g. `aspect_ratio:16_9`
* All categories except `general` have been prefixed with the category name, e.g. `artist:somename`. The categories are:
* `artist`
* `copyright`
* `character`
* `species`
* `invalid`
* `meta`
* `lore`
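A minimal sketch of the general rewriting rule is shown below; symbol and aspect-ratio tags use their own prefix rules, and the real build script may differ in edge cases.
```python
# Sketch of the general tag rewrite to [a-z0-9_] / <category>:[a-z0-9_];
# symbol: and aspect_ratio: tags are handled separately, and the actual
# script used for the dataset may differ in edge cases.
import re

def rewrite_tag(tag: str, category: str = "general") -> str:
    t = re.sub(r"[^a-z0-9_]", "", tag.lower().replace("-", ""))
    return t if category == "general" else f"{category}:{t}"

print(rewrite_tag("digital_media_(artwork)", "meta"))  # meta:digital_media_artwork
print(rewrite_tag("half-closed_eyes"))                 # halfclosed_eyes
```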
### Additional Tags
* Image rating
* `rating:explicit`
* `rating:questionable`
* `rating:safe`
|
hearmeneigh/e621-rising-v2-curated
|
[
"size_categories:100K<n<1M",
"furry",
"anthro",
"nsfw",
"e621",
"not-for-all-audiences",
"region:us"
] |
2023-02-17T04:43:33+00:00
|
{"size_categories": ["100K<n<1M"], "pretty_name": "E621 Rising: Curated Image Dataset v2", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 135370373465.422, "num_examples": 285466}], "download_size": 133991087241, "dataset_size": 135370373465.422}, "viewer": false, "tags": ["furry", "anthro", "nsfw", "e621", "not-for-all-audiences"]}
|
2023-10-09T17:56:52+00:00
|
0243f2e6d606a615ccc68744869108d4de27d869
|
# Dataset Card for "wikipedia.reorder.natural.pl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lshowway/wikipedia.reorder.natural.pl
|
[
"region:us"
] |
2023-02-17T08:27:57+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1958124685, "num_examples": 1772445}], "download_size": 523553918, "dataset_size": 1958124685}}
|
2023-02-17T12:00:27+00:00
|
ee478e9afd9139966912c12161b4597124dd3349
|
# Dataset Card for "augment_train_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
MohammedNasri/augment_train_dataset
|
[
"region:us"
] |
2023-02-17T09:47:16+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8532425445.0, "num_examples": 81760}, {"name": "eval", "num_bytes": 304561718.0, "num_examples": 10440}], "download_size": 8179433148, "dataset_size": 8836987163.0}}
|
2023-02-17T09:53:57+00:00
|
49bad0c575d4f8a9de6a51afff3484651a582567
|
# Dataset Card for "class_dataset_real_donut"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LFBMS/class_dataset_real_donut
|
[
"region:us"
] |
2023-02-17T09:51:22+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "bilanz_h", "1": "bilanz_v", "2": "guv", "3": "kontennachweis_bilanz", "4": "kontennachweis_guv", "5": "other", "6": "text"}}}}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 327762478.0, "num_examples": 1117}, {"name": "test", "num_bytes": 99667843.0, "num_examples": 280}], "download_size": 400428133, "dataset_size": 427430321.0}}
|
2023-02-17T09:51:49+00:00
|
c3e3f423063e1c822b25d5c677370991b612ead7
|
# Dataset Card for "class_dataset_real2_donut"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LFBMS/class_dataset_real2_donut
|
[
"region:us"
] |
2023-02-17T10:00:13+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "bilanz_h", "1": "bilanz_v", "2": "guv", "3": "kontennachweis_bilanz", "4": "kontennachweis_guv", "5": "other"}}}}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 340313532.0, "num_examples": 1117}, {"name": "test", "num_bytes": 87116926.0, "num_examples": 280}], "download_size": 400625159, "dataset_size": 427430458.0}}
|
2023-02-17T10:00:38+00:00
|
d794c4fec04839020f54b071d3871f17e944638f
|
An imitation learning environment for the atari_pong environment, sampled from the policy atari_2B_atari_pong_1111.
This environment was created as part of the Generally Intelligent Agents (GIA) project: https://github.com/huggingface/gia
|
edbeeching/prj_gia_dataset_atari_2B_atari_pong_1111
|
[
"deep-reinforcement-learning",
"reinforcement-learning",
"gia",
"multi-task",
"multi-modal",
"imitation-learning",
"offline-reinforcement-learning",
"region:us"
] |
2023-02-17T10:04:25+00:00
|
{"library_name": "gia", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "gia", "multi-task", "multi-modal", "imitation-learning", "offline-reinforcement-learning"]}
|
2023-02-21T17:07:48+00:00
|
b395321beb1b0eb4305283b7657248c794c68916
|
# Dataset Card for "class_dataset_real3_donut"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LFBMS/class_dataset_real3_donut
|
[
"region:us"
] |
2023-02-17T10:14:22+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "bilanz", "1": "guv", "2": "kontennachweis_bilanz", "3": "kontennachweis_guv", "4": "other"}}}}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 327835672.0, "num_examples": 1117}, {"name": "test", "num_bytes": 99594248.0, "num_examples": 280}], "download_size": 400602803, "dataset_size": 427429920.0}}
|
2023-02-17T10:14:49+00:00
|
2ca3012ef85a60143a8f97b83a45bb1a7b5c2244
|
# VGGSound
VGG-Sound is an audio-visual correspondent dataset consisting of short clips of audio sounds, extracted from videos uploaded to YouTube.
- **Homepage:** https://www.robots.ox.ac.uk/~vgg/data/vggsound/
- **Paper:** https://arxiv.org/abs/2004.14368
- **Github:** https://github.com/hche11/VGGSound
## Analysis
- **310+ classes:** VGG-Sound contains audio clips spanning a large number of challenging acoustic environments and noise characteristics of real applications.
- **200,000+ videos:** All videos are captured "in the wild" with audio-visual correspondence in the sense that the sound source is visually evident.
- **550+ hours:** VGG-Sound consists of both audio and video. Each segment is 10 seconds long.

## Download
We provide a csv file. For each YouTube video, we provide the YouTube URL, time stamps, audio label, and train/test split. Each line in the csv file has the following columns:
```
# YouTube ID, start seconds, label, train/test split.
```
You can download VGGSound directly from this [repository](https://huggingface.co/datasets/Loie/VGGSound/tree/main).
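A minimal sketch for reading the csv into splits; the filename `vggsound.csv` is an assumption about what you downloaded:
```python
import csv

# A minimal sketch for reading the VGGSound csv described above; the filename
# "vggsound.csv" is an assumption -- adjust it to the file you downloaded.
train, test = [], []
with open("vggsound.csv", newline="") as f:
    for youtube_id, start_seconds, label, split in csv.reader(f):
        clip = {"youtube_id": youtube_id, "start": int(start_seconds), "label": label}
        (train if split == "train" else test).append(clip)

print(f"{len(train)} train clips, {len(test)} test clips")
```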
## License
The VGG-Sound dataset is available to download for commercial/research purposes under a Creative Commons Attribution 4.0 International License. The copyright remains with the original owners of the video. A complete version of the license can be found [here](https://thor.robots.ox.ac.uk/datasets/vggsound/license_vggsound.txt).
## Citation
Please cite the following if you make use of the dataset.
```
@InProceedings{Chen20,
author = "Honglie Chen and Weidi Xie and Andrea Vedaldi and Andrew Zisserman",
title = "VGGSound: A Large-scale Audio-Visual Dataset",
booktitle = "International Conference on Acoustics, Speech, and Signal Processing (ICASSP)",
year = "2020",
}
```
|
Loie/VGGSound
|
[
"task_categories:audio-classification",
"size_categories:100B<n<1T",
"arxiv:2004.14368",
"region:us"
] |
2023-02-17T10:27:55+00:00
|
{"size_categories": ["100B<n<1T"], "task_categories": ["audio-classification"]}
|
2023-03-26T12:25:40+00:00
|
6104284646a7a83f493f1825830c4b13f751ea2a
|
# Dataset Summary
In 2022, several changes were made to the annotation procedure used in the WMT Translation task. In contrast to the standard DA (sliding scale from 0-100) used in previous years, in 2022 annotators performed DA+SQM (Direct Assessment + Scalar Quality Metric). In DA+SQM, the annotators still provide a raw score between 0 and 100, but are also presented with seven labeled tick marks. DA+SQM helps to stabilize scores across annotators (as compared to DA).
The data is organised into 9 columns:
- lp: language pair
- src: input text
- mt: translation
- ref: reference translation
- score: direct assessment
- system: MT engine that produced the `mt`
- annotators: number of annotators
- domain: domain of the input text (e.g. news)
- year: collection year
You can also find the original data [here](https://www.statmt.org/wmt22/results.html).
## Python usage:
```python
from datasets import load_dataset
dataset = load_dataset("RicardoRei/wmt-sqm-human-evaluation", split="train")
```
There is no standard train/test split for this dataset, but you can easily split it according to year, language pair, or domain, e.g.:
```python
# split by year
data = dataset.filter(lambda example: example["year"] == 2022)
# split by LP
data = dataset.filter(lambda example: example["lp"] == "en-de")
# split by domain
data = dataset.filter(lambda example: example["domain"] == "news")
```
Note that, so far, all data is from the [2022 General Translation task](https://www.statmt.org/wmt22/translation-task.html).
## Citation Information
If you use this data please cite the WMT findings:
- [Findings of the 2022 Conference on Machine Translation (WMT22)](https://aclanthology.org/2022.wmt-1.1.pdf)
|
RicardoRei/wmt-sqm-human-evaluation
|
[
"size_categories:1M<n<10M",
"language:cs",
"language:de",
"language:en",
"language:hr",
"language:ja",
"language:liv",
"language:ru",
"language:sah",
"language:uk",
"language:zh",
"license:apache-2.0",
"mt-evaluation",
"WMT",
"12-lang-pairs",
"region:us"
] |
2023-02-17T10:42:46+00:00
|
{"language": ["cs", "de", "en", "hr", "ja", "liv", "ru", "sah", "uk", "zh"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "tags": ["mt-evaluation", "WMT", "12-lang-pairs"]}
|
2023-02-17T11:10:39+00:00
|
61c12eca3fc3748f1473bf5350037171782686da
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[mpii](http://human-pose.mpi-inf.mpg.de/)
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
HighCWu/mpii_100_openpose
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"language:en",
"license:bsd",
"region:us"
] |
2023-02-17T10:45:11+00:00
|
{"language": ["en"], "license": "bsd", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "guide", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51273540, "num_examples": 100}], "download_size": 49905504, "dataset_size": 51273540}}
|
2023-02-17T10:54:59+00:00
|
de82c2a34623438152d6260c3218c5a2db1a8382
|
mimbres/testset
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-17T11:19:06+00:00
|
{"license": "apache-2.0"}
|
2023-02-17T11:19:06+00:00
|
|
fe1ded4cdee0afb20020d59b5146c7643de2571e
|
toto10/edogos
|
[
"license:openrail",
"doi:10.57967/hf/0378",
"region:us"
] |
2023-02-17T11:45:00+00:00
|
{"license": "openrail"}
|
2023-02-17T12:01:17+00:00
|
|
988afa241a2743b0c2fb4fbfd32ad2fa2e92a2e4
|
DEEPBIND v0.11
--------------
The deepbind command-line executable can be used to score DNA/RNA sequences
according to any RBP/TF model listed in the DeepBind web repository:
http://tools.genes.toronto.edu/deepbind
For each input sequence, the deepbind executable scores each subsequence
of a pre-determined length (e.g. 20) and returns only the maximum or the
average over these per-position scores.
Larger scores indicate stronger binding. The scores themselves are on an
arbitrary scale, and vary from model to model due to variation in the
quality of training data for different proteins.
EXAMPLE
-------
To generate predictions with DeepBind, you need two things:
1) a list of model IDs, and
2) a list of DNA/RNA sequences.
The file example.ids contains 4 example model IDs, one
on each line, reproduced here:
* D00210.001 # RBFOX1 (RNAcompete)
* D00120.001 # MBNL1 (RNAcompete)
* D00410.003 # GATA3 (SELEX)
* D00328.003 # CTCF (SELEX)
The file example.seq contains 4 example sequences, chosen such
that the nth sequence scores highly for the nth model; it is
reproduced here:
* AGGUAAUAAUUUGCAUGAAAUAACUUGGAGAGGAUAGC
* AGACAGAGCUUCCAUCAGCGCUAGCAGCAGAGACCAUU
* GAGGTTACGCGGCAAGATAA
* TACCACTAGGGGGCGCCACC
To generate 16 predictions (4 models, 4 sequences), run
the deepbind executable as follows:
% deepbind example.ids < example.seq
|D00210.001| D00120.001| D00410.003| D00328.003|
| :----:| :----: | :----: |:----: |
| 7.451420 | -0.166146 | -0.408751| -0.026180|
| -0.155398 | 4.113817 | 0.516956| -0.248167|
| -0.140683 | 0.181295 | 5.885349| -0.026180|
| -0.174985 | -0.152521 | -0.379695| 17.682623|
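If you prefer to drive the executable from Python, the following hedged sketch mirrors the invocation above; it assumes `deepbind` is on your PATH and that the output is a whitespace-separated table with a single header line, as shown:
```python
import subprocess

# A hedged sketch wrapping the deepbind CLI shown above. It assumes "deepbind"
# is on PATH and that stdout is a whitespace-separated table with one header line.
def deepbind_scores(ids_file, seq_file):
    with open(seq_file) as seqs:
        result = subprocess.run(
            ["deepbind", ids_file],
            stdin=seqs, capture_output=True, text=True, check=True,
        )
    lines = result.stdout.strip().splitlines()
    return [[float(x) for x in line.split()] for line in lines[1:]]  # skip header

scores = deepbind_scores("example.ids", "example.seq")  # 4x4 matrix of scores
```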
To see details of each ID, use the --dump-info flag:
% deepbind --dump-info example.ids
| ID | Protein | Type | Species | Family | Experiment |
| :----: | :----: | :----: | :----: | :----: | :----: |
| D00210.001 |RBFOX1 |RBP |Homo sapiens |RRM |RNAcompete |
| D00120.001 |MBNL1 |RBP |Homo sapiens |Znf |RNAcompete |
| D00410.003 |GATA3 |TF |Homo sapiens |GATA |SELEX |
| D00328.003 |CTCF |TF |Homo sapiens |C2H2 ZF |SELEX |
CHANGES v0.1 -> v0.11
---------------------
- Fixed bug where last position in input sequence was
not evaluated for a score; suggested by Irene Kaplow.
- Added --window-size and --average flags based on feedback.
|
thewall/DeepBindWeight
|
[
"license:openrail",
"region:us"
] |
2023-02-17T11:56:41+00:00
|
{"license": "openrail"}
|
2023-04-18T08:28:48+00:00
|
555da8cef2a33698b8779e4b3389c0a4958d68a5
|
# Dataset Card for "wikipedia.reorder.svo.pl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lshowway/wikipedia.reorder.svo.pl
|
[
"region:us"
] |
2023-02-17T12:01:28+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1958124685, "num_examples": 1772445}], "download_size": 546155672, "dataset_size": 1958124685}}
|
2023-02-17T12:02:16+00:00
|
879a22f6fc5c157b6dc1c70b23f0148dba5140e9
|
# Dataset Card for "wikipedia.reorder.vos.pl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lshowway/wikipedia.reorder.vos.pl
|
[
"region:us"
] |
2023-02-17T12:03:12+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1958124685, "num_examples": 1772445}], "download_size": 548528129, "dataset_size": 1958124685}}
|
2023-02-17T12:04:00+00:00
|
d117136d39daf9cabb078a697f8510eed4e5d02e
|
# Dataset Card for "wikipedia.reorder.osv.pl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lshowway/wikipedia.reorder.osv.pl
|
[
"region:us"
] |
2023-02-17T12:04:54+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1958124685, "num_examples": 1772445}], "download_size": 548655232, "dataset_size": 1958124685}}
|
2023-02-17T12:05:42+00:00
|
883551164f43a3e79dcea520b62279987e562438
|
# Dataset Card for "wikipedia.reorder.sov.pl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lshowway/wikipedia.reorder.sov.pl
|
[
"region:us"
] |
2023-02-17T12:06:34+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1958124685, "num_examples": 1772445}], "download_size": 549518463, "dataset_size": 1958124685}}
|
2023-02-17T12:07:22+00:00
|
fec0ef9498c2c9d73a18cf13f8c61d5e2bdf9bd1
|
# Dataset Card for "wikipedia.reorder.vso.pl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lshowway/wikipedia.reorder.vso.pl
|
[
"region:us"
] |
2023-02-17T12:08:18+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1958124685, "num_examples": 1772445}], "download_size": 546698042, "dataset_size": 1958124685}}
|
2023-02-17T12:09:07+00:00
|
e695c8582575dc316ec4c7aa0c44013845241a66
|
# Dataset Card for "wikipedia.reorder.ovs.pl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lshowway/wikipedia.reorder.ovs.pl
|
[
"region:us"
] |
2023-02-17T12:10:04+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1958124685, "num_examples": 1772445}], "download_size": 547217506, "dataset_size": 1958124685}}
|
2023-02-17T12:10:53+00:00
|
c130c205afb7631713b07ad9758431966a6a2c5f
|
steinhaug/regularization
|
[
"license:mit",
"region:us"
] |
2023-02-17T12:32:10+00:00
|
{"license": "mit"}
|
2023-06-06T14:34:55+00:00
|
|
5fa5090bf70b26e6bee09f4ee6a04c363f24b724
|
# Dataset Card for "miniwobplusplus_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LucasThil/miniwobplusplus_train
|
[
"region:us"
] |
2023-02-17T12:32:47+00:00
|
{"dataset_info": {"features": [{"name": "episodes", "dtype": "string"}, {"name": "actions", "dtype": "string"}, {"name": "refs", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2707524886, "num_examples": 652385}, {"name": "test", "num_bytes": 338733634, "num_examples": 81549}, {"name": "validate", "num_bytes": 339687103, "num_examples": 81548}], "download_size": 607473807, "dataset_size": 3385945623}}
|
2023-02-17T12:40:36+00:00
|
7e0d5fad90f665702280e08718d77275f13bbf82
|
# Field definitions
1. **uci_id**: UniChEM identifier.
2. **chembl_id**: ChEMBL identifier.
3. **molecule_type**: Type of molecule (Small molecule, Protein, Antibody, Oligosaccharide, Oligonucleotide, Cell, Unknown).⁶
4. **alogp**: Calculated ALogP. Ghose-Crippen-Viswanadhan octanol-water partition coefficient (ALogP).¹ ²
5. **aromatic_rings**: number of aromatic rings. Aromatic rings are common structural components of polymers.
6. **cx_logd**: The calculated octanol/water distribution coefficient at pH7.4 using ChemAxon v17.29.0.³
7. **cx_logp**: The calculated octanol/water partition coefficient using ChemAxon v17.29.0.³
8. **cx_most_apka**: The most acidic pKa calculated using ChemAxon v17.29.0.³
9. **cx_most_bpka**: The most basic pKa calculated using ChemAxon v17.29.0.³
10. **full_molformula**: Molecular formula for the full compound (including any salt).⁴
11. **full_mwt**: Molecular weight of the full compound including any salts.⁴
12. **hba**: Number of hydrogen bond acceptors.⁴
13. **hba_lipinski**: Number of hydrogen bond acceptors calculated according to Lipinski's original rules (i.e., N + O count).⁴
14. **hbd**: Number of hydrogen bond donors.⁴
15. **hbd_lipinski**: Number of hydrogen bond donors calculated according to Lipinski's original rules (i.e., NH + OH count).⁴
16. **heavy_atoms**: Number of heavy (non-hydrogen) atoms.⁴
17. **molecular_species**: Indicates whether the compound is an acid/base/neutral.⁵
18. **mw_freebase**: Molecular weight of parent compound.⁴
19. **mw_monoisotopic**: Monoisotopic parent molecular weight.⁴
20. **num_lipinski_ro5_violations**: Number of violations of Lipinski's rule of five using HBA_LIPINSKI and HBD_LIPINSKI counts.⁵
21. **num_ro5_violations**: Number of violations of Lipinski's rule-of-five, using HBA and HBD definitions.⁵
22. **psa**: Polar surface area.⁴
23. **qed_weighted**: Weighted quantitative estimate of drug likeness (as defined by Bickerton et al., Nature Chem 2012).⁴
24. **ro3_pass**: Indicates whether the compound passes the rule-of-three (mw < 300, logP < 3 etc).⁵
25. **rtb**: Number of rotatable bonds.⁴
26. **canonical_smiles**: Canonical SMILES, with no stereochemistry information. Generated using Pipeline Pilot.⁵
27. **standard_inchi**: IUPAC standard InChI for the compound.⁵
28. **standard_inchi_key**: IUPAC standard InChI key for the compound.⁵
29. **natural_product**: Indicates whether the compound is natural product-derived (currently curated only for drugs).⁶
30. **inorganic_flag**: Indicates whether the molecule is inorganic (i.e., containing only metal atoms and <2 carbon atoms).⁶
31. **therapeutic_flag**: Indicates that a drug has a therapeutic application (as opposed to e.g., an imaging agent, additive etc).⁶
32. **biotherapeutic**: A single related resource. Can be either a URI or set of nested resource data.⁶
33. **polymer_flag**: Indicates whether a molecule is a small molecule polymer (e.g., polistyrex).⁶
34. **prodrug**: Indicates that the molecule is a pro-drug (see molecule hierarchy for active component, where known).⁶
35. **kegg_id**: KEGG identifier.
36. **formula**: Molecular formula for the full compound.
37. **exact_mass**: Mass of the compound (from KEGG).
38. **mol_weight**: Mass of a molecule of a substance, based on 12 as the atomic weight of carbon-12.⁸
39. **atom**: An ATOM entry represents a KEGG Atom Type.¹⁰
40. **bond**: A BOND entry is defined as a pair of ATOM entries that form a chemical bond in a molecule, corresponding to many named bonds in organic chemistry and biochemistry.¹⁰
41. **chebi_id**: ChEBI identifier.
42. **definition**: A simple definition of this compound.
43. **mass**: Returns the average mass. The relative masses are calculated from tables of relative atomic masses (atomic weights) published by IUPAC. (from CheBI).⁷
44. **mol**: ChEBI stores the two-dimensional or three-dimensional structural diagrams as connection tables in MDL molfile format.⁷
45. **smiles**: The simplified molecular-input line-entry system (SMILES) is a specification in the form of a line notation for describing the structure of chemical species using short ASCII strings.
46. **inchi**: The International Chemical Identifier (InChI) is a textual identifier for chemical substances, designed to provide a standard way to encode molecular information and to facilitate the search for such information in databases and on the web.
47. **inchi_key**: The InChIKey, sometimes referred to as a hashed InChI, is a fixed length (27 character) condensed digital representation of the InChI that is not human-understandable.
48. **cas_id**: CAS Registry Number. A CAS Registry Number is a unique and unambiguous identifier for a specific substance that allows clear communication and, with the help of CAS scientists, links together all available data and research about that substance.
49. **substance**: Full substance name as recognized by CFSAN (FDA).⁹
50. **regs**: Code of Federal Regulations associated numbers of this compound (FDA).⁹
51. **syns**: Synonyms of the compound (FDA).
52. **used_for**: The physical or technical effect(s) the substance has in or on food; see 21 CFR 170.3(o) for definitions (FDA).⁹
¹ http://chemgps.bmc.uu.se/help/dragonx/GhoseCrippenViswanadhanAlogP.html
² http://www.talete.mi.it/help/dproperties_help/index.html?molecular_properties.htm
³ http://chembl.blogspot.com/2020/03/chembl-26-released.html
⁴ https://micha-protocol.org/glossary/index
⁵ https://www.ebi.ac.uk/chembl/api/data/drug/schema
⁶ https://www.ebi.ac.uk/chembl/api/data/molecule/schema
⁷ http://libchebi.github.io/libChEBI%20API.pdf
⁸ https://www.britannica.com/science/molecular-weight
⁹ https://www.cfsanappsexternal.fda.gov/scripts/fdcc/?set=FoodSubstances&sort=Used_for_Technical_Effect
¹⁰ https://bmcsystbiol.biomedcentral.com/articles/10.1186/1752-0509-7-S6-S2
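A minimal sketch for inspecting these fields with the `datasets` library; the split name `train` is an assumption about this repository's layout:
```python
from datasets import load_dataset

# A minimal sketch; the "train" split name is an assumption about this repo.
compounds = load_dataset("blux-food/compounds", split="train")
print(compounds.column_names)               # e.g. chembl_id, canonical_smiles, ...
print(compounds[0]["standard_inchi_key"])   # inspect one record's InChIKey
```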
|
blux-food/compounds
|
[
"region:us"
] |
2023-02-17T12:42:10+00:00
|
{}
|
2023-05-22T01:32:45+00:00
|
daabe51383f4196e9e8df6171dc74f16a4a96984
|
# Dataset Card for "class_dataset_real_donut_train_val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LFBMS/class_dataset_real_donut_train_val
|
[
"region:us"
] |
2023-02-17T12:54:48+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "bilanz_h", "1": "bilanz_v", "2": "guv", "3": "kontennachweis_bilanz", "4": "kontennachweis_guv", "5": "other", "6": "text"}}}}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 294898200.8863026, "num_examples": 1005}, {"name": "test", "num_bytes": 32864277.113697402, "num_examples": 112}], "download_size": 307756703, "dataset_size": 327762478.0}}
|
2023-02-17T12:54:59+00:00
|
33cbb68734aa1b49688cfe174ea53eb587d35799
|
# Dataset Card for "salvadoran-news-elsalvadorgram"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
justinian336/salvadoran-news-elsalvadorgram
|
[
"region:us"
] |
2023-02-17T13:25:16+00:00
|
{"dataset_info": {"features": [{"name": "image_src", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": {"class_label": {"names": {"0": "Internacional", "1": "Nacional", "2": "Arte y Cultura", "3": "Espect\u00e1culos", "4": "Trends", "5": "Econom\u00eda", "6": "Negocios", "7": "Tips", "8": "Deportes", "9": "Pol\u00edtica", "10": "Cine y TV", "11": "Turismo"}}}}, {"name": "link", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3423673, "num_examples": 1998}], "download_size": 1930392, "dataset_size": 3423673}}
|
2023-06-26T00:24:13+00:00
|
7e0fd00dc883470dc0f962692c03606b39b08abc
|
# Dataset Card for "santacoder-token-usage"
Token usage count per language when tokenizing the `"bigcode/stack-dedup-alt-comments"` dataset with the `santacoder` tokenizer. There are fewer tokens than in the tokenizer because of a vocabulary mismatch between the datasets used to train the tokenizer and the ones that ended up being used to train the model.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
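A minimal sketch, assuming the column names from this repository's metadata (`token`, `Java`, `JavaScript`, `Python`), for listing the tokens used most often in Python code:
```python
from datasets import load_dataset

# A minimal sketch; column names (token, Java, JavaScript, Python) come from
# this repo's metadata, and "train" is the only split listed there.
usage = load_dataset("bigcode/santacoder-token-usage", split="train")
top_python = sorted(usage, key=lambda row: row["Python"], reverse=True)[:10]
for row in top_python:
    print(row["token"], row["Python"])
```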
|
bigcode/santacoder-token-usage
|
[
"region:us"
] |
2023-02-17T14:51:59+00:00
|
{"dataset_info": {"features": [{"name": "token", "dtype": "int64"}, {"name": "Java", "dtype": "int64"}, {"name": "JavaScript", "dtype": "int64"}, {"name": "Python", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1571808, "num_examples": 49119}], "download_size": 1165252, "dataset_size": 1571808}}
|
2023-02-17T14:53:34+00:00
|
47fcb1e525a966b4bc1bd64226d3a0c61b85da8b
|
p1atdev/nijijourney
|
[
"license:cc0-1.0",
"region:us"
] |
2023-02-17T15:19:26+00:00
|
{"license": "cc0-1.0"}
|
2023-02-19T10:03:56+00:00
|
|
b1e1fd0f6afae67b7ed711122cb0059083cf3c21
|
twigwam/fuego-20230217-163523-5ea371
|
[
"fuego",
"region:us"
] |
2023-02-17T15:35:24+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230217-163523-5ea371", "status": "done", "script": "run_glue.py", "requirements_file": "requirements.txt", "space_id": "twigwam/fuego-20230217-163523-5ea371", "space_hardware": "cpu-basic", "github_repo_id": "huggingface/transformers", "github_repo_branch": "main", "github_repo_sha": "a8eb4f79f946c5785f0e91b356ce328248916a05"}}
|
2023-02-17T20:55:14+00:00
|
|
94c4fcbe9a68086362cae1abfda6a4b3ca51379b
|
# Dataset Card for "RSD46-WHU"
## Dataset Description
- **Paper** [Accurate Object Localization in Remote Sensing Images Based on Convolutional Neural Networks](https://ieeexplore.ieee.org/iel7/36/7880748/07827088.pdf)
- **Paper** [High-Resolution Remote Sensing Image Retrieval Based on CNNs from a Dimensional Perspective](https://www.mdpi.com/209338)
- **Split** Validation
## Split Information
This HuggingFace dataset repository contains just the Validation split.
### Licensing Information
[Free for education, research and commercial use.](https://github.com/RSIA-LIESMARS-WHU/RSD46-WHU)
## Citation Information
[Accurate Object Localization in Remote Sensing Images Based on Convolutional Neural Networks](https://ieeexplore.ieee.org/iel7/36/7880748/07827088.pdf)
[High-Resolution Remote Sensing Image Retrieval Based on CNNs from a Dimensional Perspective](https://www.mdpi.com/209338)
```
@article{long2017accurate,
title = {Accurate object localization in remote sensing images based on convolutional neural networks},
author = {Long, Yang and Gong, Yiping and Xiao, Zhifeng and Liu, Qing},
year = 2017,
journal = {IEEE Transactions on Geoscience and Remote Sensing},
publisher = {IEEE},
volume = 55,
number = 5,
pages = {2486--2498}
}
@article{xiao2017high,
title = {High-resolution remote sensing image retrieval based on CNNs from a dimensional perspective},
author = {Xiao, Zhifeng and Long, Yang and Li, Deren and Wei, Chunshan and Tang, Gefu and Liu, Junyi},
year = 2017,
journal = {Remote Sensing},
publisher = {MDPI},
volume = 9,
number = 7,
pages = 725
}
```
|
jonathan-roberts1/RSD46-WHU
|
[
"license:other",
"region:us"
] |
2023-02-17T15:41:45+00:00
|
{"license": "other", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "airplane", "1": "airport", "2": "artificial dense forest land", "3": "artificial sparse forest land", "4": "bare land", "5": "basketball court", "6": "blue structured factory building", "7": "building", "8": "construction site", "9": "cross river bridge", "10": "crossroads", "11": "dense tall building", "12": "dock", "13": "fish pond", "14": "footbridge", "15": "graff", "16": "grassland", "17": "irregular farmland", "18": "low scattered building", "19": "medium density scattered building", "20": "medium density structured building", "21": "natural dense forest land", "22": "natural sparse forest land", "23": "oil tank", "24": "overpass", "25": "parking lot", "26": "plastic greenhouse", "27": "playground", "28": "railway", "29": "red structured factory building", "30": "refinery", "31": "regular farmland", "32": "scattered blue roof factory building", "33": "scattered red roof factory building", "34": "sewage plant-type-one", "35": "sewage plant-type-two", "36": "ship", "37": "solar power station", "38": "sparse residential area", "39": "square", "40": "steelworks", "41": "storage land", "42": "tennis court", "43": "thermal power plant", "44": "vegetable plot", "45": "water"}}}}], "splits": [{"name": "train", "num_bytes": 1650045051.96, "num_examples": 17516}], "download_size": 2184490825, "dataset_size": 1650045051.96}}
|
2023-03-31T13:43:55+00:00
|
6c79cd8536e2f3ace62c869ce1ae09fa85b517d3
|
# Dataset Card for "Optimal-31"
## Dataset Description
- **Paper** [Scene classification with recurrent attention of VHR remote sensing images](https://ieeexplore.ieee.org/iel7/5/8045830/07891544.pdf)
### Licensing Information
[No license for now, cite the paper below.]
## Citation Information
[Scene classification with recurrent attention of VHR remote sensing images](https://ieeexplore.ieee.org/iel7/5/8045830/07891544.pdf)
```
@article{wang2018scene,
title = {Scene classification with recurrent attention of VHR remote sensing images},
author = {Wang, Qi and Liu, Shaoteng and Chanussot, Jocelyn and Li, Xuelong},
year = 2018,
journal = {IEEE Transactions on Geoscience and Remote Sensing},
publisher = {IEEE},
volume = 57,
number = 2,
pages = {1155--1167}
}
```
|
jonathan-roberts1/Optimal-31
|
[
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] |
2023-02-17T15:53:58+00:00
|
{"license": "other", "task_categories": ["image-classification", "zero-shot-image-classification"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "airplane", "1": "airport", "2": "baseball diamond", "3": "basketball court", "4": "beach", "5": "bridge", "6": "chaparral", "7": "church", "8": "circular farmland", "9": "commercial area", "10": "dense residential", "11": "desert", "12": "forest", "13": "freeway", "14": "golf course", "15": "ground track field", "16": "harbor", "17": "industrial area", "18": "intersection", "19": "island", "20": "lake", "21": "meadow", "22": "medium residential", "23": "mobile home park", "24": "mountain", "25": "overpass", "26": "parking lot", "27": "railway", "28": "rectangular farmland", "29": "roundabout", "30": "runway"}}}}], "splits": [{"name": "train", "num_bytes": 25100636.72, "num_examples": 1860}], "download_size": 25105452, "dataset_size": 25100636.72}}
|
2023-03-31T16:06:29+00:00
|
7d50e8214fcc2f4d5fb0fd6b6835114987fb436c
|
# Dataset Card for "Airbus-Wind-Turbines-Patches"
## Dataset Description
- **Paper** [Airbus Wind Turbine Patches](https://www.kaggle.com/datasets/airbusgeo/airbus-wind-turbines-patches)
- **Split** Validation
## Split Information
This HuggingFace dataset repository contains just the Validation split.
### Licensing Information
[CC BY-NC-SA 4.0](https://www.kaggle.com/datasets/airbusgeo/airbus-wind-turbines-patches)
## Citation Information
[Airbus Wind Turbine Patches](https://www.kaggle.com/datasets/airbusgeo/airbus-wind-turbines-patches)
```
@misc{kaggle_awtp,
author = {Airbus DS GEO S.A.},
title = {Airbus Wind Turbine Patches},
howpublished = {\url{https://www.kaggle.com/datasets/airbusgeo/airbus-wind-turbines-patches}},
year = {2021},
version = {1.0}
}
```
|
jonathan-roberts1/Airbus-Wind-Turbines-Patches
|
[
"license:other",
"region:us"
] |
2023-02-17T15:56:30+00:00
|
{"license": "other", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "no wind turbine", "1": "wind turbine"}}}}], "splits": [{"name": "train", "num_bytes": 169946184.648, "num_examples": 71504}], "download_size": 147716132, "dataset_size": 169946184.648}}
|
2023-03-31T14:23:50+00:00
|
784d8e198f48b745fa3705b1e19d10e735d039d8
|
# Dataset Card for "SRV-Europarl-ST-processed-mt-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/SRV-Europarl-ST-processed-mt-es
|
[
"region:us"
] |
2023-02-17T15:59:36+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 133686385.86889735, "num_examples": 553896}, {"name": "valid", "num_bytes": 17228528.617501996, "num_examples": 74770}, {"name": "test", "num_bytes": 17351036.302417863, "num_examples": 77952}], "download_size": 132237051, "dataset_size": 168265950.78881723}}
|
2023-02-17T18:04:59+00:00
|
d4ca5fb9fe47dabf9386606354c19b6a9c2ffdc4
|
# Dataset Card for "SRV-Europarl-ST-processed-mt-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/SRV-Europarl-ST-processed-mt-de
|
[
"region:us"
] |
2023-02-17T16:00:21+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 142212387.7873306, "num_examples": 570077}, {"name": "valid", "num_bytes": 18480669.707563575, "num_examples": 77255}, {"name": "test", "num_bytes": 18441786.554772235, "num_examples": 79827}], "download_size": 137284138, "dataset_size": 179134844.0496664}}
|
2023-02-17T18:05:50+00:00
|
71ef61dbd1bfdfd8369aec868564dfdc1ab42675
|
# Dataset Card for "SRV-Europarl-ST-processed-mt-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/SRV-Europarl-ST-processed-mt-en
|
[
"region:us"
] |
2023-02-17T16:01:09+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 159929144.55095986, "num_examples": 602605}, {"name": "valid", "num_bytes": 21162053.230128862, "num_examples": 81968}, {"name": "test", "num_bytes": 22144424.302616265, "num_examples": 86170}], "download_size": 138665727, "dataset_size": 203235622.08370498}}
|
2023-02-17T18:06:42+00:00
|
487f9f41aff3d7d32a9cbd3160666d7214641bb5
|
# Dataset Card for "SRV-Europarl-ST-processed-mt-fr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/SRV-Europarl-ST-processed-mt-fr
|
[
"region:us"
] |
2023-02-17T16:01:57+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 141253195.15623656, "num_examples": 560866}, {"name": "valid", "num_bytes": 17488315.87666781, "num_examples": 74712}, {"name": "test", "num_bytes": 17809265.33287921, "num_examples": 77906}], "download_size": 134077385, "dataset_size": 176550776.36578357}}
|
2023-02-17T18:07:31+00:00
|
4703af7fb3e5b9fb9c894c5331cfa4d38ae47980
|
# Dataset Card for "SRV-Europarl-ST-processed-mt-nl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/SRV-Europarl-ST-processed-mt-nl
|
[
"region:us"
] |
2023-02-17T16:02:43+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 129720449.98097737, "num_examples": 545306}, {"name": "valid", "num_bytes": 16521264.566455696, "num_examples": 73282}, {"name": "test", "num_bytes": 16814900.166492514, "num_examples": 76545}], "download_size": 127174917, "dataset_size": 163056614.7139256}}
|
2023-02-17T18:08:26+00:00
|
d3a0e889a857aea070327116cd771cee035b9434
|
# Dataset Card for "SRV-Europarl-ST-processed-mt-pl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/SRV-Europarl-ST-processed-mt-pl
|
[
"region:us"
] |
2023-02-17T16:03:28+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 131997833.26896666, "num_examples": 552558}, {"name": "valid", "num_bytes": 16413231.013342457, "num_examples": 73364}, {"name": "test", "num_bytes": 17199836.022855934, "num_examples": 77684}], "download_size": 132441622, "dataset_size": 165610900.30516505}}
|
2023-02-17T18:09:13+00:00
|
b3f6f9c34147c2013c07e25663ab0a0cea7cbf96
|
# Dataset Card for "SRV-Europarl-ST-processed-mt-pt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/SRV-Europarl-ST-processed-mt-pt
|
[
"region:us"
] |
2023-02-17T16:04:16+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 131601003.22950283, "num_examples": 549976}, {"name": "valid", "num_bytes": 16576935.543191927, "num_examples": 73404}, {"name": "test", "num_bytes": 17257821.503147982, "num_examples": 77286}], "download_size": 129352823, "dataset_size": 165435760.27584276}}
|
2023-02-17T18:10:02+00:00
|
3500fbb407ff985a7dff60482af5cdef4412306b
|
# Dataset Card for "SRV-Europarl-ST-processed-mt-ro"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/SRV-Europarl-ST-processed-mt-ro
|
[
"region:us"
] |
2023-02-17T16:05:00+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 120639323.35744976, "num_examples": 514205}, {"name": "valid", "num_bytes": 14691546.077845251, "num_examples": 66754}, {"name": "test", "num_bytes": 14869510.309176764, "num_examples": 69702}], "download_size": 120668347, "dataset_size": 150200379.7444718}}
|
2023-02-17T18:10:50+00:00
|
9c8ed09afa42bb9abc4d3bf02cf5be8e904196e4
|
# Dataset Card for "SRV-Europarl-ST-processed-mt-it"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/SRV-Europarl-ST-processed-mt-it
|
[
"region:us"
] |
2023-02-17T16:05:47+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 121979892.4315265, "num_examples": 504773}, {"name": "valid", "num_bytes": 15246425.496728532, "num_examples": 67701}, {"name": "test", "num_bytes": 15677401.348182635, "num_examples": 70814}], "download_size": 118670951, "dataset_size": 152903719.27643767}}
|
2023-02-17T18:11:38+00:00
|
dc712549b309cf07d54446f2fc21dc16dcf7400d
|
# Dataset Card for "enwiki20230101"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lsb/enwiki20230101
|
[
"region:us"
] |
2023-02-17T16:31:59+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20961930875, "num_examples": 6593739}], "download_size": 8418949922, "dataset_size": 20961930875}}
|
2023-02-17T17:12:32+00:00
|
9bd9b97cd026d93274e735e9489565f519130deb
|
Ubque/The_Model_Dump
|
[
"license:other",
"region:us"
] |
2023-02-17T16:36:56+00:00
|
{"license": "other"}
|
2023-02-18T22:06:26+00:00
|
|
9de1d4221701600b0f3f8ae0e0367506ce63b493
|
# Dataset Card for "Ships-In-Satellite-Imagery"
## Dataset Description
- **Paper:** [Ships in Satellite Imagery](https://www.kaggle.com/datasets/rhammell/ships-in-satellite-imagery)
### Licensing Information
CC BY-SA 4.0
## Citation Information
[Ships in Satellite Imagery](https://www.kaggle.com/datasets/rhammell/ships-in-satellite-imagery)
```
@misc{kaggle_sisi,
author = {Hammell, Robert},
title = {Ships in Satellite Imagery},
howpublished = {\url{https://www.kaggle.com/datasets/rhammell/ships-in-satellite-imagery}},
year = {2018},
version = {9.0}
}
```
|
jonathan-roberts1/Ships-In-Satellite-Imagery
|
[
"license:cc-by-sa-4.0",
"region:us"
] |
2023-02-17T16:48:59+00:00
|
{"license": "cc-by-sa-4.0", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "an entire ship", "1": "no ship or part of a ship"}}}}], "splits": [{"name": "train", "num_bytes": 41806886, "num_examples": 4000}], "download_size": 0, "dataset_size": 41806886}}
|
2023-03-31T13:38:12+00:00
|
ae5610f3c92dd06f8c8e0024a6df56b042a63eb1
|
Ubque/The_Hypernetwork_Dump
|
[
"license:other",
"region:us"
] |
2023-02-17T17:03:52+00:00
|
{"license": "other"}
|
2023-02-18T05:15:33+00:00
|
|
4bb60de01b2d9e323d364d76d8b08c2eaeef1a64
|
# Dataset Card for "Satellite-Images-of-Hurricane-Damage"
## Dataset Description
- **Paper** [Deep learning based damage detection on post-hurricane satellite imagery](https://arxiv.org/pdf/1807.01688.pdf)
- **Data** [IEEE-Dataport](https://ieee-dataport.org/open-access/detecting-damaged-buildings-post-hurricane-satellite-imagery-based-customized)
- **Split** Train_another
- **GitHub** [DamageDetection](https://github.com/qcao10/DamageDetection)
## Split Information
This HuggingFace dataset repository contains just the Train_another split.
### Licensing Information
[CC BY 4.0](https://ieee-dataport.org/open-access/detecting-damaged-buildings-post-hurricane-satellite-imagery-based-customized)
## Citation Information
[Deep learning based damage detection on post-hurricane satellite imagery](https://arxiv.org/pdf/1807.01688.pdf)
[IEEE-Dataport](https://ieee-dataport.org/open-access/detecting-damaged-buildings-post-hurricane-satellite-imagery-based-customized)
```
@misc{sdad-1e56-18,
title = {Detecting Damaged Buildings on Post-Hurricane Satellite Imagery Based on Customized Convolutional Neural Networks},
author = {Cao, Quoc Dung and Choe, Youngjun},
year = 2018,
publisher = {IEEE Dataport},
doi = {10.21227/sdad-1e56},
url = {https://dx.doi.org/10.21227/sdad-1e56}
}
@article{cao2018deep,
title={Deep learning based damage detection on post-hurricane satellite imagery},
author={Cao, Quoc Dung and Choe, Youngjun},
journal={arXiv preprint arXiv:1807.01688},
year={2018}
}
```
|
jonathan-roberts1/Satellite-Images-of-Hurricane-Damage
|
[
"license:cc-by-4.0",
"arxiv:1807.01688",
"region:us"
] |
2023-02-17T17:22:30+00:00
|
{"license": "cc-by-4.0", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "flooded or damaged buildings", "1": "undamaged buildings"}}}}], "splits": [{"name": "train", "num_bytes": 25588780, "num_examples": 10000}], "download_size": 26998688, "dataset_size": 25588780}}
|
2023-03-31T13:53:28+00:00
|