sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
e2081415d9557bf26b115d021ebfefc65268e72c | oz117/xinyan | [
"license:openrail",
"region:us"
]
| 2023-01-13T19:56:37+00:00 | {"license": "openrail"} | 2023-01-13T19:58:34+00:00 |
|
0856ebb9a85405303a2227fbf41ae814f49fe7d0 |
# textures-color-normal-1k
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The `textures-color-normal-1k` dataset is an image dataset of 1,000+ color and normal-map texture pairs at 512x512 resolution.
The dataset was created for use in image-to-image tasks.
It contains a combination of CC0 procedural and photoscanned PBR materials from [ambientCG](https://ambientcg.com/).
## Dataset Structure
### Data Instances
Each data point contains a 512x512 color texture and the corresponding 512x512 normal map.
### Data Fields
* `color`: the color texture as a PIL image
* `normal`: the normal map as a PIL image
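As a quick illustration, a minimal loading sketch using the `datasets` library (field names as listed above):
```python
from datasets import load_dataset

# Load the single "train" split of color/normal texture pairs.
ds = load_dataset("dream-textures/textures-color-normal-1k", split="train")

example = ds[0]
color = example["color"]    # PIL image, 512x512 color texture
normal = example["normal"]  # PIL image, 512x512 normal map
print(color.size, normal.size)
```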
### Data Splits
| | train |
| -- | ----- |
| ambientCG | 1426 |
## Dataset Creation
### Curation Rationale
`textures-color-normal-1k` was created to provide an accessible source of data for automating 3D-asset creation workflows.
The [Dream Textures](https://github.com/carson-katri/dream-textures) add-on is one such tool providing AI automation in Blender.
By training models designed for image-to-image tasks, this particular use case can be more accurately automated.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained from [ambientCG](https://ambientcg.com/)'s CC0 textures. Only the color and normal maps were included in this dataset.
## Additional Information
### Dataset Curators
The dataset was created by Carson Katri, with the images being provided by [ambientCG](https://ambientcg.com/).
### Licensing Information
All of the images used in this dataset are CC0.
### Citation Information
[N/A]
### Contributions
Thanks to [@carson-katri](https://github.com/carson-katri) for adding this dataset. | dream-textures/textures-color-normal-1k | [
"task_categories:image-to-image",
"size_categories:1K<n<10K",
"license:cc0-1.0",
"region:us"
]
| 2023-01-13T21:14:42+00:00 | {"license": "cc0-1.0", "size_categories": ["1K<n<10K"], "task_categories": ["image-to-image"], "dataset_info": {"features": [{"name": "color", "dtype": "image"}, {"name": "normal", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 110631687.194, "num_examples": 1426}], "download_size": 111043422, "dataset_size": 110631687.194}} | 2023-01-13T21:20:22+00:00 |
dec357dbbae0a9f4b6bc67c88181671df4da6140 | This dataset contains a pre-processed version of Wikipedia suitable for semantic search.
You can load the dataset like this:
```python
from datasets import load_dataset
lang = 'en'
data = load_dataset(f"Cohere/wikipedia-22-12", lang, split='train', streaming=True)
for row in data:
    print(row)
    break
```
This will load the dataset in streaming mode (so you don't need to download the whole dataset up front) and lets you process it row by row.
The articles are split into paragraphs. In addition, for each article we added statistics on its page views in 2022 as well as the number of other languages it is available in.
The dataset is sorted by page views, so the most popular Wikipedia articles come first. If you read e.g. only the top 100k rows, you already get good coverage of topics that
are broadly interesting to people.
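For example, a small sketch (assuming the English subset and the `datasets` library) that keeps only the 100k most-viewed paragraphs:
```python
from itertools import islice

from datasets import load_dataset

data = load_dataset("Cohere/wikipedia-22-12", "en", split="train", streaming=True)

# The stream is ordered by page views, so the first rows belong to the most popular articles.
top_rows = list(islice(data, 100_000))
print(len(top_rows), top_rows[0].keys())
```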
## Semantic Search Embeddings
We also provide versions where documents have been embedded using the [cohere multilingual embedding model](https://txt.cohere.ai/multilingual/),
e.g. [wikipedia-22-12-en-embeddings](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings) contains the paragraphs and their respective embeddings for English.
You can find the embeddings for other languages in the datasets `wikipedia-22-12-{lang}-embeddings`.
## Dataset Creation
The [XML data dumps](https://dumps.wikimedia.org/backup-index.html) from December 20th, 2022 were downloaded and processed
with [wikiextractor](https://github.com/attardi/wikiextractor) (version 2.75) using the following command:
```
python WikiExtractor.py --json -s --lists ../dumps/dewiki-20210101-pages-articles.xml.bz2 -o text_de
```
To count how many languages an article is available in, we downloaded the SQL files with language links from:
```
https://dumps.wikimedia.org/{lang}wiki/{datestr}/{filename}
```
We then processed these SQL files to read the outbound language links for each article.
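A rough sketch of that processing step (assuming the standard `langlinks` SQL dump format with `(ll_from, ll_lang, ll_title)` tuples; the filename and exact parsing in the original pipeline may differ):
```python
import gzip
import re
from collections import defaultdict

# Each INSERT statement contains tuples like (page_id,'lang','Title').
TUPLE_RE = re.compile(r"\((\d+),'([^']*)','((?:[^'\\]|\\.)*)'\)")

def count_langlinks(path):
    """Count outbound language links per source page id."""
    counts = defaultdict(int)
    with gzip.open(path, "rt", encoding="utf-8", errors="ignore") as fIn:
        for line in fIn:
            if not line.startswith("INSERT INTO"):
                continue
            for page_id, _lang, _title in TUPLE_RE.findall(line):
                counts[int(page_id)] += 1
    return counts

# Example call (hypothetical filename):
# counts = count_langlinks("enwiki-20221220-langlinks.sql.gz")
```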
Page views were downloaded from:
```
https://dumps.wikimedia.org/other/pageviews/{year}/{year}-{month_str}/pageviews-{year}{month_str}{day_str}-{hour_str}0000.gz
```
For each day, we downloaded the page views for a random hour and then computed the harmonic mean of the page views. We used the harmonic mean to dampen cases where an article receives
a very high number of page views at a single point in time, and we use log scores of the page views to increase numerical stability.
Code to compute the page views was:
```python
import gzip
import sys
from collections import Counter, defaultdict
import math
import tqdm
import json
title_views = {}
#Score: Harmonic mean (View_Day_1 * View_Day_2 * View_day_3)
# Add log for better numerical stability
# Add +1 to avoid log(0)
# Compare the sum, so that days without views are counted as 0 views
for filepath in tqdm.tqdm(sys.argv[1:]):
    with gzip.open(filepath, "rt") as fIn:
        for line in fIn:
            splits = line.strip().split()
            if len(splits) == 4:
                lang, title, views, _ = splits
                lang = lang.lower()
                if lang.endswith(".m"):  # Add mobile page scores to main score
                    lang = lang[0:-2]
                if lang.count(".") > 0:
                    continue
                if lang not in title_views:
                    title_views[lang] = {}
                if title not in title_views[lang]:
                    title_views[lang][title] = 0.0
                title_views[lang][title] += math.log(int(views) + 1)

#Save results
for lang in title_views:
    with open(f"pageviews_summary/{lang}.json", "w") as fOut:
        fOut.write(json.dumps(title_views[lang]))
```
We filter out paragraphs that start with `BULLET::::`, `Section::::`, `<templatestyles`, or `[[File:`.
Further, we only include paragraphs with at least 100 characters (as measured with Python's `len`).
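A minimal sketch of this paragraph filter (the exact implementation is not included in this card):
```python
DROP_PREFIXES = ("BULLET::::", "Section::::", "<templatestyles", "[[File:")

def keep_paragraph(paragraph: str) -> bool:
    """Return True if the paragraph passes the filters described above."""
    if paragraph.startswith(DROP_PREFIXES):
        return False
    return len(paragraph) >= 100
```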
| Cohere/wikipedia-22-12 | [
"region:us"
]
| 2023-01-13T21:52:20+00:00 | {} | 2023-02-22T15:58:09+00:00 |
358f6039936813bf23d8dc75bd5b81a8b7786f46 | cmudrc/MegaFlow2D | [
"license:apache-2.0",
"region:us"
]
| 2023-01-13T21:58:54+00:00 | {"license": "apache-2.0"} | 2023-07-20T16:39:03+00:00 |
|
fca1d1cabed1772e1791de4ed4ed688eb4495b60 | madhavdutta/xbeshDS | [
"license:mit",
"region:us"
]
| 2023-01-13T22:35:08+00:00 | {"license": "mit"} | 2023-01-13T22:35:08+00:00 |
|
0699fb7d2b88b13b296b5cbf0ea0a10b84e99b3f |
# Wikipedia (hi) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (hi)](https://hi.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We computed the embeddings for `title+" "+text` with our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-hi-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-hi-embeddings", split="train", streaming=True)
for doc in docs:
    docid = doc['id']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-hi-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
    docs.append(doc)
    doc_embeddings.append(doc['emb'])
    if len(docs) >= max_docs:
        break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-hi-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:hi",
"license:apache-2.0",
"region:us"
]
| 2023-01-13T23:14:15+00:00 | {"annotations_creators": ["expert-generated"], "language": ["hi"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:53:57+00:00 |
4f804e9c5125f60783ac45d15ed2687c69489f07 |
# Wikipedia (simple English) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (simple English)](https://simple.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We computed the embeddings for `title+" "+text` with our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-simple-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-simple-embeddings", split="train", streaming=True)
for doc in docs:
    docid = doc['id']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-simple-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
    docs.append(doc)
    doc_embeddings.append(doc['emb'])
    if len(docs) >= max_docs:
        break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-simple-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:multilingual",
"language:en",
"license:apache-2.0",
"region:us"
]
| 2023-01-13T23:25:25+00:00 | {"language": ["en"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:56:34+00:00 |
caf814d284f0c7cdf873c1d8d091a3d3b7d9e6db |
# Wikipedia (ko) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (ko)](https://ko.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We computed the embeddings for `title+" "+text` with our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ko-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ko-embeddings", split="train", streaming=True)
for doc in docs:
    docid = doc['id']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-ko-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
    docs.append(doc)
    doc_embeddings.append(doc['emb'])
    if len(docs) >= max_docs:
        break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-ko-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:multilingual",
"language:ko",
"license:apache-2.0",
"region:us"
]
| 2023-01-13T23:51:11+00:00 | {"language": ["ko"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:55:35+00:00 |
864ed9e578765742ee3bb0ee5713090bf6a8a31a |
# Wikipedia (zh) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (zh)](https://zh.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We computed the embeddings for `title+" "+text` with our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-zh-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-zh-embeddings", split="train", streaming=True)
for doc in docs:
    docid = doc['id']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-zh-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
    docs.append(doc)
    doc_embeddings.append(doc['emb'])
    if len(docs) >= max_docs:
        break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-zh-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:multilingual",
"language:zh",
"license:apache-2.0",
"region:us"
]
| 2023-01-14T00:44:03+00:00 | {"language": ["zh"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:55:57+00:00 |
5994ad42934e6ab586a26ff95e1f36fe1d60eeb4 | corey4593/H | [
"license:openrail",
"region:us"
]
| 2023-01-14T01:32:29+00:00 | {"license": "openrail"} | 2023-01-14T01:32:29+00:00 |
|
b4043b5fde2b869c13651745b1789ba731d928e3 | Hantao/ChemReactionImageRE | [
"license:gpl-3.0",
"region:us"
]
| 2023-01-14T01:35:01+00:00 | {"license": "gpl-3.0"} | 2023-01-15T02:33:23+00:00 |
|
ea5f00014bd7626aa55affb07de57d519ab3309a |
# Wikipedia (ar) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (ar)](https://ar.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We computed the embeddings for `title+" "+text` with our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ar-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ar-embeddings", split="train", streaming=True)
for doc in docs:
    docid = doc['id']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-ar-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
    docs.append(doc)
    doc_embeddings.append(doc['emb'])
    if len(docs) >= max_docs:
        break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-ar-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ar",
"license:apache-2.0",
"region:us"
]
| 2023-01-14T02:00:24+00:00 | {"annotations_creators": ["expert-generated"], "language": ["ar"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:52:28+00:00 |
40586a9887f2d274e10e7d365c349b69eb4a03e4 |
# Wikipedia (ja) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (ja)](https://ja.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We computed the embeddings for `title+" "+text` with our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ja-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ja-embeddings", split="train", streaming=True)
for doc in docs:
    docid = doc['id']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-ja-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
    docs.append(doc)
    doc_embeddings.append(doc['emb'])
    if len(docs) >= max_docs:
        break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-ja-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:multilingual",
"language:ja",
"license:apache-2.0",
"region:us"
]
| 2023-01-14T03:52:53+00:00 | {"language": ["ja"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:55:06+00:00 |
c6d28bb2d58ca7d0b9ebc20196c2acf47afa5270 | # Dataset Card for "bookcorpus_compact_512_shard6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | saibo/bookcorpus_compact_512_shard6_of_10 | [
"region:us"
]
| 2023-01-14T04:59:41+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 804636647, "num_examples": 121933}], "download_size": 401996995, "dataset_size": 804636647}} | 2023-01-14T05:00:29+00:00 |
cf0ab57fee5fbdf26d83e5859c988a2deb62d20d | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | miguelinc/oratorialab | [
"task_categories:image-classification",
"license:cc-by-sa-4.0",
"region:us"
]
| 2023-01-14T05:38:53+00:00 | {"license": "cc-by-sa-4.0", "task_categories": ["image-classification"]} | 2023-01-14T06:03:58+00:00 |
394b69bb560bd38fadb4d4ea046533f7344d16e3 | WHITjason/cortana | [
"license:other",
"region:us"
]
| 2023-01-14T06:35:54+00:00 | {"license": "other"} | 2023-01-14T06:37:25+00:00 |
|
06b607a6df4e7453140e3d0c4cd77c0c061f91f2 |
Dataset for anime person detection.
| Dataset | Train | Test | Validate | Description |
|-------------|-------|------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| v1.1 | 9255 | 460 | 877 | Annotated on the Roboflow platform, including labeled data for various types of anime images (e.g. illustrations, comics). The dataset has also undergone data augmentation techniques to enhance its diversity and quality. |
| raw | 3085 | 460 | 877 | The same as the `v1.1` dataset, but without any preprocessing or data augmentation. Suitable for uploading directly to the Roboflow platform. |
| AniDet3.v3i | 16124 | 944 | 1709 | Third-party dataset, source: https://universe.roboflow.com/university-of-michigan-ann-arbor/anidet3-ai42v/dataset/3 . The dataset only contains images from anime series, which means models trained directly on it will not perform well on illustrations and comics. |
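These variants are distributed as files in the dataset repository; one way to fetch them locally is with `huggingface_hub` (a sketch, assuming default cache settings):
```python
from huggingface_hub import snapshot_download

# Download the full dataset repository into the local HF cache and return its path.
local_dir = snapshot_download(repo_id="deepghs/anime_person_detection", repo_type="dataset")
print(local_dir)
```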
The best practice is to combine the `AniDet3.v3i` dataset with the `v1.1` dataset for training. We provide an [online demo](https://huggingface.co/spaces/deepghs/anime_object_detection). | deepghs/anime_person_detection | [
"task_categories:object-detection",
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
]
| 2023-01-14T06:50:46+00:00 | {"license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["object-detection"], "tags": ["art"]} | 2023-05-18T15:26:42+00:00 |
7a8645307c759f22190194336b0e27c36949d1b5 |
# Wikipedia (it) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (it)](https://it.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We computed the embeddings for `title+" "+text` with our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-it-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-it-embeddings", split="train", streaming=True)
for doc in docs:
    docid = doc['id']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-it-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
    docs.append(doc)
    doc_embeddings.append(doc['emb'])
    if len(docs) >= max_docs:
        break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-it-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:it",
"license:apache-2.0",
"region:us"
]
| 2023-01-14T07:01:23+00:00 | {"annotations_creators": ["expert-generated"], "language": ["it"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:54:18+00:00 |
dac7bbcb9c7deeb898b12859a2ea9d5b0c1ecc91 | # Dataset Card for "AToMiC-All-Images_wi-pixels"
## Dataset Description
- **Homepage:** [AToMiC homepage](https://trec-atomic.github.io/)
- **Source:** [WIT](https://github.com/google-research-datasets/wit)
- **Paper:** [WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning](https://arxiv.org/abs/2103.01913)
### Languages
The dataset contains 108 languages in Wikipedia.
### Data Instances
Each instance is an image, its representation in bytes, and its associated captions.
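A streaming sketch with the `datasets` library (assuming the feature names `image_id`, `caption_reference_description`, and `image`; streaming avoids downloading the full collection up front):
```python
from datasets import load_dataset

ds = load_dataset("TREC-AToMiC/AToMiC-Images-v0.2", split="train", streaming=True)

for example in ds:
    # Print the image id and its reference captions, then decode the image.
    print(example["image_id"], example["caption_reference_description"])
    image = example["image"]  # decoded PIL image
    break
```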
### Intended Usage
1. Image collection for Text-to-Image retrieval
2. Image--Caption Retrieval/Generation/Translation
### Licensing Information
[CC BY-SA 4.0 international license](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
TBA
### Acknowledgement
Thanks to:
[img2dataset](https://github.com/rom1504/img2dataset)
[Datasets](https://github.com/huggingface/datasets)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | TREC-AToMiC/AToMiC-Images-v0.2 | [
"size_categories:100M<n<1B",
"license:cc-by-sa-4.0",
"arxiv:2103.01913",
"region:us"
]
| 2023-01-14T08:12:44+00:00 | {"license": "cc-by-sa-4.0", "size_categories": ["100M<n<1B"], "dataset_info": {"features": [{"name": "image_url", "dtype": "string"}, {"name": "image_id", "dtype": "string"}, {"name": "language", "sequence": "string"}, {"name": "caption_reference_description", "sequence": "string"}, {"name": "caption_alt_text_description", "sequence": "string"}, {"name": "caption_attribution_description", "sequence": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 180043531167.75, "num_examples": 11019202}], "download_size": 174258428914, "dataset_size": 180043531167.75}} | 2023-02-14T21:29:39+00:00 |
578971f00bb25e8f8908d85555aa2328767dbe0f | # Dataset Card for "lesion_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pranav456/lesion_dataset | [
"region:us"
]
| 2023-01-14T09:15:59+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "AK", "1": "BCC", "2": "BKL", "3": "DF", "4": "MEL", "5": "NV", "6": "SCC", "7": "VASC"}}}}], "splits": [{"name": "train", "num_bytes": 119842603.034, "num_examples": 20262}, {"name": "test", "num_bytes": 28970560.951, "num_examples": 5069}], "download_size": 142732051, "dataset_size": 148813163.98499998}} | 2023-01-14T09:16:34+00:00 |
1cf719df8656d336007786980ce361ae2a85ebdb | # Urdu Summarization
## Dataset Overview
The Urdu Summarization dataset contains news articles in the Urdu language along with their summaries. The dataset contains a total of 48,071 news articles collected from the BBC Urdu website. Each article is labeled with its headline, summary, and full text.
## Dataset Details
The dataset contains the following columns:
- id (string): Unique identifier for each article
- url (string): URL for the original article
- title (string): Headline of the article
- summary (string): Summary of the article
- text (string): Full text of the article
The dataset is distributed under the MIT License.
## Data Collection
The data was collected from the BBC Urdu website using web scraping techniques. The articles were published between 2003 and 2020, covering a wide range of topics such as politics, sports, technology, and entertainment.
## Data Preprocessing
The text data was preprocessed to remove any HTML tags and non-Urdu characters. The summaries were created by human annotators, who read the full text of the articles and summarized the main points. The dataset was split into training, validation, and test sets, with 80%, 10%, and 10% of the data in each set respectively.
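A minimal loading sketch with the `datasets` library (assuming the splits are exposed under the usual `train`/`validation`/`test` names):
```python
from datasets import load_dataset

dataset = load_dataset("mwz/ursum")

sample = dataset["train"][0]
print(sample["title"])
print(sample["summary"])
```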
## Potential Use Cases
This dataset can be used for training and evaluating models for automatic summarization of Urdu text. It can also be used for research in natural language processing, machine learning, and information retrieval.
## Acknowledgements
We thank the BBC Urdu team for publishing the news articles on their website and making them publicly available. We also thank the human annotators who created the summaries for the articles.
## Relevant Papers
No papers have been published yet using this dataset.
## License
The dataset is distributed under the MIT License. | mwz/ursum | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:ur",
"license:mit",
"region:us"
]
| 2023-01-14T09:24:32+00:00 | {"language": ["ur"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["summarization", "text-generation", "text2text-generation"], "pretty_name": "ursum"} | 2023-05-14T12:03:37+00:00 |
cdd07f2970e393b42b3a1a7b5c4b24fd11737a98 |
# Wikipedia (es) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (es)](https://es.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We computed the embeddings for `title+" "+text` with our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-es-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-es-embeddings", split="train", streaming=True)
for doc in docs:
    docid = doc['id']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-es-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
    docs.append(doc)
    doc_embeddings.append(doc['emb'])
    if len(docs) >= max_docs:
        break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-es-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:es",
"license:apache-2.0",
"region:us"
]
| 2023-01-14T12:01:41+00:00 | {"annotations_creators": ["expert-generated"], "language": ["es"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:53:23+00:00 |
0c69f7a4cdd1de8d61250bc9f66e317dce589bfc | # Dataset Card for "lesion_dataset_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pranav456/lesion_dataset_1 | [
"region:us"
]
| 2023-01-14T12:07:57+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "AK", "1": "BCC", "2": "BKL", "3": "DF", "4": "MEL", "5": "NV", "6": "SCC", "7": "VASC"}}}}], "splits": [{"name": "train", "num_bytes": 105488287.136, "num_examples": 17728}, {"name": "test", "num_bytes": 29225882.496, "num_examples": 5062}, {"name": "validation", "num_bytes": 15175816.112, "num_examples": 2541}], "download_size": 142659177, "dataset_size": 149889985.744}} | 2023-01-14T12:08:21+00:00 |
c42ce8e80187380e25cfe7fb7a4ef049cf22bf86 |
<div align="center">
<img width="640" alt="fcakyon/crack-instance-segmentation" src="https://huggingface.co/datasets/fcakyon/crack-instance-segmentation/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['cracks-and-spalling', 'object']
```
### Number of Images
```json
{'valid': 73, 'test': 37, 'train': 323}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("fcakyon/crack-instance-segmentation", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/palmdetection-1cjxw/crack_detection_experiment/dataset/5](https://universe.roboflow.com/palmdetection-1cjxw/crack_detection_experiment/dataset/5?ref=roboflow2huggingface)
### Citation
```
@misc{ 400-img_dataset,
title = { 400 img Dataset },
type = { Open Source Dataset },
author = { Master dissertation },
howpublished = { \\url{ https://universe.roboflow.com/master-dissertation/400-img } },
url = { https://universe.roboflow.com/master-dissertation/400-img },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-14 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 14, 2023 at 10:08 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 433 images.
Crack-spall are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
| fcakyon/crack-instance-segmentation | [
"task_categories:image-segmentation",
"roboflow",
"roboflow2huggingface",
"region:us"
]
| 2023-01-14T12:18:16+00:00 | {"task_categories": ["image-segmentation"], "tags": ["roboflow", "roboflow2huggingface"]} | 2023-01-14T13:08:27+00:00 |
8476c6bfcd4d4ffafcb169e1b257eb22c96a7968 | deugene/face_for_textual_inversion | [
"license:unknown",
"region:us"
]
| 2023-01-14T12:36:42+00:00 | {"license": "unknown"} | 2023-01-14T12:42:07+00:00 |
|
b004b6a7be5f2bee712f8763e42b0aeadf19d586 |
<div align="center">
<img width="640" alt="fcakyon/pokemon-classification" src="https://huggingface.co/datasets/fcakyon/pokemon-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['Golbat', 'Machoke', 'Omastar', 'Diglett', 'Lapras', 'Kabuto', 'Persian', 'Weepinbell', 'Golem', 'Dodrio', 'Raichu', 'Zapdos', 'Raticate', 'Magnemite', 'Ivysaur', 'Growlithe', 'Tangela', 'Drowzee', 'Rapidash', 'Venonat', 'Pidgeot', 'Nidorino', 'Porygon', 'Lickitung', 'Rattata', 'Machop', 'Charmeleon', 'Slowbro', 'Parasect', 'Eevee', 'Starmie', 'Staryu', 'Psyduck', 'Dragonair', 'Magikarp', 'Vileplume', 'Marowak', 'Pidgeotto', 'Shellder', 'Mewtwo', 'Farfetchd', 'Kingler', 'Seel', 'Kakuna', 'Doduo', 'Electabuzz', 'Charmander', 'Rhyhorn', 'Tauros', 'Dugtrio', 'Poliwrath', 'Gengar', 'Exeggutor', 'Dewgong', 'Jigglypuff', 'Geodude', 'Kadabra', 'Nidorina', 'Sandshrew', 'Grimer', 'MrMime', 'Pidgey', 'Koffing', 'Ekans', 'Alolan Sandslash', 'Venusaur', 'Snorlax', 'Paras', 'Jynx', 'Chansey', 'Hitmonchan', 'Gastly', 'Kangaskhan', 'Oddish', 'Wigglytuff', 'Graveler', 'Arcanine', 'Clefairy', 'Articuno', 'Poliwag', 'Abra', 'Squirtle', 'Voltorb', 'Ponyta', 'Moltres', 'Nidoqueen', 'Magmar', 'Onix', 'Vulpix', 'Butterfree', 'Krabby', 'Arbok', 'Clefable', 'Goldeen', 'Magneton', 'Dratini', 'Caterpie', 'Jolteon', 'Nidoking', 'Alakazam', 'Dragonite', 'Fearow', 'Slowpoke', 'Weezing', 'Beedrill', 'Weedle', 'Cloyster', 'Vaporeon', 'Gyarados', 'Golduck', 'Machamp', 'Hitmonlee', 'Primeape', 'Cubone', 'Sandslash', 'Scyther', 'Haunter', 'Metapod', 'Tentacruel', 'Aerodactyl', 'Kabutops', 'Ninetales', 'Zubat', 'Rhydon', 'Mew', 'Pinsir', 'Ditto', 'Victreebel', 'Omanyte', 'Horsea', 'Pikachu', 'Blastoise', 'Venomoth', 'Charizard', 'Seadra', 'Muk', 'Spearow', 'Bulbasaur', 'Bellsprout', 'Electrode', 'Gloom', 'Poliwhirl', 'Flareon', 'Seaking', 'Hypno', 'Wartortle', 'Mankey', 'Tentacool', 'Exeggcute', 'Meowth']
```
### Number of Images
```json
{'train': 4869, 'test': 732, 'valid': 1390}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("fcakyon/pokemon-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/robert-demo-qvail/pokedex/dataset/14](https://universe.roboflow.com/robert-demo-qvail/pokedex/dataset/14?ref=roboflow2huggingface)
### Citation
```
@misc{ pokedex_dataset,
title = { Pokedex Dataset },
type = { Open Source Dataset },
author = { Lance Zhang },
howpublished = { \\url{ https://universe.roboflow.com/robert-demo-qvail/pokedex } },
url = { https://universe.roboflow.com/robert-demo-qvail/pokedex },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-14 },
}
```
### License
Public Domain
### Dataset Summary
This dataset was exported via roboflow.com on December 20, 2022 at 5:34 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 6991 images.
Pokemon are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 224x224 (Fit (black edges))
No image augmentation techniques were applied.
| fcakyon/pokemon-classification | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"Gaming",
"region:us"
]
| 2023-01-14T12:47:57+00:00 | {"task_categories": ["image-classification"], "tags": ["roboflow", "roboflow2huggingface", "Gaming"]} | 2023-01-14T13:06:55+00:00 |
00235ee6a2cf9f43f6576327257783bcbcb1f3e2 |
# Wikipedia (fr) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (fr)](https://fr.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We computed the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-fr-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-fr-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-fr-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-fr-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:fr",
"license:apache-2.0",
"region:us"
]
| 2023-01-14T13:09:16+00:00 | {"annotations_creators": ["expert-generated"], "language": ["fr"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:53:41+00:00 |
8095c7585f73e5419fbe6ee0fc59b7871a249d78 | # AutoTrain Dataset for project: books-rating-analysis
## Dataset Description
This dataset has been automatically processed by AutoTrain for project books-rating-analysis.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_Unnamed: 0": 1976,
"feat_user_id": "792500e85277fa7ada535de23e7eb4c3",
"feat_book_id": 18243288,
"feat_review_id": "7f8219233a62bde2973ddd118e8162e2",
"target": 2,
"text": "This book is kind of tricky. It is pleasingly written stylistically and it's an easy read so I cruised along on the momentum of the smooth prose and the potential of what this book could have and should have been for a while before I realized that it is hollow and aimless. \n This is a book where the extraordinary is deliberately made mundane for some reason and characters are stubbornly underdeveloped. It is as if all the drama has been removed from this story, leaving a bloodless collection of 19th industrial factoids sprinkled amidst a bunch of ciphers enduring an oddly dull series of tragedies. \n Mildly entertaining for a while but ultimately unsatisfactory.",
"feat_date_added": "Mon Apr 27 11:37:36 -0700 2015",
"feat_date_updated": "Mon May 04 08:50:42 -0700 2015",
"feat_read_at": "Mon May 04 08:50:42 -0700 2015",
"feat_started_at": "Mon Apr 27 00:00:00 -0700 2015",
"feat_n_votes": 0,
"feat_n_comments": 0
},
{
"feat_Unnamed: 0": 523,
"feat_user_id": "01ec1a320ffded6b2dd47833f2c8e4fb",
"feat_book_id": 18220354,
"feat_review_id": "c19543fab6b2386df92c1a9ba3cf6e6b",
"target": 4,
"text": "4.5 stars!! I am always intrigued to read a novel written from a male POV. I am equally fascinated by pen names, and even when the writer professes to be one gender or the other (or leaves it open to the imagination such as BG Harlen), I still wonder at the back of my mind whether the author is a male or female. Do some female writers have a decidedly masculine POV? Yes, there are several that come to mind. Do some male writers have a feminine \"flavor\" to their writing? It seems so. \n And so we come to the fascinating Thou Shalt Not. I loved Luke's story, as well as JJ Rossum's writing style, and don't want to be pigeon-holed into thinking that the author is male or female. That's just me. Either way, it's a very sexy and engaging book with plenty of steamy scenes to satisfy even the most jaded erotic romance reader (such as myself). The story carries some very weighty themes (domestic violence, adultery, the nature of beauty), but the book is very fast-paced and satisfying. Will Luke keep himself out of trouble with April? Will he learn to really love someone again? No spoilers here, but the author answers these questions while exploring what qualities are really important and what makes someone worthy of love. \n This book has a very interesting conclusion that some readers will love, and some might find a little challenging. I loved it and can't wait to read more from this author. \n *ARC provided by the author in exchange for an honest review.",
"feat_date_added": "Mon Jul 29 16:04:04 -0700 2013",
"feat_date_updated": "Thu Dec 12 21:43:54 -0800 2013",
"feat_read_at": "Fri Dec 06 00:00:00 -0800 2013",
"feat_started_at": "Thu Dec 05 00:00:00 -0800 2013",
"feat_n_votes": 10,
"feat_n_comments": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_Unnamed: 0": "Value(dtype='int64', id=None)",
"feat_user_id": "Value(dtype='string', id=None)",
"feat_book_id": "Value(dtype='int64', id=None)",
"feat_review_id": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['0', '1', '2', '3', '4', '5'], id=None)",
"text": "Value(dtype='string', id=None)",
"feat_date_added": "Value(dtype='string', id=None)",
"feat_date_updated": "Value(dtype='string', id=None)",
"feat_read_at": "Value(dtype='string', id=None)",
"feat_started_at": "Value(dtype='string', id=None)",
"feat_n_votes": "Value(dtype='int64', id=None)",
"feat_n_comments": "Value(dtype='int64', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2397 |
| valid | 603 |
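As a quick sanity check, the processed splits can be loaded with the `datasets` library (a minimal sketch; it assumes the data files load directly with `load_dataset` and uses the split and field names listed above):
```python
from datasets import load_dataset

# Load the AutoTrain-processed splits from the Hub
ds = load_dataset("LewisShanghai/autotrain-data-books-rating-analysis")

example = ds["train"][0]
print(example["text"][:200])   # review text
print(example["target"])       # star rating encoded as a class label (0-5)
print({split: ds[split].num_rows for split in ds})  # expected: {'train': 2397, 'valid': 603}
```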
| LewisShanghai/autotrain-data-books-rating-analysis | [
"task_categories:text-classification",
"language:en",
"region:us"
]
| 2023-01-14T13:27:44+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2023-01-14T14:31:43+00:00 |
5c5caf5f55c2eccc555f62fda2b111c408104e0a |
# Wikipedia (de) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (de)](https://de.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We computed the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-de-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-de-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-de-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-de-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:de",
"license:apache-2.0",
"region:us"
]
| 2023-01-14T13:41:14+00:00 | {"annotations_creators": ["expert-generated"], "language": ["de"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:52:49+00:00 |
548ff709fd4916a8d2845379bdd9d2e107e46906 | Poison413/Goat | [
"license:unlicense",
"region:us"
]
| 2023-01-14T14:21:55+00:00 | {"license": "unlicense"} | 2023-01-14T14:22:37+00:00 |
|
7e99c707e2a35bf9926057f74f0e07d5c3df54dd | # Dataset Card for "Patents_Green_Plastics"
Number of rows: 11,196
Features: [abstract, label]
Label values: 0, 1
The dataset contains patent abstracts labeled as 1 (= "Green Plastics") or 0 (= "Not Green Plastics").
# Dataset Creation
The [BIGPATENT](https://huggingface.co/datasets/big_patent) dataset is the source for this dataset.
In a first step, BIGPATENT abstracts were filtered by the terms "plastics" and "polymer". The resulting "Plastics" dataset contained 64,372 samples.
In a second step, these 64,372 samples were filtered by terms that define "green plastics".
"Green Plastics" are defined by the list of terms:
"degrada", "recycl", "bio", "compost", "bact", "waste recovery", "zero waste", "sustainab", "Bio-Based", "Bio-Degradable", "Renewable", "Green Plastics", "Renewable", "Degradable", "Compostable", "Bio-resorbable", "Bio-soluble", "Cellulose", "Biodegradable","Mycelium", "Recyclability", "Degradability", "Bio-Polymer", "reuse", "reusable", "reusing", "Degradation", "Multiple Use", "Bioplastic", "Polyhydroxyalkanoates", "PHA", "Polylactide", "PLA", "Polyglycolide", "PGA"
(some terms might repeat)
The group of "Green Plastics", containing 5,598 rows, was labeled as 1.
An equal number of samples (5,598 rows) was randomly chosen from the "Plastics" dataset, defined as "Not Green Plastics", and labeled as 0.
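A minimal sketch of how the filtering and labeling steps described above could be reproduced with the `datasets` library (the BIGPATENT config name, field access, matching logic, and sampling details are assumptions for illustration, not the original processing code):
```python
from datasets import load_dataset

# Step 1: keep BIGPATENT abstracts that mention plastics or polymers
big_patent = load_dataset("big_patent", "all", split="train")
plastics = big_patent.filter(
    lambda ex: any(term in ex["abstract"].lower() for term in ("plastics", "polymer"))
)

# Step 2: split the "Plastics" subset into "Green Plastics" vs. "Not Green Plastics"
# (abbreviated term list; naive substring matching, for illustration only)
green_terms = ["degrada", "recycl", "compost", "biodegradable", "renewable", "reuse"]

def is_green(example):
    abstract = example["abstract"].lower()
    return any(term in abstract for term in green_terms)

green = plastics.filter(is_green).map(lambda ex: {"label": 1})
not_green = (
    plastics.filter(lambda ex: not is_green(ex))
    .shuffle(seed=42)
    .select(range(green.num_rows))  # equal number of negative samples
    .map(lambda ex: {"label": 0})
)
```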
Both groups ("Green Plastics" and "Not Green Plastics") were merged together. | cwinkler/patents_green_plastics | [
"size_categories:10K<n<100K",
"language:en",
"region:us"
]
| 2023-01-14T14:25:09+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "dataset_info": {"features": [{"name": "abstract", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 8088461, "num_examples": 11196}], "download_size": 4025753, "dataset_size": 8088461}} | 2023-01-16T09:50:06+00:00 |
d7eb782e625634e2d5f086e74d3724d10984209c | # Dataset Card for "PickaPic-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/PickaPic-images | [
"region:us"
]
| 2023-01-14T14:40:41+00:00 | {"dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "created_at", "dtype": "timestamp[ns]"}, {"name": "image_uid", "dtype": "string"}, {"name": "user_id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "negative_prompt", "dtype": "string"}, {"name": "seed", "dtype": "int64"}, {"name": "gs", "dtype": "float64"}, {"name": "steps", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "num_generated", "dtype": "int64"}, {"name": "scheduler_cls", "dtype": "string"}, {"name": "model_id", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 70620168, "num_examples": 109356}], "download_size": 12059565, "dataset_size": 70620168}} | 2023-02-05T11:27:36+00:00 |
6627c83e298369e8d4fe25ed6ae7afd75ba978e3 | # Dataset Card for "PickaPic-rankings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/PickaPic-rankings | [
"region:us"
]
| 2023-01-14T14:45:16+00:00 | {"dataset_info": {"features": [{"name": "ranking_id", "dtype": "int64"}, {"name": "created_at", "dtype": "timestamp[ns]"}, {"name": "user_id", "dtype": "int64"}, {"name": "image_1_uid", "dtype": "string"}, {"name": "image_2_uid", "dtype": "string"}, {"name": "image_3_uid", "dtype": "string"}, {"name": "image_4_uid", "dtype": "string"}, {"name": "best_image_uid", "dtype": "string"}, {"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7758101, "num_examples": 25355}], "download_size": 3973871, "dataset_size": 7758101}} | 2023-02-05T11:26:22+00:00 |
bf0b97abe1dc52fad9e9852045ad5186de7ce459 | # Dataset Card for "PickaPic-downloads"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/PickaPic-downloads | [
"region:us"
]
| 2023-01-14T14:54:01+00:00 | {"dataset_info": {"features": [{"name": "download_id", "dtype": "int64"}, {"name": "created_at", "dtype": "timestamp[ns]"}, {"name": "user_id", "dtype": "int64"}, {"name": "image_uid", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 734763, "num_examples": 2512}], "download_size": 299901, "dataset_size": 734763}} | 2023-02-05T11:26:41+00:00 |
2341cea6d281fd00f95eb7a94b1c2cf19a5fef78 | # Dataset Card for "mec-punctuation-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tiagoblima/punctuation-mec-bert | [
"region:us"
]
| 2023-01-14T15:03:34+00:00 | {"dataset_info": {"features": [{"name": "tag", "dtype": "string"}, {"name": "sent_id", "dtype": "int64"}, {"name": "text_id", "dtype": "int64"}, {"name": "sent_text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 1075373, "num_examples": 2168}], "download_size": 313037, "dataset_size": 1075373}} | 2023-02-22T23:43:57+00:00 |
42017367613a0984f2c66415150232aa107aff8f |
Trained on 29 NSFW/SFW Yor Forger images, but don't worry: the SFW outputs work unexpectedly well! | SatyamSSJ10/YorForger | [
"task_categories:image-to-text",
"size_categories:n<1K",
"license:openrail",
"region:us"
]
| 2023-01-14T15:04:07+00:00 | {"license": "openrail", "size_categories": ["n<1K"], "task_categories": ["image-to-text"], "pretty_name": "YorForger"} | 2023-01-14T15:11:06+00:00 |
80b9cd811229672e1a1146b64fb4057553bd5905 | SandipPalit/Movie_Dataset | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:summarization",
"task_categories:sentence-similarity",
"size_categories:10K<n<100K",
"language:en",
"Movie",
"Cinema",
"Film",
"region:us"
]
| 2023-01-14T15:20:44+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "text-generation", "summarization", "sentence-similarity"], "pretty_name": "Movie Dataset", "tags": ["Movie", "Cinema", "Film"]} | 2023-01-14T15:41:07+00:00 |
|
545dff648c2d8afc56b192267c8c9b41eea02779 | torileatherman/sample | [
"license:apache-2.0",
"region:us"
]
| 2023-01-14T15:23:40+00:00 | {"license": "apache-2.0"} | 2023-01-14T15:23:40+00:00 |
|
d2a400d6b9333941ba7633f1726fbe862b63691c | # Dataset Card for "biggest_ideas_metadata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bejaeger/biggest_ideas_metadata | [
"region:us"
]
| 2023-01-14T15:49:16+00:00 | {"dataset_info": {"features": [{"name": "videoId", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "channelId", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "publishedAt", "dtype": "string"}, {"name": "likes", "dtype": "string"}, {"name": "views", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 58734, "num_examples": 48}], "download_size": 25139, "dataset_size": 58734}} | 2023-01-14T15:49:26+00:00 |
683e03dc302a4ea2c583457e0451f934a358ba7d | # Dataset Card for "biggest_ideas_transcriptions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bejaeger/biggest_ideas_transcriptions | [
"region:us"
]
| 2023-01-14T17:26:21+00:00 | {"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "published", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "videoId", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "start", "dtype": "float64"}, {"name": "end", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 8704046, "num_examples": 32983}], "download_size": 2443020, "dataset_size": 8704046}} | 2023-02-09T05:45:17+00:00 |
fa2b2715172a3422e3fb8cdb79902d35ec416aec | # Dataset Card for "cartoon-blip-captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | keron5671/cartoon-blip-captions | [
"region:us"
]
| 2023-01-14T18:35:04+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 722303.0, "num_examples": 17}], "download_size": 717339, "dataset_size": 722303.0}} | 2023-01-14T18:35:07+00:00 |
5d608aaba4862399b76607c90fa4c4eb143377a7 | eengel7/sentiment_analysis_training_test | [
"license:apache-2.0",
"region:us"
]
| 2023-01-14T19:10:40+00:00 | {"license": "apache-2.0"} | 2023-01-14T19:13:40+00:00 |
|
bc94bd1238bbc0d02471ad346b2457b441643e81 | # Dataset Card for "dreambooth-hackathon-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tema7707/dreambooth-hackathon-images | [
"region:us"
]
| 2023-01-14T19:38:45+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 14147658.0, "num_examples": 50}], "download_size": 0, "dataset_size": 14147658.0}} | 2023-01-14T21:09:51+00:00 |
85c2eca83d4b9dcecc043c23748cb8c1047f683f |
# Wikipedia (en) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (en)](https://en.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We computed the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-en-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-en-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-en-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-en-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:en",
"license:apache-2.0",
"region:us"
]
| 2023-01-14T20:36:11+00:00 | {"annotations_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:51:57+00:00 |
10dbc09876db4ee50a9e54051425ee343b1ae5c4 | # Dataset Card for "raven"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jkwiatkowski/raven | [
"region:us"
]
| 2023-01-14T21:25:46+00:00 | {"dataset_info": {"features": [{"name": "inputs", "dtype": {"array3_d": {"shape": [16, 160, 160], "dtype": "uint8"}}}, {"name": "target", "dtype": {"array2_d": {"shape": [16, 113], "dtype": "int8"}}}, {"name": "index", "dtype": "uint8"}], "splits": [{"name": "train", "num_bytes": 17714970000, "num_examples": 42000}, {"name": "val", "num_bytes": 5904990000, "num_examples": 14000}, {"name": "test", "num_bytes": 5904990000, "num_examples": 14000}], "download_size": 1225465267, "dataset_size": 29524950000}} | 2023-01-14T21:40:08+00:00 |
c574708ad8844fdd043c6c30917b6da8699f0a89 | # Dataset Card for "thaigov-radio-audio"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | napatswift/thaigov-radio-audio | [
"region:us"
]
| 2023-01-15T05:02:59+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 828772851.0, "num_examples": 426}], "download_size": 824527615, "dataset_size": 828772851.0}} | 2023-01-15T05:05:18+00:00 |
1c1aa4ed8622db18916d912afaaaf60a8dca9775 | # Dataset Card for "copy_dataset_competitors"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sanjin7/copy_dataset_competitors | [
"region:us"
]
| 2023-01-15T05:49:49+00:00 | {"dataset_info": {"features": [{"name": "shop_id", "dtype": "int64"}, {"name": "ad_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 691250, "num_examples": 2884}], "download_size": 421475, "dataset_size": 691250}} | 2023-01-16T16:28:43+00:00 |
5d7f1aaf95bf2599fecc65def3461765ad9e9200 | # Dataset Card for "copy_dataset_untrimmed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sanjin7/copy_dataset_untrimmed | [
"region:us"
]
| 2023-01-15T06:00:58+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28610253, "num_examples": 84352}], "download_size": 0, "dataset_size": 28610253}} | 2023-01-16T16:31:33+00:00 |
7d30d46f6097b46067c7f316457eccd8cf834054 |
<div style='background: #ffeec0; border: 1px solid #ffd86d; padding:1em; border-radius:3px;'>
<h3 style='margin:0'>Outdated!</h3>
<p style='margin:0'>This dataset has been superseded by:</p>
<p style='margin:0'><a style="font-size: 2em;" href='https://huggingface.co/datasets/hearmeneigh/e621-rising-v3-curated'>E621 Rising V3 Curated Image Dataset</a></p>
</div>
**Warning: THIS dataset is NOT suitable for use by minors. The dataset contains X-rated/NFSW content.**
# E621 Rising: Curated Image Dataset v1
**441,623** images (~200GB) downloaded from `e621.net` with [tags](https://huggingface.co/datasets/hearmeneigh/e621-rising-v1-curated/raw/main/meta/tag-counts.json).
This is a curated dataset, picked from the E621 Rising: Raw Image Dataset v1 [available here](https://huggingface.co/datasets/hearmeneigh/e621-rising-v1-raw).
## Image Processing
* Only `jpg` and `png` images were considered
* Image width and height have been clamped to `(0, 4096]px`; larger images have been resized to meet the limit
* Alpha channels have been removed
* All images have been converted to `jpg` format
* All images have been converted to TrueColor `RGB`
* All images have been verified to load with `Pillow`
* Metadata from E621 is [available here](https://huggingface.co/datasets/hearmeneigh/e621-rising-v1-raw/tree/main/meta)
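A rough Pillow-based sketch of the preprocessing steps listed above (an illustration only, not the pipeline actually used to build the dataset):
```python
from pathlib import Path
from PIL import Image

MAX_SIDE = 4096
src_dir, dst_dir = Path("raw_images"), Path("processed")
dst_dir.mkdir(exist_ok=True)

for path in src_dir.iterdir():
    if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
        continue  # only jpg and png images are considered
    with Image.open(path) as im:
        im.load()  # verify the image actually decodes
        if max(im.size) > MAX_SIDE:
            # clamp width/height to (0, 4096]px, preserving aspect ratio
            im.thumbnail((MAX_SIDE, MAX_SIDE))
        # drop alpha/palette data and convert to TrueColor RGB
        rgb = im.convert("RGB")
        # save everything as jpg
        rgb.save(dst_dir / f"{path.stem}.jpg", format="JPEG", quality=95)
```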
## Tags
For a comprehensive list of tags and counts, [see here](https://huggingface.co/datasets/hearmeneigh/e621-rising-v1-curated/raw/main/meta/tag-counts.json).
### Changes From E621
* Symbols have been prefixed with `symbol:`, e.g. `symbol:<3`
* Aspect ratio has been prefixed with `aspect_ratio:`, e.g. `aspect_ratio:16_9`
* All categories except `general` have been prefixed with the category name, e.g. `artist:somename`. The categories are:
* `artist`
* `copyright`
* `character`
* `species`
* `invalid`
* `meta`
* `lore`
### Additional Tags
* Image rating
* `rating:explicit`
* `rating:questionable`
* `rating:safe` | hearmeneigh/e621-rising-v1-curated | [
"size_categories:100K<n<1M",
"not-for-all-audiences",
"region:us"
]
| 2023-01-15T06:11:18+00:00 | {"size_categories": ["100K<n<1M"], "pretty_name": "E621 Rising: Curated Image Dataset v1", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 192529551170.037, "num_examples": 441623}], "download_size": 190109066617, "dataset_size": 192529551170.037}, "viewer": false, "tags": ["not-for-all-audiences"]} | 2023-10-09T17:56:31+00:00 |
49d8d456e29bfc46b6886eeaffed3795b58b1adf | # Dataset Card for "copy_dataset_trimmed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sanjin7/copy_dataset_trimmed | [
"region:us"
]
| 2023-01-15T06:30:47+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "text_clean", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "only_emojis", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 23566873, "num_examples": 46055}, {"name": "test", "num_bytes": 3195980, "num_examples": 6021}, {"name": "val", "num_bytes": 4095174, "num_examples": 8128}], "download_size": 21524666, "dataset_size": 30858027}} | 2023-01-16T16:35:57+00:00 |
00d5759f4e9d8824614940c68d5d33ff0e5414d3 | sayakpaul/sample-datasets | [
"license:apache-2.0",
"region:us"
]
| 2023-01-15T07:09:08+00:00 | {"license": "apache-2.0"} | 2024-02-11T04:53:45+00:00 |
|
29ce7633e32a291c7a89009fba542691a585475c | # Dataset Card for "copy_dataset_primaries"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sanjin7/copy_dataset_primaries | [
"region:us"
]
| 2023-01-15T07:18:57+00:00 | {"dataset_info": {"features": [{"name": "value", "sequence": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 254708030, "num_examples": 586243}], "download_size": 21073974, "dataset_size": 254708030}} | 2023-01-16T16:27:54+00:00 |
f4934776f0c4347b4375569a21e676190c8bfece | # AutoTrain Dataset for project: soft-tissue-tumor-species
## Dataset Description
This dataset has been automatically processed by AutoTrain for project bone-tumor-species.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<512x512 RGB PIL image>",
"target": 16
},
{
"image": "<512x512 RGB PIL image>",
"target": 29
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Adipose Tissue', 'Alveolar Rhabdomyosarcoma', 'Alveolar Soft Part Sarcoma', 'Angioleiomyoma', 'Angiosarcoma', 'Clear Cell Sarcoma', 'Dedifferentiated Liposarcoma', 'Dense Connective Tissue', 'Dermatofibrosarcoma Protuberans', 'Desmoplastic Small Round Cell Tumor', 'Elastic Connective Tissue', 'Elastofibroma', 'Embryonal Rhabdomyosarcoma', 'Epithelioid Hemangioendothelioma', 'Epithelioid Sarcoma', 'Extraskeletal Myxoid Chondrosarcoma', 'Fibrocartilage', 'Fibroma (of Tendon Sheath)', 'Fibromatosis', 'Fibrosarcoma', 'Fibrous Histiocytoma', 'Glomus Tumor', 'Granular Cell Tumor', 'Hemangioma', 'Heterotopic Ossification (Myositis Ossificans)', 'Hibernoma', 'Hyaline Cartilage', 'Inflammatory Myofibroblastic Tumor', 'Kaposi Sarcoma', 'Leiomyosarcoma', 'Lipoblastoma', 'Lipoma', 'Loose Connective Tissue', 'Low Grade Fibromyxoid Sarcoma', 'Malignant Peripheral Nerve Sheath Tumor', 'Myopericytoma', 'Myxofibrosarcoma', 'Myxoid Liposarcoma', 'Neurofibroma', 'Nodular Fasciitis', 'Perineurioma', 'Proliferative Fasciitis', 'Rhabdomyoma', 'Schwannoma', 'Sclerosing Epithelioid Fibrosarcoma', 'Skeletal Muscle', 'Solitary Fibrous Tumor', 'Spindle Cell Lipoma', 'Synovial Sarcoma', 'Tenosynovial Giant Cell Tumor', 'Tumoral Calcinosis', 'Undifferentiated Pleiomorphic Sarcoma'], id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 6268 |
| valid | 1570 |
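For a quick look at the processed data, the splits can be loaded from the Hub (a minimal sketch; it assumes the images and class labels are exposed under the field names listed above):
```python
from datasets import load_dataset

ds = load_dataset("itslogannye/softTissueTumorousLesions")

example = ds["train"][0]
image = example["image"]  # 512x512 RGB PIL image
label_name = ds["train"].features["target"].int2str(example["target"])
print(image.size, label_name)
```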
| itslogannye/softTissueTumorousLesions | [
"task_categories:image-classification",
"region:us"
]
| 2023-01-15T08:31:08+00:00 | {"task_categories": ["image-classification"]} | 2023-01-15T09:05:06+00:00 |
31f4a29fd16f130c75be983ed9b61aef629ace44 | # Dataset Card for "wikisource-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Zombely/wikisource-small | [
"region:us"
]
| 2023-01-15T09:28:13+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24302805827.009, "num_examples": 15549}], "download_size": 19231095073, "dataset_size": 24302805827.009}} | 2023-01-15T18:48:01+00:00 |
b286c1ab71508a8a54cc6e984fbae20ee5e5784c | # Dataset Card for HateBR - Offensive Language and Hate Speech Dataset in Brazilian Portuguese
## Dataset Description
- **Homepage:** http://143.107.183.175:14581/
- **Repository:** https://github.com/franciellevargas/HateBR
- **Paper:** https://aclanthology.org/2022.lrec-1.777/
- **Leaderboard:**
- **Point of Contact:** https://franciellevargas.github.io/
### Dataset Summary
HateBR is the first large-scale, expert-annotated corpus of Brazilian Instagram comments for hate speech and offensive language detection on the web and social media. The HateBR corpus was collected from Instagram comments on Brazilian politicians' accounts and manually annotated by specialists. It is composed of 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive comments), offensiveness level (highly, moderately, and slightly offensive messages), and nine hate speech groups (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology for the dictatorship, antisemitism, and fatphobia). Each comment was annotated by three different annotators, achieving high inter-annotator agreement. Furthermore, baseline experiments were implemented, reaching an F1-score of 85% and outperforming current literature models for the Portuguese language. Accordingly, we hope that the proposed expert-annotated corpus may foster research on hate speech and offensive language detection in the Natural Language Processing area.
**Relevant Links:**
* [**Demo: Brasil Sem Ódio**](http://143.107.183.175:14581/)
* [**MOL - Multilingual Offensive Lexicon Annotated with Contextual Information**](https://github.com/franciellevargas/MOL)
### Supported Tasks and Leaderboards
Hate Speech Detection
### Languages
Portuguese
## Dataset Structure
### Data Instances
```
{'instagram_comments': 'Hipocrita!!',
'offensive_language': True,
'offensiveness_levels': 2,
'antisemitism': False,
'apology_for_the_dictatorship': False,
'fatphobia': False,
'homophobia': False,
'partyism': False,
'racism': False,
'religious_intolerance': False,
'sexism': False,
'xenophobia': False,
'offensive_&_non-hate_speech': True,
'non-offensive': False,
'specialist_1_hate_speech': False,
'specialist_2_hate_speech': False,
'specialist_3_hate_speech': False
}
```
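For reference, the corpus and its proposed splits can be loaded directly from the Hub (a minimal sketch using the standard `datasets` API):
```python
from datasets import load_dataset

hatebr = load_dataset("ruanchaves/hatebr")

row = hatebr["train"][0]
print(row["instagram_comments"])
print(row["offensive_language"], row["offensiveness_levels"])
print({split: hatebr[split].num_rows for split in hatebr})
```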
### Data Fields
* **instagram_comments**: Instagram comments.
* **offensive_language**: A classification of comments as either offensive (True) or non-offensive (False).
* **offensiveness_levels**: A classification of comments based on their level of offensiveness, including highly offensive (3), moderately offensive (2), slightly offensive (1) and non-offensive (0).
* **antisemitism**: A classification of whether or not the comment contains antisemitic language.
* **apology_for_the_dictatorship**: A classification of whether or not the comment praises the military dictatorship period in Brazil.
* **fatphobia**: A classification of whether or not the comment contains language that promotes fatphobia.
* **homophobia**: A classification of whether or not the comment contains language that promotes homophobia.
* **partyism**: A classification of whether or not the comment contains language that promotes partyism.
* **racism**: A classification of whether or not the comment contains racist language.
* **religious_intolerance**: A classification of whether or not the comment contains language that promotes religious intolerance.
* **sexism**: A classification of whether or not the comment contains sexist language.
* **xenophobia**: A classification of whether or not the comment contains language that promotes xenophobia.
* **offensive_&_non-hate_speech**: A classification of whether or not the comment is offensive but does not contain hate speech.
* **non-offensive**: A classification of whether or not the comment is non-offensive.
* **specialist_1_hate_speech**: A classification of whether or not the comment was annotated by the first specialist as hate speech.
* **specialist_2_hate_speech**: A classification of whether or not the comment was annotated by the second specialist as hate speech.
* **specialist_3_hate_speech**: A classification of whether or not the comment was annotated by the third specialist as hate speech.
### Data Splits
The original authors of the dataset did not propose a standard data split. To address this, we use the [multi-label data stratification technique](http://scikit.ml/stratification.html) implemented in the scikit-multilearn library to propose a train-validation-test split. This method considers all hate speech classes in the data and attempts to balance the representation of each class in the split (see the sketch after the table below).
| name |train|validation|test|
|---------|----:|----:|----:|
|hatebr|4480|1120|1400|
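The sketch below illustrates how such an iteratively stratified split can be produced with scikit-multilearn. Treating the nine hate speech columns as the label matrix and the 80/20 proportions are assumptions made for illustration; the exact parameters behind the published split may differ:
```python
import numpy as np
from datasets import concatenate_datasets, load_dataset
from skmultilearn.model_selection import iterative_train_test_split

# Rebuild the full 7,000-comment corpus from the published splits
hatebr = load_dataset("ruanchaves/hatebr")
corpus = concatenate_datasets([hatebr["train"], hatebr["validation"], hatebr["test"]])

label_columns = [
    "antisemitism", "apology_for_the_dictatorship", "fatphobia", "homophobia",
    "partyism", "racism", "religious_intolerance", "sexism", "xenophobia",
]

X = np.arange(len(corpus)).reshape(-1, 1)  # row indices stand in for features
y = np.array([[int(row[col]) for col in label_columns] for row in corpus])

# Carve out a test fraction first, then split the rest into train/validation
X_rest, y_rest, X_test, y_test = iterative_train_test_split(X, y, test_size=0.2)
X_train, y_train, X_val, y_val = iterative_train_test_split(X_rest, y_rest, test_size=0.2)

train_set = corpus.select(X_train.ravel())
val_set = corpus.select(X_val.ravel())
test_set = corpus.select(X_test.ravel())
```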
## Considerations for Using the Data
### Discussion of Biases
Please refer to [the HateBR paper](https://aclanthology.org/2022.lrec-1.777/) for a discussion of biases.
### Licensing Information
The HateBR dataset, including all its components, is provided strictly for academic and research purposes. The use of the dataset for any commercial or non-academic purpose is expressly prohibited without the prior written consent of [SINCH](https://www.sinch.com/).
### Citation Information
```
@inproceedings{vargas2022hatebr,
title={HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments for Offensive Language and Hate Speech Detection},
author={Vargas, Francielle and Carvalho, Isabelle and de G{\'o}es, Fabiana Rodrigues and Pardo, Thiago and Benevenuto, Fabr{\'\i}cio},
booktitle={Proceedings of the Thirteenth Language Resources and Evaluation Conference},
pages={7174--7183},
year={2022}
}
```
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset. | ruanchaves/hatebr | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pt",
"instagram",
"doi:10.57967/hf/0274",
"region:us"
]
| 2023-01-15T11:11:33+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["pt"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "HateBR - Offensive Language and Hate Speech Dataset in Brazilian Portuguese", "tags": ["instagram"]} | 2023-04-13T12:39:40+00:00 |
f2e34fe16864d3c41ab7fb375e997f74c3f7aad2 |
## CysPresso
A machine learning approach to predict the recombinant expressibility of cysteine-dense peptides in mammalian cells based on their primary sequence, compatible with multiple types of protein representations generated by deep learning solutions.
## Associated paper
CysPresso: Prediction of cysteine-dense peptide expression in mammalian cells using deep learning protein representations. BioRxiv link: https://www.biorxiv.org/content/10.1101/2022.09.17.508377v1
## Code
The CysPresso repo can be found at https://github.com/Zebreu/cyspresso/
---
license: mit
---
| TonyKYLim/CysPresso | [
"doi:10.57967/hf/0628",
"region:us"
]
| 2023-01-15T13:52:23+00:00 | {} | 2023-03-04T22:59:20+00:00 |
8b1e8a7a17eb9162d1a9e70ff834209d2e2bf9f8 | EvSz/Pokemon-by-Name-512px | [
"license:mit",
"region:us"
]
| 2023-01-15T14:01:08+00:00 | {"license": "mit"} | 2023-01-15T14:09:51+00:00 |
|
1822b80aa21684f24907e6818cbc7f665ef2b9d1 | # Dataset Card for "beautiful_interesting_spectacular_photo_model_30000_with_generated_captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_interesting_spectacular_photo_model_30000_with_generated_captions | [
"region:us"
]
| 2023-01-15T14:08:05+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}, {"name": "generated_caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 120069364.0, "num_examples": 228}], "download_size": 120060100, "dataset_size": 120069364.0}} | 2023-01-17T18:00:09+00:00 |
ca73c2889a0c6cb3b7493d56da6e73f6a2229d77 | # Dataset Card for "portraits3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | conorcl/portraits3 | [
"region:us"
]
| 2023-01-15T15:46:20+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 35873206.596, "num_examples": 1343}], "download_size": 35191726, "dataset_size": 35873206.596}} | 2023-01-16T22:45:08+00:00 |
501f1909b6c1ff30926d991b94539f4c58165cc7 |
# Dataset Card for OpenSubtitles
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/OpenSubtitles.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2016/pdf/62_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
This dataset is a subset of the en-nl open_subtitles dataset.
It contains only subtitles of TV shows that have a rating of at least 8.0 and at least 1000 votes.
The subtitles are ordered and appended into buffers of several lengths, with a maximum of 370 tokens
as tokenized by the 'yhavinga/ul2-base-dutch' tokenizer.
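A minimal loading sketch (it assumes a default configuration can be loaded with `load_dataset`; the column names are not documented here, so inspect the first row to see what is available):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("yhavinga/open_subtitles_en_nl", split="train")
print(ds.column_names)
print(ds[0])

# The buffers were built against this tokenizer with a 370-token budget
tokenizer = AutoTokenizer.from_pretrained("yhavinga/ul2-base-dutch")
```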
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages in the dataset are:
- en
- nl
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding the open_subtitles dataset.
| yhavinga/open_subtitles_en_nl | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"size_categories:1M<n<10M",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"language:nl",
"license:unknown",
"region:us"
]
| 2023-01-15T16:48:34+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en", "nl"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K", "1M<n<10M", "n<1K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "OpenSubtitles En Nl"} | 2023-01-15T17:02:32+00:00 |
248f2ec6df3f7de5244c3719ce74f26159d6dddd | # Dataset Card for "my_section_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Rami/my_section_5 | [
"region:us"
]
| 2023-01-15T17:31:12+00:00 | {"dataset_info": {"features": [{"name": "body", "dtype": "string"}, {"name": "question_id", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "meta_data", "struct": [{"name": "AcceptedAnswerId", "dtype": "string"}, {"name": "CommentCount", "dtype": "string"}, {"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "Tags", "sequence": "string"}, {"name": "Title", "dtype": "string"}]}, {"name": "answer", "struct": [{"name": "body", "dtype": "string"}, {"name": "comments", "list": [{"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "body", "dtype": "string"}]}, {"name": "meta_data", "struct": [{"name": "CommentCount", "dtype": "string"}, {"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "ParentId", "dtype": "string"}, {"name": "Score", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 557588, "num_examples": 71}], "download_size": 236408, "dataset_size": 557588}} | 2023-01-21T18:07:36+00:00 |
45e4439f9e52be06dd302c85636ecfc71e53172b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: ilos-vigil/bigbird-small-indonesian-nli
* Dataset: indonli
* Config: indonli
* Split: test_expert
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ilos-vigil](https://huggingface.co/ilos-vigil) for evaluating this model. | autoevaluate/autoeval-eval-indonli-indonli-42cf53-2902084628 | [
"autotrain",
"evaluation",
"region:us"
]
| 2023-01-15T18:37:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["indonli"], "eval_info": {"task": "natural_language_inference", "model": "ilos-vigil/bigbird-small-indonesian-nli", "metrics": [], "dataset_name": "indonli", "dataset_config": "indonli", "dataset_split": "test_expert", "col_mapping": {"text1": "premise", "text2": "hypothesis", "target": "label"}}} | 2023-01-15T18:38:38+00:00 |
2f3f894574938ff122f1f8d6be289897c337c37c |
<div align="center">
<img width="640" alt="keremberke/pothole-segmentation" src="https://huggingface.co/datasets/keremberke/pothole-segmentation/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['pothole']
```
### Number of Images
```json
{'test': 5, 'train': 80, 'valid': 5}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/pothole-segmentation", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/imacs-pothole-detection-wo8mu/pothole-detection-irkz9/dataset/4](https://universe.roboflow.com/imacs-pothole-detection-wo8mu/pothole-detection-irkz9/dataset/4?ref=roboflow2huggingface)
### Citation
```
@misc{ pothole-detection-irkz9_dataset,
title = { Pothole Detection Dataset },
type = { Open Source Dataset },
author = { IMACS Pothole Detection },
howpublished = { \\url{ https://universe.roboflow.com/imacs-pothole-detection-wo8mu/pothole-detection-irkz9 } },
url = { https://universe.roboflow.com/imacs-pothole-detection-wo8mu/pothole-detection-irkz9 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { jan },
note = { visited on 2023-01-15 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 15, 2023 at 6:38 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 90 images.
Potholes are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
| keremberke/pothole-segmentation | [
"task_categories:image-segmentation",
"roboflow",
"roboflow2huggingface",
"Construction",
"Self Driving",
"Transportation",
"Damage Risk",
"region:us"
]
| 2023-01-15T18:38:37+00:00 | {"task_categories": ["image-segmentation"], "tags": ["roboflow", "roboflow2huggingface", "Construction", "Self Driving", "Transportation", "Damage Risk"]} | 2023-01-15T18:38:49+00:00 |
a0b014ffa0bf56b0a490676d298b3d73ca52b8d6 |
<div align="center">
<img width="640" alt="keremberke/pokemon-classification" src="https://huggingface.co/datasets/keremberke/pokemon-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['Porygon', 'Goldeen', 'Hitmonlee', 'Hitmonchan', 'Gloom', 'Aerodactyl', 'Mankey', 'Seadra', 'Gengar', 'Venonat', 'Articuno', 'Seaking', 'Dugtrio', 'Machop', 'Jynx', 'Oddish', 'Dodrio', 'Dragonair', 'Weedle', 'Golduck', 'Flareon', 'Krabby', 'Parasect', 'Ninetales', 'Nidoqueen', 'Kabutops', 'Drowzee', 'Caterpie', 'Jigglypuff', 'Machamp', 'Clefairy', 'Kangaskhan', 'Dragonite', 'Weepinbell', 'Fearow', 'Bellsprout', 'Grimer', 'Nidorina', 'Staryu', 'Horsea', 'Electabuzz', 'Dratini', 'Machoke', 'Magnemite', 'Squirtle', 'Gyarados', 'Pidgeot', 'Bulbasaur', 'Nidoking', 'Golem', 'Dewgong', 'Moltres', 'Zapdos', 'Poliwrath', 'Vulpix', 'Beedrill', 'Charmander', 'Abra', 'Zubat', 'Golbat', 'Wigglytuff', 'Charizard', 'Slowpoke', 'Poliwag', 'Tentacruel', 'Rhyhorn', 'Onix', 'Butterfree', 'Exeggcute', 'Sandslash', 'Pinsir', 'Rattata', 'Growlithe', 'Haunter', 'Pidgey', 'Ditto', 'Farfetchd', 'Pikachu', 'Raticate', 'Wartortle', 'Vaporeon', 'Cloyster', 'Hypno', 'Arbok', 'Metapod', 'Tangela', 'Kingler', 'Exeggutor', 'Kadabra', 'Seel', 'Voltorb', 'Chansey', 'Venomoth', 'Ponyta', 'Vileplume', 'Koffing', 'Blastoise', 'Tentacool', 'Lickitung', 'Paras', 'Clefable', 'Cubone', 'Marowak', 'Nidorino', 'Jolteon', 'Muk', 'Magikarp', 'Slowbro', 'Tauros', 'Kabuto', 'Spearow', 'Sandshrew', 'Eevee', 'Kakuna', 'Omastar', 'Ekans', 'Geodude', 'Magmar', 'Snorlax', 'Meowth', 'Pidgeotto', 'Venusaur', 'Persian', 'Rhydon', 'Starmie', 'Charmeleon', 'Lapras', 'Alakazam', 'Graveler', 'Psyduck', 'Rapidash', 'Doduo', 'Magneton', 'Arcanine', 'Electrode', 'Omanyte', 'Poliwhirl', 'Mew', 'Alolan Sandslash', 'Mewtwo', 'Weezing', 'Gastly', 'Victreebel', 'Ivysaur', 'MrMime', 'Shellder', 'Scyther', 'Diglett', 'Primeape', 'Raichu']
```
### Number of Images
```json
{'train': 4869, 'valid': 1390, 'test': 732}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/pokemon-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/robert-demo-qvail/pokedex/dataset/14](https://universe.roboflow.com/robert-demo-qvail/pokedex/dataset/14?ref=roboflow2huggingface)
### Citation
```
@misc{ pokedex_dataset,
title = { Pokedex Dataset },
type = { Open Source Dataset },
author = { Lance Zhang },
howpublished = { \\url{ https://universe.roboflow.com/robert-demo-qvail/pokedex } },
url = { https://universe.roboflow.com/robert-demo-qvail/pokedex },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-14 },
}
```
### License
Public Domain
### Dataset Summary
This dataset was exported via roboflow.com on December 20, 2022 at 5:34 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 6991 images.
Pokemon are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 224x224 (Fit (black edges))
No image augmentation techniques were applied.
| keremberke/pokemon-classification | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"Gaming",
"region:us"
]
| 2023-01-15T18:40:15+00:00 | {"task_categories": ["image-classification"], "tags": ["roboflow", "roboflow2huggingface", "Gaming"]} | 2023-01-15T18:41:29+00:00 |
3197466466cc55f621470bf6f3b801f5126a37d3 | kaliansh/oneapi | [
"license:unknown",
"region:us"
]
| 2023-01-15T19:01:07+00:00 | {"license": "unknown"} | 2024-02-14T13:47:28+00:00 |
|
d85cd40e7fcc63db6f6a3a2d509df4006e4a9ecc | The data comes from tweets collected and classified through Crowdbreaks.org [Muller, Martin M., and Marcel Salathe. "Crowdbreaks: Tracking Health Trends Using Public Social Media Data and Crowdsourcing." Frontiers in public health 7 (2019).]. Tweets have been classified as pro-vaccine (1), neutral (0) or anti-vaccine (-1). | allevelly/dataset | [
"license:creativeml-openrail-m",
"region:us"
]
| 2023-01-15T19:30:56+00:00 | {"license": "creativeml-openrail-m"} | 2023-01-15T19:35:20+00:00 |
e987f0f12e99e9d25aea1c3bcaa21394282864b2 | **Warning: THIS dataset is NOT suitable for use by minors. The dataset contains X-rated/NSFW content.**
# E621 Rising: Mini Image Dataset v1
**9,999** images (~4GB) downloaded from `e621.net` with [tags](https://huggingface.co/datasets/hearmeneigh/e621-rising-v1-curated/raw/main/meta/tag-counts.json).
This is a small sample of the E621 Rising: Raw Dataset [available here](https://huggingface.co/datasets/hearmeneigh/e621-rising-v1-raw).
## Image Processing
* Only `jpg` and `png` images were considered
* Image width and height have been clamped to `(0, 4096]px`; larger images have been resized to meet the limit
* Alpha channels have been removed
* All images have been converted to `jpg` format
* All images have been converted to TrueColor `RGB`
* All images have been verified to load with `Pillow`
* Metadata from E621 is [available here](https://huggingface.co/datasets/hearmeneigh/e621-rising-v1-raw/tree/main/meta) | hearmeneigh/e621-rising-v1-mini | [
"size_categories:1K<n<10K",
"not-for-all-audiences",
"region:us"
]
| 2023-01-15T21:05:19+00:00 | {"size_categories": ["1K<n<10K"], "pretty_name": "E621 Rising: Mini Image Dataset v1", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4051563749.765, "num_examples": 9999}], "download_size": 3979423376, "dataset_size": 4051563749.765}, "viewer": false, "tags": ["not-for-all-audiences"]} | 2023-05-12T15:35:30+00:00 |
53c84cec7a38405b70f678eadf2352739782330f | Sawera/CMP_Fascade_Dataset | [
"license:unknown",
"region:us"
]
| 2023-01-15T22:15:42+00:00 | {"license": "unknown"} | 2023-01-15T22:41:53+00:00 |
|
9f7b568454c0fc27942cc932d37538cbabbfa725 |
# Hand-picked class images:
`mai.class.768`: **contains most of the images from the datasets below, not including animefull**
- 1082 hand-picked images containing at least `1girl`, generated by various finetuned models
- other inputs include `cowboy shot`, `a clean illustration of`, `best quality`, etc
`mk11_mixed_1girl_clip1_768.zip`: 893 images;
- mk11_last + some similar ones (mk9, mk7, mk12f, etc); **clip1**;
- various sampler/cfg/steps; with/without hires fix
- **manually picked**
```
1girl, (best quality), by sks
Negative prompt: nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 28, Sampler: DDIM, CFG scale: 6.5, Seed: 1049498024, Size: 768x768, Model hash: e02601f3, Denoising strength: 0.7, First pass size: 384x384
a clean illustration of 1girl, (best quality), by sks
Negative prompt: nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 28, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3047399039, Size: 768x768, Model hash: e02601f3, Model: tmp_models_miko11_last, Batch size: 2, Batch pos: 0, Denoising strength: 0.7, First pass size: 384x384
a clean illustration of 1girl, (best quality), cowboy shot, by sks
Negative prompt: nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 25, Sampler: Euler a, CFG scale: 7.5, Seed: 4047034818, Size: 768x768, Model hash: e02601f3, Denoising strength: 0.7, First pass size: 384x384
```
<br>
`NOTmk11_mixed_clip1_768.zip`: 141 images; **manually picked**
- images that look good, possibly from evt_v2, evt_v3, gd series, claus series, etc
- cl17_miko9_040; CMA10hf3_mk12f_cl17_03(d0c); d0c_nice_035_clip1; evtv3_clip1; mk11_cl11_030; mk11f.class.768.clip2; mk12f; others
```
a clean illustration of 1girl, (best quality), by sks
Negative prompt: nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 28, Sampler: DDIM, CFG scale: 6.5, Seed: 3011937418, Size: 768x768, Model hash: e02601f3, Denoising strength: 0.7, First pass size: 384x384
a clean illustration of 1girl, (best quality), by sks
Negative prompt: nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 28, Sampler: DDIM, CFG scale: 6.5, Seed: 3755499482, Size: 768x768, Model hash: 2a535ddd, Denoising strength: 0.7, First pass size: 384x384
```
<br>
`mk11_bqsks_1girl_clip2_768`: 236 images; mk11_last.ckpt
```
1girl, (best quality), by sks
Negative prompt: nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 28, Sampler: DDIM, CFG scale: 6.5, Seed: 3053897408, Size: 768x768, Model hash: e02601f3, Clip skip: 2
```
<br>
<br>
# Manually-inspected:
`cropped_hands.512.class`: 5958 images of cropped hands from [anime crop dataset](https://www.gwern.net/Crops#hand-model)
- inspected & removed most of the non-hand images
- upscaled to 512x512
<br>
<br>
# Auto-generated:
Class images generated previously:
`animefull_1girl_clip2_512.zip`: 746 images
```
1girl
Steps: 35, Sampler: DDIM, CFG scale: 7, Seed: 5109255, Size: 512x512, Model hash: e6e8e1fc, Clip skip: 2
```
<br>
`animefull_mabq_1girl_clip2_512.zip`: 102 images
```
masterpiece, best quality, 1girl
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 25, Sampler: DDIM, CFG scale: 7, Seed: 2653130834, Size: 512x512, Model hash: e6e8e1fc, Clip skip: 2
```
<br>
| trojblue/RegImages | [
"license:openrail",
"region:us"
]
| 2023-01-16T00:50:10+00:00 | {"license": "openrail"} | 2023-03-04T18:19:38+00:00 |
0a447f67c7e9b313646577a233cfdf54f87d5cbf | mepmepmep/outline | [
"license:afl-3.0",
"doi:10.57967/hf/0277",
"region:us"
]
| 2023-01-16T01:12:01+00:00 | {"license": "afl-3.0"} | 2023-01-16T01:39:32+00:00 |
|
271eef3bfe83ae04b2feadc47a041b151392edd5 |
This dataset is derived from the RICO SCA dataset presented by Google Research in the seq2act paper. It is a synthetically generated dataset for the UI RefExp task.
See original repo for details and licensing info:
https://github.com/google-research/google-research/blob/master/seq2act/data_generation/README.md#generate-ricosca-dataset
The splits in this dataset are consistent with the splits in the crowdsourced [UIBert RefExp](https://huggingface.co/datasets/ivelin/ui_refexp_saved) dataset. Training examples do not include any images that appear in the Validation or Test splits of UIBert RefExp, and the Validation and Test splits here use the same images as the corresponding UIBert RefExp splits.
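A minimal loading sketch (the configuration name `rico_sca_refexp` and the field names are taken from the dataset metadata below; treat them as assumptions rather than documented usage):

```python
from datasets import load_dataset

# config name taken from the dataset_info metadata
ds = load_dataset("ivelin/rico_sca_refexp_synthetic", "rico_sca_refexp")

sample = ds["train"][0]
print(sample["image_id"])   # screen identifier
print(sample["labels"])     # list of {prompt, target_bounding_box} annotations
```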
| ivelin/rico_sca_refexp_synthetic | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
]
| 2023-01-16T01:18:23+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "pretty_name": "RICO SCA RefExp", "dataset_info": [{"config_name": "rico_sca_refexp", "features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "string"}, {"name": "labels", "list": [{"name": "prompt", "dtype": "string"}, {"name": "target_bounding_box", "struct": [{"name": "xmin", "dtype": "float32"}, {"name": "ymin", "dtype": "float32"}, {"name": "xmax", "dtype": "float32"}, {"name": "ymax", "dtype": "float32"}]}]}], "splits": [{"name": "train", "num_bytes": 2605508469, "num_examples": 24063}, {"name": "validation", "num_bytes": 21192787, "num_examples": 160}, {"name": "test", "num_bytes": 22057836, "num_examples": 185}], "download_size": 6514703641, "dataset_size": 2605508469}]} | 2023-01-19T20:11:53+00:00 |
ed2ac81fd8ca23e630c0877bf6e0363ffdba9a11 | This dataset consists of 11,876 question-answer pairs for chatbot training.
https://github.com/songys/Chatbot_data
---
dataset_info:
features:
- name: index
dtype: int64
- name: Q
dtype: string
- name: A
dtype: string
splits:
- name: train
num_bytes: 773618
num_examples: 9465
- name: test
num_bytes: 246115
num_examples: 2358
download_size: 557106
dataset_size: 1019733
---
# Dataset Card for "chatbot_emotion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jeongah/chatbot_emotion | [
"region:us"
]
| 2023-01-16T02:55:04+00:00 | {"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "Q", "dtype": "string"}, {"name": "A", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 773618, "num_examples": 9465}, {"name": "test", "num_bytes": 246115, "num_examples": 2358}], "download_size": 557106, "dataset_size": 1019733}} | 2023-01-16T04:29:58+00:00 |
cd1ce056c56bc99a409641bd85551d1b0e407e4a | 1983dlgustn/anything | [
"license:openrail",
"region:us"
]
| 2023-01-16T03:22:23+00:00 | {"license": "openrail"} | 2023-01-16T03:22:23+00:00 |
|
1cb382c54fe39823c40f4899760f870bdf78d714 | # Dataset Card for "speech2text2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | qbaro/speech2text2 | [
"region:us"
]
| 2023-01-16T04:13:31+00:00 | {"dataset_info": {"features": [{"name": "file_name", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float32"}, {"name": "sampling_rate", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 2091887413, "num_examples": 2994}, {"name": "valid", "num_bytes": 275249571, "num_examples": 361}], "download_size": 2351520332, "dataset_size": 2367136984}} | 2023-01-16T04:26:26+00:00 |
3d6ef22dc618e0e6bf7432dfa86cc1a233bb443c | mistermpo2/slot-gacor | [
"license:bigscience-openrail-m",
"region:us"
]
| 2023-01-16T04:14:03+00:00 | {"license": "bigscience-openrail-m"} | 2023-01-16T04:15:15+00:00 |
|
ba85b94a6d2d72e7a84cbf54566ad4d77b04dd9d | ppppssss/human | [
"license:afl-3.0",
"region:us"
]
| 2023-01-16T05:08:24+00:00 | {"license": "afl-3.0"} | 2023-01-16T05:48:40+00:00 |
|
7b60795487a35accda5ba59a3cfe1dfde7acd1e7 | The dataset consists of 4,452 train sentences and 1,113 test sentences.
Sentences containing profanity are labeled with a spam value of 1, and sentences without profanity are labeled 0.
https://github.com/2runo/Curse-detection-data
---
dataset_info:
features:
- name: index
dtype: int64
- name: sentence
dtype: string
- name: ' spam'
dtype: int64
splits:
- name: train
num_bytes: 429333
num_examples: 4452
- name: test
num_bytes: 106670
num_examples: 1113
download_size: 364457
dataset_size: 536003
---
# Dataset Card for "curse-detection-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jeongah/curse-detection-data | [
"region:us"
]
| 2023-01-16T05:27:47+00:00 | {"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "document", "dtype": "string"}, {"name": " label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 429333, "num_examples": 4452}, {"name": "test", "num_bytes": 106670, "num_examples": 1113}], "download_size": 364473, "dataset_size": 536003}} | 2023-01-16T06:41:20+00:00 |
fade264de95d3d5197648408125eddf319a14faf | Codebmk/opus_ubuntu_lg_to_en | [
"task_categories:translation",
"language:lg",
"language:en",
"license:bsd-3-clause",
"region:us"
]
| 2023-01-16T06:24:34+00:00 | {"language": ["lg", "en"], "license": "bsd-3-clause", "task_categories": ["translation"]} | 2023-01-16T06:32:39+00:00 |
|
341735d2902a73423a6cf145ac6759eb36d64e34 | # Dataset Card for "artfaces"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jlbaker361/artfaces | [
"region:us"
]
| 2023-01-16T07:21:04+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "style", "dtype": "string"}, {"name": "src_image", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 65636859.275, "num_examples": 30163}], "download_size": 51043102, "dataset_size": 65636859.275}} | 2023-01-16T07:21:38+00:00 |
23935f59573d24083168480d48aff51cbb0408b3 | # AutoTrain Dataset for project: consunmer-complain-multiclass-classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project consunmer-complain-multiclass-classification.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_Unnamed: 0": null,
"text": "This is awful and borderline abuse. I can't imagine thinking that's even slightly okay",
"target": 5
},
{
"feat_Unnamed: 0": null,
"text": "i didnt feel so hot",
"target": 3
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_Unnamed: 0": "Value(dtype='int64', id=None)",
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['0', '1', '2', '3', '4', '5'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 20663 |
| valid | 5167 |
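
A hedged loading sketch (that the repo loads directly with `datasets` and exposes `train`/`valid` splits is an assumption based on the split table above):

```python
from datasets import load_dataset

# hypothetical direct load of the AutoTrain data repo
ds = load_dataset("harperlucy2023/autotrain-data-consunmer-complain-multiclass-classification")

print(ds)                                    # expected splits: train / valid
print(ds["train"].features["target"].names)  # class labels '0'..'5'
```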
| harperlucy2023/autotrain-data-consunmer-complain-multiclass-classification | [
"task_categories:text-classification",
"language:en",
"region:us"
]
| 2023-01-16T09:25:28+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2023-01-16T09:45:42+00:00 |
860480b540ed84407c32ae6deb2b92bfba2aad58 | furiousteabag/squad_v2_counterfactual | [
"license:cc-by-sa-4.0",
"region:us"
]
| 2023-01-16T09:27:47+00:00 | {"license": "cc-by-sa-4.0"} | 2023-01-16T09:27:47+00:00 |
|
9ac3f38d8356e09cf547fbd034de4c21d56c51f1 | tariktalhadinc/testdataset | [
"license:openrail",
"region:us"
]
| 2023-01-16T09:51:59+00:00 | {"license": "openrail"} | 2023-01-16T09:51:59+00:00 |
|
f366e69fc822e3bfc75cc6666ea4883f986ce3da | # Dataset Card for "PickaPic-selected-prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/PickaPic-selected-prompts | [
"region:us"
]
| 2023-01-16T09:59:33+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10527, "num_examples": 200}], "download_size": 0, "dataset_size": 10527}} | 2023-01-17T16:01:53+00:00 |
4738131d40903d0576531a93bc000888c78c045d |
# Dataset Card for LIFD Magnetic Field Data
You will need the ChaosMagPy package: https://chaosmagpy.readthedocs.io/en/master/
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LIFD DataSets homepage](https://cemac.github.io/LIFD_ML_Datasets/)
- **Repository:** [LIFD GitHub Repo](https://github.com/cemac/LIFD_ML_Datasets/)
- **Point of Contact:** [*coming soon*]()
### Dataset Summary
A description of the dataset:
The gufm1 model is a global geomagnetic model based on spherical harmonics, covering the period 1590 - 1990, and is described in the publication:
[Andrew Jackson, Art R. T. Jonkers and Matthew R. Walker (2000), “Four centuries of geomagnetic secular variation from historical records”, Phil. Trans. R. Soc. A, 358, 957–990, http://doi.org/10.1098/rsta.2000.0569](https://royalsocietypublishing.org/doi/10.1098/rsta.2000.0569)
### Supported Tasks and Leaderboards
### Data Fields
The dataset has dimension (181, 361, 401) whose axes represent co-latitude, longitude, time, and whose values are the radial magnetic field at the core-mantle boundary (radius 3485km) in nT.
The colatitude takes values (in degrees): 0,1,2,3,…180; longitude (degrees) takes values -180,-179,….180; and time is yearly 1590, 1591, …1990.
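Given that layout, a value can be looked up with simple index arithmetic. A minimal sketch (the file name is hypothetical; only the axis conventions above are assumed):

```python
import numpy as np

br = np.load("lifd_radial_field.npy")   # hypothetical filename for the (181, 361, 401) array

colat_deg, lon_deg, year = 45, 30, 1900
i = colat_deg          # co-latitude axis: 0, 1, ..., 180 degrees
j = lon_deg + 180      # longitude axis: -180, ..., 180 degrees, so offset by 180
k = year - 1590        # time axis: yearly snapshots from 1590 to 1990

print(f"B_r at colatitude {colat_deg} deg, longitude {lon_deg} deg, {year}: {br[i, j, k]:.1f} nT")
```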
## Dataset Creation
The native model representation is converted into a discrete dataset in physical space and time, using the Python package [Chaosmagpy](https://chaosmagpy.readthedocs.io/en/master/)
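A rough sketch of how such a grid could be regenerated with ChaosMagPy is shown below; the loader name `load_gufm1_txtfile`, the `synth_coeffs` method, the `dyear_to_mjd` helper, and the time units are all assumptions to be checked against the ChaosMagPy documentation, not part of this dataset card:

```python
import numpy as np
from chaosmagpy.chaos import load_gufm1_txtfile    # assumed gufm1 loader
from chaosmagpy.data_utils import dyear_to_mjd     # assumed decimal-year -> MJD2000 helper
from chaosmagpy.model_utils import synth_values

model = load_gufm1_txtfile("gufm1")                # path to the gufm1 coefficient file (not included)

theta = np.arange(0.0, 181.0)                      # co-latitude, 0..180 degrees
phi = np.arange(-180.0, 181.0)                     # longitude, -180..180 degrees
phi_grid, theta_grid = np.meshgrid(phi, theta)     # (181, 361) grid

radius = 3485.0                                    # core-mantle boundary radius in km
coeffs = model.synth_coeffs(dyear_to_mjd(1990.0))  # Gauss coefficients at one epoch (assumed API)

Br, Btheta, Bphi = synth_values(coeffs, radius, theta_grid, phi_grid)
print(Br.shape)                                    # (181, 361) radial field in nT
```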
### Source Data
## Additional Information
### Dataset Curators
### Licensing Information
MIT Licence
### Citation Information
### Contributions
| cemachelen/LIFD_Magnetic_Field_Data | [
"task_categories:feature-extraction",
"task_categories:image-to-image",
"task_categories:time-series-forecasting",
"task_categories:object-detection",
"task_categories:unconditional-image-generation",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:gufm1 model",
"language:en",
"license:mit",
"region:us"
]
| 2023-01-16T10:43:30+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["other"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["gufm1 model"], "task_categories": ["feature-extraction", "image-to-image", "time-series-forecasting", "object-detection", "unconditional-image-generation"], "task_ids": ["multivariate-time-series-forecasting"], "pretty_name": "LIFD Magnetic Fields", "tags": []} | 2023-12-04T10:19:32+00:00 |
7c5d6696130c34f25a04d4d4459ea9a91249b169 | justram/AToMiC-Qrels-v0.2-src | [
"size_categories:10M<n<100M",
"license:cc-by-4.0",
"region:us"
]
| 2023-01-16T11:26:15+00:00 | {"license": "cc-by-4.0", "size_categories": ["10M<n<100M"], "dataset_info": {"features": [{"name": "text_id", "dtype": "string"}, {"name": "Q0", "dtype": "string"}, {"name": "image_id", "dtype": "string"}, {"name": "rel", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 419016038.0, "num_examples": 5048386}, {"name": "validation", "num_bytes": 1587960.0, "num_examples": 18045}, {"name": "test", "num_bytes": 888060.0, "num_examples": 10830}]}} | 2023-01-20T10:54:46+00:00 |
|
a3f6e2a8c798e99cfe8e89b1acc41746dc5a6f11 | furgo/test_0 | [
"license:openrail",
"region:us"
]
| 2023-01-16T11:45:32+00:00 | {"license": "openrail"} | 2023-01-16T20:15:32+00:00 |
|
be211332f4ea671c2bc7918a43bac4aa74cb429a |
# Dataset Card for ScandiWiki
## Dataset Description
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:[email protected])
- **Total amount of disk used:** 4485.90 MB
### Dataset Summary
ScandiWiki is a parsed and deduplicated Wikipedia dump in Danish, Norwegian Bokmål,
Norwegian Nynorsk, Swedish, Icelandic and Faroese.
### Supported Tasks and Leaderboards
This dataset is intended for general language modelling.
### Languages
The dataset is available in Danish (`da`), Swedish (`sv`), Norwegian Bokmål (`nb`),
Norwegian Nynorsk (`nn`), Icelandic (`is`) and Faroese (`fo`).
## Dataset Structure
### Data Instances
- **Total amount of disk used:** 4485.90 MB
An example from the `train` split of the `fo` subset looks as follows.
```
{
'id': '3380',
'url': 'https://fo.wikipedia.org/wiki/Enk%C3%B6pings%20kommuna',
'title': 'Enköpings kommuna',
'text': 'Enköpings kommuna (svenskt: Enköpings kommun), er ein kommuna í Uppsala län í Svøríki. Enköpings kommuna hevur umleið 40.656 íbúgvar (2013).\n\nKeldur \n\nKommunur í Svøríki'
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `text`: a `string` feature.
### Data Subsets
| name | samples |
|----------|----------:|
| sv | 2,469,978 |
| nb | 596,593 |
| da | 287,216 |
| nn | 162,776 |
| is | 55,418 |
| fo | 12,582 |
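
Each subset can presumably be loaded by passing its language code as the configuration name (a sketch; verify the config names against the repository):

```python
from datasets import load_dataset

# assumes the subsets are exposed as configurations named by language code
da_wiki = load_dataset("alexandrainst/scandi-wiki", "da", split="train")
print(da_wiki[0]["title"])
print(da_wiki[0]["text"][:200])
```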
## Dataset Creation
### Curation Rationale
It takes quite a long time to parse the Wikipedia dump as well as to deduplicate it, so
this dataset is primarily for convenience.
### Source Data
The original data is from the [wikipedia
dataset](https://huggingface.co/datasets/wikipedia).
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY-SA 4.0
license](https://creativecommons.org/licenses/by-sa/4.0/), in accordance with the same
license of the [wikipedia dataset](https://huggingface.co/datasets/wikipedia).
| alexandrainst/scandi-wiki | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_categories:feature-extraction",
"task_ids:language-modeling",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:wikipedia",
"language:da",
"language:sv",
"language:no",
"language:nb",
"language:nn",
"language:is",
"language:fo",
"license:cc-by-sa-4.0",
"region:us"
]
| 2023-01-16T12:29:34+00:00 | {"language": ["da", "sv", false, "nb", "nn", "is", "fo"], "license": ["cc-by-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["wikipedia"], "task_categories": ["fill-mask", "text-generation", "feature-extraction"], "task_ids": ["language-modeling"], "pretty_name": "ScandiWiki"} | 2023-01-16T13:55:38+00:00 |
6d77cf32bc2906b848430bc8155f88dece2d1254 |
# hand.json
Metadata for 3,000 images matching the query "Hand", retrieved from Unsplash.
# portrait.json
Metadata for 10,000 images matching the query "Portrait", retrieved from Unsplash.
# pose.json
Metadata for 10,000 images matching the query "Pose", retrieved from Unsplash.
# Tool
- [unsplash-wizard](https://github.com/p1atdev/unsplash-wizard)
```bash
deno task build
./unsplash download ./hand.json -o ./hand --color --relatedTags --likes 50
```
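Once downloaded, the JSON files can be consumed directly; a hedged Python sketch (assuming each file is a top-level array of `Photo` objects as in the type definition below):

```python
import json

# hypothetical local path to one of the downloaded metadata files
with open("hand.json", encoding="utf-8") as f:
    photos = json.load(f)

# keep well-liked photos and collect their regular-size image URLs
popular = [p for p in photos if p["likes"] >= 50]
urls = [p["urls"]["regular"] for p in popular]
print(len(popular), urls[:3])
```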
# Type Definition
```typescript
interface Photo {
id: string
color: string
description: string | null
alt_description: string | null
tags: string[]
likes: number
urls: {
raw: string
full: string
regular: string
small: string
thumb: string
small_s3: string
}
width: number
height: number
related_tags: string[]
location: {
name: string | null
city: string | null
country: string | null
position: {
latitude: number | null
longitude: number | null
}
}
exif: {
make: string | null
model: string | null
exposure_time: string | null
aperture: string | null
focal_length: string | null
iso: number | null
}
views: number
downloads: number
}
``` | p1atdev/resplash | [
"language:en",
"license:mit",
"region:us"
]
| 2023-01-16T12:30:11+00:00 | {"language": ["en"], "license": "mit"} | 2023-01-18T12:42:03+00:00 |
6cb367d92796f6c007070df6838a9e0015036301 | Regularization dataset with photorealistic men in fantasy armor for small-scale finetunes/LoRAs.
Produced with various Stable Diffusion derivatives.
Body horrors and extreme crops were hand-pruned, though some were left in.
Prompts were cycled for a variety of poses and environments and to reduce full-frontal static portraits and 'sameface' (the set still suffers from it somewhat).
Work in progress | AntaFluorescent/man_in_armor | [
"size_categories:n<1K",
"license:cc0-1.0",
"region:us"
]
| 2023-01-16T12:35:53+00:00 | {"license": "cc0-1.0", "size_categories": ["n<1K"]} | 2023-01-19T02:42:20+00:00 |
a798d6a570781433c592737494184fb1104a5d05 | # AutoTrain Dataset for project: test1
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test1.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Konjam porunga Vishwasam trailor varatum appo therium yaaru gethu nu",
"target": 0
},
{
"text": "Last 2 dialogues bigil ku vecha mathri oru feel....",
"target": 4
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['Mixed_feelings', 'Negative', 'Positive', 'not-Tamil', 'unknown_state'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 12593 |
| valid | 3151 |
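
As with other AutoTrain data repos, loading it directly is presumably possible (an assumption; split names follow the table above):

```python
from datasets import load_dataset
from collections import Counter

ds = load_dataset("dmontaner/autotrain-data-test1")   # hypothetical direct load

labels = ds["train"].features["target"]               # ClassLabel with 5 sentiment classes
counts = Counter(labels.int2str(t) for t in ds["train"]["target"])
print(counts)                                         # class distribution of the train split
```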
| dmontaner/autotrain-data-test1 | [
"task_categories:text-classification",
"language:en",
"region:us"
]
| 2023-01-16T13:01:30+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2023-01-16T13:03:19+00:00 |
b5a1d0c47373b24ad676b781d60663f79a1521e0 | milkcow/penguin | [
"license:afl-3.0",
"region:us"
]
| 2023-01-16T14:08:49+00:00 | {"license": "afl-3.0"} | 2023-01-16T14:08:49+00:00 |
|
285509a3b132668cf7911ceafbe6c48ed6ecf4bb | # Dataset Card for "bert_dataset_202203"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nthngdy/bert_dataset_202203 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"language:en",
"license:apache-2.0",
"language-modeling",
"masked-language-modeling",
"region:us"
]
| 2023-01-16T14:40:52+00:00 | {"language": ["en"], "license": "apache-2.0", "task_categories": ["text-generation", "fill-mask"], "pretty_name": "BERT Dataset (BookCorpus + Wikipedia 03/2022)", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24635440616, "num_examples": 146707688}], "download_size": 14651841592, "dataset_size": 24635440616}, "tags": ["language-modeling", "masked-language-modeling"]} | 2023-01-17T10:10:06+00:00 |
07cc4a29341ef26e8614ae1139847f4d4888727d |
# Dataset Card for KorFin-ABSA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
KorFin-ASC is an extension of KorFin-ABSA, comprising 8,818 samples annotated with (aspect, polarity) pairs.
The samples were collected from [KLUE-TC](https://klue-benchmark.com/tasks/66/overview/description) and
analyst reports from [Naver Finance](https://finance.naver.com).
Annotation of the dataset is described in the paper [Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance](https://arxiv.org/abs/2301.03136).
### Supported Tasks and Leaderboards
This dataset supports the following tasks:
* Aspect-Based Sentiment Classification
### Languages
Korean
## Dataset Structure
### Data Instances
Each instance consists of a single sentence, aspect, and corresponding polarity (POSITIVE/NEGATIVE/NEUTRAL).
```
{
"title": "LGU+ 1분기 영업익 1천706억원…마케팅 비용 감소",
"aspect": "LG U+",
'sentiment': 'NEUTRAL',
'url': 'https://news.naver.com/main/read.nhn?mode=LS2D&mid=shm&sid1=105&sid2=227&oid=001&aid=0008363739',
'annotator_id': 'A_01',
'Type': 'single'
}
```
### Data Fields
* title: the sentence (news headline) that is annotated
* aspect: the target entity/aspect the sentiment refers to
* sentiment: polarity label (POSITIVE / NEGATIVE / NEUTRAL)
* url: URL of the source article
* annotator_id: identifier of the annotator
* Type: annotation type (e.g. `single`)
### Data Splits
The dataset currently does not contain standard data splits.
## Additional Information
You can download the data via:
```
from datasets import load_dataset
dataset = load_dataset("amphora/KorFin-ASC")
```
Please find more information about the code and how the data was collected in the paper [Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance](https://arxiv.org/abs/2301.03136).
The best-performing model on this dataset can be found at [link](https://huggingface.co/amphora/KorFinASC-XLM-RoBERTa).
### Licensing Information
KorFin-ASC is licensed under the terms of the [cc-by-sa-4.0](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
Please cite this data using:
```
@article{son2023removing,
title={Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance},
author={Son, Guijin and Lee, Hanwool and Kang, Nahyeon and Hahm, Moonjeong},
journal={arXiv preprint arXiv:2301.03136},
year={2023}
}
```
### Contributions
Thanks to [@Albertmade](https://github.com/h-albert-lee), [@amphora](https://github.com/guijinSON) for making this dataset. | amphora/korfin-asc | [
"task_categories:text-classification",
"task_ids:topic-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:klue",
"language:ko",
"license:cc-by-sa-4.0",
"sentiment analysis",
"aspect based sentiment analysis",
"finance",
"arxiv:2301.03136",
"region:us"
]
| 2023-01-16T14:53:48+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["ko"], "license": "cc-by-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["klue"], "task_categories": ["text-classification"], "task_ids": ["topic-classification", "sentiment-classification"], "pretty_name": "KorFin-ABSA", "tags": ["sentiment analysis", "aspect based sentiment analysis", "finance"]} | 2023-01-16T15:26:46+00:00 |
533dfaba159e53e81e76224437091c1d667e6872 | # Dataset Card for "raven_properties"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jkwiatkowski/raven_properties | [
"region:us"
]
| 2023-01-16T15:34:05+00:00 | {"dataset_info": {"features": [{"name": "Description", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7234653, "num_examples": 42000}, {"name": "val", "num_bytes": 2410755, "num_examples": 14000}, {"name": "test", "num_bytes": 2412471, "num_examples": 14000}], "download_size": 997897, "dataset_size": 12057879}} | 2023-01-16T16:56:41+00:00 |
4da8775df00761f4f89b9a7c9709136b12a5acbb | loveunk/sichuan_cuisine | [
"license:mit",
"region:us"
]
| 2023-01-16T16:18:33+00:00 | {"license": "mit"} | 2023-01-16T16:19:30+00:00 |
|
155448e2766d54e6a58dd1003ec544f4fea49bf7 | Ravisahu06/test | [
"license:apache-2.0",
"region:us"
]
| 2023-01-16T17:15:23+00:00 | {"license": "apache-2.0"} | 2023-01-16T17:17:33+00:00 |
|
cdef4ff24bb27140d0e4e239ad795904343194ad | # Dataset Card for "twitter_raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | StoneSeller/twitter_raw | [
"region:us"
]
| 2023-01-16T17:36:36+00:00 | {"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "Q", "dtype": "string"}, {"name": "A", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2149019, "num_examples": 10607}, {"name": "valid", "num_bytes": 478895, "num_examples": 2652}], "download_size": 1304645, "dataset_size": 2627914}} | 2023-01-16T17:36:53+00:00 |
73f3245a756410b696934d4f048787174ce5a715 | # Open Images Dataset V7 (test set)
Original paper: [A Step Toward More Inclusive People Annotations for Fairness](https://arxiv.org/abs/2105.02317)
Homepage: https://storage.googleapis.com/openimages/web/extended.html
Bibtex:
```
@inproceedings{miap_aies,
title = {A Step Toward More Inclusive People Annotations for Fairness},
author = {Candice Schumann and Susanna Ricco and Utsav Prabhu and Vittorio Ferrari and Caroline Rebecca Pantofaru},
booktitle = {Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES)},
year = {2021}
}
``` | nlphuji/open_images_dataset_v7 | [
"arxiv:2105.02317",
"region:us"
]
| 2023-01-16T18:20:56+00:00 | {} | 2023-01-17T11:49:56+00:00 |
727d7f6446526483efcd7ca677ea795f36b8942d | # Dollar Street (test set)
Original paper: [The Dollar Street Dataset: Images Representing the Geographic and Socioeconomic Diversity of the World](https://openreview.net/forum?id=qnfYsave0U4)
Homepage: https://www.kaggle.com/datasets/mlcommons/the-dollar-street-dataset
Bibtex:
```
@inproceedings{
rojas2022the,
title={The Dollar Street Dataset: Images Representing the Geographic and Socioeconomic Diversity of the World},
author={William A Gaviria Rojas and Sudnya Diamos and Keertan Ranjan Kini and David Kanter and Vijay Janapa Reddi and Cody Coleman},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=qnfYsave0U4}
}
``` | nlphuji/dollar_street_test | [
"region:us"
]
| 2023-01-16T19:12:34+00:00 | {} | 2023-01-17T21:05:24+00:00 |
c84603c049571d37f3d9a48772f12083ab41ac95 | # FairFace (val set)
Original paper: [Fairface: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation](https://openaccess.thecvf.com/content/WACV2021/papers/Karkkainen_FairFace_Face_Attribute_Dataset_for_Balanced_Race_Gender_and_Age_WACV_2021_paper.pdf)
Homepage: https://github.com/joojs/fairface
Bibtex:
```
@inproceedings{karkkainenfairface,
title={FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age for Bias Measurement and Mitigation},
author={Karkkainen, Kimmo and Joo, Jungseock},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
year={2021},
pages={1548--1558}
}
``` | nlphuji/fairface_val_padding_125 | [
"region:us"
]
| 2023-01-16T19:50:46+00:00 | {} | 2023-01-18T22:59:22+00:00 |