# Stable Diffusion web UI
A browser interface based on Gradio library for Stable Diffusion.

## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One-click install and run script (but you still must install Python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a `((tuxedo))` - will pay more attention to tuxedo
- a man in a `(tuxedo:1.21)` - alternative syntax
- select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on macOS) to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
- Can use a separate neural network to produce previews with almost no VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in generated image
- Styles, a way to save part of prompt and easily apply them via dropdown later
- Variations, a way to generate same image but with tiny differences
- Seed resizing, a way to generate same image but at slightly different resolution
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
- separate prompts using uppercase `AND`
- also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: generated image's dimension must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
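
The built-in API listed above exposes REST endpoints when the server is launched with `--api`. A minimal sketch of calling the txt2img endpoint with only the standard library — the endpoint path and field names follow the webui's API, but the host/port, defaults, and helper names here are assumptions for illustration:

```python
import base64
import json
from urllib import request

def build_txt2img_payload(prompt, negative_prompt="", steps=20, width=512, height=512):
    """Assemble the JSON body for /sdapi/v1/txt2img (only a few common fields shown)."""
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "steps": steps,
        "width": width,
        "height": height,
    }

def txt2img(prompt, base_url="http://127.0.0.1:7860", **kwargs):
    """POST the payload to a locally running webui started with --api.

    Returns the generated images as raw PNG bytes (the API returns them
    base64-encoded in the "images" field).
    """
    body = json.dumps(build_txt2img_payload(prompt, **kwargs)).encode("utf-8")
    req = request.Request(
        base_url + "/sdapi/v1/txt2img",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        result = json.load(resp)
    return [base64.b64decode(img) for img in result["images"]]
```

Without a running server the network call will fail, but the payload builder can be used and inspected on its own.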
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for:
- [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended)
- [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
- [Intel CPUs, Intel GPUs (both integrated and discrete)](https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon) (external wiki page)
Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.
> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs)
### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Run `webui-user.bat` from Windows Explorer as a normal, non-administrator user.
### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory in which you would like the webui to be installed and execute the following command:
```bash
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
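
The options in step 4 live in `webui-user.sh` as shell variables. A sketch of common overrides — the variable names match the stock script, but the particular flags and paths here are only examples:

```shell
# webui-user.sh overrides (uncomment and edit the corresponding lines in the real file)

# Extra flags passed to launch.py, e.g. memory-saving and attention options
export COMMANDLINE_ARGS="--xformers --medvram"

# Use a specific Python interpreter when creating the virtual environment
export python_cmd="python3.10"

# Put the venv somewhere other than the default ./venv
export venv_dir="$HOME/.cache/sd-webui-venv"
```

`webui.sh` sources this file on startup, so the exported values take effect on the next run.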
### Installation on Apple Silicon
Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
For the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) [crawlable wiki](https://github-wiki-see.page/m/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- LyCORIS - KohakuBlueleaf
- Restart sampling - lambertae - https://github.com/Newbeeer/diffusion_restart_sampling
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
|
steammerf1/jay
|
[
"arxiv:2211.06679",
"region:us"
] |
2023-09-24T09:10:09+00:00
|
{}
|
2023-09-24T09:10:48+00:00
|
[
"2211.06679"
] |
[] |
TAGS
#arxiv-2211.06679 #region-us
|
# Stable Diffusion web UI
A browser interface based on Gradio library for Stable Diffusion.

## Features
Detailed feature showcase with images:
- Original txt2img and img2img modes
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a '((tuxedo))' - will pay more attention to tuxedo
- a man in a '(tuxedo:1.21)' - alternative syntax
- select text and press 'Ctrl+Up' or 'Ctrl+Down' (or 'Command+Up' or 'Command+Down' if you're on a MacOS) to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR and Swin2SR (see here), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with '--allow-code' to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/mix/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
- Can use a separate neural network to produce previews with almost none VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in generated image
- Styles, a way to save part of prompt and easily apply them via dropdown later
- Variations, a way to generate same image but with tiny differences
- Seed resizing, a way to generate same image but at slightly different resolution
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- Custom scripts with many extensions from community
- Composable-Diffusion, a way to use multiple prompts at once
- separate prompts using uppercase 'AND'
- also supports weights for prompts: 'a cat :1.2 AND a dog AND a penguin :2.2'
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- xformers, major speed increase for select cards: (add '--xformers' to commandline args)
- via extension: History tab: view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API
- Support for dedicated inpainting model by RunwayML
- via extension: Aesthetic Gradients, a way to generate images with a specific aesthetic by using clip images embeds (implementation of URL
- Stable Diffusion 2.0 support - see wiki for instructions
- Alt-Diffusion support - see wiki for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: generated image's dimension must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
## Installation and Running
Make sure the required dependencies are met and follow the instructions available for:
- NVidia (recommended)
- AMD GPUs.
- Intel CPUs, Intel GPUs (both integrated and discrete) (external wiki page)
Alternatively, use online services (like Google Colab):
- List of Online Services
### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download 'URL' from v1.0.0-pre and extract it's contents.
2. Run 'URL'.
3. Run 'URL'.
> For more details see Install-and-Run-on-NVidia-GPUs
### Automatic Installation on Windows
1. Install Python 3.10.6 (Newer version of Python does not support torch), checking "Add Python to PATH".
2. Install git.
3. Download the stable-diffusion-webui repository, for example by running 'git clone URL
4. Run 'URL' from Windows Explorer as normal, non-administrator, user.
### Automatic Installation on Linux
1. Install the dependencies:
2. Navigate to the directory you would like the webui to be installed and execute the following command:
3. Run 'URL'.
4. Check 'URL' for options.
### Installation on Apple Silicon
Find the instructions here.
## Contributing
Here's how to add code to this repo: Contributing
## Documentation
The documentation was moved from this README over to the project's wiki.
For the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) crawlable wiki.
## Credits
Licenses for borrowed code can be found in 'Settings -> Licenses' screen, and also in 'html/URL' file.
- Stable Diffusion - URL URL
- k-diffusion - URL
- GFPGAN - URL
- CodeFormer - URL
- ESRGAN - URL
- SwinIR - URL
- Swin2SR - URL
- LDSR - URL
- MiDaS - URL
- Ideas for optimizations - URL
- Cross Attention layer optimization - Doggettx - URL original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - URL (originally URL
- Sub-quadratic Cross Attention layer optimization - Alex Birch (URL Amin Rezaei (URL
- Textual Inversion - Rinon Gal - URL (we're not using his code, but we are using his ideas).
- Idea for SD upscale - URL
- Noise generation for outpainting mk2 - URL
- CLIP interrogator idea and borrowing some code - URL
- Idea for Composable Diffusion - URL
- xformers - URL
- DeepDanbooru - interrogator for anime diffusers URL
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (URL
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - URL
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - URL
- TAESD - Ollin Boer Bohan - URL
- LyCORIS - KohakuBlueleaf
- Restart sampling - lambertae - URL
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
|
[
"# Stable Diffusion web UI\nA browser interface based on Gradio library for Stable Diffusion.\n\n",
"## Features\nDetailed feature showcase with images:\n- Original txt2img and img2img modes\n- One click install and run script (but you still must install python and git)\n- Outpainting\n- Inpainting\n- Color Sketch\n- Prompt Matrix\n- Stable Diffusion Upscale\n- Attention, specify parts of text that the model should pay more attention to\n - a man in a '((tuxedo))' - will pay more attention to tuxedo\n - a man in a '(tuxedo:1.21)' - alternative syntax\n - select text and press 'Ctrl+Up' or 'Ctrl+Down' (or 'Command+Up' or 'Command+Down' if you're on a MacOS) to automatically adjust attention to selected text (code contributed by anonymous user)\n- Loopback, run img2img processing multiple times\n- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters\n- Textual Inversion\n - have as many embeddings as you want and use any names you like for them\n - use multiple embeddings with different numbers of vectors per token\n - works with half precision floating point numbers\n - train embeddings on 8GB (also reports of 6GB working)\n- Extras tab with:\n - GFPGAN, neural network that fixes faces\n - CodeFormer, face restoration tool as an alternative to GFPGAN\n - RealESRGAN, neural network upscaler\n - ESRGAN, neural network upscaler with a lot of third party models\n - SwinIR and Swin2SR (see here), neural network upscalers\n - LDSR, Latent diffusion super resolution upscaling\n- Resizing aspect ratio options\n- Sampling method selection\n - Adjust sampler eta values (noise multiplier)\n - More advanced noise setting options\n- Interrupt processing at any time\n- 4GB video card support (also reports of 2GB working)\n- Correct seeds for batches\n- Live prompt token length validation\n- Generation parameters\n - parameters you used to generate images are saved with that image\n - in PNG chunks for PNG, in EXIF for JPEG\n - can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI\n - can be disabled 
in settings\n - drag and drop an image/text-parameters to promptbox\n- Read Generation Parameters Button, loads parameters in promptbox to UI\n- Settings page\n- Running arbitrary python code from UI (must run with '--allow-code' to enable)\n- Mouseover hints for most UI elements\n- Possible to change defaults/mix/max/step values for UI elements via text config\n- Tiling support, a checkbox to create images that can be tiled like textures\n- Progress bar and live image generation preview\n - Can use a separate neural network to produce previews with almost none VRAM or compute requirement\n- Negative prompt, an extra text field that allows you to list what you don't want to see in generated image\n- Styles, a way to save part of prompt and easily apply them via dropdown later\n- Variations, a way to generate same image but with tiny differences\n- Seed resizing, a way to generate same image but at slightly different resolution\n- CLIP interrogator, a button that tries to guess prompt from an image\n- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway\n- Batch Processing, process a group of files using img2img\n- Img2img Alternative, reverse Euler method of cross attention control\n- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions\n- Reloading checkpoints on the fly\n- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one\n- Custom scripts with many extensions from community\n- Composable-Diffusion, a way to use multiple prompts at once\n - separate prompts using uppercase 'AND'\n - also supports weights for prompts: 'a cat :1.2 AND a dog AND a penguin :2.2'\n- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)\n- DeepDanbooru integration, creates danbooru style tags for anime prompts\n- xformers, major speed increase for select cards: (add '--xformers' to commandline args)\n- via 
extension: History tab: view, direct and delete images conveniently within the UI\n- Generate forever option\n- Training tab\n - hypernetworks and embeddings options\n - Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)\n- Clip skip\n- Hypernetworks\n- Loras (same as Hypernetworks but more pretty)\n- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt \n- Can select to load a different VAE from settings screen\n- Estimated completion time in progress bar\n- API\n- Support for dedicated inpainting model by RunwayML\n- via extension: Aesthetic Gradients, a way to generate images with a specific aesthetic by using clip images embeds (implementation of URL\n- Stable Diffusion 2.0 support - see wiki for instructions\n- Alt-Diffusion support - see wiki for instructions\n- Now without any bad letters!\n- Load checkpoints in safetensors format\n- Eased resolution restriction: generated image's dimension must be a multiple of 8 rather than 64\n- Now with a license!\n- Reorder elements in the UI from settings screen",
"## Installation and Running\nMake sure the required dependencies are met and follow the instructions available for:\n- NVidia (recommended)\n- AMD GPUs.\n- Intel CPUs, Intel GPUs (both integrated and discrete) (external wiki page)\n\nAlternatively, use online services (like Google Colab):\n\n- List of Online Services",
"### Installation on Windows 10/11 with NVidia-GPUs using release package\n1. Download 'URL' from v1.0.0-pre and extract it's contents.\n2. Run 'URL'.\n3. Run 'URL'.\n> For more details see Install-and-Run-on-NVidia-GPUs",
"### Automatic Installation on Windows\n1. Install Python 3.10.6 (Newer version of Python does not support torch), checking \"Add Python to PATH\".\n2. Install git.\n3. Download the stable-diffusion-webui repository, for example by running 'git clone URL\n4. Run 'URL' from Windows Explorer as normal, non-administrator, user.",
"### Automatic Installation on Linux\n1. Install the dependencies:\n\n2. Navigate to the directory you would like the webui to be installed and execute the following command:\n\n3. Run 'URL'.\n4. Check 'URL' for options.",
"### Installation on Apple Silicon\n\nFind the instructions here.",
"## Contributing\nHere's how to add code to this repo: Contributing",
"## Documentation\n\nThe documentation was moved from this README over to the project's wiki.\n\nFor the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) crawlable wiki.",
"## Credits\nLicenses for borrowed code can be found in 'Settings -> Licenses' screen, and also in 'html/URL' file.\n\n- Stable Diffusion - URL URL\n- k-diffusion - URL\n- GFPGAN - URL\n- CodeFormer - URL\n- ESRGAN - URL\n- SwinIR - URL\n- Swin2SR - URL\n- LDSR - URL\n- MiDaS - URL\n- Ideas for optimizations - URL\n- Cross Attention layer optimization - Doggettx - URL original idea for prompt editing.\n- Cross Attention layer optimization - InvokeAI, lstein - URL (originally URL\n- Sub-quadratic Cross Attention layer optimization - Alex Birch (URL Amin Rezaei (URL\n- Textual Inversion - Rinon Gal - URL (we're not using his code, but we are using his ideas).\n- Idea for SD upscale - URL\n- Noise generation for outpainting mk2 - URL\n- CLIP interrogator idea and borrowing some code - URL\n- Idea for Composable Diffusion - URL\n- xformers - URL\n- DeepDanbooru - interrogator for anime diffusers URL\n- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (URL\n- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - URL\n- Security advice - RyotaK\n- UniPC sampler - Wenliang Zhao - URL\n- TAESD - Ollin Boer Bohan - URL\n- LyCORIS - KohakuBlueleaf\n- Restart sampling - lambertae - URL\n- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.\n- (You)"
] |
[
"TAGS\n#arxiv-2211.06679 #region-us \n",
"# Stable Diffusion web UI\nA browser interface based on Gradio library for Stable Diffusion.\n\n",
"## Features\nDetailed feature showcase with images:\n- Original txt2img and img2img modes\n- One click install and run script (but you still must install python and git)\n- Outpainting\n- Inpainting\n- Color Sketch\n- Prompt Matrix\n- Stable Diffusion Upscale\n- Attention, specify parts of text that the model should pay more attention to\n - a man in a '((tuxedo))' - will pay more attention to tuxedo\n - a man in a '(tuxedo:1.21)' - alternative syntax\n - select text and press 'Ctrl+Up' or 'Ctrl+Down' (or 'Command+Up' or 'Command+Down' if you're on a MacOS) to automatically adjust attention to selected text (code contributed by anonymous user)\n- Loopback, run img2img processing multiple times\n- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters\n- Textual Inversion\n - have as many embeddings as you want and use any names you like for them\n - use multiple embeddings with different numbers of vectors per token\n - works with half precision floating point numbers\n - train embeddings on 8GB (also reports of 6GB working)\n- Extras tab with:\n - GFPGAN, neural network that fixes faces\n - CodeFormer, face restoration tool as an alternative to GFPGAN\n - RealESRGAN, neural network upscaler\n - ESRGAN, neural network upscaler with a lot of third party models\n - SwinIR and Swin2SR (see here), neural network upscalers\n - LDSR, Latent diffusion super resolution upscaling\n- Resizing aspect ratio options\n- Sampling method selection\n - Adjust sampler eta values (noise multiplier)\n - More advanced noise setting options\n- Interrupt processing at any time\n- 4GB video card support (also reports of 2GB working)\n- Correct seeds for batches\n- Live prompt token length validation\n- Generation parameters\n - parameters you used to generate images are saved with that image\n - in PNG chunks for PNG, in EXIF for JPEG\n - can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI\n - can be disabled 
in settings\n - drag and drop an image/text-parameters to promptbox\n- Read Generation Parameters Button, loads parameters in promptbox to UI\n- Settings page\n- Running arbitrary python code from UI (must run with '--allow-code' to enable)\n- Mouseover hints for most UI elements\n- Possible to change defaults/mix/max/step values for UI elements via text config\n- Tiling support, a checkbox to create images that can be tiled like textures\n- Progress bar and live image generation preview\n - Can use a separate neural network to produce previews with almost none VRAM or compute requirement\n- Negative prompt, an extra text field that allows you to list what you don't want to see in generated image\n- Styles, a way to save part of prompt and easily apply them via dropdown later\n- Variations, a way to generate same image but with tiny differences\n- Seed resizing, a way to generate same image but at slightly different resolution\n- CLIP interrogator, a button that tries to guess prompt from an image\n- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway\n- Batch Processing, process a group of files using img2img\n- Img2img Alternative, reverse Euler method of cross attention control\n- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions\n- Reloading checkpoints on the fly\n- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one\n- Custom scripts with many extensions from community\n- Composable-Diffusion, a way to use multiple prompts at once\n - separate prompts using uppercase 'AND'\n - also supports weights for prompts: 'a cat :1.2 AND a dog AND a penguin :2.2'\n- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)\n- DeepDanbooru integration, creates danbooru style tags for anime prompts\n- xformers, major speed increase for select cards: (add '--xformers' to commandline args)\n- via 
extension: History tab: view, direct and delete images conveniently within the UI\n- Generate forever option\n- Training tab\n - hypernetworks and embeddings options\n - Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)\n- Clip skip\n- Hypernetworks\n- Loras (same as Hypernetworks but more pretty)\n- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt \n- Can select to load a different VAE from settings screen\n- Estimated completion time in progress bar\n- API\n- Support for dedicated inpainting model by RunwayML\n- via extension: Aesthetic Gradients, a way to generate images with a specific aesthetic by using clip images embeds (implementation of URL\n- Stable Diffusion 2.0 support - see wiki for instructions\n- Alt-Diffusion support - see wiki for instructions\n- Now without any bad letters!\n- Load checkpoints in safetensors format\n- Eased resolution restriction: generated image's dimension must be a multiple of 8 rather than 64\n- Now with a license!\n- Reorder elements in the UI from settings screen",
"## Installation and Running\nMake sure the required dependencies are met and follow the instructions available for:\n- NVidia (recommended)\n- AMD GPUs.\n- Intel CPUs, Intel GPUs (both integrated and discrete) (external wiki page)\n\nAlternatively, use online services (like Google Colab):\n\n- List of Online Services",
"### Installation on Windows 10/11 with NVidia-GPUs using release package\n1. Download 'URL' from v1.0.0-pre and extract its contents.\n2. Run 'URL'.\n3. Run 'URL'.\n> For more details see Install-and-Run-on-NVidia-GPUs",
"### Automatic Installation on Windows\n1. Install Python 3.10.6 (newer versions of Python do not support torch), checking \"Add Python to PATH\".\n2. Install git.\n3. Download the stable-diffusion-webui repository, for example by running 'git clone URL\n4. Run 'URL' from Windows Explorer as normal, non-administrator, user.",
"### Automatic Installation on Linux\n1. Install the dependencies:\n\n2. Navigate to the directory you would like the webui to be installed and execute the following command:\n\n3. Run 'URL'.\n4. Check 'URL' for options.",
"### Installation on Apple Silicon\n\nFind the instructions here.",
"## Contributing\nHere's how to add code to this repo: Contributing",
"## Documentation\n\nThe documentation was moved from this README over to the project's wiki.\n\nFor the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) crawlable wiki.",
"## Credits\nLicenses for borrowed code can be found in 'Settings -> Licenses' screen, and also in 'html/URL' file.\n\n- Stable Diffusion - URL URL\n- k-diffusion - URL\n- GFPGAN - URL\n- CodeFormer - URL\n- ESRGAN - URL\n- SwinIR - URL\n- Swin2SR - URL\n- LDSR - URL\n- MiDaS - URL\n- Ideas for optimizations - URL\n- Cross Attention layer optimization - Doggettx - URL original idea for prompt editing.\n- Cross Attention layer optimization - InvokeAI, lstein - URL (originally URL\n- Sub-quadratic Cross Attention layer optimization - Alex Birch (URL Amin Rezaei (URL\n- Textual Inversion - Rinon Gal - URL (we're not using his code, but we are using his ideas).\n- Idea for SD upscale - URL\n- Noise generation for outpainting mk2 - URL\n- CLIP interrogator idea and borrowing some code - URL\n- Idea for Composable Diffusion - URL\n- xformers - URL\n- DeepDanbooru - interrogator for anime diffusers URL\n- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (URL\n- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - URL\n- Security advice - RyotaK\n- UniPC sampler - Wenliang Zhao - URL\n- TAESD - Ollin Boer Bohan - URL\n- LyCORIS - KohakuBlueleaf\n- Restart sampling - lambertae - URL\n- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.\n- (You)"
] |
[
15,
30,
1293,
76,
70,
79,
51,
11,
18,
52,
407
] |
[
"passage: TAGS\n#arxiv-2211.06679 #region-us \n# Stable Diffusion web UI\nA browser interface based on Gradio library for Stable Diffusion.\n\n",
"passage: ## Features\nDetailed feature showcase with images:\n- Original txt2img and img2img modes\n- One click install and run script (but you still must install python and git)\n- Outpainting\n- Inpainting\n- Color Sketch\n- Prompt Matrix\n- Stable Diffusion Upscale\n- Attention, specify parts of text that the model should pay more attention to\n - a man in a '((tuxedo))' - will pay more attention to tuxedo\n - a man in a '(tuxedo:1.21)' - alternative syntax\n - select text and press 'Ctrl+Up' or 'Ctrl+Down' (or 'Command+Up' or 'Command+Down' if you're on a MacOS) to automatically adjust attention to selected text (code contributed by anonymous user)\n- Loopback, run img2img processing multiple times\n- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters\n- Textual Inversion\n - have as many embeddings as you want and use any names you like for them\n - use multiple embeddings with different numbers of vectors per token\n - works with half precision floating point numbers\n - train embeddings on 8GB (also reports of 6GB working)\n- Extras tab with:\n - GFPGAN, neural network that fixes faces\n - CodeFormer, face restoration tool as an alternative to GFPGAN\n - RealESRGAN, neural network upscaler\n - ESRGAN, neural network upscaler with a lot of third party models\n - SwinIR and Swin2SR (see here), neural network upscalers\n - LDSR, Latent diffusion super resolution upscaling\n- Resizing aspect ratio options\n- Sampling method selection\n - Adjust sampler eta values (noise multiplier)\n - More advanced noise setting options\n- Interrupt processing at any time\n- 4GB video card support (also reports of 2GB working)\n- Correct seeds for batches\n- Live prompt token length validation\n- Generation parameters\n - parameters you used to generate images are saved with that image\n - in PNG chunks for PNG, in EXIF for JPEG\n - can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI\n - can be 
disabled in settings\n - drag and drop an image/text-parameters to promptbox\n- Read Generation Parameters Button, loads parameters in promptbox to UI\n- Settings page\n- Running arbitrary python code from UI (must run with '--allow-code' to enable)\n- Mouseover hints for most UI elements\n- Possible to change defaults/min/max/step values for UI elements via text config\n- Tiling support, a checkbox to create images that can be tiled like textures\n- Progress bar and live image generation preview\n - Can use a separate neural network to produce previews with almost no VRAM or compute requirement\n- Negative prompt, an extra text field that allows you to list what you don't want to see in generated image\n- Styles, a way to save part of prompt and easily apply them via dropdown later\n- Variations, a way to generate same image but with tiny differences\n- Seed resizing, a way to generate same image but at slightly different resolution\n- CLIP interrogator, a button that tries to guess prompt from an image\n- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway\n- Batch Processing, process a group of files using img2img\n- Img2img Alternative, reverse Euler method of cross attention control\n- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions\n- Reloading checkpoints on the fly\n- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one\n- Custom scripts with many extensions from community\n- Composable-Diffusion, a way to use multiple prompts at once\n - separate prompts using uppercase 'AND'\n - also supports weights for prompts: 'a cat :1.2 AND a dog AND a penguin :2.2'\n- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)\n- DeepDanbooru integration, creates danbooru style tags for anime prompts\n- xformers, major speed increase for select cards: (add '--xformers' to commandline 
args)\n- via extension: History tab: view, direct and delete images conveniently within the UI\n- Generate forever option\n- Training tab\n - hypernetworks and embeddings options\n - Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)\n- Clip skip\n- Hypernetworks\n- Loras (same as Hypernetworks but more pretty)\n- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt \n- Can select to load a different VAE from settings screen\n- Estimated completion time in progress bar\n- API\n- Support for dedicated inpainting model by RunwayML\n- via extension: Aesthetic Gradients, a way to generate images with a specific aesthetic by using clip images embeds (implementation of URL\n- Stable Diffusion 2.0 support - see wiki for instructions\n- Alt-Diffusion support - see wiki for instructions\n- Now without any bad letters!\n- Load checkpoints in safetensors format\n- Eased resolution restriction: generated image's dimension must be a multiple of 8 rather than 64\n- Now with a license!\n- Reorder elements in the UI from settings screen## Installation and Running\nMake sure the required dependencies are met and follow the instructions available for:\n- NVidia (recommended)\n- AMD GPUs.\n- Intel CPUs, Intel GPUs (both integrated and discrete) (external wiki page)\n\nAlternatively, use online services (like Google Colab):\n\n- List of Online Services### Installation on Windows 10/11 with NVidia-GPUs using release package\n1. Download 'URL' from v1.0.0-pre and extract its contents.\n2. Run 'URL'.\n3. Run 'URL'.\n> For more details see Install-and-Run-on-NVidia-GPUs### Automatic Installation on Windows\n1. Install Python 3.10.6 (newer versions of Python do not support torch), checking \"Add Python to PATH\".\n2. Install git.\n3. Download the stable-diffusion-webui repository, for example by running 'git clone URL\n4. 
Run 'URL' from Windows Explorer as normal, non-administrator, user.### Automatic Installation on Linux\n1. Install the dependencies:\n\n2. Navigate to the directory you would like the webui to be installed and execute the following command:\n\n3. Run 'URL'.\n4. Check 'URL' for options.### Installation on Apple Silicon\n\nFind the instructions here.## Contributing\nHere's how to add code to this repo: Contributing## Documentation\n\nThe documentation was moved from this README over to the project's wiki.\n\nFor the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) crawlable wiki."
] |
a8d8468b4b5338ff5e46f8e213f5a66a5a538343
|
# Dataset Card for "pubmed_subset_c4_40p"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zxvix/pubmed_subset_c4_40p
|
[
"region:us"
] |
2023-09-24T11:21:38+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2339969714.971987, "num_examples": 1666514}, {"name": "test", "num_bytes": 1024229, "num_examples": 1000}], "download_size": 1389229225, "dataset_size": 2340993943.971987}}
|
2023-09-24T11:45:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pubmed_subset_c4_40p"
More Information needed
|
[
"# Dataset Card for \"pubmed_subset_c4_40p\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pubmed_subset_c4_40p\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pubmed_subset_c4_40p\"\n\nMore Information needed"
] |
94d88a779c50c41f859a4ecd7b2b5ba77772236b
|
# Dataset Card for "Andy_Lau"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rainbow/Andy_Lau
|
[
"region:us"
] |
2023-09-24T11:34:24+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6985835.0, "num_examples": 16}], "download_size": 6986820, "dataset_size": 6985835.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-24T11:41:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Andy_Lau"
More Information needed
|
[
"# Dataset Card for \"Andy_Lau\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Andy_Lau\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Andy_Lau\"\n\nMore Information needed"
] |
b10d3e205f9f4325f07c608b5826eb77fcae0632
|
# Dataset Card for "AISHELL1_withoutLM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ASR-HypR/AISHELL1_withoutLM
|
[
"region:us"
] |
2023-09-24T11:44:56+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "dev", "path": "data/dev-*"}]}], "dataset_info": {"features": [{"name": "ref", "dtype": "string"}, {"name": "hyps", "sequence": "string"}, {"name": "ctc_score", "sequence": "float64"}, {"name": "att_score", "sequence": "float64"}, {"name": "utt_id", "dtype": "string"}, {"name": "score", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 520539152, "num_examples": 120098}, {"name": "test", "num_bytes": 31358088, "num_examples": 7176}, {"name": "dev", "num_bytes": 61833418, "num_examples": 14326}], "download_size": 283530745, "dataset_size": 613730658}}
|
2023-09-24T14:03:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "AISHELL1_withoutLM"
More Information needed
|
[
"# Dataset Card for \"AISHELL1_withoutLM\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"AISHELL1_withoutLM\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"AISHELL1_withoutLM\"\n\nMore Information needed"
] |
78dcb2b4b2ab7ec541a7c5330247500911d7a07d
|
# Dataset Card for "wikipedia_pos_tagged"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kinianlo/wikipedia_pos_tagged
|
[
"region:us"
] |
2023-09-24T11:51:20+00:00
|
{"dataset_info": [{"config_name": "20220301_en_nltk", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "pos_tags", "sequence": {"sequence": {"sequence": "string"}}}], "splits": [{"name": "train", "num_bytes": 88585221192, "num_examples": 6458670}], "download_size": 3527644902, "dataset_size": 88585221192}, {"config_name": "20220301_en_nltk_tags_only", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "pos_tags", "sequence": {"sequence": {"sequence": "string"}}}], "splits": [{"name": "train", "num_bytes": 68920385173, "num_examples": 6458670}], "download_size": 0, "dataset_size": 68920385173}, {"config_name": "20220301_simple_nltk", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "pos_tags", "sequence": {"sequence": {"sequence": "string"}}}], "splits": [{"name": "train", "num_bytes": 1000903680, "num_examples": 205328}], "download_size": 286763992, "dataset_size": 1000903680}, {"config_name": "20220301_simple_nltk_tags_only", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "pos_tags", "sequence": {"sequence": {"sequence": "string"}}}], "splits": [{"name": "train", "num_bytes": 783729741, "num_examples": 205328}], "download_size": 161414334, "dataset_size": 783729741}, {"config_name": "20220301_simple_spacy", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "pos_tags", "sequence": {"sequence": {"sequence": "string"}}}], "splits": [{"name": "train", "num_bytes": 1131814443, "num_examples": 205328}], "download_size": 289479815, "dataset_size": 1131814443}, 
{"config_name": "20220301_simple_spacy_tags_only", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "pos_tags", "sequence": {"sequence": {"sequence": "string"}}}], "splits": [{"name": "train", "num_bytes": 914640504, "num_examples": 205328}], "download_size": 164284823, "dataset_size": 914640504}], "configs": [{"config_name": "20220301_en_nltk", "data_files": [{"split": "train", "path": "20220301_en_nltk/train-*"}]}, {"config_name": "20220301_en_nltk_tags_only", "data_files": [{"split": "train", "path": "20220301_en_nltk_tags_only/train-*"}]}, {"config_name": "20220301_simple_nltk", "data_files": [{"split": "train", "path": "20220301_simple_nltk/train-*"}]}, {"config_name": "20220301_simple_nltk_tags_only", "data_files": [{"split": "train", "path": "20220301_simple_nltk_tags_only/train-*"}]}, {"config_name": "20220301_simple_spacy", "data_files": [{"split": "train", "path": "20220301_simple_spacy/train-*"}]}, {"config_name": "20220301_simple_spacy_tags_only", "data_files": [{"split": "train", "path": "20220301_simple_spacy_tags_only/train-*"}]}]}
|
2023-09-30T20:41:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wikipedia_pos_tagged"
More Information needed
|
[
"# Dataset Card for \"wikipedia_pos_tagged\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wikipedia_pos_tagged\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wikipedia_pos_tagged\"\n\nMore Information needed"
] |
c08da9fbcacdee0a85ced67e58d89c3f9e9180da
|
# Dataset Card for "en-zh-nyt31k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mikeee/en-zh-nyt31k
|
[
"region:us"
] |
2023-09-24T11:55:43+00:00
|
{"dataset_info": {"features": [{"name": "english", "dtype": "string"}, {"name": "chinese", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15197924, "num_examples": 31449}], "download_size": 10056620, "dataset_size": 15197924}}
|
2023-09-24T11:56:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "en-zh-nyt31k"
More Information needed
|
[
"# Dataset Card for \"en-zh-nyt31k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"en-zh-nyt31k\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"en-zh-nyt31k\"\n\nMore Information needed"
] |
6b2da40560711cb152d8cb2002590bd0a18937e1
|
# Dataset Card for "movie-prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
erlend0/movie-prompts
|
[
"region:us"
] |
2023-09-24T11:55:46+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 73702, "num_examples": 340}], "download_size": 27978, "dataset_size": 73702}}
|
2023-09-24T11:55:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "movie-prompts"
More Information needed
|
[
"# Dataset Card for \"movie-prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"movie-prompts\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"movie-prompts\"\n\nMore Information needed"
] |
b96a60707681413b502ed325029bced8427cc60c
|
# Dataset Card for "wikipedia-ja-20230720-100k"
This is data extracted randomly from [izumi-lab/wikipedia-ja-20230720](https://huggingface.co/datasets/izumi-lab/wikipedia-ja-20230720), consisting of 100,000 records.
[izumi-lab/wikipedia-ja-20230720](https://huggingface.co/datasets/izumi-lab/wikipedia-ja-20230720)からデータを100k分ランダムに抽出したデータです。
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mmnga/wikipedia-ja-20230720-100k
|
[
"license:cc-by-sa-3.0",
"region:us"
] |
2023-09-24T12:00:39+00:00
|
{"license": "cc-by-sa-3.0", "dataset_info": {"features": [{"name": "curid", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 268164890.06653625, "num_examples": 100000}], "download_size": 159687725, "dataset_size": 268164890.06653625}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-24T12:08:15+00:00
|
[] |
[] |
TAGS
#license-cc-by-sa-3.0 #region-us
|
# Dataset Card for "wikipedia-ja-20230720-100k"
This is data extracted randomly from izumi-lab/wikipedia-ja-20230720, consisting of 100,000 records.
izumi-lab/wikipedia-ja-20230720からデータを100k分ランダムに抽出したデータです。
More Information needed
|
[
"# Dataset Card for \"wikipedia-ja-20230720-100k\"\n\nThis is data extracted randomly from izumi-lab/wikipedia-ja-20230720, consisting of 100,000 records. \n\nizumi-lab/wikipedia-ja-20230720からデータを100k分ランダムに抽出したデータです。 \n\n\nMore Information needed"
] |
[
"TAGS\n#license-cc-by-sa-3.0 #region-us \n",
"# Dataset Card for \"wikipedia-ja-20230720-100k\"\n\nThis is data extracted randomly from izumi-lab/wikipedia-ja-20230720, consisting of 100,000 records. \n\nizumi-lab/wikipedia-ja-20230720からデータを100k分ランダムに抽出したデータです。 \n\n\nMore Information needed"
] |
[
17,
70
] |
[
"passage: TAGS\n#license-cc-by-sa-3.0 #region-us \n# Dataset Card for \"wikipedia-ja-20230720-100k\"\n\nThis is data extracted randomly from izumi-lab/wikipedia-ja-20230720, consisting of 100,000 records. \n\nizumi-lab/wikipedia-ja-20230720からデータを100k分ランダムに抽出したデータです。 \n\n\nMore Information needed"
] |
e7372c450f16b5804b31eac635acf46ea6823936
|
# 20 Newsgroups Dataset
## Introduction
The 20 Newsgroups dataset comprises roughly 20,000 documents from newsgroups, with an almost even distribution across 20 distinct newsgroups. Initially gathered by Ken Lang, this dataset has gained prominence in the machine learning community, particularly for text-related applications like classification and clustering.
## Dataset Structure
The dataset's organization is based on 20 different newsgroups, each representing a unique topic. While some of these newsgroups share similarities or are closely related, others are quite distinct from one another.
### List of Newsgroups:
- Computer Graphics
- Windows OS Miscellaneous
- IBM PC Hardware
- Mac Hardware
- Windows X
- Automobiles
- Motorcycles
- Baseball
- Hockey
- Cryptography
- Electronics
- Medicine
- Space
- Miscellaneous Sales
- Miscellaneous Politics
- Politics & Guns
- Middle East Politics
- Miscellaneous Religion
- Atheism
- Christianity
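The friendly topic names above correspond to the standard Usenet group identifiers used in the raw archives' directory names. The mapping below is not part of the original card; it is a small helper (the identifiers are the canonical 20 Newsgroups labels) that can be handy when cross-referencing files on disk:

```python
# Map this card's friendly topic names to the canonical newsgroup
# identifiers used as directory names in the raw 20 Newsgroups archives.
NEWSGROUPS = {
    "Computer Graphics": "comp.graphics",
    "Windows OS Miscellaneous": "comp.os.ms-windows.misc",
    "IBM PC Hardware": "comp.sys.ibm.pc.hardware",
    "Mac Hardware": "comp.sys.mac.hardware",
    "Windows X": "comp.windows.x",
    "Automobiles": "rec.autos",
    "Motorcycles": "rec.motorcycles",
    "Baseball": "rec.sport.baseball",
    "Hockey": "rec.sport.hockey",
    "Cryptography": "sci.crypt",
    "Electronics": "sci.electronics",
    "Medicine": "sci.med",
    "Space": "sci.space",
    "Miscellaneous Sales": "misc.forsale",
    "Miscellaneous Politics": "talk.politics.misc",
    "Politics & Guns": "talk.politics.guns",
    "Middle East Politics": "talk.politics.mideast",
    "Miscellaneous Religion": "talk.religion.misc",
    "Atheism": "alt.atheism",
    "Christianity": "soc.religion.christian",
}
```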
## Sample Entries
### Sample from `Windows X`
```
From: [email protected] (Bill Kayser)
Subject: Re: TeleUse, UIM/X, and C++
Article-I.D.: parsival.199304060629.AA00339
Organization: The Internet
Lines: 25
NNTP-Posting-Host: enterpoop.mit.edu
To: [email protected]
Cc: [email protected]
>
> Does anyone have any good ideas on how to integrate C++ code elegantly
> with TeleUse, UIM/X / Interface Architect generated code?
>
> Source would be great, but any suggestions are welcome.
It's my understanding that the next release of UIM/X, due out
last February :-) has full support for C++.
I use XDesigner which does not have the interpreter or UI meta languages
of these other tools but does fully support C++ code generation,
reusable templates via C++ classes which are generated, a variety of
other handy features for using C++ and layout functions in different
ways, and generates Motif 1.2 code (including drag 'n drop,
internationalization, etc.). Fits in quite nicely with Doug Young's
paradigm for C++/Motif.
Available in the US from VI Corp, in Europe from Imperial Software,
London (see FAQ for details).
Bill
________________________________________________________________________
Schlumberger Geco Prakla
[email protected]
```
### Sample from `Electronics`
```
From: [email protected] (baden de bari)
Subject: Re: Jacob's Ladder
Organization: System 6626 BBS, Winnipeg Manitoba Canada
Lines: 36
[email protected] (Brad Meier) writes:
> Hi, I'm looking for a circuit, that is called a "Jacob's Ladder".
> This little box is usually seen in sci-fi movies. It consists of
> two curves of wire protruding into the air, with little blue sparks
> starting at their base (where the two wires are closer to each other),
> moving up the wires to the top, and ending in a small crackling noise.
>
> Could anyone supply me with the schematic for the innards of this box?
>
> Thanks in advance
> Mike
>
> (Please reply by email to [email protected])
>
> --
> | / | | ~|~ /~~\ | | ~|~ /~~\ |~~\ /~~\ The KnightOrc
> |/ |\ | | | __ |__| | | | |__/ | [email protected]
> |\ | \| | | | | | | | | | | | "When it's over I'll go home,
> | \ | | _|_ \__/ | | | \__/ | | \__/ until then, I stay!" - Me
I'd like any accumulated information on this as well please.
Thanks.
_________________________________________
_____ |
| | | |
=========== | Baden de Bari |
| o o | | |
| ^ | | [email protected] |
| {-} | | [email protected] |
\_____/ | |
-----------------------------------------
```
## Data Availability
The dataset is bundled in `.tar.gz` format. Within each bundle, individual subdirectories represent a newsgroup. Every file within these subdirectories corresponds to a document posted in that specific newsgroup.
There are three primary versions of the dataset:
1. The original version, which remains unaltered.
2. A version sorted by date, which segregates the data into training (60%) and test (40%) sets. This version has removed duplicates and some headers for clarity.
3. A version that only retains the "From" and "Subject" headers, with duplicates removed.
For those seeking a more consistent benchmark, the date-sorted version is recommended. It offers a realistic split based on time and has removed any newsgroup-specific identifiers.
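The date-based split above can be sketched in a few lines. This is illustrative only (the published "bydate" archives ship pre-split, so you never need to run this yourself): documents are ordered chronologically, the earliest 60% become the training set, and the remaining 40% the test set.

```python
def bydate_split(docs_with_dates, train_frac=0.6):
    """Sketch of a chronological train/test split.

    docs_with_dates: iterable of (posting_date, document) pairs.
    Returns (train_docs, test_docs) with the earliest train_frac
    of documents in the training set.
    """
    ordered = [doc for _, doc in sorted(docs_with_dates, key=lambda p: p[0])]
    cut = int(len(ordered) * train_frac)
    return ordered[:cut], ordered[cut:]

# Toy example: five documents with integer "dates".
sample = [(3, "c"), (1, "a"), (2, "b"), (5, "e"), (4, "d")]
train, test = bydate_split(sample)
# train -> ["a", "b", "c"], test -> ["d", "e"]
```

Because the split is by time rather than random, it avoids leaking near-duplicate follow-up posts from a thread across the train/test boundary, which is why this version is the recommended benchmark.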
## Matlab/Octave Version
For users of Matlab or Octave, a processed variant of the date-sorted dataset is available. This version is structured as a sparse matrix and includes files like `train.data`, `train.label`, `test.data`, and more. Additionally, a vocabulary file is provided to help users understand the indexed data.
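The sparse files are plain text and easy to consume outside Matlab as well. The sketch below assumes the triple format documented for this variant (each line of `train.data` is `docIdx wordIdx count`, with 1-based indices into the documents and the vocabulary file); it is an illustration, not part of the distribution:

```python
from collections import defaultdict

def load_counts(lines):
    """Parse 'docIdx wordIdx count' triples into {doc_id: {word_id: count}}.

    Assumes the 1-indexed triple format of the Matlab/Octave variant's
    train.data / test.data files.
    """
    docs = defaultdict(dict)
    for line in lines:
        doc_id, word_id, count = map(int, line.split())
        docs[doc_id][word_id] = count
    return docs

# Toy example: doc 1 contains word 5 twice and word 9 once; doc 2
# contains word 5 four times.
docs = load_counts(["1 5 2", "1 9 1", "2 5 4"])
```

The resulting dictionary-of-dictionaries can be converted to a sparse matrix (e.g. `scipy.sparse.csr_matrix`) for clustering or classification experiments.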
## Additional Information
For more details and the original dataset, you can refer to the [official website](http://qwone.com/~jason/20Newsgroups/).
---
license: cc-by-nc-4.0
---
|
MohammadOthman/20-News-Groups
|
[
"task_categories:text-classification",
"task_categories:summarization",
"task_categories:question-answering",
"language:en",
"license:unknown",
"text classification",
"clustering",
"newsgroups",
"region:us"
] |
2023-09-24T12:10:31+00:00
|
{"language": ["en"], "license": "unknown", "task_categories": ["text-classification", "summarization", "question-answering"], "tags": ["text classification", "clustering", "newsgroups"], "size": "70 MB", "description": "The 20 Newsgroups dataset is a collection of approximately 20,000 newsgroup documents, partitioned across 20 different newsgroups. It's widely used for text classification and clustering experiments. The dataset offers three versions: the original, a date-sorted version, and a version with only \"From\" and \"Subject\" headers.\n", "homepage": "http://qwone.com/~jason/20Newsgroups/"}
|
2023-09-24T12:37:14+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #task_categories-summarization #task_categories-question-answering #language-English #license-unknown #text classification #clustering #newsgroups #region-us
|
# 20 Newsgroups Dataset
## Introduction
The 20 Newsgroups dataset comprises roughly 20,000 documents from newsgroups, with an almost even distribution across 20 distinct newsgroups. Initially gathered by Ken Lang, this dataset has gained prominence in the machine learning community, particularly for text-related applications like classification and clustering.
## Dataset Structure
The dataset's organization is based on 20 different newsgroups, each representing a unique topic. While some of these newsgroups share similarities or are closely related, others are quite distinct from one another.
### List of Newsgroups:
- Computer Graphics
- Windows OS Miscellaneous
- IBM PC Hardware
- Mac Hardware
- Windows X
- Automobiles
- Motorcycles
- Baseball
- Hockey
- Cryptography
- Electronics
- Medicine
- Space
- Miscellaneous Sales
- Miscellaneous Politics
- Politics & Guns
- Middle East Politics
- Miscellaneous Religion
- Atheism
- Christianity
## Sample Entries
### Sample from 'Windows X'
### Sample from 'Electronics'
## Data Availability
The dataset is bundled in '.URL' format. Within each bundle, individual subdirectories represent a newsgroup. Every file within these subdirectories corresponds to a document posted in that specific newsgroup.
There are three primary versions of the dataset:
1. The original version, which remains unaltered.
2. A version sorted by date, which segregates the data into training (60%) and test (40%) sets. This version has removed duplicates and some headers for clarity.
3. A version that only retains the "From" and "Subject" headers, with duplicates removed.
For those seeking a more consistent benchmark, the date-sorted version is recommended. It offers a realistic split based on time and has removed any newsgroup-specific identifiers.
## Matlab/Octave Version
For users of Matlab or Octave, a processed variant of the date-sorted dataset is available. This version is structured as a sparse matrix and includes files like 'URL', 'URL', 'URL', and more. Additionally, a vocabulary file is provided to help users understand the indexed data.
## Additional Information
For more details and the original dataset, you can refer to the official website.
---
license: cc-by-nc-4.0
---
|
[
"# 20 Newsgroups Dataset",
"## Introduction\n\nThe 20 Newsgroups dataset comprises roughly 20,000 documents from newsgroups, with an almost even distribution across 20 distinct newsgroups. Initially gathered by Ken Lang, this dataset has gained prominence in the machine learning community, particularly for text-related applications like classification and clustering.",
"## Dataset Structure\n\nThe dataset's organization is based on 20 different newsgroups, each representing a unique topic. While some of these newsgroups share similarities or are closely related, others are quite distinct from one another.",
"### List of Newsgroups:\n\n- Computer Graphics\n- Windows OS Miscellaneous\n- IBM PC Hardware\n- Mac Hardware\n- Windows X\n- Automobiles\n- Motorcycles\n- Baseball\n- Hockey\n- Cryptography\n- Electronics\n- Medicine\n- Space\n- Miscellaneous Sales\n- Miscellaneous Politics\n- Politics & Guns\n- Middle East Politics\n- Miscellaneous Religion\n- Atheism\n- Christianity",
"## Sample Entries",
"### Sample from 'Windows X'",
"### Sample from 'Electronics'",
"## Data Availability\n\nThe dataset is bundled in '.URL' format. Within each bundle, individual subdirectories represent a newsgroup. Every file within these subdirectories corresponds to a document posted in that specific newsgroup.\n\nThere are three primary versions of the dataset:\n\n1. The original version, which remains unaltered.\n2. A version sorted by date, which segregates the data into training (60%) and test (40%) sets. This version has removed duplicates and some headers for clarity.\n3. A version that only retains the \"From\" and \"Subject\" headers, with duplicates removed.\n\nFor those seeking a more consistent benchmark, the date-sorted version is recommended. It offers a realistic split based on time and has removed any newsgroup-specific identifiers.",
"## Matlab/Octave Version\n\nFor users of Matlab or Octave, a processed variant of the date-sorted dataset is available. This version is structured as a sparse matrix and includes files like 'URL', 'URL', 'URL', and more. Additionally, a vocabulary file is provided to help users understand the indexed data.",
"## Additional Information\n\nFor more details and the original dataset, you can refer to the official website.\n\n\n\n---\nlicense: cc-by-nc-4.0\n---"
] |
[
"TAGS\n#task_categories-text-classification #task_categories-summarization #task_categories-question-answering #language-English #license-unknown #text classification #clustering #newsgroups #region-us \n",
"# 20 Newsgroups Dataset",
"## Introduction\n\nThe 20 Newsgroups dataset comprises roughly 20,000 documents from newsgroups, with an almost even distribution across 20 distinct newsgroups. Initially gathered by Ken Lang, this dataset has gained prominence in the machine learning community, particularly for text-related applications like classification and clustering.",
"## Dataset Structure\n\nThe dataset's organization is based on 20 different newsgroups, each representing a unique topic. While some of these newsgroups share similarities or are closely related, others are quite distinct from one another.",
"### List of Newsgroups:\n\n- Computer Graphics\n- Windows OS Miscellaneous\n- IBM PC Hardware\n- Mac Hardware\n- Windows X\n- Automobiles\n- Motorcycles\n- Baseball\n- Hockey\n- Cryptography\n- Electronics\n- Medicine\n- Space\n- Miscellaneous Sales\n- Miscellaneous Politics\n- Politics & Guns\n- Middle East Politics\n- Miscellaneous Religion\n- Atheism\n- Christianity",
"## Sample Entries",
"### Sample from 'Windows X'",
"### Sample from 'Electronics'",
"## Data Availability\n\nThe dataset is bundled in '.URL' format. Within each bundle, individual subdirectories represent a newsgroup. Every file within these subdirectories corresponds to a document posted in that specific newsgroup.\n\nThere are three primary versions of the dataset:\n\n1. The original version, which remains unaltered.\n2. A version sorted by date, which segregates the data into training (60%) and test (40%) sets. This version has removed duplicates and some headers for clarity.\n3. A version that only retains the \"From\" and \"Subject\" headers, with duplicates removed.\n\nFor those seeking a more consistent benchmark, the date-sorted version is recommended. It offers a realistic split based on time and has removed any newsgroup-specific identifiers.",
"## Matlab/Octave Version\n\nFor users of Matlab or Octave, a processed variant of the date-sorted dataset is available. This version is structured as a sparse matrix and includes files like 'URL', 'URL', 'URL', and more. Additionally, a vocabulary file is provided to help users understand the indexed data.",
"## Additional Information\n\nFor more details and the original dataset, you can refer to the official website.\n\n\n\n---\nlicense: cc-by-nc-4.0\n---"
] |
[
62,
7,
72,
52,
89,
5,
9,
11,
179,
81,
34
] |
[
"passage: TAGS\n#task_categories-text-classification #task_categories-summarization #task_categories-question-answering #language-English #license-unknown #text classification #clustering #newsgroups #region-us \n# 20 Newsgroups Dataset## Introduction\n\nThe 20 Newsgroups dataset comprises roughly 20,000 documents from newsgroups, with an almost even distribution across 20 distinct newsgroups. Initially gathered by Ken Lang, this dataset has gained prominence in the machine learning community, particularly for text-related applications like classification and clustering.## Dataset Structure\n\nThe dataset's organization is based on 20 different newsgroups, each representing a unique topic. While some of these newsgroups share similarities or are closely related, others are quite distinct from one another.### List of Newsgroups:\n\n- Computer Graphics\n- Windows OS Miscellaneous\n- IBM PC Hardware\n- Mac Hardware\n- Windows X\n- Automobiles\n- Motorcycles\n- Baseball\n- Hockey\n- Cryptography\n- Electronics\n- Medicine\n- Space\n- Miscellaneous Sales\n- Miscellaneous Politics\n- Politics & Guns\n- Middle East Politics\n- Miscellaneous Religion\n- Atheism\n- Christianity## Sample Entries### Sample from 'Windows X'### Sample from 'Electronics'## Data Availability\n\nThe dataset is bundled in '.URL' format. Within each bundle, individual subdirectories represent a newsgroup. Every file within these subdirectories corresponds to a document posted in that specific newsgroup.\n\nThere are three primary versions of the dataset:\n\n1. The original version, which remains unaltered.\n2. A version sorted by date, which segregates the data into training (60%) and test (40%) sets. This version has removed duplicates and some headers for clarity.\n3. A version that only retains the \"From\" and \"Subject\" headers, with duplicates removed.\n\nFor those seeking a more consistent benchmark, the date-sorted version is recommended. 
It offers a realistic split based on time and has removed any newsgroup-specific identifiers."
] |
91d822f2a687dbc2666286400c803a957f410635
|
# Dataset Card for "llama-movie-prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
erlend0/llama-movie-prompts
|
[
"region:us"
] |
2023-09-24T12:13:56+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 65007, "num_examples": 340}], "download_size": 21880, "dataset_size": 65007}}
|
2023-09-24T12:13:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llama-movie-prompts"
More Information needed
|
[
"# Dataset Card for \"llama-movie-prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llama-movie-prompts\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llama-movie-prompts\"\n\nMore Information needed"
] |
a85e8ae75b516c7ce07a19a567ed3eee9dd06fca
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
kat33/test-bc1
|
[
"language:en",
"license:mit",
"region:us"
] |
2023-09-24T12:14:43+00:00
|
{"language": ["en"], "license": "mit", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": ["train/en-baltimore-catechism-1.jsonl", "train/en-baltimore-catechism-1-addon.jsonl"]}, {"split": "validation", "path": "validation/en-baltimore-catechism-1-validation.jsonl"}]}]}
|
2023-09-27T17:50:47+00:00
|
[] |
[
"en"
] |
TAGS
#language-English #license-mit #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#language-English #license-mit #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
15,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#language-English #license-mit #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
d334799815f6aecb05fd6d446c43ea38b074e94a
|
# Dataset Card for "AISHELL1_withLM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ASR-HypR/AISHELL1_withLM
|
[
"region:us"
] |
2023-09-24T12:36:28+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "dev", "path": "data/dev-*"}]}], "dataset_info": {"features": [{"name": "ref", "dtype": "string"}, {"name": "hyps", "sequence": "string"}, {"name": "ctc_score", "sequence": "float64"}, {"name": "att_score", "sequence": "float64"}, {"name": "lm_score", "sequence": "float64"}, {"name": "utt_id", "dtype": "string"}, {"name": "score", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 572977340, "num_examples": 120098}, {"name": "test", "num_bytes": 34410820, "num_examples": 7176}, {"name": "dev", "num_bytes": 67924134, "num_examples": 14326}], "download_size": 355095107, "dataset_size": 675312294}}
|
2023-09-24T14:04:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "AISHELL1_withLM"
More Information needed
|
[
"# Dataset Card for \"AISHELL1_withLM\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"AISHELL1_withLM\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"AISHELL1_withLM\"\n\nMore Information needed"
] |
54f24a6e9e740a560723b9fa8dfc97f79742c551
|
# Dataset Card for "LibriSpeech_withLM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ASR-HypR/LibriSpeech_withLM
|
[
"region:us"
] |
2023-09-24T12:56:57+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "dev_clean", "path": "data/dev_clean-*"}, {"split": "dev_other", "path": "data/dev_other-*"}, {"split": "test_clean", "path": "data/test_clean-*"}, {"split": "test_other", "path": "data/test_other-*"}]}], "dataset_info": {"features": [{"name": "utt_id", "dtype": "string"}, {"name": "hyps", "sequence": "string"}, {"name": "att_score", "sequence": "float64"}, {"name": "ctc_score", "sequence": "float64"}, {"name": "score", "sequence": "float64"}, {"name": "ref", "dtype": "string"}, {"name": "lm_score", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 3073751225, "num_examples": 281231}, {"name": "dev_clean", "num_bytes": 19839669, "num_examples": 2703}, {"name": "dev_other", "num_bytes": 18981732, "num_examples": 2864}, {"name": "test_clean", "num_bytes": 19336959, "num_examples": 2620}, {"name": "test_other", "num_bytes": 19464386, "num_examples": 2939}], "download_size": 879395852, "dataset_size": 3151373971}}
|
2023-09-24T14:40:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "LibriSpeech_withLM"
More Information needed
|
[
"# Dataset Card for \"LibriSpeech_withLM\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"LibriSpeech_withLM\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"LibriSpeech_withLM\"\n\nMore Information needed"
] |
ce400f07f9ba1398dc7e10cb267049aa3e4ee7be
|
# Dataset Card for "LibriSpeech_withoutLM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ASR-HypR/LibriSpeech_withoutLM
|
[
"region:us"
] |
2023-09-24T12:57:13+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "dev_clean", "path": "data/dev_clean-*"}, {"split": "dev_other", "path": "data/dev_other-*"}, {"split": "test_clean", "path": "data/test_clean-*"}, {"split": "test_other", "path": "data/test_other-*"}]}], "dataset_info": {"features": [{"name": "utt_id", "dtype": "string"}, {"name": "hyps", "sequence": "string"}, {"name": "att_score", "sequence": "float64"}, {"name": "ctc_score", "sequence": "float64"}, {"name": "score", "sequence": "float64"}, {"name": "ref", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2945538729, "num_examples": 281231}, {"name": "dev_clean", "num_bytes": 18638182, "num_examples": 2703}, {"name": "dev_other", "num_bytes": 17703758, "num_examples": 2864}, {"name": "test_clean", "num_bytes": 18159132, "num_examples": 2620}, {"name": "test_other", "num_bytes": 18159497, "num_examples": 2939}], "download_size": 721788345, "dataset_size": 3018199298}}
|
2023-09-24T14:44:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "LibriSpeech_withoutLM"
More Information needed
|
[
"# Dataset Card for \"LibriSpeech_withoutLM\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"LibriSpeech_withoutLM\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"LibriSpeech_withoutLM\"\n\nMore Information needed"
] |
3ff80194c7fb37af6ef4e831fa5eb75187de4f72
|
# Dataset Card for "TEDLIUM2_withLM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ASR-HypR/TEDLIUM2_withLM
|
[
"region:us"
] |
2023-09-24T12:57:36+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "dev", "path": "data/dev-*"}]}], "dataset_info": {"features": [{"name": "ref", "dtype": "string"}, {"name": "hyps", "sequence": "string"}, {"name": "ctc_score", "sequence": "float64"}, {"name": "att_score", "sequence": "float64"}, {"name": "lm_score", "sequence": "float64"}, {"name": "utt_id", "dtype": "string"}, {"name": "score", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 781909140, "num_examples": 92791}, {"name": "test", "num_bytes": 9515959, "num_examples": 1155}, {"name": "dev", "num_bytes": 5695607, "num_examples": 507}], "download_size": 267938768, "dataset_size": 797120706}}
|
2023-09-24T14:01:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "TEDLIUM2_withLM"
More Information needed
|
[
"# Dataset Card for \"TEDLIUM2_withLM\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"TEDLIUM2_withLM\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"TEDLIUM2_withLM\"\n\nMore Information needed"
] |
eb3d7a0fe39b105f902260d7baa16e5b72d2268a
|
# Dataset Card for "TEDLIUM2_withoutLM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ASR-HypR/TEDLIUM2_withoutLM
|
[
"region:us"
] |
2023-09-24T12:57:50+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "dev", "path": "data/dev-*"}]}], "dataset_info": {"features": [{"name": "ref", "dtype": "string"}, {"name": "hyps", "sequence": "string"}, {"name": "ctc_score", "sequence": "float64"}, {"name": "att_score", "sequence": "float64"}, {"name": "utt_id", "dtype": "string"}, {"name": "score", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 739353925, "num_examples": 92791}, {"name": "test", "num_bytes": 9005689, "num_examples": 1155}, {"name": "dev", "num_bytes": 5574485, "num_examples": 507}], "download_size": 216892133, "dataset_size": 753934099}}
|
2023-09-24T14:02:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "TEDLIUM2_withoutLM"
More Information needed
|
[
"# Dataset Card for \"TEDLIUM2_withoutLM\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"TEDLIUM2_withoutLM\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"TEDLIUM2_withoutLM\"\n\nMore Information needed"
] |
31a77a04ea67d4697d179a9d0f60f118f57fddb5
|
# Dataset Card for "KoQuality"
## Dataset Summary
Korean instruction dataset with only 1.04% of examples selected from several instruction datasets for an autoregressive language model.
The final 4,006 sentences were selected out of 384,067 examples from the integrated datasets.
## Dataset Description
- repository: [nayohan/HCLT](https://github.com/nayohan/KoQuality)
- huggingface: [DILAB-HYU/KoQuality](https://huggingface.co/datasets/DILAB-HYU/KoQuality)
- raw_datasets: [nayohan/koquality_raw](https://huggingface.co/datasets/nayohan/koquality_raw)
### Selection method
1. Grouping according to instruction length (5 length groups)
2. Clustering the dataset with KoSimCSE model embeddings <br>
The combined *KoAlpaca*, *KULLM*, and *OIG* datasets (384,067 instructions) are clustered by instruction length using K-means clustering.
3. PPL sampling <br>
Select 1% of the data from each cluster based on its PPL. (4,006 instructions)
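The card does not spell out the exact PPL-based selection rule. As a rough sketch of step 3 under stated assumptions (the `ppl_sample` helper, the median-distance ranking, and the record layout are illustrative, not the authors' code), per-cluster 1% sampling could look like:

```python
from collections import defaultdict

def ppl_sample(records, ratio=0.01):
    """Keep roughly `ratio` of the records in each cluster, preferring
    examples whose perplexity lies closest to the cluster median."""
    by_cluster = defaultdict(list)
    for r in records:
        by_cluster[r["cluster"]].append(r)

    selected = []
    for rows in by_cluster.values():
        rows.sort(key=lambda r: r["ppl"])
        median_ppl = rows[len(rows) // 2]["ppl"]
        # rank by distance to the median PPL and keep the top `ratio` share
        rows.sort(key=lambda r: abs(r["ppl"] - median_ppl))
        k = max(1, round(len(rows) * ratio))
        selected.extend(rows[:k])
    return selected
```

With 384,067 clustered instructions, a 1% ratio of this kind yields on the order of the 4,006 examples reported above.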
### Used datasets
| Dataset | count | links |
| --- | --- | --- |
| KoAlpaca-v1.1a | 21155 | [https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a)
| kullm-v2 | 152630 | [https://huggingface.co/datasets/nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2)
| OIG-small-chip2-ko | 210282 | [https://huggingface.co/datasets/heegyu/OIG-small-chip2-ko](https://huggingface.co/datasets/heegyu/OIG-small-chip2-ko)
### How to use
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("DILAB-HYU/KoQuality", split="train")
>>> ds
Dataset({
features: ['instruction', 'output'],
num_rows: 4006
})
```
<br>
- Instruction Tuned Polyglot-ko Model: [DILAB-HYU/KoQuality-Polyglot-5.8b](https://huggingface.co/DILAB-HYU/KoQuality-Polyglot-5.8b)
- More details can be found here: [github.com/nayohan/KoQuality](https://github.com/nayohan/KoQuality)
- Raw datasets available here: [nayohan/koquality_raw](https://huggingface.co/datasets/nayohan/koquality_raw)
|
DILAB-HYU/KoQuality
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-09-24T13:08:18+00:00
|
{"license": "cc-by-4.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "ppl", "dtype": "float64"}, {"name": "len", "dtype": "int64"}, {"name": "len_group", "dtype": "int64"}, {"name": "cluster", "dtype": "int64"}, {"name": "group", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3762418, "num_examples": 4006}], "download_size": 2059932, "dataset_size": 3762418}}
|
2023-11-30T04:09:37+00:00
|
[] |
[] |
TAGS
#license-cc-by-4.0 #region-us
|
Dataset Card for "KoQuality"
============================
Dataset Summary
---------------
Korean instruction dataset with only 1.04% of examples selected from several instruction datasets for an autoregressive language model.
The final 4,006 sentences were selected out of 384,067 examples from the integrated datasets.
Dataset Description
-------------------
* repository: nayohan/HCLT
* huggingface: DILAB-HYU/KoQuality
* raw\_datasets: nayohan/koquality\_raw
### Selection method
1. Grouping according to instruction length (5 length groups)
2. Clustering the dataset with KoSimCSE model embeddings
The combined *KoAlpaca*, *KULLM*, and *OIG* datasets (384,067 instructions) are clustered by instruction length using K-means clustering.
3. PPL sampling
Select 1% of the data from each cluster based on its PPL. (4,006 instructions)
### Used datasets
Dataset: KoAlpaca-v1.1a, count: 21155, links: URL
Dataset: kullm-v2, count: 152630, links: URL
Dataset: OIG-small-chip2-ko, count: 210282, links: URL
### How to use
* Instruction Tuned Polyglot-ko Model: DILAB-HYU/KoQuality-Polyglot-5.8b
* More details can be found here: URL
* Raw datasets available here: nayohan/koquality\_raw
|
[
"### Selection method\n\n\n1. Grouping according to length of instruction (5 length group)\n2. Clustering dataset with KoSimCSE model embedding \n\nClustering datasets with 'instruction length' and 'K-means Clustering' using *KoAlpaca, KULLM*, and *OIG*. (384,067 instructions)\n3. PPL sampling \n\nSelect 1% of data based on PPL of each cluster. (4006 instructions)",
"### Used datasets\n\n\nDataset: KoAlpaca-v1.1a, count: 21155, links: URL\nDataset: kullm-v2, count: 152630, links: URL\nDataset: OIG-small-chip2-ko, count: 210282, links: URL",
"### How to use\n\n\n \n\n* Instruction Tuned Polyglot-ko Model: DILAB-HYU/KoQuality-Polyglot-5.8b\n* More details can be found here: URL\n* Raw datasets available here: nayohan/koquality\\_raw"
] |
[
"TAGS\n#license-cc-by-4.0 #region-us \n",
"### Selection method\n\n\n1. Grouping according to length of instruction (5 length group)\n2. Clustering dataset with KoSimCSE model embedding \n\nClustering datasets with 'instruction length' and 'K-means Clustering' using *KoAlpaca, KULLM*, and *OIG*. (384,067 instructions)\n3. PPL sampling \n\nSelect 1% of data based on PPL of each cluster. (4006 instructions)",
"### Used datasets\n\n\nDataset: KoAlpaca-v1.1a, count: 21155, links: URL\nDataset: kullm-v2, count: 152630, links: URL\nDataset: OIG-small-chip2-ko, count: 210282, links: URL",
"### How to use\n\n\n \n\n* Instruction Tuned Polyglot-ko Model: DILAB-HYU/KoQuality-Polyglot-5.8b\n* More details can be found here: URL\n* Raw datasets available here: nayohan/koquality\\_raw"
] |
[
15,
100,
67,
59
] |
[
"passage: TAGS\n#license-cc-by-4.0 #region-us \n### Selection method\n\n\n1. Grouping according to length of instruction (5 length group)\n2. Clustering dataset with KoSimCSE model embedding \n\nClustering datasets with 'instruction length' and 'K-means Clustering' using *KoAlpaca, KULLM*, and *OIG*. (384,067 instructions)\n3. PPL sampling \n\nSelect 1% of data based on PPL of each cluster. (4006 instructions)### Used datasets\n\n\nDataset: KoAlpaca-v1.1a, count: 21155, links: URL\nDataset: kullm-v2, count: 152630, links: URL\nDataset: OIG-small-chip2-ko, count: 210282, links: URL### How to use\n\n\n \n\n* Instruction Tuned Polyglot-ko Model: DILAB-HYU/KoQuality-Polyglot-5.8b\n* More details can be found here: URL\n* Raw datasets available here: nayohan/koquality\\_raw"
] |
d7279bf427880967e0f2cf39756edd15de1897c7
|
# Bangumi Image Base of Sound! Euphonium
This is the image base of the bangumi Sound! Euphonium. We detected 86 characters and 8,324 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may actually be noisy.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 1794 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 93 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 118 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 39 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 23 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 52 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 420 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 27 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 11 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 27 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 25 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 44 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 41 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 504 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 66 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 56 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 217 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 35 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 51 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 16 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 192 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 75 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 32 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 24 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 93 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 454 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 516 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 54 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 63 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 23 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 12 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 55 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 111 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 23 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 227 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 86 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 43 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 43 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 38 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 112 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 36 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 17 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 14 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 88 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 19 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 26 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 59 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 35 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 23 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 26 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 28 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 20 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 24 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 24 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 103 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 21 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 185 | [Download](56/dataset.zip) |  |  |  |  |  |  |  |  |
| 57 | 12 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| 58 | 24 | [Download](58/dataset.zip) |  |  |  |  |  |  |  |  |
| 59 | 14 | [Download](59/dataset.zip) |  |  |  |  |  |  |  |  |
| 60 | 29 | [Download](60/dataset.zip) |  |  |  |  |  |  |  |  |
| 61 | 22 | [Download](61/dataset.zip) |  |  |  |  |  |  |  |  |
| 62 | 38 | [Download](62/dataset.zip) |  |  |  |  |  |  |  |  |
| 63 | 413 | [Download](63/dataset.zip) |  |  |  |  |  |  |  |  |
| 64 | 65 | [Download](64/dataset.zip) |  |  |  |  |  |  |  |  |
| 65 | 17 | [Download](65/dataset.zip) |  |  |  |  |  |  |  |  |
| 66 | 27 | [Download](66/dataset.zip) |  |  |  |  |  |  |  |  |
| 67 | 51 | [Download](67/dataset.zip) |  |  |  |  |  |  |  |  |
| 68 | 21 | [Download](68/dataset.zip) |  |  |  |  |  |  |  |  |
| 69 | 24 | [Download](69/dataset.zip) |  |  |  |  |  |  |  |  |
| 70 | 11 | [Download](70/dataset.zip) |  |  |  |  |  |  |  |  |
| 71 | 16 | [Download](71/dataset.zip) |  |  |  |  |  |  |  |  |
| 72 | 19 | [Download](72/dataset.zip) |  |  |  |  |  |  |  |  |
| 73 | 18 | [Download](73/dataset.zip) |  |  |  |  |  |  |  |  |
| 74 | 23 | [Download](74/dataset.zip) |  |  |  |  |  |  |  |  |
| 75 | 22 | [Download](75/dataset.zip) |  |  |  |  |  |  |  |  |
| 76 | 11 | [Download](76/dataset.zip) |  |  |  |  |  |  |  |  |
| 77 | 9 | [Download](77/dataset.zip) |  |  |  |  |  |  |  |  |
| 78 | 7 | [Download](78/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 79 | 180 | [Download](79/dataset.zip) |  |  |  |  |  |  |  |  |
| 80 | 32 | [Download](80/dataset.zip) |  |  |  |  |  |  |  |  |
| 81 | 26 | [Download](81/dataset.zip) |  |  |  |  |  |  |  |  |
| 82 | 23 | [Download](82/dataset.zip) |  |  |  |  |  |  |  |  |
| 83 | 10 | [Download](83/dataset.zip) |  |  |  |  |  |  |  |  |
| 84 | 30 | [Download](84/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 447 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/soundeuphonium
|
[
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] |
2023-09-24T13:26:20+00:00
|
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
|
2023-09-29T10:47:11+00:00
|
[] |
[] |
TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
|
Bangumi Image Base of Sound! Euphonium
======================================
This is the image base of bangumi Sound! Euphonium, we detected 86 characters, 8324 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noisy samples. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
|
[] |
[
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
[
25
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
786b2836eb4484bab78928e199b1705005eb4321
|
# Dataset Card for "AO3_fandom_chatbot_1to1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/AO3_fandom_chatbot_1to1
|
[
"region:us"
] |
2023-09-24T13:35:07+00:00
|
{"dataset_info": {"features": [{"name": "conversation", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1203600, "num_examples": 614}], "download_size": 0, "dataset_size": 1203600}}
|
2023-09-25T16:58:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "AO3_fandom_chatbot_1to1"
More Information needed
|
[
"# Dataset Card for \"AO3_fandom_chatbot_1to1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"AO3_fandom_chatbot_1to1\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"AO3_fandom_chatbot_1to1\"\n\nMore Information needed"
] |
aea55cff81ca9b6457068e1dcba4ab02b0f29639
|
# Dataset Card for "phoner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Yuhthe/phoner
|
[
"region:us"
] |
2023-09-24T13:37:58+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "words", "sequence": "string"}, {"name": "tags", "sequence": {"class_label": {"names": {"0": "B-AGE", "1": "I-AGE", "2": "B-DATE", "3": "I-JOB", "4": "O", "5": "B-NAME", "6": "I-PATIENT_ID", "7": "B-LOCATION", "8": "B-TRANSPORTATION", "9": "B-GENDER", "10": "I-ORGANIZATION", "11": "B-SYMPTOM_AND_DISEASE", "12": "B-JOB", "13": "I-NAME", "14": "B-ORGANIZATION", "15": "I-TRANSPORTATION", "16": "B-PATIENT_ID", "17": "I-SYMPTOM_AND_DISEASE", "18": "I-LOCATION", "19": "I-DATE"}}}}], "splits": [{"name": "train", "num_bytes": 2408512, "num_examples": 5027}, {"name": "val", "num_bytes": 1020086, "num_examples": 2000}, {"name": "test", "num_bytes": 1549558, "num_examples": 3000}], "download_size": 0, "dataset_size": 4978156}}
|
2023-09-24T14:28:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "phoner"
More Information needed
|
[
"# Dataset Card for \"phoner\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"phoner\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"phoner\"\n\nMore Information needed"
] |
30f2ea61f288b025c5e1a8c8c4f5ee132fc1c4b9
|
[MMLU Github](https://github.com/hendrycks/test)
Only the auxiliary training set was used. I have not checked for similarity or contamination, but it's something I need to figure out soon.
Each example has a randomized starting message indicating that it is a multiple-choice question and that the response must be a single letter. For the rejected response, I randomly chose either an incorrect answer, or any answer written out in full rather than as a single letter.
This was done to hopefully teach a model to properly follow the task of answering a multiple-choice question, with the constraint of providing *only* a single-letter answer, and to do so correctly on a quality set.
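A minimal sketch of how such chosen/rejected pairs could be assembled (the letter set, prompt wording, field names, and seeding here are my assumptions, not the original script):

```python
import random

LETTERS = ["A", "B", "C", "D"]

def build_pair(question, choices, correct_idx, rng=random.Random(0)):
    # Format the options as lettered lines under the question.
    options = "\n".join(f"{letter}. {text}" for letter, text in zip(LETTERS, choices))
    prompt = (
        "This is a multiple choice question. "
        "Respond with only a single letter.\n\n"
        f"{question}\n{options}"
    )
    chosen = LETTERS[correct_idx]
    if rng.random() < 0.5:
        # Rejected: a randomly chosen incorrect letter.
        rejected = rng.choice([l for i, l in enumerate(LETTERS) if i != correct_idx])
    else:
        # Rejected: an answer written out fully instead of as a single letter.
        rejected = rng.choice(choices)
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

pair = build_pair("2 + 2 = ?", ["3", "4", "5", "22"], correct_idx=1)
```

Either rejection mode violates the single-letter constraint or the correctness requirement, which is what the preference pair is meant to teach.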
# Paper: [Measuring Massive Multitask Language Understanding](https://arxiv.org/abs/2009.03300)
```
@article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
@article{hendrycks2021ethics,
title={Aligning AI With Shared Human Values},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
```
|
xzuyn/mmlu-auxilary-train-dpo
|
[
"size_categories:10K<n<100K",
"language:en",
"human-feedback",
"comparison",
"rlhf",
"dpo",
"preference",
"pairwise",
"arxiv:2009.03300",
"region:us"
] |
2023-09-24T13:44:13+00:00
|
{"language": ["en"], "size_categories": ["10K<n<100K"], "tags": ["human-feedback", "comparison", "rlhf", "dpo", "preference", "pairwise"]}
|
2023-09-24T18:11:23+00:00
|
[
"2009.03300"
] |
[
"en"
] |
TAGS
#size_categories-10K<n<100K #language-English #human-feedback #comparison #rlhf #dpo #preference #pairwise #arxiv-2009.03300 #region-us
|
MMLU Github
Only the auxiliary training set was used. I have not checked for similarity or contamination, but it's something I need to figure out soon.
Each example has a randomized starting message indicating that it is a multiple-choice question and that the response must be a single letter. For the rejected response, I randomly chose either an incorrect answer, or any answer written out in full rather than as a single letter.
This was done to hopefully teach a model to properly follow the task of answering a multiple-choice question, with the constraint of providing *only* a single-letter answer, and to do so correctly on a quality set.
# Paper: Measuring Massive Multitask Language Understanding
|
[
"# Paper: Measuring Massive Multitask Language Understanding"
] |
[
"TAGS\n#size_categories-10K<n<100K #language-English #human-feedback #comparison #rlhf #dpo #preference #pairwise #arxiv-2009.03300 #region-us \n",
"# Paper: Measuring Massive Multitask Language Understanding"
] |
[
52,
14
] |
[
"passage: TAGS\n#size_categories-10K<n<100K #language-English #human-feedback #comparison #rlhf #dpo #preference #pairwise #arxiv-2009.03300 #region-us \n# Paper: Measuring Massive Multitask Language Understanding"
] |
55407b0f727bf2dbfb82a5cb457b951c34e3e170
|
# Dataset Card for "Kinyarwanda_wikipedia20230920"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
RogerB/Kinyarwanda_wikipedia20230920
|
[
"region:us"
] |
2023-09-24T14:14:58+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11949533, "num_examples": 8046}], "download_size": 6643489, "dataset_size": 11949533}}
|
2023-09-24T14:15:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Kinyarwanda_wikipedia20230920"
More Information needed
|
[
"# Dataset Card for \"Kinyarwanda_wikipedia20230920\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Kinyarwanda_wikipedia20230920\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Kinyarwanda_wikipedia20230920\"\n\nMore Information needed"
] |
5a3def1cfa17aec2cc30b441f9e3bac8d92e0473
|

# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
mindchain/ORCA_GOT_STYLE
|
[
"region:us"
] |
2023-09-24T14:33:49+00:00
|
{}
|
2023-09-24T17:08:58+00:00
|
[] |
[] |
TAGS
#region-us
|
!image/png
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
0bc6099b80ef03928630b49b13a037506acfbc42
|
# Dataset Card for "phoner_conll"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Yuhthe/phoner_conll
|
[
"region:us"
] |
2023-09-24T14:34:17+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "words", "sequence": "string"}, {"name": "tags", "sequence": {"class_label": {"names": {"0": "B-AGE", "1": "I-AGE", "2": "B-DATE", "3": "I-JOB", "4": "O", "5": "B-NAME", "6": "I-PATIENT_ID", "7": "B-LOCATION", "8": "B-TRANSPORTATION", "9": "B-GENDER", "10": "I-ORGANIZATION", "11": "B-SYMPTOM_AND_DISEASE", "12": "B-JOB", "13": "I-NAME", "14": "B-ORGANIZATION", "15": "I-TRANSPORTATION", "16": "B-PATIENT_ID", "17": "I-SYMPTOM_AND_DISEASE", "18": "I-LOCATION", "19": "I-DATE"}}}}], "splits": [{"name": "train", "num_bytes": 2408512, "num_examples": 5027}, {"name": "val", "num_bytes": 1020086, "num_examples": 2000}, {"name": "test", "num_bytes": 1549558, "num_examples": 3000}], "download_size": 831184, "dataset_size": 4978156}}
|
2023-09-24T14:34:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "phoner_conll"
More Information needed
|
[
"# Dataset Card for \"phoner_conll\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"phoner_conll\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"phoner_conll\"\n\nMore Information needed"
] |
ea42ec71bd6f0aa55fb3d77a37fb0d0fd0ff5e1d
|
# Dataset Card for "speech_commands"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
barto17/speech_commands
|
[
"region:us"
] |
2023-09-24T14:41:51+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 5348243424, "num_examples": 84848}, {"name": "validation", "num_bytes": 630456936, "num_examples": 9982}, {"name": "test", "num_bytes": 313038240, "num_examples": 4890}], "download_size": 733656472, "dataset_size": 6291738600}}
|
2023-09-24T15:01:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "speech_commands"
More Information needed
|
[
"# Dataset Card for \"speech_commands\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"speech_commands\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"speech_commands\"\n\nMore Information needed"
] |
21079dbd3e56763cfea40da3cd5d0b58c19b5629
|
# Dataset Card for "top_terms"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
iohadrubin/top_terms
|
[
"region:us"
] |
2023-09-24T14:49:23+00:00
|
{"dataset_info": {"features": [{"name": "idx", "dtype": "int64"}, {"name": "value", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 49818, "num_examples": 64}], "download_size": 31740, "dataset_size": 49818}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-24T14:49:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "top_terms"
More Information needed
|
[
"# Dataset Card for \"top_terms\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"top_terms\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"top_terms\"\n\nMore Information needed"
] |
e491c7b26b8258df8b47f882f2e78df4c830ff2b
|
# Dataset Card for "habr_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/habr_10k
|
[
"region:us"
] |
2023-09-24T14:52:31+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "uint32"}, {"name": "language", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text_markdown", "dtype": "string"}, {"name": "text_html", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "original_author", "dtype": "string"}, {"name": "original_url", "dtype": "string"}, {"name": "lead_html", "dtype": "string"}, {"name": "lead_markdown", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "time_published", "dtype": "uint64"}, {"name": "statistics", "struct": [{"name": "commentsCount", "dtype": "uint32"}, {"name": "favoritesCount", "dtype": "uint32"}, {"name": "readingCount", "dtype": "uint32"}, {"name": "score", "dtype": "int32"}, {"name": "votesCount", "dtype": "int32"}, {"name": "votesCountPlus", "dtype": "int32"}, {"name": "votesCountMinus", "dtype": "int32"}]}, {"name": "labels", "sequence": "string"}, {"name": "hubs", "sequence": "string"}, {"name": "flows", "sequence": "string"}, {"name": "tags", "sequence": "string"}, {"name": "reading_time", "dtype": "uint32"}, {"name": "format", "dtype": "string"}, {"name": "complexity", "dtype": "string"}, {"name": "comments", "sequence": [{"name": "id", "dtype": "uint64"}, {"name": "parent_id", "dtype": "uint64"}, {"name": "level", "dtype": "uint32"}, {"name": "time_published", "dtype": "uint64"}, {"name": "score", "dtype": "int32"}, {"name": "votes", "dtype": "uint32"}, {"name": "message_html", "dtype": "string"}, {"name": "message_markdown", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "children", "sequence": "uint64"}]}, {"name": "readingCount", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 661170132.0315578, "num_examples": 10000}], "download_size": 901387901, "dataset_size": 661170132.0315578}}
|
2023-09-24T14:56:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "habr_10k"
More Information needed
|
[
"# Dataset Card for \"habr_10k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"habr_10k\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"habr_10k\"\n\nMore Information needed"
] |
d4418683a32cd7b413c43569be441a91de84c1c8
|
# Dataset Card for "kuo-diss"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
drumwell/kuo-diss
|
[
"region:us"
] |
2023-09-24T15:00:23+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2044182, "num_examples": 1}], "download_size": 1198738, "dataset_size": 2044182}}
|
2023-09-24T15:00:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "kuo-diss"
More Information needed
|
[
"# Dataset Card for \"kuo-diss\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"kuo-diss\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"kuo-diss\"\n\nMore Information needed"
] |
963dd7cf18cf35b421832e0d22f96ad61bf73d54
|
# Dataset Card for "top_terms_subtopics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
iohadrubin/top_terms_subtopics
|
[
"region:us"
] |
2023-09-24T15:46:59+00:00
|
{"dataset_info": {"features": [{"name": "idx", "dtype": "int64"}, {"name": "value", "dtype": "string"}, {"name": "cluster", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3330605, "num_examples": 4096}], "download_size": 0, "dataset_size": 3330605}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-24T15:47:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "top_terms_subtopics"
More Information needed
|
[
"# Dataset Card for \"top_terms_subtopics\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"top_terms_subtopics\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"top_terms_subtopics\"\n\nMore Information needed"
] |
d2f8061f167326fbb89926bd1bdab0cabca596d1
|
# Dataset Card for "phoner_seq2seq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Yuhthe/phoner_seq2seq
|
[
"region:us"
] |
2023-09-24T15:53:05+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "words", "dtype": "string"}, {"name": "tags", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2534372, "num_examples": 5027}, {"name": "val", "num_bytes": 1140004, "num_examples": 2000}, {"name": "test", "num_bytes": 1742126, "num_examples": 3000}], "download_size": 2188554, "dataset_size": 5416502}}
|
2023-09-24T15:54:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "phoner_seq2seq"
More Information needed
|
[
"# Dataset Card for \"phoner_seq2seq\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"phoner_seq2seq\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"phoner_seq2seq\"\n\nMore Information needed"
] |
c426d87d719d3acbe38462b27b707d82cacd8583
|
# Dataset Card for "top_terms_subtopics_w_emb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
iohadrubin/top_terms_subtopics_w_emb
|
[
"region:us"
] |
2023-09-24T16:03:26+00:00
|
{"dataset_info": {"features": [{"name": "idx", "dtype": "int64"}, {"name": "value", "dtype": "string"}, {"name": "cluster", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}, {"name": "embeddings", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 53678637, "num_examples": 4096}], "download_size": 53069276, "dataset_size": 53678637}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-24T16:04:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "top_terms_subtopics_w_emb"
More Information needed
|
[
"# Dataset Card for \"top_terms_subtopics_w_emb\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"top_terms_subtopics_w_emb\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"top_terms_subtopics_w_emb\"\n\nMore Information needed"
] |
424d8fa8cf2453830643eb0c4de4e0460cda28f3
|
# Dataset Card for "llm-kuobot2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
drumwell/llm-kuobot
|
[
"region:us"
] |
2023-09-24T16:11:09+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 1631004.0, "num_examples": 199}, {"name": "test", "num_bytes": 188508.0, "num_examples": 23}], "download_size": 942321, "dataset_size": 1819512.0}}
|
2023-09-24T16:11:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llm-kuobot2"
More Information needed
|
[
"# Dataset Card for \"llm-kuobot2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llm-kuobot2\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llm-kuobot2\"\n\nMore Information needed"
] |
82471c6fc6337e9fd2daa9da52fdf467484bd89b
|
# Dataset Card for "xxt_ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yuanmei424/xxt_ds
|
[
"region:us"
] |
2023-09-24T16:12:32+00:00
|
{"dataset_info": {"features": [{"name": "edit_prompt", "dtype": "string"}, {"name": "input_image", "dtype": "image"}, {"name": "edited_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 5219118955.25, "num_examples": 2283951}], "download_size": 0, "dataset_size": 5219118955.25}}
|
2023-09-26T21:10:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xxt_ds"
More Information needed
|
[
"# Dataset Card for \"xxt_ds\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xxt_ds\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xxt_ds\"\n\nMore Information needed"
] |
8cefb1bb06d087a3819e0cb645f2f66539ffb343
|
# Dataset Card for "xxt_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yuanmei424/xxt_sample
|
[
"region:us"
] |
2023-09-24T16:12:44+00:00
|
{"dataset_info": {"features": [{"name": "edit_prompt", "dtype": "string"}, {"name": "input_image", "dtype": "image"}, {"name": "edited_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 18228939.25, "num_examples": 7735}], "download_size": 15793441, "dataset_size": 18228939.25}}
|
2023-09-26T13:06:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xxt_sample"
More Information needed
|
[
"# Dataset Card for \"xxt_sample\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xxt_sample\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xxt_sample\"\n\nMore Information needed"
] |
95307c75fac6d354100df89bc40b7d9db6714d58
|
# Dataset Card for "donut_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sankettgorey/donut_4
|
[
"region:us"
] |
2023-09-24T16:13:47+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 168398938.91680533, "num_examples": 540}, {"name": "validation", "num_bytes": 8579406.106489185, "num_examples": 30}, {"name": "test", "num_bytes": 9378162.976705492, "num_examples": 31}], "download_size": 13900408, "dataset_size": 186356508.0}}
|
2023-09-24T16:17:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "donut_4"
More Information needed
|
[
"# Dataset Card for \"donut_4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"donut_4\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"donut_4\"\n\nMore Information needed"
] |
82d050b04b042476261b7524cc9b70056326cec5
|
# Dataset Card for "imdb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
barto17/imdb
|
[
"region:us"
] |
2023-09-24T16:16:21+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "unsupervised", "path": "data/unsupervised-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 97632823, "num_examples": 25000}, {"name": "test", "num_bytes": 96850685, "num_examples": 25000}, {"name": "unsupervised", "num_bytes": 195506794, "num_examples": 50000}], "download_size": 135785876, "dataset_size": 389990302}}
|
2023-09-24T16:16:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "imdb"
More Information needed
|
[
"# Dataset Card for \"imdb\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"imdb\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"imdb\"\n\nMore Information needed"
] |
727b96dc94fc6e4796762b9330b41665a4901e76
|
[ETHICS Github](https://github.com/hendrycks/ethics)
The dataset has been modified to include a random starting message asking for the preferred scenario. It also randomizes between digits and words when labeling the scenarios. I wanted to vary the quality of the instruction so that it's similar to what a user might type, so the scenario prefixes can differ from one another.
The order in which the scenarios are presented is randomized.
```
Selected Prefix: 'Choose the scenario which is better than the other.' Count: 1970
Selected Prefix: 'Which one is better?' Count: 1969
Selected Prefix: 'Please let me know what one of these you think is more ideal.' Count: 1966
Selected Prefix: 'Pick the more ideal situation.' Count: 1926
Selected Prefix: 'What scenario is better to you?' Count: 1901
Selected Prefix: 'What do you think is a better option?' Count: 2024
Selected Prefix: 'The following is two scenarios. Select which is better.' Count: 1982
Selected Scenario Prefix: 'scenario ' Count: 1744
Selected Scenario Prefix: 'Option ' Count: 1753
Selected Scenario Prefix: 'Choice ' Count: 1730
Selected Scenario Prefix: 'Situation ' Count: 1742
Selected Scenario Prefix: 'situation ' Count: 1705
Selected Scenario Prefix: 'choice ' Count: 1721
Selected Scenario Prefix: 'option ' Count: 1682
Selected Scenario Prefix: 'Scenario ' Count: 1661
Selected Scenario Prefix Number 1: '1: ' Count: 4586
Selected Scenario Prefix Number 1: 'One: ' Count: 4572
Selected Scenario Prefix Number 1: 'one: ' Count: 4580
Selected Scenario Prefix Number 2: '2: ' Count: 4502
Selected Scenario Prefix Number 2: 'two: ' Count: 4670
Selected Scenario Prefix Number 2: 'Two: ' Count: 4566
```
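The randomized assembly above could be sketched roughly like this (the function name, output layout, and seeding are my assumptions; the wording lists come from the counts shown):

```python
import random

PREFIXES = [
    "Choose the scenario which is better than the other.",
    "Which one is better?",
    "Please let me know what one of these you think is more ideal.",
    "Pick the more ideal situation.",
    "What scenario is better to you?",
    "What do you think is a better option?",
    "The following is two scenarios. Select which is better.",
]
SCENARIO_WORDS = ["Scenario ", "scenario ", "Option ", "option ",
                  "Choice ", "choice ", "Situation ", "situation "]
NUMBER_ONE = ["1: ", "One: ", "one: "]
NUMBER_TWO = ["2: ", "two: ", "Two: "]

def build_prompt(scenario_a, scenario_b, rng=random.Random(0)):
    word = rng.choice(SCENARIO_WORDS)  # one word prefixes both scenarios
    first, second = scenario_a, scenario_b
    if rng.random() < 0.5:
        # Presentation order is randomized.
        first, second = second, first
    return (f"{rng.choice(PREFIXES)}\n"
            f"{word}{rng.choice(NUMBER_ONE)}{first}\n"
            f"{word}{rng.choice(NUMBER_TWO)}{second}")

p = build_prompt("I helped my neighbor.", "I ignored my neighbor.")
```

Mixing digit and word number prefixes ("1: ", "One: ", "one: ") is what produces the roughly even counts listed above.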
# Paper: [Aligning AI With Shared Human Values](https://arxiv.org/pdf/2008.02275)
```
@article{hendrycks2021ethics,
title={Aligning AI With Shared Human Values},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
```
|
xzuyn/ethics-utilitarianism-dpo
|
[
"size_categories:10K<n<100K",
"language:en",
"human-feedback",
"comparison",
"rlhf",
"dpo",
"preference",
"pairwise",
"arxiv:2008.02275",
"region:us"
] |
2023-09-24T17:00:20+00:00
|
{"language": ["en"], "size_categories": ["10K<n<100K"], "tags": ["human-feedback", "comparison", "rlhf", "dpo", "preference", "pairwise"]}
|
2023-09-24T18:11:10+00:00
|
[
"2008.02275"
] |
[
"en"
] |
TAGS
#size_categories-10K<n<100K #language-English #human-feedback #comparison #rlhf #dpo #preference #pairwise #arxiv-2008.02275 #region-us
|
ETHICS Github
The dataset has been modified to include a random starting message asking for a preferred scenario. It also randomizes whether digits or words are used to indicate each scenario. I wanted to vary the quality of the instruction so that it's similar to what a user may type, so I allowed the scenario prefixes to differ from one another.
The order in which the scenarios are presented is randomized.
# Paper: Aligning AI With Shared Human Values
|
[
"# Paper: Aligning AI With Shared Human Values"
] |
[
"TAGS\n#size_categories-10K<n<100K #language-English #human-feedback #comparison #rlhf #dpo #preference #pairwise #arxiv-2008.02275 #region-us \n",
"# Paper: Aligning AI With Shared Human Values"
] |
[
53,
13
] |
[
"passage: TAGS\n#size_categories-10K<n<100K #language-English #human-feedback #comparison #rlhf #dpo #preference #pairwise #arxiv-2008.02275 #region-us \n# Paper: Aligning AI With Shared Human Values"
] |
e28317d6976a71ae6b04a456d1bfd4731981fd92
|
# Dataset Card for "questions_ForFineTune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vincenttttt/questions_ForFineTune
|
[
"region:us"
] |
2023-09-24T17:02:28+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3117, "num_examples": 5}], "download_size": 6540, "dataset_size": 3117}}
|
2023-09-24T17:02:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "questions_ForFineTune"
More Information needed
|
[
"# Dataset Card for \"questions_ForFineTune\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"questions_ForFineTune\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"questions_ForFineTune\"\n\nMore Information needed"
] |
958a3e0b32421b9f07c4c3f78aecbf98019a3402
|
# Dataset Card for COCO-Counterfactuals
<!-- Provide a quick summary of the dataset. -->
COCO-Counterfactuals is a high quality synthetic dataset for multimodal vision-language model evaluation and for training data augmentation.
## Dataset Details
### Dataset Description
COCO-Counterfactuals is a high quality synthetic dataset for multimodal vision-language model evaluation and for training data augmentation. Each COCO-Counterfactuals example includes a pair of image-text pairs, one of which is a counterfactual variation of the other. The two captions are identical except for a noun subject, and the two corresponding synthetic images differ only in that altered subject. In our accompanying paper, we showed that the COCO-Counterfactuals dataset is challenging for existing pre-trained multimodal models and significantly increases the difficulty of the zero-shot image-text retrieval and image-text matching tasks. Our experiments also demonstrate that augmenting training data with COCO-Counterfactuals improves OOD generalization on multiple downstream tasks.
- **License:** CC-BY-4.0
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://huggingface.co/datasets/Intel/COCO-Counterfactuals
- **Paper:** https://openreview.net/forum?id=7AjdHnjIHX
### Data
The captions are located in `data/examples.jsonl` and the images are located in `data/images.zip`. You can load the data as follows:
```python
from datasets import load_dataset
examples = load_dataset('Intel/COCO-Counterfactuals', use_auth_token=<YOUR USER ACCESS TOKEN>)  # recent versions of `datasets` use token= instead of use_auth_token=
```
You can get `<YOUR USER ACCESS TOKEN>` by following these steps:
1) log into your Hugging Face account
2) click on your profile picture
3) click "Settings"
4) click "Access Tokens"
5) generate an access token
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Despite the impressive recent improvements in text-to-image generation capabilities, models such as Stable Diffusion have well-known limitations that should be considered when utilizing datasets which
are derived from them. We do not foresee significant risks of security threats or human rights violations in our work. However, the automated nature of our image generation process may introduce the possibility of our COCO-Counterfactuals dataset
containing images that some individuals may consider inappropriate or offensive.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
https://openreview.net/forum?id=7AjdHnjIHX
Tiep Le and Phillip Howard contributed equally.
**BibTeX:**
```
@inproceedings{le2023cococounterfactuals,
author = {Tiep Le and Vasudev Lal and Phillip Howard},
title = {{COCO}-Counterfactuals: Automatically Constructed Counterfactual Examples for Image-Text Pairs},
booktitle = {Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year = 2023,
url={https://openreview.net/forum?id=7AjdHnjIHX},
}
```
## Dataset Card Authors
Tiep Le and Vasudev Lal and Phillip Howard
## Dataset Card Contact
[email protected]; [email protected]; [email protected]
|
Intel/COCO-Counterfactuals
|
[
"license:cc-by-4.0",
"counterfactuals",
"Data Augmentation",
"Multimodal Models",
"Transformer",
"Diffusion Models",
"Cross Attention Control",
"Prompt-to-Prompt",
"region:us"
] |
2023-09-24T17:32:16+00:00
|
{"license": "cc-by-4.0", "pretty_name": "COCO-Counterfactuals", "tags": ["counterfactuals", "Data Augmentation", "Multimodal Models", "Transformer", "Diffusion Models", "Cross Attention Control", "Prompt-to-Prompt"]}
|
2023-12-12T02:20:39+00:00
|
[] |
[] |
TAGS
#license-cc-by-4.0 #counterfactuals #Data Augmentation #Multimodal Models #Transformer #Diffusion Models #Cross Attention Control #Prompt-to-Prompt #region-us
|
# Dataset Card for COCO-Counterfactuals
COCO-Counterfactuals is a high quality synthetic dataset for multimodal vision-language model evaluation and for training data augmentation.
## Dataset Details
### Dataset Description
COCO-Counterfactuals is a high quality synthetic dataset for multimodal vision-language model evaluation and for training data augmentation. Each COCO-Counterfactuals example includes a pair of image-text pairs, one of which is a counterfactual variation of the other. The two captions are identical except for a noun subject, and the two corresponding synthetic images differ only in that altered subject. In our accompanying paper, we showed that the COCO-Counterfactuals dataset is challenging for existing pre-trained multimodal models and significantly increases the difficulty of the zero-shot image-text retrieval and image-text matching tasks. Our experiments also demonstrate that augmenting training data with COCO-Counterfactuals improves OOD generalization on multiple downstream tasks.
- License: CC-BY-4.0
### Dataset Sources
- Repository: URL
- Paper: URL
### Data
The captions are located in 'data/URL' and the images are located in 'data/URL'. You can load the data as follows:
You can get '<YOUR USER ACCESS TOKEN>' by following these steps:
1) log into your Hugging Face account
2) click on your profile picture
3) click "Settings"
4) click "Access Tokens"
5) generate an access token
## Dataset Structure
## Bias, Risks, and Limitations
Despite the impressive recent improvements in text-to-image generation capabilities, models such as Stable Diffusion have well-known limitations that should be considered when utilizing datasets which
are derived from them. We do not foresee significant risks of security threats or human rights violations in our work. However, the automated nature of our image generation process may introduce the possibility of our COCO-Counterfactuals dataset
containing images that some individuals may consider inappropriate or offensive.
URL
Tiep Le and Phillip Howard contributed equally.
BibTeX:
## Dataset Card Authors
Tiep Le and Vasudev Lal and Phillip Howard
## Dataset Card Contact
URL@URL; URL@URL; phillip.r.howard@URL
|
[
"# Dataset Card for COCO-Counterfactuals\n\n\n\nCOCO-Counterfactuals is a high quality synthetic dataset for multimodal vision-language model evaluation and for training data augmentation.",
"## Dataset Details",
"### Dataset Description\n\nCOCO-Counterfactuals is a high quality synthetic dataset for multimodal vision-language model evaluation and for training data augmentation. Each COCO-Counterfactuals example includes a pair of image-text pairs; one is a counterfactual variation of the other. The two captions are identical to each other except a noun subject. The two corresponding synthetic images differ only in terms of the altered subject in the two captions. In our accompanying paper, we showed that the COCO-Counterfactuals dataset is challenging for existing pre-trained multimodal models and significantly increase the difficulty of the zero-shot image-text retrieval and image-text matching tasks. Our experiments also demonstrate that augmenting training data with COCO-Counterfactuals improves OOD generalization on multiple downstream tasks. \n\n- License: CC-BY-4.0",
"### Dataset Sources\n\n\n\n- Repository: URL\n- Paper: URL",
"### Data\n\nThe captions are located in 'data/URL' and the images are located in 'data/URL'. You can load the data as follows:\n\nYou can get '<YOUR USER ACCESS TOKEN>' by following these steps:\n1) log into your Hugging Face account\n2) click on your profile picture\n3) click \"Settings\"\n4) click \"Access Tokens\"\n5) generate an access token",
"## Dataset Structure",
"## Bias, Risks, and Limitations\n\n\nDespite the impressive recent improvements in text-to-image generation capabilities, models such as Stable Diffusion have well-known limitations that should be considered when utilizing datasets which\nare derived from them. We do not foresee significant risks of security threats or human rights violations in our work. However, the automated nature of our image generation process may introduce the possibility of our COCO-Counterfactuals dataset\ncontaining images that some individuals may consider inappropriate or offensive.\n\n\nURL\n\nTiep Le and Phillip Howard contributed equally.\n\nBibTeX:",
"## Dataset Card Authors\n\nTiep Le and Vasudev Lal and Phillip Howard",
"## Dataset Card Contact\n\nURL@URL; URL@URL; phillip.r.howard@URL"
] |
[
"TAGS\n#license-cc-by-4.0 #counterfactuals #Data Augmentation #Multimodal Models #Transformer #Diffusion Models #Cross Attention Control #Prompt-to-Prompt #region-us \n",
"# Dataset Card for COCO-Counterfactuals\n\n\n\nCOCO-Counterfactuals is a high quality synthetic dataset for multimodal vision-language model evaluation and for training data augmentation.",
"## Dataset Details",
"### Dataset Description\n\nCOCO-Counterfactuals is a high quality synthetic dataset for multimodal vision-language model evaluation and for training data augmentation. Each COCO-Counterfactuals example includes a pair of image-text pairs; one is a counterfactual variation of the other. The two captions are identical to each other except a noun subject. The two corresponding synthetic images differ only in terms of the altered subject in the two captions. In our accompanying paper, we showed that the COCO-Counterfactuals dataset is challenging for existing pre-trained multimodal models and significantly increase the difficulty of the zero-shot image-text retrieval and image-text matching tasks. Our experiments also demonstrate that augmenting training data with COCO-Counterfactuals improves OOD generalization on multiple downstream tasks. \n\n- License: CC-BY-4.0",
"### Dataset Sources\n\n\n\n- Repository: URL\n- Paper: URL",
"### Data\n\nThe captions are located in 'data/URL' and the images are located in 'data/URL'. You can load the data as follows:\n\nYou can get '<YOUR USER ACCESS TOKEN>' by following these steps:\n1) log into your Hugging Face account\n2) click on your profile picture\n3) click \"Settings\"\n4) click \"Access Tokens\"\n5) generate an access token",
"## Dataset Structure",
"## Bias, Risks, and Limitations\n\n\nDespite the impressive recent improvements in text-to-image generation capabilities, models such as Stable Diffusion have well-known limitations that should be considered when utilizing datasets which\nare derived from them. We do not foresee significant risks of security threats or human rights violations in our work. However, the automated nature of our image generation process may introduce the possibility of our COCO-Counterfactuals dataset\ncontaining images that some individuals may consider inappropriate or offensive.\n\n\nURL\n\nTiep Le and Phillip Howard contributed equally.\n\nBibTeX:",
"## Dataset Card Authors\n\nTiep Le and Vasudev Lal and Phillip Howard",
"## Dataset Card Contact\n\nURL@URL; URL@URL; phillip.r.howard@URL"
] |
[
57,
46,
4,
209,
16,
88,
6,
140,
18,
23
] |
[
"passage: TAGS\n#license-cc-by-4.0 #counterfactuals #Data Augmentation #Multimodal Models #Transformer #Diffusion Models #Cross Attention Control #Prompt-to-Prompt #region-us \n# Dataset Card for COCO-Counterfactuals\n\n\n\nCOCO-Counterfactuals is a high quality synthetic dataset for multimodal vision-language model evaluation and for training data augmentation.## Dataset Details### Dataset Description\n\nCOCO-Counterfactuals is a high quality synthetic dataset for multimodal vision-language model evaluation and for training data augmentation. Each COCO-Counterfactuals example includes a pair of image-text pairs; one is a counterfactual variation of the other. The two captions are identical to each other except a noun subject. The two corresponding synthetic images differ only in terms of the altered subject in the two captions. In our accompanying paper, we showed that the COCO-Counterfactuals dataset is challenging for existing pre-trained multimodal models and significantly increase the difficulty of the zero-shot image-text retrieval and image-text matching tasks. Our experiments also demonstrate that augmenting training data with COCO-Counterfactuals improves OOD generalization on multiple downstream tasks. \n\n- License: CC-BY-4.0### Dataset Sources\n\n\n\n- Repository: URL\n- Paper: URL### Data\n\nThe captions are located in 'data/URL' and the images are located in 'data/URL'. You can load the data as follows:\n\nYou can get '<YOUR USER ACCESS TOKEN>' by following these steps:\n1) log into your Hugging Face account\n2) click on your profile picture\n3) click \"Settings\"\n4) click \"Access Tokens\"\n5) generate an access token## Dataset Structure"
] |
a7ff7a5fb84e1816c7e5a439d6d2f2c5f2ac3f51
|
# French Wikipedia Dataset
## Overview
This dataset is a curated collection of approximately 1.1 million French Wikipedia articles, scraped directly from the [official French Wikipedia site](https://fr.wikipedia.org/) on September 24, 2023.
There are already numerous datasets for Wikipedia, including the official one with [Wikipedia's dump](https://huggingface.co/datasets/wikipedia). Unfortunately, the text for the French version of this dataset is incomplete, lacking many elements like dates and locations.
As the saying goes, "garbage in, garbage out."
## Format
- **Type**: Text
- **File Extension**: `.txt`
## Structure
The dataset is divided into the following splits:
- `train.txt`: 3.45 GB - 1,810,000 rows - 90%
- `test.txt` : 192 MB - 100,575 rows - 5%
- `valid.txt`: 192 MB - 100,575 rows - 5%
Each article in the dataset exceeds 1400 characters in length.
## Data Cleaning and Preprocessing
The following elements have been excluded from the dataset:
- H1 - H4 Headings
- Lists
- Tables
- Sources and References
- Info box
- Banners
- LaTeX code
The text has been standardized for consistent formatting and line length. Additionally, the dataset has been filtered using the `langid` library to include only text in French. Some quotations or short terms in other languages, including non-Latin languages, may still be present.
## Exploring the Dataset
You can use the `explore_dataset.py` script to explore the dataset by randomly displaying a certain number of lines from it. The script creates and saves an index based on the line breaks, enabling faster data retrieval and display.
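A minimal version of such a newline-offset index might look like this (a sketch only; the actual `explore_dataset.py` may differ in details and function names):

```python
import random

def build_index(path: str) -> list[int]:
    """Record the byte offset of the start of every line, computed once."""
    offsets, pos = [], 0
    with open(path, "rb") as f:
        for line in f:
            offsets.append(pos)
            pos += len(line)
    return offsets

def random_lines(path: str, offsets: list[int], n: int) -> list[str]:
    """Seek directly to n random lines instead of scanning the whole file."""
    picks = random.sample(offsets, min(n, len(offsets)))
    lines = []
    with open(path, "rb") as f:
        for off in picks:
            f.seek(off)
            lines.append(f.readline().decode("utf-8").rstrip("\n"))
    return lines
```

Seeking by precomputed byte offset keeps display fast even on the multi-gigabyte `train.txt` split, since no pass over the file is needed after the index is saved.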
## Additional Information
This dataset is a subset of a larger 10GB French dataset, which also contains several thousand books and theses in French, as well as several hundred thousand Francophone news articles.
---
# WIKIPEDIA EXTRACT
Inside the `/extract_wiki/` directory, you'll find Python scripts used to extract text to compile this dataset.
## Requirements:
```python
pip install datasets aiohttp aiofiles beautifulsoup4 langid
```
## Scripts:
1. **1_extract_link.py**
```python
python 1_extract_link.py
```
Script to download the Wikipedia dataset from Hugging Face, extract URLs, and save them to a text file for further processing.
2. **2_extract_content.py**
```python
python 2_extract_content.py
```
This script retrieves the source code of Wikipedia pages based on URLs found in a text file. Instead of saving the entire HTML of the page, it trims the content, focusing on the main article section, thereby limiting the size of each record.
3. **3_extract_txt.py**
```python
python 3_extract_txt.py
```
This script extracts the text from the HTML pages and conducts tests to filter the content that should be retained or excluded. This includes language checks, special characters, numbers, etc.
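A simplified version of the kind of filtering done in `3_extract_txt.py` might look like the following. The thresholds are illustrative assumptions (only the 1400-character minimum comes from this card), and the real script additionally uses `langid` for the language check:

```python
import re

def keep_article(text: str, min_chars: int = 1400,
                 max_digit_ratio: float = 0.15,
                 max_symbol_ratio: float = 0.30) -> bool:
    """Heuristic content filter: length, digit density, special characters."""
    if len(text) < min_chars:
        return False
    # Reject number-heavy extracts (leftover tables, stats, references).
    digits = sum(c.isdigit() for c in text)
    if digits / len(text) > max_digit_ratio:
        return False
    # Reject extracts dominated by punctuation or markup residue.
    symbols = len(re.findall(r"[^\w\s]", text))
    if symbols / len(text) > max_symbol_ratio:
        return False
    return True
```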
|
eckendoerffer/wikipedia_fr
|
[
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:fr",
"license:cc-by-sa-3.0",
"wikipedia",
"wiki",
"fr.wikipedia.org",
"region:us"
] |
2023-09-24T17:40:48+00:00
|
{"language": ["fr"], "license": "cc-by-sa-3.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "tags": ["wikipedia", "wiki", "fr.wikipedia.org"]}
|
2023-09-27T17:36:03+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #size_categories-1M<n<10M #language-French #license-cc-by-sa-3.0 #wikipedia #wiki #fr.wikipedia.org #region-us
|
# French Wikipedia Dataset
## Overview
This dataset is a curated collection of approximately 1.1 million French Wikipedia articles, scraped directly from the official French Wikipedia site on September 24, 2023.
There are already numerous datasets for Wikipedia, including the official one with Wikipedia's dump. Unfortunately, the text for the French version of this dataset is incomplete, lacking many elements like dates and locations.
As the saying goes, "garbage in, garbage out."
## Format
- Type: Text
- File Extension: '.txt'
## Structure
The dataset is divided into the following splits:
- 'URL': 3.45 GB - 1,810,000 rows - 90%
- 'URL' : 192 MB - 100,575 rows - 5%
- 'URL': 192 MB - 100,575 rows - 5%
Each article in the dataset exceeds 1400 characters in length.
## Data Cleaning and Preprocessing
The following elements have been excluded from the dataset:
- H1 - H4 Headings
- Lists
- Tables
- Sources and References
- Info box
- Banners
- LaTeX code
The text has been standardized for consistent formatting and line length. Additionally, the dataset has been filtered using the 'langid' library to include only text in French. Some quotations or short terms in other languages, including non-Latin languages, may still be present.
## Exploring the Dataset
You can use the 'explore_dataset.py' script to explore the dataset by randomly displaying a certain number of lines from it. The script creates and saves an index based on the line breaks, enabling faster data retrieval and display.
## Additional Information
This dataset is a subset of a larger 10GB French dataset, which also contains several thousand books and theses in French, as well as several hundred thousand Francophone news articles.
---
# WIKIPEDIA EXTRACT
Inside the '/extract_wiki/' directory, you'll find Python scripts used to extract text to compile this dataset.
## Requirements:
## Scripts:
1. 1_extract_link.py
Script to download the Wikipedia dataset from Hugging Face, extract URLs, and save them to a text file for further processing.
2. 2_extract_content.py
This script retrieves the source code of Wikipedia pages based on URLs found in a text file. Instead of saving the entire HTML of the page, it trims the content, focusing on the main article section, thereby limiting the size of each record.
3. 3_extract_txt.py
This script extracts the text from the HTML pages and conducts tests to filter the content that should be retained or excluded. This includes language checks, special characters, numbers, etc.
|
[
"# French Wikipedia Dataset",
"## Overview\n\nThis dataset is a curated collection of approximately 1.1 million French Wikipedia articles, scraped directly from the official French Wikipedia site on September 24, 2023. \nThere are already numerous datasets for Wikipedia, including the official one with Wikipedia's dump. Unfortunately, the text for the French version of this dataset is incomplete, lacking many elements like dates and locations.\nAs the saying goes, \"garbage in, garbage out.\"",
"## Format\n\n- Type: Text\n- File Extension: '.txt'",
"## Structure\n\nThe dataset is divided into the following splits:\n\n- 'URL': 3.45 GB - 1,810,000 rows - 90%\n- 'URL' : 192 MB - 100,575 rows - 5%\n- 'URL': 192 MB - 100,575 rows - 5%\n\nEach article in the dataset exceeds 1400 characters in length.",
"## Data Cleaning and Preprocessing\n\nThe following elements have been excluded from the dataset:\n\n- H1 - H4 Headings\n- Lists\n- Tables\n- Sources and References\n- Info box\n- Banners\n- LaTeX code\n\nThe text has been standardized for consistent formatting and line length. Additionally, the dataset has been filtered using the 'langid' library to include only text in French. Some quotations or short terms in other languages, including non-Latin languages, may still be present.",
"## Exploring the Dataset\n\nYou can use the 'explore_dataset.py' script to explore the dataset by randomly displaying a certain number of lines from it. The script creates and saves an index based on the line breaks, enabling faster data retrieval and display.",
"## Additional Information\n\nThis dataset is a subset of a larger 10GB French dataset, which also contains several thousand books and theses in French, as well as several hundred thousand Francophone news articles.\n\n\n---",
"# WIKIPEDIA EXTRACT\n\nInside the '/extract_wiki/' directory, you'll find Python scripts used to extract text to compile this dataset.",
"## Requirements:",
"## Scripts:\n\n1. 1_extract_link.py\n \n Script to download the Wikipedia dataset from Hugging Face, extract URLs, and save them to a text file for further processing.\n\n2. 2_extract_content.py\n \n This script retrieves the source code of Wikipedia pages based on URLs found in a text file. Instead of saving the entire HTML of the page, it trims the content, focusing on the main article section, thereby limiting the size of each record.\n\n3. 3_extract_txt.py\n \n This script extracts the text from the HTML pages and conducts tests to filter the content that should be retained or excluded. This includes language checks, special characters, numbers, etc."
] |
[
"TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-French #license-cc-by-sa-3.0 #wikipedia #wiki #fr.wikipedia.org #region-us \n",
"# French Wikipedia Dataset",
"## Overview\n\nThis dataset is a curated collection of approximately 1.1 million French Wikipedia articles, scraped directly from the official French Wikipedia site on September 24, 2023. \nThere are already numerous datasets for Wikipedia, including the official one with Wikipedia's dump. Unfortunately, the text for the French version of this dataset is incomplete, lacking many elements like dates and locations.\nAs the saying goes, \"garbage in, garbage out.\"",
"## Format\n\n- Type: Text\n- File Extension: '.txt'",
"## Structure\n\nThe dataset is divided into the following splits:\n\n- 'URL': 3.45 GB - 1,810,000 rows - 90%\n- 'URL' : 192 MB - 100,575 rows - 5%\n- 'URL': 192 MB - 100,575 rows - 5%\n\nEach article in the dataset exceeds 1400 characters in length.",
"## Data Cleaning and Preprocessing\n\nThe following elements have been excluded from the dataset:\n\n- H1 - H4 Headings\n- Lists\n- Tables\n- Sources and References\n- Info box\n- Banners\n- LaTeX code\n\nThe text has been standardized for consistent formatting and line length. Additionally, the dataset has been filtered using the 'langid' library to include only text in French. Some quotations or short terms in other languages, including non-Latin languages, may still be present.",
"## Exploring the Dataset\n\nYou can use the 'explore_dataset.py' script to explore the dataset by randomly displaying a certain number of lines from it. The script creates and saves an index based on the line breaks, enabling faster data retrieval and display.",
"## Additional Information\n\nThis dataset is a subset of a larger 10GB French dataset, which also contains several thousand books and theses in French, as well as several hundred thousand Francophone news articles.\n\n\n---",
"# WIKIPEDIA EXTRACT\n\nInside the '/extract_wiki/' directory, you'll find Python scripts used to extract text to compile this dataset.",
"## Requirements:",
"## Scripts:\n\n1. 1_extract_link.py\n \n Script to download the Wikipedia dataset from Hugging Face, extract URLs, and save them to a text file for further processing.\n\n2. 2_extract_content.py\n \n This script retrieves the source code of Wikipedia pages based on URLs found in a text file. Instead of saving the entire HTML of the page, it trims the content, focusing on the main article section, thereby limiting the size of each record.\n\n3. 3_extract_txt.py\n \n This script extracts the text from the HTML pages and conducts tests to filter the content that should be retained or excluded. This includes language checks, special characters, numbers, etc."
] |
[
56,
5,
97,
14,
77,
116,
66,
46,
38,
6,
155
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-French #license-cc-by-sa-3.0 #wikipedia #wiki #fr.wikipedia.org #region-us \n# French Wikipedia Dataset## Overview\n\nThis dataset is a curated collection of approximately 1.1 million French Wikipedia articles, scraped directly from the official French Wikipedia site on September 24, 2023. \nThere are already numerous datasets for Wikipedia, including the official one with Wikipedia's dump. Unfortunately, the text for the French version of this dataset is incomplete, lacking many elements like dates and locations.\nAs the saying goes, \"garbage in, garbage out.\"## Format\n\n- Type: Text\n- File Extension: '.txt'## Structure\n\nThe dataset is divided into the following splits:\n\n- 'URL': 3.45 GB - 1,810,000 rows - 90%\n- 'URL' : 192 MB - 100,575 rows - 5%\n- 'URL': 192 MB - 100,575 rows - 5%\n\nEach article in the dataset exceeds 1400 characters in length.## Data Cleaning and Preprocessing\n\nThe following elements have been excluded from the dataset:\n\n- H1 - H4 Headings\n- Lists\n- Tables\n- Sources and References\n- Info box\n- Banners\n- LaTeX code\n\nThe text has been standardized for consistent formatting and line length. Additionally, the dataset has been filtered using the 'langid' library to include only text in French. Some quotations or short terms in other languages, including non-Latin languages, may still be present.## Exploring the Dataset\n\nYou can use the 'explore_dataset.py' script to explore the dataset by randomly displaying a certain number of lines from it. The script creates and saves an index based on the line breaks, enabling faster data retrieval and display.## Additional Information\n\nThis dataset is a subset of a larger 10GB French dataset, which also contains several thousand books and theses in French, as well as several hundred thousand Francophone news articles.\n\n\n---"
] |
a33f4fac588b8d167d5500b7549d7e68b0a41ffe
|
# Dataset Card for "EM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tingchih/EM
|
[
"region:us"
] |
2023-09-24T18:14:13+00:00
|
{"dataset_info": {"features": [{"name": "claim", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "origin", "dtype": "string"}, {"name": "evidence", "dtype": "string"}, {"name": "images", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 218081338, "num_examples": 37922}, {"name": "test", "num_bytes": 34882854, "num_examples": 5229}], "download_size": 68367435, "dataset_size": 252964192}}
|
2023-09-24T18:41:12+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "EM"
More Information needed
|
[
"# Dataset Card for \"EM\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"EM\"\n\nMore Information needed"
] |
[
6,
11
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"EM\"\n\nMore Information needed"
] |
ac7dbd62888e93a10df551fabd4881ac4db8ff6f
|
# Dataset Card for "macron-discours"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
arresejo/macron-discours
|
[
"region:us"
] |
2023-09-24T18:21:51+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1512085, "num_examples": 1}], "download_size": 821286, "dataset_size": 1512085}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T09:21:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "macron-discours"
More Information needed
|
[
"# Dataset Card for \"macron-discours\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"macron-discours\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"macron-discours\"\n\nMore Information needed"
] |
c17f42aba1ad62173e28d59c0bbb65646ba6a3a3
|
# Dataset Card for "foundation_action"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
anamhira/foundation_action
|
[
"region:us"
] |
2023-09-24T18:35:49+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 663896, "num_examples": 289}, {"name": "valid", "num_bytes": 8842, "num_examples": 3}], "download_size": 134650, "dataset_size": 672738}}
|
2023-11-02T20:49:47+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "foundation_action"
More Information needed
|
[
"# Dataset Card for \"foundation_action\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"foundation_action\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"foundation_action\"\n\nMore Information needed"
] |
2da3a74f2419fee634644ab50d9f1839bed82094
|
# Original Dataset: [BeaverTails](https://huggingface.co/datasets/PKU-Alignment/BeaverTails)
```json
{
'Animal Abuse': {
True: 3480,
False: 297087
},
'Child Abuse': {
True: 1664,
False: 298903
},
'Controversial Topics, Politics': {
True: 9233,
False: 291334
},
'Discrimination, Stereotype, Injustice': {
True: 24006,
False: 276561
},
'Drug Abuse, Weapons, Banned Substance': {
True: 16724,
False: 283843
},
'Financial Crime, Property Crime, Theft': {
True: 28769,
False: 271798
},
'Hate Speech, Offensive Language': {
True: 27127,
False: 273440
},
'Misinformation Regarding Ethics, Laws And Safety': {
True: 3835,
False: 296732
},
'Non Violent Unethical Behavior': {
True: 59992,
False: 240575
},
'Privacy Violation': {
True: 14774,
False: 285793
},
'Self Harm': {
True: 2024,
False: 298543
},
'Sexually Explicit, Adult Content': {
True: 6876,
False: 293691
},
'Terrorism, Organized Crime': {
True: 2457,
False: 298110
},
'Violence, Aiding And Abetting, Incitement': {
True: 79544,
False: 221023
}
}
```
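The per-category tallies above can be reproduced with a short script. A minimal sketch, assuming each record carries a `category` dict mapping harm-category name to a boolean flag (as in PKU-Alignment/BeaverTails); the sample rows here are illustrative, not real dataset rows:

```python
from collections import defaultdict

def tally_categories(records):
    """Count True/False flags per harm category across records.

    Each record is assumed to carry a `category` dict mapping
    category name -> bool, as in PKU-Alignment/BeaverTails.
    """
    counts = defaultdict(lambda: {True: 0, False: 0})
    for record in records:
        for name, flagged in record["category"].items():
            counts[name][flagged] += 1
    return dict(counts)

# Tiny illustrative sample (not real BeaverTails rows):
sample = [
    {"category": {"Animal Abuse": True, "Self Harm": False}},
    {"category": {"Animal Abuse": False, "Self Harm": False}},
]
print(tally_categories(sample))
```

Running the same tally over the full training split should recover the counts listed above.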
# Paper: [BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset](https://arxiv.org/abs/2307.04657)
```
@article{beavertails,
title = {BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset},
author = {Jiaming Ji and Mickel Liu and Juntao Dai and Xuehai Pan and Chi Zhang and Ce Bian and Chi Zhang and Ruiyang Sun and Yizhou Wang and Yaodong Yang},
journal = {arXiv preprint arXiv:2307.04657},
year = {2023}
}
```
|
xzuyn/beavertails-alpaca
|
[
"size_categories:100K<n<1M",
"arxiv:2307.04657",
"region:us"
] |
2023-09-24T18:56:48+00:00
|
{"size_categories": ["100K<n<1M"]}
|
2023-09-25T23:34:46+00:00
|
[
"2307.04657"
] |
[] |
TAGS
#size_categories-100K<n<1M #arxiv-2307.04657 #region-us
|
# Original Dataset: BeaverTails
# Paper: BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset
|
[
"# Original Dataset: BeaverTails",
"# Paper: BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset"
] |
[
"TAGS\n#size_categories-100K<n<1M #arxiv-2307.04657 #region-us \n",
"# Original Dataset: BeaverTails",
"# Paper: BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset"
] |
[
26,
10,
31
] |
[
"passage: TAGS\n#size_categories-100K<n<1M #arxiv-2307.04657 #region-us \n# Original Dataset: BeaverTails# Paper: BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset"
] |
3768bb54d82c86b62f26be791ad84138f17488fc
|
# Dataset Card for Evaluation run of stabilityai/StableBeluga1-Delta
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/stabilityai/StableBeluga1-Delta
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [stabilityai/StableBeluga1-Delta](https://huggingface.co/stabilityai/StableBeluga1-Delta) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the runs (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_stabilityai__StableBeluga1-Delta",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T02:37:55.294823](https://huggingface.co/datasets/open-llm-leaderboard/details_stabilityai__StableBeluga1-Delta/blob/main/results_2023-12-03T02-37-55.294823.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_stabilityai__StableBeluga1-Delta
|
[
"region:us"
] |
2023-09-24T19:45:37+00:00
|
{"pretty_name": "Evaluation run of stabilityai/StableBeluga1-Delta", "dataset_summary": "Dataset automatically created during the evaluation run of model [stabilityai/StableBeluga1-Delta](https://huggingface.co/stabilityai/StableBeluga1-Delta) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_stabilityai__StableBeluga1-Delta\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T02:37:55.294823](https://huggingface.co/datasets/open-llm-leaderboard/details_stabilityai__StableBeluga1-Delta/blob/main/results_2023-12-03T02-37-55.294823.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/stabilityai/StableBeluga1-Delta", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_24T20_45_33.224865", "path": ["**/details_harness|drop|3_2023-09-24T20-45-33.224865.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-24T20-45-33.224865.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_24T20_45_33.224865", "path": ["**/details_harness|gsm8k|5_2023-09-24T20-45-33.224865.parquet"]}, {"split": "2023_12_03T02_37_55.294823", "path": ["**/details_harness|gsm8k|5_2023-12-03T02-37-55.294823.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T02-37-55.294823.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_24T20_45_33.224865", "path": ["**/details_harness|winogrande|5_2023-09-24T20-45-33.224865.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-24T20-45-33.224865.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_24T20_45_33.224865", "path": ["results_2023-09-24T20-45-33.224865.parquet"]}, {"split": "2023_12_03T02_37_55.294823", "path": ["results_2023-12-03T02-37-55.294823.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T02-37-55.294823.parquet"]}]}]}
|
2023-12-03T02:38:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of stabilityai/StableBeluga1-Delta
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model stabilityai/StableBeluga1-Delta on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the runs (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-03T02:37:55.294823 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of stabilityai/StableBeluga1-Delta",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model stabilityai/StableBeluga1-Delta on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T02:37:55.294823(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of stabilityai/StableBeluga1-Delta",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model stabilityai/StableBeluga1-Delta on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T02:37:55.294823(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
20,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of stabilityai/StableBeluga1-Delta## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model stabilityai/StableBeluga1-Delta on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T02:37:55.294823(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
c8883c4cae9ab3cc090e3b6667ff38a53ed52758
|
# Dataset Card for "somos-clean-alpaca-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rovi27/somos-clean-alpaca-es
|
[
"region:us"
] |
2023-09-24T19:45:51+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "1-instruction", "dtype": "string"}, {"name": "2-input", "dtype": "string"}, {"name": "3-output", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "null"}, {"name": "annotation", "dtype": "null"}, {"name": "annotation_agent", "dtype": "null"}, {"name": "vectors", "struct": [{"name": "input", "sequence": "float64"}, {"name": "instruction", "sequence": "float64"}, {"name": "output", "sequence": "float64"}]}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "tr-flag-1-instruction", "dtype": "bool"}, {"name": "tr-flag-2-input", "dtype": "bool"}, {"name": "tr-flag-3-output", "dtype": "bool"}]}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 985217301, "num_examples": 51942}], "download_size": 651888024, "dataset_size": 985217301}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-24T19:46:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "somos-clean-alpaca-es"
More Information needed
|
[
"# Dataset Card for \"somos-clean-alpaca-es\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"somos-clean-alpaca-es\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"somos-clean-alpaca-es\"\n\nMore Information needed"
] |
e3f0b7a87aae0cf757f963e47ecacb079fc614a4
|
# Dataset Card for "translation-pt-indigenouns"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tiagoblima/translation-pt-indigenouns
|
[
"region:us"
] |
2023-09-24T19:52:11+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "translation", "struct": [{"name": "pt", "dtype": "string"}, {"name": "gub", "dtype": "string"}, {"name": "gun", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 57522705, "num_examples": 108670}, {"name": "validation", "num_bytes": 100285, "num_examples": 125}, {"name": "test", "num_bytes": 1324019, "num_examples": 1950}], "download_size": 11569330, "dataset_size": 58947009}}
|
2023-10-12T20:30:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "translation-pt-indigenouns"
More Information needed
|
[
"# Dataset Card for \"translation-pt-indigenouns\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"translation-pt-indigenouns\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"translation-pt-indigenouns\"\n\nMore Information needed"
] |
12d45c8a6fcbc35c18a067efb24d993caaf4b8a7
|
# Dataset Card for "pickapic_v2"
Please pay attention: the URLs will be temporarily unavailable, but you do not need them! The `jpg_0` and `jpg_1` fields already contain the image bytes, so by downloading the dataset you already have the images!
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
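Since the image bytes ship inside the dataset, a row's `jpg_0`/`jpg_1` fields can be written straight to disk. A minimal sketch, assuming a row shaped like the features above (the byte values below are placeholders, not real JPEG data):

```python
import tempfile
from pathlib import Path

def save_pair(row, out_dir):
    """Write a pickapic_v2 row's two candidate images to <image_uid>.jpg files."""
    out = Path(out_dir)
    paths = []
    for bytes_key, uid_key in (("jpg_0", "image_0_uid"), ("jpg_1", "image_1_uid")):
        path = out / f"{row[uid_key]}.jpg"
        path.write_bytes(row[bytes_key])  # raw JPEG bytes, no decoding needed
        paths.append(path)
    return paths

# Illustrative row with placeholder bytes (a real row holds full JPEG data):
row = {"image_0_uid": "a", "image_1_uid": "b",
       "jpg_0": b"\xff\xd8...", "jpg_1": b"\xff\xd8..."}
with tempfile.TemporaryDirectory() as d:
    p0, p1 = save_pair(row, d)
    print(p0.name, p1.name)
```

With rows loaded via `datasets.load_dataset`, the same function applies unchanged.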
|
yuvalkirstain/pickapic_v2
|
[
"region:us"
] |
2023-09-24T19:54:31+00:00
|
{"dataset_info": {"features": [{"name": "are_different", "dtype": "bool"}, {"name": "best_image_uid", "dtype": "string"}, {"name": "caption", "dtype": "string"}, {"name": "created_at", "dtype": "timestamp[ns]"}, {"name": "has_label", "dtype": "bool"}, {"name": "image_0_uid", "dtype": "string"}, {"name": "image_0_url", "dtype": "string"}, {"name": "image_1_uid", "dtype": "string"}, {"name": "image_1_url", "dtype": "string"}, {"name": "jpg_0", "dtype": "binary"}, {"name": "jpg_1", "dtype": "binary"}, {"name": "label_0", "dtype": "float64"}, {"name": "label_1", "dtype": "float64"}, {"name": "model_0", "dtype": "string"}, {"name": "model_1", "dtype": "string"}, {"name": "ranking_id", "dtype": "int64"}, {"name": "user_id", "dtype": "int64"}, {"name": "num_example_per_prompt", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 322022952127, "num_examples": 959040}, {"name": "validation", "num_bytes": 6339087542, "num_examples": 20596}, {"name": "test", "num_bytes": 6618429346, "num_examples": 20716}, {"name": "validation_unique", "num_bytes": 170578993, "num_examples": 500}, {"name": "test_unique", "num_bytes": 175368751, "num_examples": 500}], "download_size": 15603769274, "dataset_size": 335326416759}}
|
2024-01-19T07:01:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pickapic_v2"
Please pay attention: the URLs will be temporarily unavailable, but you do not need them! The jpg_0 and jpg_1 fields already contain the image bytes, so by downloading the dataset you already have the images!
More Information needed
|
[
"# Dataset Card for \"pickapic_v2\"\n\nplease pay attention - the URLs will be temporariliy unavailabe - but you do not need them! we have in jpg_0 and jpg_1 the image bytes! so by downloading the dataset you already have the images!\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pickapic_v2\"\n\nplease pay attention - the URLs will be temporariliy unavailabe - but you do not need them! we have in jpg_0 and jpg_1 the image bytes! so by downloading the dataset you already have the images!\nMore Information needed"
] |
[
6,
69
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pickapic_v2\"\n\nplease pay attention - the URLs will be temporariliy unavailabe - but you do not need them! we have in jpg_0 and jpg_1 the image bytes! so by downloading the dataset you already have the images!\nMore Information needed"
] |
9bc0ecfb2feb2697973fbec43a6e0d74b92bb139
|
# Dataset Card for "archdaily_3k_captioned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ossaili/archdaily_3k_captioned
|
[
"region:us"
] |
2023-09-24T20:06:52+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 945456196.347, "num_examples": 3091}], "download_size": 868716505, "dataset_size": 945456196.347}}
|
2023-09-24T20:07:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "archdaily_3k_captioned"
More Information needed
|
[
"# Dataset Card for \"archdaily_3k_captioned\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"archdaily_3k_captioned\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"archdaily_3k_captioned\"\n\nMore Information needed"
] |
c0c4fb7e961d4cf334e5bf1f304ad1896d359972
|
# Dataset Card for "kbd_monolingual"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
anzorq/kbd_monolingual
|
[
"region:us"
] |
2023-09-24T20:12:41+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "meta", "struct": [{"name": "source", "dtype": "string"}, {"name": "id", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 157956610, "num_examples": 18141}], "download_size": 71398445, "dataset_size": 157956610}}
|
2023-09-24T20:13:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "kbd_monolingual"
More Information needed
|
[
"# Dataset Card for \"kbd_monolingual\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"kbd_monolingual\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"kbd_monolingual\"\n\nMore Information needed"
] |
9c1440cb4fc603eb114be665fdd0302f652bf596
|
# Dataset Card for "khammash_art_50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ossaili/khammash_art_50
|
[
"region:us"
] |
2023-09-24T20:16:48+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 15267367.0, "num_examples": 54}], "download_size": 15088666, "dataset_size": 15267367.0}}
|
2023-09-24T20:16:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "khammash_art_50"
More Information needed
|
[
"# Dataset Card for \"khammash_art_50\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"khammash_art_50\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"khammash_art_50\"\n\nMore Information needed"
] |
8411418dd9fa470b9aa5357a7903c65b8a5dfd09
|
OASST-1 Tasksource dataset (English only), processed.
|
pharaouk/oasst_reward_proc
|
[
"region:us"
] |
2023-09-24T20:18:37+00:00
|
{}
|
2023-09-24T20:23:14+00:00
|
[] |
[] |
TAGS
#region-us
|
OASST-1 Tasksource dataset (English only), processed.
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
5cd57a01403f82586b40a3c45d5576f56e2b9f9a
|
This is not the dataset you are looking for.
|
pharaouk/antidataset
|
[
"region:us"
] |
2023-09-24T20:25:57+00:00
|
{}
|
2023-09-24T20:38:45+00:00
|
[] |
[] |
TAGS
#region-us
|
This is not the dataset you are looking for.
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
de09124b75b034ff98ae2b95c467fafeae12da19
|
# Dataset Card for "llm-macron"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
arresejo/llm-macron
|
[
"region:us"
] |
2023-09-24T20:30:20+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 991716.0, "num_examples": 121}, {"name": "test", "num_bytes": 114744.0, "num_examples": 14}], "download_size": 555903, "dataset_size": 1106460.0}}
|
2023-09-25T09:21:47+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llm-macron"
More Information needed
|
[
"# Dataset Card for \"llm-macron\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llm-macron\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llm-macron\"\n\nMore Information needed"
] |
4adc43bd57b7b76a30d429684e640bba6e9d25c7
|
# Dataset Card for "markpoulierart_1k_captioned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ossaili/markpoulierart_1k_captioned
|
[
"region:us"
] |
2023-09-24T20:45:01+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 280985089.17, "num_examples": 1285}], "download_size": 290567061, "dataset_size": 280985089.17}}
|
2023-09-24T20:45:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "markpoulierart_1k_captioned"
More Information needed
|
[
"# Dataset Card for \"markpoulierart_1k_captioned\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"markpoulierart_1k_captioned\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"markpoulierart_1k_captioned\"\n\nMore Information needed"
] |
bd228bb302307f867c07180c6c8235c0f989d208
|
Replication Data for: Image-based Treatment Effect Heterogeneity
Connor Thomas Jerzak, Fredrik Daniel Johansson, Adel Daoud Proceedings of the Second Conference on Causal Learning and Reasoning, PMLR 213:531-552, 2023.
UgandaDataProcessed.csv contains individual-level data from the YOP experiment. In the dataset, geo_long and geo_lat refer to the approximate geo-referenced long/lat of experimental units. The variable, geo_long_lat_key, refers to the image key associated with each location. Experimental outcomes are stored in Yobs. Treatment variable is stored in Wobs. See the tutorial for more information.
UgandaGeoKeyMat.csv contains information on keys linking to satellite images for all of Uganda for the transportability analysis.
Geo-referenced satellite images are saved in "./Uganda2000_processed/GeoKey%s_BAND%s.csv", where GeoKey%s denotes the image key associated with each observation and BAND%s refers to one of 3 bands in the satellite imagery.
For more information, see: https://github.com/cjerzak/causalimages-software/blob/main/tutorials/AnalyzeImageHeterogeneity_FullTutorial.R
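The image-file layout described above can be resolved programmatically. A minimal sketch of building the per-band CSV paths for one observation's image key, following the `GeoKey%s_BAND%s.csv` naming scheme (the key value is hypothetical, and 1-based band numbering is an assumption):

```python
from pathlib import Path

def image_band_paths(geo_key, n_bands=3, root="./Uganda2000_processed"):
    """Return the CSV path for each band of one geo-referenced image,
    following the GeoKey%s_BAND%s.csv scheme (band numbering assumed 1-based)."""
    return [Path(root) / f"GeoKey{geo_key}_BAND{band}.csv"
            for band in range(1, n_bands + 1)]

# Hypothetical image key for illustration:
for p in image_band_paths("12345"):
    print(p.name)
```

Each observation's `geo_long_lat_key` from UgandaDataProcessed.csv can be passed in as `geo_key` to locate its three band files.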
|
cjerzak/ImageHeterogeneity
|
[
"license:mit",
"region:us"
] |
2023-09-24T21:12:10+00:00
|
{"license": "mit"}
|
2023-09-25T12:57:53+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
Replication Data for: Image-based Treatment Effect Heterogeneity
Connor Thomas Jerzak, Fredrik Daniel Johansson, Adel Daoud Proceedings of the Second Conference on Causal Learning and Reasoning, PMLR 213:531-552, 2023.
URL contains individual-level data from the YOP experiment. In the dataset, geo_long and geo_lat refer to the approximate geo-referenced long/lat of experimental units. The variable, geo_long_lat_key, refers to the image key associated with each location. Experimental outcomes are stored in Yobs. Treatment variable is stored in Wobs. See the tutorial for more information.
URL contains information on keys linking to satellite images for all of Uganda for the transportability analysis.
Geo-referenced satellite images are saved in "./Uganda2000_processed/GeoKey%s_BAND%s.csv", where GeoKey%s denotes the image key associated with each observation and BAND%s refers to one of 3 bands in the satellite imagery.
For more information, see: URL
|
[] |
[
"TAGS\n#license-mit #region-us \n"
] |
[
11
] |
[
"passage: TAGS\n#license-mit #region-us \n"
] |
c9ba07076bd512faeacfce551eb21fd6a313dd4c
|
# Dataset Card for "Physics_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
YaTharThShaRma999/Physics_dataset
|
[
"region:us"
] |
2023-09-24T21:30:24+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2545558, "num_examples": 1000}], "download_size": 1198208, "dataset_size": 2545558}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-24T21:30:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Physics_dataset"
More Information needed
|
[
"# Dataset Card for \"Physics_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Physics_dataset\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Physics_dataset\"\n\nMore Information needed"
] |
ae0e45cd345ad5180639bc3c0b134e52e0d03dd4
|
# Dataset Card for "86947388"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/86947388
|
[
"region:us"
] |
2023-09-24T21:54:35+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 168, "num_examples": 10}], "download_size": 1325, "dataset_size": 168}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-24T21:54:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "86947388"
More Information needed
|
[
"# Dataset Card for \"86947388\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"86947388\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"86947388\"\n\nMore Information needed"
] |
48d6c1b8980819839f46afedc3096abed8515539
|
Full, 8K-long ToxicQA. Unprocessed. It is suggested not to use it as-is.
Use it only for alignment research. NOETI is not responsible for what you might do with it.
|
NobodyExistsOnTheInternet/toxicqa
|
[
"license:mit",
"not-for-all-audiences",
"region:us"
] |
2023-09-24T21:57:26+00:00
|
{"license": "mit", "tags": ["not-for-all-audiences"]}
|
2024-01-10T14:30:41+00:00
|
[] |
[] |
TAGS
#license-mit #not-for-all-audiences #region-us
|
Full, 8K-long ToxicQA. Unprocessed. It is suggested not to use it as-is.
Use it only for alignment research. NOETI is not responsible for what you might do with it.
|
[] |
[
"TAGS\n#license-mit #not-for-all-audiences #region-us \n"
] |
[
20
] |
[
"passage: TAGS\n#license-mit #not-for-all-audiences #region-us \n"
] |
3d8ee1cf92b2c9b7512c4bd62def3a367769a46a
|
# Dataset Card for MegaCoT
Slightly more detailed (to allow for RL) and extended re-creation of Flan's CoT submix, excluding ecqa (which contains synthetic generations).
Extra datasets not in the original CoT collection:
- [ROPES](https://huggingface.co/datasets/ropes)
- [Entailmentbank](https://github.com/allenai/entailment_bank) (excluding task 2) - edited to include random unrelated context points in some examples
- [MATH](https://huggingface.co/datasets/competition_math) (up to level 2)
- [cos_e](https://huggingface.co/datasets/cos_e)
- [feasibilityQA](https://huggingface.co/datasets/tasksource/feasibilityQA)
- [TAL-SCQ5K](https://huggingface.co/datasets/math-eval/TAL-SCQ5K)
- [Quartz](https://huggingface.co/datasets/quartz)
- [boolq](https://huggingface.co/datasets/boolq)
- [FLUTE](https://huggingface.co/datasets/ColumbiaNLP/FLUTE)
I reserve no rights to the dataset, but the original datasets were made available under various public licenses. Hence, consider each subset of this dataset to be licensed under the license of the original dataset from which it comes.
|
euclaise/MegaCoT
|
[
"region:us"
] |
2023-09-24T22:15:50+00:00
|
{"dataset_info": {"features": [{"name": "rationale", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 189154865, "num_examples": 629377}], "download_size": 90889666, "dataset_size": 189154865}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2024-01-23T02:09:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for MegaCoT
Slightly more detailed (to allow for RL) and extended re-creation of Flan's CoT submix, excluding ecqa (which contains synthetic generations).
Extra datasets not in the original CoT collection:
- ROPES
- Entailmentbank (excluding task 2) - edited to include random unrelated context points in some examples
- MATH (up to level 2)
- cos_e
- feasibilityQA
- TAL-SCQ5K
- Quartz
- boolq
- FLUTE
I reserve no rights to the dataset, but the original datasets were made available under various public licenses. Hence, consider each subset of this dataset to be licensed under the license of the original dataset from which it comes.
|
[
"# Dataset Card for MegaCoT\n\nSlightly more detailed (to allow for RL) and extended re-creation of Flan's CoT submix, excluding ecqa (which contains synthetic generations).\n\nExtra datasets not in the original CoT collection:\n- ROPES\n- Entailmentbank (excluding task 2) - edited to include random unrelated context points in some examples\n- MATH (up to level 2)\n- cos_e\n- feasibilityQA\n- TAL-SCQ5K\n- Quartz\n- boolq\n- FLUTE\n\nI reserve no rights to the dataset, but the original datasets were made available under various public licenses. Hence, consider each subset of this dataset to be licensed as the original dataset from where it comes was."
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for MegaCoT\n\nSlightly more detailed (to allow for RL) and extended re-creation of Flan's CoT submix, excluding ecqa (which contains synthetic generations).\n\nExtra datasets not in the original CoT collection:\n- ROPES\n- Entailmentbank (excluding task 2) - edited to include random unrelated context points in some examples\n- MATH (up to level 2)\n- cos_e\n- feasibilityQA\n- TAL-SCQ5K\n- Quartz\n- boolq\n- FLUTE\n\nI reserve no rights to the dataset, but the original datasets were made available under various public licenses. Hence, consider each subset of this dataset to be licensed as the original dataset from where it comes was."
] |
[
6,
176
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for MegaCoT\n\nSlightly more detailed (to allow for RL) and extended re-creation of Flan's CoT submix, excluding ecqa (which contains synthetic generations).\n\nExtra datasets not in the original CoT collection:\n- ROPES\n- Entailmentbank (excluding task 2) - edited to include random unrelated context points in some examples\n- MATH (up to level 2)\n- cos_e\n- feasibilityQA\n- TAL-SCQ5K\n- Quartz\n- boolq\n- FLUTE\n\nI reserve no rights to the dataset, but the original datasets were made available under various public licenses. Hence, consider each subset of this dataset to be licensed as the original dataset from where it comes was."
] |
02c5ad59c1d77be8353a37997884ae20a04bea87
|
# Dataset Card for "habr_prompts_5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/habr_prompts_5k
|
[
"region:us"
] |
2023-09-24T23:25:09+00:00
|
{"dataset_info": {"features": [{"name": "solution_short_llama2", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "language", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text_markdown", "dtype": "string"}, {"name": "text_html", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "original_author", "dtype": "string"}, {"name": "original_url", "dtype": "string"}, {"name": "lead_html", "dtype": "string"}, {"name": "lead_markdown", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "time_published", "dtype": "int64"}, {"name": "statistics", "struct": [{"name": "commentsCount", "dtype": "int64"}, {"name": "favoritesCount", "dtype": "int64"}, {"name": "readingCount", "dtype": "int64"}, {"name": "score", "dtype": "int64"}, {"name": "votesCount", "dtype": "int64"}, {"name": "votesCountMinus", "dtype": "int64"}, {"name": "votesCountPlus", "dtype": "int64"}]}, {"name": "labels", "sequence": "string"}, {"name": "hubs", "sequence": "string"}, {"name": "flows", "sequence": "string"}, {"name": "tags", "sequence": "string"}, {"name": "reading_time", "dtype": "int64"}, {"name": "format", "dtype": "string"}, {"name": "complexity", "dtype": "string"}, {"name": "comments", "struct": [{"name": "author", "sequence": "string"}, {"name": "children", "sequence": {"sequence": "int64"}}, {"name": "id", "sequence": "int64"}, {"name": "level", "sequence": "int64"}, {"name": "message_html", "sequence": "string"}, {"name": "message_markdown", "sequence": "string"}, {"name": "parent_id", "sequence": "int64"}, {"name": "score", "sequence": "int64"}, {"name": "time_published", "sequence": "int64"}, {"name": "votes", "sequence": "int64"}]}, {"name": "readingCount", "dtype": "int64"}, {"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1032739347, "num_examples": 5000}], "download_size": 495188038, "dataset_size": 1032739347}}
|
2023-09-25T17:21:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "habr_prompts_5k"
More Information needed
|
[
"# Dataset Card for \"habr_prompts_5k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"habr_prompts_5k\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"habr_prompts_5k\"\n\nMore Information needed"
] |
3e99da17bb8b271dd3a2e79a72e2fde4526f58fc
|
# Dataset Card for "alpaca-tw-input-output-52k"
This dataset contains Traditional Chinese instruction-following data generated by GPT-3.5 using Alpaca prompts, for fine-tuning LLMs.
The dataset was originally shared in this repository: https://github.com/ntunlplab/traditional-chinese-alpaca. This is just a wrapper for compatibility with Hugging Face's datasets library.
## Dataset structure
It contains 52K instruction-following data generated by GPT-3.5 using the same prompts as in Alpaca.
The dataset has the same format as Alpaca data, except the output is generated by GPT-3.5:
- `instruction`: `str`, describes the task the model should perform. Each of the 52K instructions is unique.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, the answer to the instruction as generated by `GPT-3.5`.
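A record with these three fields can be rendered into a training prompt. The template below is the common Alpaca-style one, given as a sketch; this card does not prescribe a particular template:

```python
def to_prompt(example: dict) -> str:
    """Render an {instruction, input, output} record as an Alpaca-style prompt."""
    if example.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. "
            "Write a response that appropriately completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )
```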
## Difference with the original Alpaca dataset
The original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses the same prompts but generates the completions with GPT-3.5. Thus, in general, the responses are of higher quality and greater length. Here is an example:
#### Example from Alpaca-GPT3:
```bash
{
'instruction': '辨識那一個平台與其他不同。',
'input': 'Twitter, Instagram, Telegram',
'output': '在Twitter、Instagram和Telegram之間,Telegram是與其他兩者最不同的平台。'
}
```
## Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
|
DavidLanz/alpaca-tw-input-output-52k
|
[
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"gpt3",
"alpaca",
"instruction-finetuning",
"region:us"
] |
2023-09-24T23:54:42+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "conversational", "question-answering"], "tags": ["gpt3", "alpaca", "instruction-finetuning"]}
|
2023-09-24T23:56:10+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #task_categories-conversational #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #gpt3 #alpaca #instruction-finetuning #region-us
|
# Dataset Card for "alpaca-tw-input-output-52k"
This dataset contains Traditional Chinese instruction-following data generated by GPT-3.5 using Alpaca prompts, for fine-tuning LLMs.
The dataset was originally shared in this repository: URL This is just a wrapper for compatibility with Hugging Face's datasets library.
## Dataset structure
It contains 52K instruction-following data generated by GPT-3.5 using the same prompts as in Alpaca.
The dataset has the same format as Alpaca data, except the output is generated by GPT-3.5:
- 'instruction': 'str', describes the task the model should perform. Each of the 52K instructions is unique.
- 'input': 'str', optional context or input for the task.
- 'output': 'str', the answer to the instruction as generated by 'GPT-3.5'.
## Difference with the original Alpaca dataset
The original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses the same prompts but generates the completions with GPT-3.5. Thus, in general, the responses are of higher quality and greater length. Here is an example:
#### Example from Alpaca-GPT3:
## Licensing Information
The dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0).
|
[
"# Dataset Card for \"alpaca-tw-input-output-52k\"\n\nThis dataset contains English Instruction-Following generated by GPT-3.5 using Alpaca prompts for fine-tuning LLMs.\n\nThe dataset was originaly shared in this repository: URL This is just a wraper for compatibility with huggingface's datasets library.",
"## Dataset structure\n\nIt contains 52K instruction-following data generated by GPT-3.5 using the same prompts as in Alpaca.\nThe dataset has the same format as Alpaca data, except the output is generated by GPT-3.5:\n\n - 'instruction': 'str', describes the task the model should perform. Each of the 52K instructions is unique.\n - 'input': 'str', optional context or input for the task. \n - 'output': 'str', the answer to the instruction as generated by 'GPT-3.5'.",
"## Difference with the original Alpaca dataset\n\nThe original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generating the completions with GPT-3.5. Thus, in general, the responses are of higher quality and lenght. Here is an example:",
"#### Example from Alpaca-GPT3:",
"## Licensing Information\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)."
] |
[
"TAGS\n#task_categories-text-generation #task_categories-conversational #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #gpt3 #alpaca #instruction-finetuning #region-us \n",
"# Dataset Card for \"alpaca-tw-input-output-52k\"\n\nThis dataset contains English Instruction-Following generated by GPT-3.5 using Alpaca prompts for fine-tuning LLMs.\n\nThe dataset was originaly shared in this repository: URL This is just a wraper for compatibility with huggingface's datasets library.",
"## Dataset structure\n\nIt contains 52K instruction-following data generated by GPT-3.5 using the same prompts as in Alpaca.\nThe dataset has the same format as Alpaca data, except the output is generated by GPT-3.5:\n\n - 'instruction': 'str', describes the task the model should perform. Each of the 52K instructions is unique.\n - 'input': 'str', optional context or input for the task. \n - 'output': 'str', the answer to the instruction as generated by 'GPT-3.5'.",
"## Difference with the original Alpaca dataset\n\nThe original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generating the completions with GPT-3.5. Thus, in general, the responses are of higher quality and lenght. Here is an example:",
"#### Example from Alpaca-GPT3:",
"## Licensing Information\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)."
] |
[
77,
87,
129,
74,
12,
25
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-conversational #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #gpt3 #alpaca #instruction-finetuning #region-us \n# Dataset Card for \"alpaca-tw-input-output-52k\"\n\nThis dataset contains English Instruction-Following generated by GPT-3.5 using Alpaca prompts for fine-tuning LLMs.\n\nThe dataset was originaly shared in this repository: URL This is just a wraper for compatibility with huggingface's datasets library.## Dataset structure\n\nIt contains 52K instruction-following data generated by GPT-3.5 using the same prompts as in Alpaca.\nThe dataset has the same format as Alpaca data, except the output is generated by GPT-3.5:\n\n - 'instruction': 'str', describes the task the model should perform. Each of the 52K instructions is unique.\n - 'input': 'str', optional context or input for the task. \n - 'output': 'str', the answer to the instruction as generated by 'GPT-3.5'.## Difference with the original Alpaca dataset\n\nThe original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generating the completions with GPT-3.5. Thus, in general, the responses are of higher quality and lenght. Here is an example:#### Example from Alpaca-GPT3:## Licensing Information\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)."
] |
83c22a7f185430937fa8b4cbd39c1f2d4c0e5df1
|
# Dataset Card for "MiniCoT"
Subset of [MegaCoT](https://huggingface.co/datasets/euclaise/MegaCoT) that excludes cos_e and creak (since they have some lower-quality annotations). The datasets included are GSM8K, SenMaking, qasc, ROPES, Entailmentbank, MATH, feasibilityQA, TAL-SCQ5K, aqua_rat, Quartz, a filtered subset of esnli, FLUTE, and StrategyQA.
I reserve no rights to the dataset, but the original datasets were made available under various public licenses. Hence, consider each subset of this dataset to be licensed under the license of the original dataset from which it comes.
|
euclaise/MiniCoT
|
[
"task_categories:question-answering",
"size_categories:10K<n<100K",
"chain-of-thought",
"cot",
"region:us"
] |
2023-09-25T00:09:54+00:00
|
{"size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "pretty_name": "MiniCoT", "dataset_info": {"features": [{"name": "rationale", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 57000705, "num_examples": 128562}], "download_size": 31364563, "dataset_size": 57000705}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["chain-of-thought", "cot"]}
|
2024-01-23T02:09:43+00:00
|
[] |
[] |
TAGS
#task_categories-question-answering #size_categories-10K<n<100K #chain-of-thought #cot #region-us
|
# Dataset Card for "MiniCoT"
Subset of MegaCoT that excludes cos_e and creak (since they have some lower-quality annotations). The datasets included are GSM8K, SenMaking, qasc, ROPES, Entailmentbank, MATH, feasibilityQA, TAL-SCQ5K, aqua_rat, Quartz, a filtered subset of esnli, FLUTE, and StrategyQA.
I reserve no rights to the dataset, but the original datasets were made available under various public licenses. Hence, consider each subset of this dataset to be licensed under the license of the original dataset from which it comes.
|
[
"# Dataset Card for \"MiniCoT\"\n\nSubset of MegaCoT that excludes cos_e, and creak (since they have some lower-quality annotations). The datasets included are GSM8K, SenMaking, qasc, ROPES, Entailmentbank, MATH, feasibilityQA, TAL-SCQ5K, aqua_rat, Quartz, a filtered subset of esnli, FLUTE, and StrategyQA.\n\nI reserve no rights to the dataset, but the original datasets were made available under various public licenses. Hence, consider each subset of this dataset to be licensed as the original dataset from where it comes was."
] |
[
"TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #chain-of-thought #cot #region-us \n",
"# Dataset Card for \"MiniCoT\"\n\nSubset of MegaCoT that excludes cos_e, and creak (since they have some lower-quality annotations). The datasets included are GSM8K, SenMaking, qasc, ROPES, Entailmentbank, MATH, feasibilityQA, TAL-SCQ5K, aqua_rat, Quartz, a filtered subset of esnli, FLUTE, and StrategyQA.\n\nI reserve no rights to the dataset, but the original datasets were made available under various public licenses. Hence, consider each subset of this dataset to be licensed as the original dataset from where it comes was."
] |
[
39,
156
] |
[
"passage: TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #chain-of-thought #cot #region-us \n# Dataset Card for \"MiniCoT\"\n\nSubset of MegaCoT that excludes cos_e, and creak (since they have some lower-quality annotations). The datasets included are GSM8K, SenMaking, qasc, ROPES, Entailmentbank, MATH, feasibilityQA, TAL-SCQ5K, aqua_rat, Quartz, a filtered subset of esnli, FLUTE, and StrategyQA.\n\nI reserve no rights to the dataset, but the original datasets were made available under various public licenses. Hence, consider each subset of this dataset to be licensed as the original dataset from where it comes was."
] |
06cd752ca8aeb548eec4782af649cc924f828058
|
# Open Syllabus - tagged by category via libgen
### Dataset Summary
This dataset is a scrape of explore.opensyllabus.com book titles, authors, etc., tagged by matching the same titles against libgen's database.
This dataset is ideally suited for automated synthetic textbook generation.
## Considerations for Using the Data
Do not use this dataset for anything illegal. This is meant as a reference point for further development of open source AI.
|
benxh/opensyllabus-tagged-libgen
|
[
"task_categories:text-classification",
"language:en",
"Scrape",
"Open Syllabus",
"Libgen",
"region:us"
] |
2023-09-25T00:29:59+00:00
|
{"language": ["en"], "task_categories": ["text-classification"], "tags": ["Scrape", "Open Syllabus", "Libgen"]}
|
2023-09-25T00:36:49+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #language-English #Scrape #Open Syllabus #Libgen #region-us
|
# Open Syllabus - tagged by category via libgen
### Dataset Summary
This dataset is a scrape of URL book titles, authors, etc., tagged by matching the same titles against libgen's database.
This dataset is ideally suited for automated synthetic textbook generation.
## Considerations for Using the Data
Do not use this dataset for anything illegal. This is meant as a reference point for further development of open source AI.
|
[
"# Open Syllabus - tagged by category via libgen",
"### Dataset Summary\n\nThis dataset is a scrape of URL book titles, authors, etc. tagged with finding the same titles on libgen's database.\nThe ideal choice of using this dataset is for automated synthetic textbook generation.",
"## Considerations for Using the Data\n\nDo not use this dataset for anything illegal. This is meant as a reference point for further development of open source AI."
] |
[
"TAGS\n#task_categories-text-classification #language-English #Scrape #Open Syllabus #Libgen #region-us \n",
"# Open Syllabus - tagged by category via libgen",
"### Dataset Summary\n\nThis dataset is a scrape of URL book titles, authors, etc. tagged with finding the same titles on libgen's database.\nThe ideal choice of using this dataset is for automated synthetic textbook generation.",
"## Considerations for Using the Data\n\nDo not use this dataset for anything illegal. This is meant as a reference point for further development of open source AI."
] |
[
33,
12,
57,
33
] |
[
"passage: TAGS\n#task_categories-text-classification #language-English #Scrape #Open Syllabus #Libgen #region-us \n# Open Syllabus - tagged by category via libgen### Dataset Summary\n\nThis dataset is a scrape of URL book titles, authors, etc. tagged with finding the same titles on libgen's database.\nThe ideal choice of using this dataset is for automated synthetic textbook generation.## Considerations for Using the Data\n\nDo not use this dataset for anything illegal. This is meant as a reference point for further development of open source AI."
] |
fe301fa6849e5f69720afdec5f7129fca9f2db02
|
# Dataset Card for "oscar-unshuffled_deduplicated_id_1m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
akahana/oscar-unshuffled_deduplicated_id_1m
|
[
"region:us"
] |
2023-09-25T01:11:18+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1783096235, "num_examples": 1000000}], "download_size": 1002709186, "dataset_size": 1783096235}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T01:16:12+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "oscar-unshuffled_deduplicated_id_1m"
More Information needed
|
[
"# Dataset Card for \"oscar-unshuffled_deduplicated_id_1m\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"oscar-unshuffled_deduplicated_id_1m\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"oscar-unshuffled_deduplicated_id_1m\"\n\nMore Information needed"
] |
ead7f0c6af9733d5781b14e0f6365b877e1a868b
|
# Dataset Card for "wikipedia-ja-20230720-50k"
This is data extracted randomly from [izumi-lab/wikipedia-ja-20230720](https://huggingface.co/datasets/izumi-lab/wikipedia-ja-20230720), consisting of 50,000 records.
[izumi-lab/wikipedia-ja-20230720](https://huggingface.co/datasets/izumi-lab/wikipedia-ja-20230720)からデータを50k分ランダムに抽出したデータです。
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
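A uniform random draw of 50k records like the one described above could be sketched as follows (the seed and helper are assumptions for illustration, not the actual extraction script):

```python
import random

def sample_records(records, k=50_000, seed=0):
    # Uniform sample without replacement; deterministic given the seed.
    rng = random.Random(seed)
    return rng.sample(records, min(k, len(records)))
```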
|
mmnga/wikipedia-ja-20230720-50k
|
[
"region:us"
] |
2023-09-25T01:12:49+00:00
|
{"dataset_info": {"features": [{"name": "curid", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 134082445.03326812, "num_examples": 50000}], "download_size": 78354971, "dataset_size": 134082445.03326812}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T07:19:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wikipedia-ja-20230720-50k"
This is data extracted randomly from izumi-lab/wikipedia-ja-20230720, consisting of 50,000 records.
izumi-lab/wikipedia-ja-20230720からデータを50k分ランダムに抽出したデータです。
More Information needed
|
[
"# Dataset Card for \"wikipedia-ja-20230720-50k\"\n\nThis is data extracted randomly from izumi-lab/wikipedia-ja-20230720, consisting of 50,000 records. \n\nizumi-lab/wikipedia-ja-20230720からデータを50k分ランダムに抽出したデータです。 \n\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wikipedia-ja-20230720-50k\"\n\nThis is data extracted randomly from izumi-lab/wikipedia-ja-20230720, consisting of 50,000 records. \n\nizumi-lab/wikipedia-ja-20230720からデータを50k分ランダムに抽出したデータです。 \n\n\nMore Information needed"
] |
[
6,
69
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wikipedia-ja-20230720-50k\"\n\nThis is data extracted randomly from izumi-lab/wikipedia-ja-20230720, consisting of 50,000 records. \n\nizumi-lab/wikipedia-ja-20230720からデータを50k分ランダムに抽出したデータです。 \n\n\nMore Information needed"
] |
31bbc780ffa8be52bee7adf9402a4f7e9d139024
|
# Dataset Card for "oscar-unshuffled_deduplicated_id_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
akahana/oscar-unshuffled_deduplicated_id_1000
|
[
"region:us"
] |
2023-09-25T01:18:13+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1934976, "num_examples": 1000}], "download_size": 1086069, "dataset_size": 1934976}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T01:24:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "oscar-unshuffled_deduplicated_id_1000"
More Information needed
|
[
"# Dataset Card for \"oscar-unshuffled_deduplicated_id_1000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"oscar-unshuffled_deduplicated_id_1000\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"oscar-unshuffled_deduplicated_id_1000\"\n\nMore Information needed"
] |
1886b13a1802a5e3aafea4ebf5962e8113d3f20e
|
# Dataset Card for "oscar-unshuffled_deduplicated_id_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
akahana/oscar-unshuffled_deduplicated_id_100
|
[
"region:us"
] |
2023-09-25T01:20:25+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 158667, "num_examples": 100}], "download_size": 94516, "dataset_size": 158667}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T01:25:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "oscar-unshuffled_deduplicated_id_100"
More Information needed
|
[
"# Dataset Card for \"oscar-unshuffled_deduplicated_id_100\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"oscar-unshuffled_deduplicated_id_100\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"oscar-unshuffled_deduplicated_id_100\"\n\nMore Information needed"
] |
7455a9f62d934ab39c8b6e861f9d57120b110acf
|
# Dataset Card for "oscar-unshuffled_deduplicated_id_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
akahana/oscar-unshuffled_deduplicated_id_10k
|
[
"region:us"
] |
2023-09-25T01:28:30+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18527241, "num_examples": 10000}], "download_size": 10371685, "dataset_size": 18527241}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T01:29:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "oscar-unshuffled_deduplicated_id_10k"
More Information needed
|
[
"# Dataset Card for \"oscar-unshuffled_deduplicated_id_10k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"oscar-unshuffled_deduplicated_id_10k\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"oscar-unshuffled_deduplicated_id_10k\"\n\nMore Information needed"
] |
1e59f75e9eda2ab95744819c2e4588aff50ed391
|
# Dataset Card for "Master_Train_Test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Brecon/Master_Train_Test
|
[
"region:us"
] |
2023-09-25T01:29:16+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 446853.7995594714, "num_examples": 363}, {"name": "test", "num_bytes": 112021.20044052863, "num_examples": 91}], "download_size": 319014, "dataset_size": 558875.0}}
|
2023-09-25T01:29:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Master_Train_Test"
More Information needed
|
[
"# Dataset Card for \"Master_Train_Test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Master_Train_Test\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Master_Train_Test\"\n\nMore Information needed"
] |
3321fe8f314b5feeb866dbe2cfb81375da424ff0
|
# Dataset Card for "oscar-unshuffled_deduplicated_id_100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
akahana/oscar-unshuffled_deduplicated_id_100k
|
[
"region:us"
] |
2023-09-25T01:30:12+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 184321644, "num_examples": 100000}], "download_size": 103198766, "dataset_size": 184321644}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T01:31:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "oscar-unshuffled_deduplicated_id_100k"
More Information needed
|
[
"# Dataset Card for \"oscar-unshuffled_deduplicated_id_100k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"oscar-unshuffled_deduplicated_id_100k\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"oscar-unshuffled_deduplicated_id_100k\"\n\nMore Information needed"
] |
50760841255bbe315b9dcf77223c40ed044b7670
|
# Dataset Card for "belle_resampled_78K_CN"
A heuristically filtered version of [BelleGroup/train_3.5M_CN](https://huggingface.co/datasets/BelleGroup/train_3.5M_CN), with obvious errors and low-quality content removed from the corpus.
The data was then uniformly sampled, taking the 6000 longest conversations from each category.
## Examples
```
{
"id":"66623028",
"conversations":[
{
"from":"human",
"value":"归纳介绍三种不同类型的电影题材"
},
{
"from":"assistant",
"value":"1. 动作片:动作片通常强调动作和战斗场面,故事情节紧凑,充满了紧张和兴奋的氛围。主角一般是具有超凡实力或特殊技能的英雄人物,往往会与反派角色进行一系列激烈的格斗,追逐和爆破等酷炫场面。例如《速度与激情》和《复仇者联盟》等。\n2. 爱情片:爱情片是以爱情为主题的电影,着重呈现角色之间的感情纠葛,展现了爱情的美好和悲壮。情节与角色之间的发展令人深思,增加了观众的感性体验。这种电影的故事情节主要以人类与人类之间的情感变化为主题。例如《泰坦尼克号》和《让子弹飞》等。\n3. 科幻片:科幻片是以未来科技和未知世界为主题的电影,通常包括时间旅行、外星人、虚拟现实、未来社会等各种奇幻的元素。这种电影描绘了一种比现实更加宏伟、更加神秘和惊奇的世界。例如《星际穿越》和《发条橙》等。"
}
]
}
{
"id":"18205437",
"conversations":[
{
"from":"human",
"value":"根据给定的食材,编写一道美食菜谱。\n鸡腿肉、蘑菇、洋葱"
},
{
"from":"assistant",
"value":"鸡肉炒蘑菇洋葱\n食材:\n- 鸡腿肉 400克,去骨去皮切成小块\n- 蘑菇 200克,切片\n- 洋葱 1个,切丝\n- 生姜 适量,切丝\n- 酱油 2勺\n- 料酒 1勺\n- 盐 适量\n- 白胡椒粉 适量\n- 植物油 适量\n做法:\n1. 在一个碗里放入切好的鸡腿肉块,加入酱油、料酒和适量的白胡椒粉,搅拌均匀,腌制10分钟。\n2. 热锅凉油,加入姜丝翻炒,放入腌好的鸡肉块翻炒至变色。\n3. 加入洋葱丝和蘑菇,翻炒2-3分钟至蘑菇变软。\n4. 加入适量的盐和少量的水,盖上锅盖焖煮5分钟直至鸡肉熟透。\n5. 最后翻炒几下,使汤汁变浓稠即可。\n温馨提示:\n1. 在腌制鸡肉时,如果想让肉质更嫩滑,可以加入少量的淀粉和蛋清。\n2. 可以根据个人口味加入其他调料,如蒜末、辣椒等。\n3. 洋葱和蘑菇可以用其他蔬菜代替,如青椒、胡萝卜等。"
},
{
"from":"human",
"value":"太好了,我会试着照着你教的做一次,但是我最近买了一部新相机,想让你推荐一些好的拍照地点。"
},
{
"from":"assistant",
"value":"当地的风景区、公园、景点等等都是比较适合拍照的地点。如果你想要一些特别的地方,可以考虑一些建筑群,街道巷子、咖啡馆、书店等具有文艺气息的地方。"
},
{
"from":"human",
"value":"嗯,这些地方都很不错,但是我还想找一些可以体验当地文化和风俗的地点,你有什么好的建议吗?"
},
{
"from":"assistant",
"value":"如果你想要更深入地了解当地文化和风俗,可以考虑参加一些游学活动,比如参观当地的博物馆、参加民俗节庆、尝试当地美食等等。此外,也可以去一些小城镇,感受那里的古朴风情。"
}
]
}
```
### Fields:
```
id: record id
conversations: the data content, given in dialogue form; includes both multi-turn and single-turn conversations
category: data category
```
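A minimal sketch (not part of the dataset tooling) of how one might flatten a record's multi-turn `conversations` into (human, assistant) training pairs; the abbreviated record below mirrors the sample format shown above:

```python
def to_turn_pairs(record):
    """Pair each 'human' message with the 'assistant' reply that follows it."""
    turns = record["conversations"]
    pairs = []
    for i in range(0, len(turns) - 1, 2):
        if turns[i]["from"] == "human" and turns[i + 1]["from"] == "assistant":
            pairs.append((turns[i]["value"], turns[i + 1]["value"]))
    return pairs

# Abbreviated version of the second sample record above.
record = {
    "id": "18205437",
    "conversations": [
        {"from": "human", "value": "根据给定的食材,编写一道美食菜谱。"},
        {"from": "assistant", "value": "鸡肉炒蘑菇洋葱 ..."},
        {"from": "human", "value": "推荐一些好的拍照地点。"},
        {"from": "assistant", "value": "当地的风景区、公园、景点等等 ..."},
    ],
}

pairs = to_turn_pairs(record)
print(len(pairs))  # 2 turn pairs
```

This assumes turns strictly alternate human/assistant, as in the samples above; records that deviate would need a more defensive walk over the list.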
|
larryvrh/belle_resampled_78K_CN
|
[
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:zh",
"region:us"
] |
2023-09-25T01:35:33+00:00
|
{"language": ["zh"], "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "conversational"], "dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "id", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 197491538, "num_examples": 78000}], "download_size": 108247334, "dataset_size": 197491538}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-26T04:01:58+00:00
|
[] |
[
"zh"
] |
TAGS
#task_categories-text-generation #task_categories-conversational #size_categories-10K<n<100K #language-Chinese #region-us
|
# Dataset Card for "belle_resampled_78K_CN"
A heuristically filtered version of BelleGroup/train_3.5M_CN, with obvious errors and low-quality content removed from the corpus.
The data was then uniformly sampled, taking the 6000 longest conversations from each category.
## Examples
### Fields:
|
[
"# Dataset Card for \"belle_resampled_78K_CN\"\n\n通过启发式方法过滤后的BelleGroup/train_3.5M_CN,移除了语料中明显的错误和低质量内容。\n\n之后进行了均匀采样,从每个类别中提取出6000条最长的对话。",
"## 样例",
"### 字段:"
] |
[
"TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-10K<n<100K #language-Chinese #region-us \n",
"# Dataset Card for \"belle_resampled_78K_CN\"\n\n通过启发式方法过滤后的BelleGroup/train_3.5M_CN,移除了语料中明显的错误和低质量内容。\n\n之后进行了均匀采样,从每个类别中提取出6000条最长的对话。",
"## 样例",
"### 字段:"
] |
[
44,
72,
4,
6
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-10K<n<100K #language-Chinese #region-us \n# Dataset Card for \"belle_resampled_78K_CN\"\n\n通过启发式方法过滤后的BelleGroup/train_3.5M_CN,移除了语料中明显的错误和低质量内容。\n\n之后进行了均匀采样,从每个类别中提取出6000条最长的对话。## 样例### 字段:"
] |
3f2c2d6b176fcf0aba92cda95d78662ab17348e2
|
JennyRodmin images
|
sebarodri12/JennyRodmin
|
[
"JennyRodmin",
"Ecuadorian Woman",
"region:us"
] |
2023-09-25T01:55:45+00:00
|
{"tags": ["JennyRodmin", "Ecuadorian Woman"]}
|
2023-09-25T01:58:20+00:00
|
[] |
[] |
TAGS
#JennyRodmin #Ecuadorian Woman #region-us
|
JennyRodmin images
|
[] |
[
"TAGS\n#JennyRodmin #Ecuadorian Woman #region-us \n"
] |
[
19
] |
[
"passage: TAGS\n#JennyRodmin #Ecuadorian Woman #region-us \n"
] |
e9eb0fb932c1e0fdecebfffcb1a191970199146e
|
# Dataset Card for "0c7ab6c4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/0c7ab6c4
|
[
"region:us"
] |
2023-09-25T02:27:53+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 166, "num_examples": 10}], "download_size": 1324, "dataset_size": 166}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T02:27:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "0c7ab6c4"
More Information needed
|
[
"# Dataset Card for \"0c7ab6c4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"0c7ab6c4\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"0c7ab6c4\"\n\nMore Information needed"
] |
2509d0e15780aee12c7d29ad62eca9116a0438ed
|
# Dataset Card for "2b270b48"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/2b270b48
|
[
"region:us"
] |
2023-09-25T02:43:57+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 162, "num_examples": 10}], "download_size": 1306, "dataset_size": 162}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T02:43:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "2b270b48"
More Information needed
|
[
"# Dataset Card for \"2b270b48\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"2b270b48\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"2b270b48\"\n\nMore Information needed"
] |
71768c208f7494c6e04c60e4244b0f1d0d84d0ee
|
# Dataset of Oumae Kumiko
This is the dataset of Oumae Kumiko, containing 300 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 300 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 634 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 300 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 300 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 300 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 300 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 300 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 634 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 634 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 634 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/oumae_kumiko_soundeuphonium
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T02:51:54+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-25T02:56:29+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Oumae Kumiko
=======================
This is the dataset of Oumae Kumiko, containing 300 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
33e8c34303e801d3f9d19a50a884db6e6ade229e
|
<span style="color:red"> 🔴 IMPORTANT❗🔴 HAE-RAE is now LM-Eval supported❗ We are adding evaluation results to our documentation so let us know if you want your model to be added.</span>
The HAE_RAE_BENCH is an ongoing project to develop a suite of evaluation tasks designed to test
models' understanding of Korean cultural and contextual nuances.
It currently comprises 13 distinct tasks, with a total of 4900 instances.
Please note that although this repository contains datasets from the original [HAE-RAE BENCH paper](https://arxiv.org/abs/2309.02706),
the contents are not completely identical. Specifically, the reading comprehension subset from the original version has been removed due to copyright constraints.
In its place, an updated reading comprehension subset has been introduced, sourced from the CSAT, the Korean university entrance examination.
To replicate the studies from the paper, please use this [code](https://github.com/EleutherAI/lm-evaluation-harness/blob/master/lm_eval/tasks/haerae.py).
For the latest version of the code, refer to [this](https://github.com/guijinSON/HAE-RAE-Bench.v2/blob/main/HAE_RAE_Bench_Evaluation.ipynb).
### Dataset Overview
| Task | Instances | Version | Explanation |
|-----------------------------|-----------|---------|---------------------------------------------------------------------|
| standard_nomenclature | 153 | v1.0 | Multiple-choice questions about Korean standard nomenclatures from NIKL. |
| loan_word | 169 | v1.0 | Multiple-choice questions about Korean loan words from NIKL. |
| rare_word | 405 | v1.0 | Multiple-choice questions about rare Korean words from NIKL. |
| general_knowledge | 176 | v1.0 | Multiple-choice questions on Korean cultural knowledge. |
| history | 188 | v1.0 | Multiple-choice questions on Korean history. |
| date_understanding | 475 | v1.1 | Multiple-choice questions about understanding Korean date expressions. |
| proverbs_denoising | 672 | v1.1 | Questions for generating Korean proverbs. |
| correct_definition_matching | 439 | v1.1 | Multiple-choice questions on matching the correct definition to words in context. |
| lyrics_denoising | 622 | v1.1 | Questions for reconstructing Korean pop song lyrics. |
| reading_comprehension | 936 | v1.1 | Multiple-choice questions on Korean reading comprehension from the CSAT. |
| csat_law | 217 | v1.1 | Multiple-choice questions on Korean law from the CSAT. |
| csat_geo | 150 | v1.1 | Multiple-choice questions on Korean geography from the CSAT. |
| csat_socio | 298 | v1.1 | Multiple-choice questions on Korean social studies from the CSAT. |
| **Total** | **4900** | | |
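As a quick sanity check, the per-task counts above can be tallied in a few lines; the task names below come straight from the table, and most double as `datasets` config names in this repo (a few differ slightly, e.g. the configs use the plural `loan_words`):

```python
# Per-task instance counts copied from the Dataset Overview table.
TASKS = {
    "standard_nomenclature": 153, "loan_word": 169, "rare_word": 405,
    "general_knowledge": 176, "history": 188, "date_understanding": 475,
    "proverbs_denoising": 672, "correct_definition_matching": 439,
    "lyrics_denoising": 622, "reading_comprehension": 936,
    "csat_law": 217, "csat_geo": 150, "csat_socio": 298,
}
total = sum(TASKS.values())
print(len(TASKS), total)  # 13 tasks, 4900 instances

# To load one subset (requires network and the `datasets` package):
# from datasets import load_dataset
# ds = load_dataset("HAERAE-HUB/HAE_RAE_BENCH", "csat_law", split="test")
```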
### Evaluation Results
| Models | correct_definition_matching| csat_geo | csat_law | csat_socio | date_understanding | general_knowledge | history | loan_words | reading_comprehension | rare_words | standard_nomenclature |
|----------|---------|----------|----------|----------|--------|--------|--------|--------|--------|--------|--------|
| daekeun-ml/Llama-2-ko-DPO-13B | 0.5421 | 0.1800 | 0.1613 | 0.2181 | 0.4905 | 0.3523 | 0.7500 | 0.8107 | 0.2382 | 0.6963 | 0.7908 |
### How to use (LM-Eval-Harness)
```Python
!git clone https://github.com/guijinSON/lm-evaluation-harness.git
!pip install sentencepiece
%cd lm-evaluation-harness
!pip install -e .
!pip install -e ".[multilingual]"
!pip install huggingface_hub
!python -c "from huggingface_hub.hf_api import HfFolder; HfFolder.save_token('<YOUR_HF_TOKEN>')"
!python main.py \
--model hf-causal \
--model_args pretrained=daekeun-ml/Llama-2-ko-DPO-13B,dtype=bfloat16\
--num_fewshot 1\
--batch_size 2\
--tasks hr2_cdm,hr2_cgeo,hr2_claw,hr2_csoc,hr2_du,hr2_gk,hr2_hi,hr2_lw,hr2_rw,hr2_rc,hr2_sn\
--alteration ""\
--device cuda:0
```
### Release Notes
__2023.12.03__: All errors fixed! 11 tasks are now available via LM-Eval Harness; refer to the code above to run the evaluation. (List of available tasks: correct_definition_matching, csat_geo, csat_law, csat_socio, date_understanding, general_knowledge, history, loan_words, rare_words, reading_comprehension, standard_nomenclature)
__2023.11.06__: 3 tasks added (csat_geo, csat_law, csat_socio)
__2023.09.28__: [LM-Eval-Harness](https://github.com/EleutherAI/lm-evaluation-harness) support added for the following 8 tasks:
Loan Words, Rare Words, Standard Nomenclature, History, General Knowledge, correct_definition_matching, date_understanding, reading_comprehension.
Refer to the following [document](https://github.com/guijinSON/HAE-RAE-Bench.v2/blob/main/HAE_RAE_Bench_Evaluation.ipynb) to run the evaluation yourself.
__2023.09.16__: 10 tasks added: 5 from the original HAE-RAE Bench (Loan Words, Rare Words, Standard Nomenclature, History, General Knowledge) and
5 new tasks (correct_definition_matching, date_understanding, lyrics_denoising, proverbs_denoising, reading_comprehension).
### Point of Contact
For any questions, contact us via the following email:
```
[email protected]
```
|
HAERAE-HUB/HAE_RAE_BENCH
|
[
"task_categories:multiple-choice",
"language:ko",
"license:cc-by-nc-nd-4.0",
"arxiv:2309.02706",
"region:us"
] |
2023-09-25T03:16:13+00:00
|
{"language": ["ko"], "license": "cc-by-nc-nd-4.0", "task_categories": ["multiple-choice"], "extra_gated_prompt": "To request access to the dataset, please fill out this form, and we'll review and let you know if your use case is approved.", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Institution": "text", "Intended Use": "text", "I agree to use this dataset for non-commercial research ONLY": "checkbox"}, "configs": [{"config_name": "correct_definition_matching", "data_files": [{"split": "test", "path": "data/correct_definition_matching-00000-of-00001-65122a307e90d70a.parquet"}]}, {"config_name": "csat_geo", "data_files": [{"split": "test", "path": "data/csat_geo-00000-of-00001-68ce37d4952173a9.parquet"}]}, {"config_name": "csat_law", "data_files": [{"split": "test", "path": "data/csat_law-00000-of-00001-ef0ec0fac2f2238f.parquet"}]}, {"config_name": "csat_socio", "data_files": [{"split": "test", "path": "data/csat_socio-00000-of-00001-0cfadd93827d0d48.parquet"}]}, {"config_name": "date_understanding", "data_files": [{"split": "test", "path": "data/date_understanding-00000-of-00001-36887d05afd4c57f.parquet"}]}, {"config_name": "general_knowledge", "data_files": [{"split": "test", "path": "data/general_knowledge-00000-of-00001-7418d1a0b8c4a615.parquet"}]}, {"config_name": "history", "data_files": [{"split": "test", "path": "data/history-00000-of-00001-9ad551d475d58413.parquet"}]}, {"config_name": "loan_words", "data_files": [{"split": "test", "path": "data/loan_words-00000-of-00001-96ed08632b9a2ae8.parquet"}]}, {"config_name": "lyrics_denoising", "data_files": [{"split": "test", "path": "data/lyrics_denoising-00000-of-00001-ede1ce975e9bad54.parquet"}]}, {"config_name": "proverbs_denoising", "data_files": [{"split": "test", "path": "data/proverbs_denoising-00000-of-00001-f274ad136676b2e3.parquet"}]}, {"config_name": "rare_words", "data_files": [{"split": "test", "path": "data/rare_words-00000-of-00001-4f34314c09209524.parquet"}]}, {"config_name": 
"standard_nomenclature", "data_files": [{"split": "test", "path": "data/standard_nomenclature-00000-of-00001-52814989a6843328.parquet"}]}, {"config_name": "reading_comprehension", "data_files": [{"split": "test", "path": "data/reading_comprehension-00000-of-00001-f9c8df20c22e46d0.parquet"}]}]}
|
2023-12-03T08:23:34+00:00
|
[
"2309.02706"
] |
[
"ko"
] |
TAGS
#task_categories-multiple-choice #language-Korean #license-cc-by-nc-nd-4.0 #arxiv-2309.02706 #region-us
|
IMPORTANT HAE-RAE is now LM-Eval supported We are adding evaluation results to our documentation so let us know if you want your model to be added.
The HAE\_RAE\_BENCH is an ongoing project to develop a suite of evaluation tasks designed to test the
understanding of models regarding Korean cultural and contextual nuances.
Currently, it comprises 13 distinct tasks, with a total of 4900 instances.
Please note that although this repository contains datasets from the original HAE-RAE BENCH paper,
the contents are not completely identical. Specifically, the reading comprehension subset from the original version has been removed due to copyright constraints.
In its place, an updated reading comprehension subset has been introduced, sourced from the CSAT, the Korean university entrance examination.
To replicate the studies from the paper, please use this code.
For the latest version of the code, refer to this.
### Dataset Overview
### Evaluation Results
### How to use (LM-Eval-Harness)
### Release Notes
**2023.12.03**: All errors fixed! 11 Tasks available via LM-Eval Harness refer to the code above to run evaluation. (List of Available Tasks: correct\_definition\_matching, csat\_geo, csat\_law, csat\_socio, date\_understanding, general\_knowledge, history, loan\_words, rare\_words, reading\_comprehension, standard\_nomenclature)
**2023.11.06**: 3 tasks added (csat\_geo, csat\_law, csat\_socio)
**2023.09.28**: LM-Eval-Harness support added for the following 8 tasks:
Loan Words, Rare Words, Standard Nomenclature, History, General Knowledge,correct\_definition\_matching, date\_understanding,reading\_comprehension.
Refer to the following document to run the evaluation yourself.
**2023.09.16**: 10 tasks added, 5 from original HAE-RAE Bench(Loan Words, Rare Words, Standard Nomenclature, History, General Knowledge),
5 new tasks (correct\_definition\_matching, date\_understanding, lyrics\_denoising, proverbs\_denoising, reading\_comprehension)
### Point of Contact
For any questions contact us via the following email:)
|
[
"### Dataset Overview",
"### Evaluation Results",
"### How to use (LM-Eval-Harness)",
"### Release Notes\n\n\n**2023.12.03**: All errors fixed! 11 Tasks available via LM-Eval Harness refer to the code above to run evaluation. (List of Available Tasks: correct\\_definition\\_matching, csat\\_geo, csat\\_law, csat\\_socio, date\\_understanding, general\\_knowledge, history, loan\\_words, rare\\_words, reading\\_comprehension, standard\\_nomenclature)\n\n\n**2023.11.06**: 3 tasks added (csat\\_geo, csat\\_law, csat\\_socio)\n\n\n**2023.09.28**: LM-Eval-Harness support added for the following 8 tasks:\nLoan Words, Rare Words, Standard Nomenclature, History, General Knowledge,correct\\_definition\\_matching, date\\_understanding,reading\\_comprehension.\nRefer to the following document to run the evaluation yourself.\n\n\n**2023.09.16**: 10 tasks added, 5 from original HAE-RAE Bench(Loan Words, Rare Words, Standard Nomenclature, History, General Knowledge),\n5 new tasks (correct\\_definition\\_matching, date\\_understanding, lyrics\\_denoising, proverbs\\_denoising, reading\\_comprehension)",
"### Point of Contact\n\n\nFor any questions contact us via the following email:)"
] |
[
"TAGS\n#task_categories-multiple-choice #language-Korean #license-cc-by-nc-nd-4.0 #arxiv-2309.02706 #region-us \n",
"### Dataset Overview",
"### Evaluation Results",
"### How to use (LM-Eval-Harness)",
"### Release Notes\n\n\n**2023.12.03**: All errors fixed! 11 Tasks available via LM-Eval Harness refer to the code above to run evaluation. (List of Available Tasks: correct\\_definition\\_matching, csat\\_geo, csat\\_law, csat\\_socio, date\\_understanding, general\\_knowledge, history, loan\\_words, rare\\_words, reading\\_comprehension, standard\\_nomenclature)\n\n\n**2023.11.06**: 3 tasks added (csat\\_geo, csat\\_law, csat\\_socio)\n\n\n**2023.09.28**: LM-Eval-Harness support added for the following 8 tasks:\nLoan Words, Rare Words, Standard Nomenclature, History, General Knowledge,correct\\_definition\\_matching, date\\_understanding,reading\\_comprehension.\nRefer to the following document to run the evaluation yourself.\n\n\n**2023.09.16**: 10 tasks added, 5 from original HAE-RAE Bench(Loan Words, Rare Words, Standard Nomenclature, History, General Knowledge),\n5 new tasks (correct\\_definition\\_matching, date\\_understanding, lyrics\\_denoising, proverbs\\_denoising, reading\\_comprehension)",
"### Point of Contact\n\n\nFor any questions contact us via the following email:)"
] |
[
45,
6,
5,
14,
316,
16
] |
[
"passage: TAGS\n#task_categories-multiple-choice #language-Korean #license-cc-by-nc-nd-4.0 #arxiv-2309.02706 #region-us \n### Dataset Overview### Evaluation Results### How to use (LM-Eval-Harness)### Release Notes\n\n\n**2023.12.03**: All errors fixed! 11 Tasks available via LM-Eval Harness refer to the code above to run evaluation. (List of Available Tasks: correct\\_definition\\_matching, csat\\_geo, csat\\_law, csat\\_socio, date\\_understanding, general\\_knowledge, history, loan\\_words, rare\\_words, reading\\_comprehension, standard\\_nomenclature)\n\n\n**2023.11.06**: 3 tasks added (csat\\_geo, csat\\_law, csat\\_socio)\n\n\n**2023.09.28**: LM-Eval-Harness support added for the following 8 tasks:\nLoan Words, Rare Words, Standard Nomenclature, History, General Knowledge,correct\\_definition\\_matching, date\\_understanding,reading\\_comprehension.\nRefer to the following document to run the evaluation yourself.\n\n\n**2023.09.16**: 10 tasks added, 5 from original HAE-RAE Bench(Loan Words, Rare Words, Standard Nomenclature, History, General Knowledge),\n5 new tasks (correct\\_definition\\_matching, date\\_understanding, lyrics\\_denoising, proverbs\\_denoising, reading\\_comprehension)### Point of Contact\n\n\nFor any questions contact us via the following email:)"
] |
7e3dc1ef62daffa694c021fc2aa57150e7567fd6
|
# Dataset of Katou Hazuki
This is the dataset of Katou Hazuki, containing 281 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 281 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 674 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 281 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 281 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 281 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 281 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 281 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 674 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 674 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 674 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/katou_hazuki_soundeuphonium
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T03:17:11+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-25T03:22:42+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Katou Hazuki
=======================
This is the dataset of Katou Hazuki, containing 281 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
d072e08d5d8b07c819a99aa22461a3864ca3b954
|
# Dataset Card for "VQAv2_sample_validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
will33am/VQAv2_sample_validation
|
[
"region:us"
] |
2023-09-25T03:22:03+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "question_type", "dtype": "string"}, {"name": "multiple_choice_answer", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "answers_original", "list": [{"name": "answer", "dtype": "string"}, {"name": "answer_confidence", "dtype": "string"}, {"name": "answer_id", "dtype": "int64"}]}, {"name": "id_image", "dtype": "int64"}, {"name": "answer_type", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "id", "dtype": "int64"}, {"name": "clip_tags_ViT_L_14", "sequence": "string"}, {"name": "blip_caption", "dtype": "string"}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float32"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float32"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14", "sequence": "string"}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes_ViT_L_14", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": 
"DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "new_info_captions3", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "captions_module", "sequence": {"sequence": "string"}}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "captions_module", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_without_filtering", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "captions_module", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "clip_tags_LAION_ViT_H_14_2B", "sequence": "string"}, {"name": "LLM_Description_gpt3_downstream_tasks_visual_genome_LAION-ViT-H-14-2B", "sequence": "string"}, {"name": 
"DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "captions_module", "sequence": "string"}, {"name": "captions_module_filter", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "Attributes_ViT_L_14_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "clip_tags_ViT_L_14_with_openai", "sequence": "string"}, {"name": "clip_tags_LAION_ViT_H_14_2B_with_openai", "sequence": "string"}, {"name": "blip_caption_beam_5_Salesforce_blip2_flan_t5_xxl", "dtype": "string"}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "captions_all_patches", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_clean", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "captions_all_patches", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "blip_caption_topk_50_Salesforce_blip_image_captioning_base_multiple", "sequence": "string"}, {"name": 
"DETA_detections_deta_swin_large_o365_clip_caption_all_patches_Salesforce_blip_image_captioning_large__ViT_L_14", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "captions_all_patches", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "blip_caption_Salesforce_blip_image_captioning_large_intensive", "sequence": "string"}, {"name": "blip_caption_Salesforce_blip_image_captioning_base_intensive", "sequence": "string"}, {"name": "intensive_captions_Salesforce-blip-image-captioning-large", "sequence": "string"}], "splits": [{"name": "validation", "num_bytes": 511911985.0, "num_examples": 1000}], "download_size": 297787596, "dataset_size": 511911985.0}}
|
2023-09-25T03:42:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "VQAv2_sample_validation"
More Information needed
|
[
"# Dataset Card for \"VQAv2_sample_validation\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"VQAv2_sample_validation\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"VQAv2_sample_validation\"\n\nMore Information needed"
] |
f5f49a938f45ed99e235b8519ba28f76832a2add
|
### Overview
MMHal-Bench is a new evaluation benchmark specifically designed for hallucination in Large Multimodal Models (LMMs). It contains 96 challenging questions based on images from OpenImages, and their corresponding ground-truth answers and image contents.
You may check `response_template.json` for more details. In the folder `responses` we have included some example responses from representative LMMs.
### Usage
To evaluate your own model on MMHal-Bench, first generate model responses to the image-question pairs. You may refer to the template `get_response.py` for how to read from and write to the response file.
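As an illustration, filling in and saving a response file might look like the following minimal sketch. The real schema is defined by `response_template.json`; the record keys below (`question`, `image_src`, `model_answer`) are hypothetical placeholders:

```python
import json
import os
import tempfile

# Hypothetical records -- the actual layout is defined by response_template.json.
records = [
    {"question": "What is unusual about this image?", "image_src": "..."},
]

# Fill in your model's answer for each image-question pair.
for record in records:
    record["model_answer"] = "placeholder response"

# Save the completed responses as JSON for the evaluation step.
out_path = os.path.join(tempfile.gettempdir(), "my_model_responses.json")
with open(out_path, "w") as f:
    json.dump(records, f, indent=2)
```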
After that, you may let GPT-4 rate your model's responses automatically. You will need the `openai` package installed and an API key. Then, run `eval_gpt4.py`:
```
python eval_gpt4.py \
--response [JSON file with model responses] \
--evaluation [JSON file with GPT-4 evaluation to be saved] \
--api-key [your OpenAI API key, starting with 'sk-'] \
--gpt-model [GPT model to be used, or 'gpt-4-0314' by default]
```
Please note that the GPT-4 API calls are not free. Depending on your model response lengths, evaluating each question may use 1.5k-2k tokens. Also, GPT-4 responses are not deterministic, so you may get different results with the same responses.
At the end of the outputs, you can see the evaluation results like this:
```
Average score: 2.05
Hallucination rate: 0.61
Average score for each question type: 2.33,1.25,2,2.5,1.5,3.33,2.33,1.17
```
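If you need these summary numbers programmatically, the printed lines can be parsed with a few string splits. This is a sketch based only on the example output above; `parse_eval_summary` is not part of the benchmark code:

```python
def parse_eval_summary(text):
    """Parse the summary lines printed at the end of the evaluation output."""
    results = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        if key == "Average score":
            results["average_score"] = float(value)
        elif key == "Hallucination rate":
            results["hallucination_rate"] = float(value)
        elif key == "Average score for each question type":
            results["per_type"] = [float(v) for v in value.split(",")]
    return results

summary = parse_eval_summary(
    "Average score: 2.05\n"
    "Hallucination rate: 0.61\n"
    "Average score for each question type: 2.33,1.25,2,2.5,1.5,3.33,2.33,1.17"
)
# summary["per_type"] holds one score per question type.
```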
|
Shengcao1006/MMHal-Bench
|
[
"task_categories:visual-question-answering",
"task_categories:image-to-text",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-09-25T03:27:58+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["visual-question-answering", "image-to-text"], "pretty_name": "MMHal-Bench", "arxiv": 2309.14525}
|
2023-11-01T03:48:38+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-visual-question-answering #task_categories-image-to-text #size_categories-n<1K #language-English #license-apache-2.0 #region-us
|
### Overview
MMHal-Bench is a new evaluation benchmark specifically designed for hallucination in Large Multimodal Models (LMMs). It contains 96 challenging questions based on images from OpenImages, and their corresponding ground-truth answers and image contents.
You may check 'response_template.json' for more details. In the folder 'responses' we have included some example responses from representative LMMs.
### Usage
To evaluate your own model on MMHal-Bench, first generate model responses to the image-question pairs. You may refer to the template 'get_response.py' for how to read from and write to the response file.
After that, you may let GPT-4 rate your model's responses automatically. You will need the 'openai' package installed and an API key. Then, run 'eval_gpt4.py':
Please note that the GPT-4 API calls are not free. Depending on your model response lengths, evaluating each question may use 1.5k-2k tokens. Also, GPT-4 responses are not deterministic, so you may get different results with the same responses.
At the end of the outputs, you can see the evaluation results like this:
|
[
"### Overview\n\nMMHal-Bench is a new evaluation benchmark specifically designed for hallucination in Large Multimodal Models (LMM). It contains 96 challenging questions based on images from OpenImages, and their corresponding ground-truth answers and image contents.\n\nYou may check 'response_template.json' for more details. In the folder 'responses' we have included some example responses from representative LMMs.",
"### Usage\n\nTo evaluate your own model on MMHal-Bench, first generate model responses to the image-question pairs. You may check the template 'get_response.py' about how to read and write to the response file.\n\nAfter that, you may let GPT-4 rate your model's responses automatically. You will need package 'openai' installed and an API key. Then, run 'eval_gpt4.py':\n\n\n\nPlease note that the GPT-4 API calls are not free. Depending on your model response lengths, evaluating each question may use 1.5k-2k tokens. Also, GPT-4 responses are not deterministic, so you may get different results with the same responses.\n\nAt the end of the outputs, you can see the evaluation results like this:"
] |
[
"TAGS\n#task_categories-visual-question-answering #task_categories-image-to-text #size_categories-n<1K #language-English #license-apache-2.0 #region-us \n",
"### Overview\n\nMMHal-Bench is a new evaluation benchmark specifically designed for hallucination in Large Multimodal Models (LMM). It contains 96 challenging questions based on images from OpenImages, and their corresponding ground-truth answers and image contents.\n\nYou may check 'response_template.json' for more details. In the folder 'responses' we have included some example responses from representative LMMs.",
"### Usage\n\nTo evaluate your own model on MMHal-Bench, first generate model responses to the image-question pairs. You may check the template 'get_response.py' about how to read and write to the response file.\n\nAfter that, you may let GPT-4 rate your model's responses automatically. You will need package 'openai' installed and an API key. Then, run 'eval_gpt4.py':\n\n\n\nPlease note that the GPT-4 API calls are not free. Depending on your model response lengths, evaluating each question may use 1.5k-2k tokens. Also, GPT-4 responses are not deterministic, so you may get different results with the same responses.\n\nAt the end of the outputs, you can see the evaluation results like this:"
] |
[
55,
99,
179
] |
[
"passage: TAGS\n#task_categories-visual-question-answering #task_categories-image-to-text #size_categories-n<1K #language-English #license-apache-2.0 #region-us \n### Overview\n\nMMHal-Bench is a new evaluation benchmark specifically designed for hallucination in Large Multimodal Models (LMM). It contains 96 challenging questions based on images from OpenImages, and their corresponding ground-truth answers and image contents.\n\nYou may check 'response_template.json' for more details. In the folder 'responses' we have included some example responses from representative LMMs.### Usage\n\nTo evaluate your own model on MMHal-Bench, first generate model responses to the image-question pairs. You may check the template 'get_response.py' about how to read and write to the response file.\n\nAfter that, you may let GPT-4 rate your model's responses automatically. You will need package 'openai' installed and an API key. Then, run 'eval_gpt4.py':\n\n\n\nPlease note that the GPT-4 API calls are not free. Depending on your model response lengths, evaluating each question may use 1.5k-2k tokens. Also, GPT-4 responses are not deterministic, so you may get different results with the same responses.\n\nAt the end of the outputs, you can see the evaluation results like this:"
] |
37b0a5dbf7ec5952abac391d27f93f8f0c09307b
|
# Dataset of shibuya_kanon/澁谷かのん/시부야카논 (Love Live! Superstar!!)
This is the dataset of shibuya_kanon/澁谷かのん/시부야카논 (Love Live! Superstar!!), containing 500 images and their tags.
The core tags of this character are `bangs, orange_hair, purple_eyes, long_hair, ribbon, neck_ribbon, red_ribbon, shiny_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 849.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shibuya_kanon_lovelivesuperstar/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 381.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shibuya_kanon_lovelivesuperstar/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1294 | 889.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shibuya_kanon_lovelivesuperstar/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 699.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shibuya_kanon_lovelivesuperstar/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1294 | 1.41 GiB | [Download](https://huggingface.co/datasets/CyberHarem/shibuya_kanon_lovelivesuperstar/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/shibuya_kanon_lovelivesuperstar',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, :d, blue_jacket, collared_shirt, grey_dress, looking_at_viewer, open_jacket, open_mouth, pinafore_dress, solo, white_shirt, yuigaoka_school_uniform, long_sleeves, shiny, upper_body, blush, medium_hair |
| 1 | 9 |  |  |  |  |  | 1girl, blue_jacket, collared_shirt, open_jacket, smile, solo, white_background, white_shirt, yuigaoka_school_uniform, blush, closed_mouth, grey_dress, looking_at_viewer, shiny, simple_background, upper_body, long_sleeves, hair_between_eyes, pinafore_dress |
| 2 | 42 |  |  |  |  |  | 1girl, solo, looking_at_viewer, smile, upper_body, blush, earrings, birthday, long_sleeves, dress, hat, open_mouth, star_(symbol), medium_hair |
| 3 | 6 |  |  |  |  |  | 1girl, blush, smile, solo, white_gloves, looking_at_viewer, open_mouth, white_dress, elbow_gloves, short_sleeves, blue_hairband, breasts, upper_body |
| 4 | 7 |  |  |  |  |  | 1girl, :d, open_mouth, short_sleeves, solo, blush, hat, blue_sky, looking_at_viewer, belt_buckle, blue_belt, cloud, white_belt, white_headwear, white_shirt, medium_hair, white_skirt |
| 5 | 8 |  |  |  |  |  | 1girl, collarbone, solo, blush, hair_scrunchie, looking_at_viewer, medium_hair, open_mouth, shorts, sweat, breasts, simple_background, white_shirt, hair_between_eyes, off_shoulder, short_sleeves, blue_scrunchie, holding, pants, shiny, shoes, short_hair, swept_bangs, towel, water_bottle, white_background |
| 6 | 10 |  |  |  |  |  | 1girl, looking_at_viewer, maid_headdress, solo, cowboy_shot, white_apron, enmaided, standing, blush, hair_between_eyes, orange_skirt, smile, collared_shirt, dress_shirt, frilled_skirt, miniskirt, orange_bowtie, shiny, open_mouth, white_background, wing_collar, frilled_apron, holding_plate, puffy_short_sleeves, simple_background, thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | :d | blue_jacket | collared_shirt | grey_dress | looking_at_viewer | open_jacket | open_mouth | pinafore_dress | solo | white_shirt | yuigaoka_school_uniform | long_sleeves | shiny | upper_body | blush | medium_hair | smile | white_background | closed_mouth | simple_background | hair_between_eyes | earrings | birthday | dress | hat | star_(symbol) | white_gloves | white_dress | elbow_gloves | short_sleeves | blue_hairband | breasts | blue_sky | belt_buckle | blue_belt | cloud | white_belt | white_headwear | white_skirt | collarbone | hair_scrunchie | shorts | sweat | off_shoulder | blue_scrunchie | holding | pants | shoes | short_hair | swept_bangs | towel | water_bottle | maid_headdress | cowboy_shot | white_apron | enmaided | standing | orange_skirt | dress_shirt | frilled_skirt | miniskirt | orange_bowtie | wing_collar | frilled_apron | holding_plate | puffy_short_sleeves | thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----|:--------------|:-----------------|:-------------|:--------------------|:--------------|:-------------|:-----------------|:-------|:--------------|:--------------------------|:---------------|:--------|:-------------|:--------|:--------------|:--------|:-------------------|:---------------|:--------------------|:--------------------|:-----------|:-----------|:--------|:------|:----------------|:---------------|:--------------|:---------------|:----------------|:----------------|:----------|:-----------|:--------------|:------------|:--------|:-------------|:-----------------|:--------------|:-------------|:-----------------|:---------|:--------|:---------------|:-----------------|:----------|:--------|:--------|:-------------|:--------------|:--------|:---------------|:-----------------|:--------------|:--------------|:-----------|:-----------|:---------------|:--------------|:----------------|:------------|:----------------|:--------------|:----------------|:----------------|:----------------------|:-------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | | X | X | X | X | X | | X | X | X | X | X | X | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 42 |  |  |  |  |  | X | | | | | X | | X | | X | | | X | | X | X | X | X | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | | | | X | | X | | X | | | | | X | X | | X | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | X | | | | X | | X | | X | X | | | | | X | X | | | | | | | | | X | | | | | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 8 |  |  |  |  |  | X | | | | | X | | X | | X | X | | | X | | X | X | | X | | X | X | | | | | | | | | X | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 6 | 10 |  |  |  |  |  | X | | | X | | X | | X | | X | | | | X | | X | | X | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/shibuya_kanon_lovelivesuperstar
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T03:35:39+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T06:36:48+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of shibuya\_kanon/澁谷かのん/시부야카논 (Love Live! Superstar!!)
==============================================================
This is the dataset of shibuya\_kanon/澁谷かのん/시부야카논 (Love Live! Superstar!!), containing 500 images and their tags.
The core tags of this character are 'bangs, orange\_hair, purple\_eyes, long\_hair, ribbon, neck\_ribbon, red\_ribbon, shiny\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
2218e5fef0f3156daa88644a753fa37e923b3238
|
# Dataset of Kawashima Safaia
This is the dataset of Kawashima Safaia, containing 266 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 266 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 635 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 266 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 266 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 266 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 266 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 266 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 635 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 635 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 635 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/kawashima_safaia_soundeuphonium
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T03:44:02+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-25T03:46:45+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Kawashima Safaia
===========================
This is the dataset of Kawashima Safaia, containing 266 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
e2f0ba04854c0f6847a4427cad8c4327612eb974
|
About the different versions of Detection_ScanNet
# V2
Adds to each question a mapping of all categories in the ScanNet dataset; the mapping is identical across scenes, only shuffled in order.
# V3
Adds to each question only the mapping of the categories contained in the corresponding scene; it differs for every scene.
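As a hypothetical illustration of the V2/V3 difference (the class names and function shapes below are invented for the example, not taken from the dataset):

```python
import random

# Placeholder subset of ScanNet category ids -> names (illustrative only).
ALL_CLASSES = {1: "wall", 2: "floor", 3: "cabinet", 4: "bed"}

def category_map_v2(scene_labels, rng):
    # V2: every scene's question carries the full class map, in shuffled order.
    # The scene's own labels are ignored here.
    items = list(ALL_CLASSES.items())
    rng.shuffle(items)
    return dict(items)

def category_map_v3(scene_labels):
    # V3: only the classes actually present in the scene.
    return {k: v for k, v in ALL_CLASSES.items() if k in scene_labels}

v2 = category_map_v2(scene_labels={2, 4}, rng=random.Random(0))
v3 = category_map_v3(scene_labels={2, 4})
```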
|
hmxiong/3D_Benchmark
|
[
"region:us"
] |
2023-09-25T04:00:28+00:00
|
{}
|
2023-09-29T02:28:54+00:00
|
[] |
[] |
TAGS
#region-us
|
About the different versions of Detection_ScanNet
# V2
Adds to each question a mapping of all categories in the ScanNet dataset; the mapping is identical across scenes, only shuffled in order.
# V3
Adds to each question only the mapping of the categories contained in the corresponding scene; it differs for every scene.
|
[
"# V2\nAdds to each question a mapping of all categories in the ScanNet dataset; the mapping is identical across scenes, only shuffled in order.",
"# V3\nAdds to each question only the mapping of the categories contained in the corresponding scene; it differs for every scene."
] |
[
"TAGS\n#region-us \n",
"# V2\nAdds to each question a mapping of all categories in the ScanNet dataset; the mapping is identical across scenes, only shuffled in order.",
"# V3\nAdds to each question only the mapping of the categories contained in the corresponding scene; it differs for every scene."
] |
[
6,
31,
27
] |
[
"passage: TAGS\n#region-us \n# V2\nAdds to each question a mapping of all categories in the ScanNet dataset; the mapping is identical across scenes, only shuffled in order.# V3\nAdds to each question only the mapping of the categories contained in the corresponding scene; it differs for every scene."
] |
12d70e585a44b5046e282da99a5763c0ea76ce49
|
# Dataset of Kousaka Reina
This is the dataset of Kousaka Reina, containing 291 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 291 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 608 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 291 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 291 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 291 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 291 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 291 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 608 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 608 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 608 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/kousaka_reina_soundeuphonium
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T04:11:40+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-25T04:14:09+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Kousaka Reina
========================
This is the dataset of Kousaka Reina, containing 291 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
31c3a921d7051c7d619c05b7e66e6721cae94f3e
|
# Dataset Card for "laion_subset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nannullna/laion_subset
|
[
"region:us"
] |
2023-09-25T04:31:32+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "artwork", "path": "data/artwork-*"}, {"split": "person", "path": "data/person-*"}, {"split": "object", "path": "data/object-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "punsafe", "dtype": "float64"}, {"name": "pwatermark", "dtype": "float64"}], "splits": [{"name": "artwork", "num_bytes": 235558764.0, "num_examples": 452}, {"name": "person", "num_bytes": 254743194.0, "num_examples": 501}, {"name": "object", "num_bytes": 57867679.0, "num_examples": 114}], "download_size": 548177028, "dataset_size": 548169637.0}}
|
2023-09-25T04:33:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "laion_subset"
More Information needed
|
[
"# Dataset Card for \"laion_subset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"laion_subset\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"laion_subset\"\n\nMore Information needed"
] |
acf1cbf3c4cd46a7e7d8b94c75ceb3e09ec2e428
|
# Dataset Card for "cyborg_full_body_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/cyborg_full_body_prompts
|
[
"region:us"
] |
2023-09-25T04:38:28+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 156674, "num_examples": 1000}], "download_size": 4226, "dataset_size": 156674}}
|
2023-09-25T04:38:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cyborg_full_body_prompts"
More Information needed
|
[
"# Dataset Card for \"cyborg_full_body_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cyborg_full_body_prompts\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cyborg_full_body_prompts\"\n\nMore Information needed"
] |
41b8139d7b4e17179a5d6b49666859d5fbb6cb3a
|
# Dataset of Tanaka Asuka
This is the dataset of Tanaka Asuka, containing 294 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 294 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 684 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 294 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 294 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 294 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 294 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 294 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 684 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 684 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 684 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/tanaka_asuka_soundeuphonium
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T04:40:37+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-25T04:45:19+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Tanaka Asuka
=======================
This is the dataset of Tanaka Asuka, containing 294 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
39b55d7c07305e4dab280b66aaf037f74133362b
|
# Dataset of arashi_chisato/嵐千砂都/아라시치사토 (Love Live! Superstar!!)
This is the dataset of arashi_chisato/嵐千砂都/아라시치사토 (Love Live! Superstar!!), containing 500 images and their tags.
The core tags of this character are `bangs, white_hair, hair_bun, double_bun, red_eyes, long_hair, twintails, blunt_bangs, ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 673.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arashi_chisato_lovelivesuperstar/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 328.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arashi_chisato_lovelivesuperstar/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1227 | 741.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arashi_chisato_lovelivesuperstar/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 567.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arashi_chisato_lovelivesuperstar/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1227 | 1.14 GiB | [Download](https://huggingface.co/datasets/CyberHarem/arashi_chisato_lovelivesuperstar/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/arashi_chisato_lovelivesuperstar',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
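Building on the loop above, one might filter the source down to items carrying specific tags. This is a sketch that assumes `item.meta['tags']` is a mapping keyed by tag name, as the printing loop suggests; it is not an official waifuc API:

```python
def filter_by_tags(items, required_tags):
    """Yield only items whose metadata contains every required tag.

    `items` can be any iterable of objects exposing a `meta['tags']`
    mapping, such as the LocalSource iterator above (an assumption
    based on the printing loop, not a documented interface).
    """
    required = set(required_tags)
    for item in items:
        # set.issubset accepts any iterable; iterating a dict yields its keys
        if required.issubset(item.meta['tags']):
            yield item
```

For example, `filter_by_tags(source, ['1girl', 'solo'])` would keep only items tagged as solo images, assuming those tags appear in the metadata.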
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, collared_shirt, looking_at_viewer, neck_ribbon, pinafore_dress, red_ribbon, short_sleeves, solo, upper_body, white_shirt, yuigaoka_school_uniform, blush, single_sidelock, smile, birthday, ok_sign, pink_background |
| 1 | 10 |  |  |  |  |  | 1girl, blue_jacket, collared_shirt, grey_dress, long_sleeves, looking_at_viewer, neck_ribbon, open_jacket, pinafore_dress, red_ribbon, solo, white_shirt, yuigaoka_school_uniform, open_mouth, white_background, :d, blush, simple_background, upper_body, teeth |
| 2 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, neck_ribbon, red_ribbon, solo, upper_body, yuigaoka_school_uniform, blue_jacket, collared_shirt, portrait, smile, white_shirt, birthday, blush, long_sleeves, shiny_hair, open_mouth |
| 3 | 8 |  |  |  |  |  | 1girl, happy_birthday, looking_at_viewer, solo, character_name, dated, english_text, upper_body, grin, blush, jacket, short_sleeves, signature, single_sidelock |
| 4 | 11 |  |  |  |  |  | 1girl, looking_at_viewer, midriff, navel, solo, collarbone, smile, blush, open_jacket, pink_jacket, open_mouth, crop_top, long_sleeves, small_breasts, black_shorts, off_shoulder, one_eye_closed, pink_background, upper_body, white_tank_top |
| 5 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, solo, tiara, upper_body, white_gloves, blush, earrings, necklace, smile, collarbone, crown, open_mouth, elbow_gloves, pink_dress, puffy_short_sleeves, purple_dress |
| 6 | 5 |  |  |  |  |  | 2girls, looking_at_viewer, smile, solo_focus, holding_hands, orange_hair, boots, mini_hat, pink_dress |
| 7 | 17 |  |  |  |  |  | 1girl, blush, nipples, completely_nude, navel, collarbone, pussy, small_breasts, 1boy, hetero, censored, open_mouth, solo_focus, penis, sex, sweat, closed_eyes, looking_at_viewer |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | collared_shirt | looking_at_viewer | neck_ribbon | pinafore_dress | red_ribbon | short_sleeves | solo | upper_body | white_shirt | yuigaoka_school_uniform | blush | single_sidelock | smile | birthday | ok_sign | pink_background | blue_jacket | grey_dress | long_sleeves | open_jacket | open_mouth | white_background | :d | simple_background | teeth | portrait | shiny_hair | happy_birthday | character_name | dated | english_text | grin | jacket | signature | midriff | navel | collarbone | pink_jacket | crop_top | small_breasts | black_shorts | off_shoulder | one_eye_closed | white_tank_top | tiara | white_gloves | earrings | necklace | crown | elbow_gloves | pink_dress | puffy_short_sleeves | purple_dress | 2girls | solo_focus | holding_hands | orange_hair | boots | mini_hat | nipples | completely_nude | pussy | 1boy | hetero | censored | penis | sex | sweat | closed_eyes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:--------------------|:--------------|:-----------------|:-------------|:----------------|:-------|:-------------|:--------------|:--------------------------|:--------|:------------------|:--------|:-----------|:----------|:------------------|:--------------|:-------------|:---------------|:--------------|:-------------|:-------------------|:-----|:--------------------|:--------|:-----------|:-------------|:-----------------|:-----------------|:--------|:---------------|:-------|:---------|:------------|:----------|:--------|:-------------|:--------------|:-----------|:----------------|:---------------|:---------------|:-----------------|:-----------------|:--------|:---------------|:-----------|:-----------|:--------|:---------------|:-------------|:----------------------|:---------------|:---------|:-------------|:----------------|:--------------|:--------|:-----------|:----------|:------------------|:--------|:-------|:---------|:-----------|:--------|:------|:--------|:--------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | X | X | X | X | X | | X | X | X | X | X | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | X | X | | X | | X | X | X | X | X | | X | X | | | X | | X | | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | | X | | | | X | X | X | | | X | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 11 |  |  |  |  |  | X | | X | | | | | X | X | | | X | | X | | | X | | | X | X | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | X | | | | | X | X | | | X | | X | | | | | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | X | X | X | X | X | X | | | | | | | | | | |
| 7 | 17 |  |  |  |  |  | X | | X | | | | | | | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | X | X | | | X | | | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/arashi_chisato_lovelivesuperstar
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T04:48:53+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T06:50:23+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of arashi\_chisato/嵐千砂都/아라시치사토 (Love Live! Superstar!!)
===============================================================
This is the dataset of arashi\_chisato/嵐千砂都/아라시치사토 (Love Live! Superstar!!), containing 500 images and their tags.
The core tags of this character are 'bangs, white\_hair, hair\_bun, double\_bun, red\_eyes, long\_hair, twintails, blunt\_bangs, ribbon', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
ff18e82f583dd5c9a5599f52e07d7a8112cd7483
|
# Dataset of Ogasawara Haruka
This is the dataset of Ogasawara Haruka, containing 163 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 163 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 372 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 163 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 163 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 163 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 163 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 163 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 372 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 372 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 372 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/ogasawara_haruka_soundeuphonium
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T04:56:52+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-25T05:02:45+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Ogasawara Haruka
===========================
This is the dataset of Ogasawara Haruka, containing 163 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
3c64bd9f8139b25ba8f296e1341eaecc7a0378e8
|
# Dataset Card for "test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joelak07/test
|
[
"region:us"
] |
2023-09-25T05:06:11+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 655680.0, "num_examples": 80}, {"name": "test", "num_bytes": 73764.0, "num_examples": 9}], "download_size": 293906, "dataset_size": 729444.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
|
2023-09-25T05:06:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "test"
More Information needed
|
[
"# Dataset Card for \"test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"test\"\n\nMore Information needed"
] |
[
6,
11
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"test\"\n\nMore Information needed"
] |
b68ac07e2011ea7df07be7fa5f756a07a0a7158f
|
# Dataset Card for "eval_tag_nq_test_v0.5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/eval_tag_nq_test_v0.5
|
[
"region:us"
] |
2023-09-25T05:07:43+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1972, "num_examples": 10}, {"name": "validation", "num_bytes": 787384, "num_examples": 3610}], "download_size": 488101, "dataset_size": 789356}}
|
2023-09-25T05:07:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "eval_tag_nq_test_v0.5"
More Information needed
|
[
"# Dataset Card for \"eval_tag_nq_test_v0.5\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"eval_tag_nq_test_v0.5\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"eval_tag_nq_test_v0.5\"\n\nMore Information needed"
] |
9b3e349c93ed88a4ad301e45ceba489bdfb59b4e
|
# Dataset of Nakaseko Kaori
This is the dataset of Nakaseko Kaori, containing 187 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 187 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 414 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 187 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 187 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 187 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 187 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 187 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 414 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 414 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 414 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/nakaseko_kaori_soundeuphonium
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T05:16:49+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-25T05:21:09+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Nakaseko Kaori
=========================
This is the dataset of Nakaseko Kaori, containing 187 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
951e8df3f86138724c25bf83ec3f4609e8842a4d
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
IceMasterT/BTC-Data-1Hour-2018-2023
|
[
"task_categories:token-classification",
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"finance",
"region:us"
] |
2023-09-25T05:26:01+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["token-classification", "text-classification"], "pretty_name": "Bitcoin Data 1 Hour 2018-2023", "tags": ["finance"]}
|
2023-09-29T14:48:10+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-token-classification #task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #finance #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-token-classification #task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #finance #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
53,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-token-classification #task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #finance #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
416fc515da21c24f4a84bc4cc368bbb72ebe380a
|
# Dataset of Nakagawa Natsuki
This is the dataset of Nakagawa Natsuki, containing 175 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 175 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 429 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 175 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 175 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 175 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 175 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 175 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 429 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 429 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 429 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/nakagawa_natsuki_soundeuphonium
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T05:34:42+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-25T05:37:18+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Nakagawa Natsuki
===========================
This is the dataset of Nakagawa Natsuki, containing 175 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
434bf5f0adddacc152edfe9b11e2a098f07ecece
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
IceMasterT/BTC-Data-Daily-2014-2023
|
[
"task_categories:token-classification",
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"finance",
"region:us"
] |
2023-09-25T05:38:24+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["token-classification", "text-classification"], "pretty_name": "Bitcoin Data", "tags": ["finance"]}
|
2023-09-29T14:28:38+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-token-classification #task_categories-text-classification #size_categories-1K<n<10K #language-English #license-mit #finance #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-token-classification #task_categories-text-classification #size_categories-1K<n<10K #language-English #license-mit #finance #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
53,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-token-classification #task_categories-text-classification #size_categories-1K<n<10K #language-English #license-mit #finance #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
d9e96bc4056a178b7bacc8fd079ece7d18e3bddc
|
# Dataset Card for "english_finance_news"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ssahir/english_finance_news
|
[
"region:us"
] |
2023-09-25T05:40:38+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "newssource", "dtype": "string"}, {"name": "newscontents", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4297005.661361627, "num_examples": 24429}, {"name": "test", "num_bytes": 477562.3386383731, "num_examples": 2715}], "download_size": 0, "dataset_size": 4774568.0}}
|
2023-09-25T09:18:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "english_finance_news"
More Information needed
|
[
"# Dataset Card for \"english_finance_news\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"english_finance_news\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"english_finance_news\"\n\nMore Information needed"
] |
32a2998507149801d80e4efe26f8f042b6211a1b
|
# Dataset of Yoshikawa Yūko
This is the dataset of Yoshikawa Yūko, containing 180 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 180 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 427 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 180 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 180 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 180 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 180 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 180 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 427 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 427 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 427 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/yoshikawa_yuko_soundeuphonium
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T05:52:01+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-25T05:53:50+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Yoshikawa Yūko
=========================
This is the dataset of Yoshikawa Yūko, containing 180 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
eafa9c18ef7de87b3986a6191e25f460b35e6684
|
# Dataset Card for "COVID-QA-sentence-transformer-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
minh21/COVID-QA-sentence-transformer-data
|
[
"region:us"
] |
2023-09-25T05:57:02+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "positive", "dtype": "string"}, {"name": "negative", "dtype": "string"}, {"name": "document_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4863851, "num_examples": 2378}, {"name": "test", "num_bytes": 510126, "num_examples": 269}], "download_size": 0, "dataset_size": 5373977}}
|
2023-10-06T06:10:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "COVID-QA-sentence-transformer-data"
More Information needed
|
[
"# Dataset Card for \"COVID-QA-sentence-transformer-data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"COVID-QA-sentence-transformer-data\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"COVID-QA-sentence-transformer-data\"\n\nMore Information needed"
] |
f4b19b7a308bb32edff6bb848e54eeda89101b7c
|
# Dataset Card for "COVID-QA-sentence-Question-Answering-Transformer-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
minh21/COVID-QA-sentence-Question-Answering-Transformer-data
|
[
"region:us"
] |
2023-09-25T05:58:41+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "context_chunks", "sequence": "string"}, {"name": "document_id", "dtype": "int64"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 55383294, "num_examples": 1170}, {"name": "validation", "num_bytes": 5172033, "num_examples": 140}], "download_size": 16954453, "dataset_size": 60555327}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
|
2023-10-06T06:11:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "COVID-QA-sentence-Question-Answering-Transformer-data"
More Information needed
|
[
"# Dataset Card for \"COVID-QA-sentence-Question-Answering-Transformer-data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"COVID-QA-sentence-Question-Answering-Transformer-data\"\n\nMore Information needed"
] |
[
6,
29
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"COVID-QA-sentence-Question-Answering-Transformer-data\"\n\nMore Information needed"
] |
83ee9ab74b5b6e3013835020be9e7e3ab062c0b6
|
# Dataset of Saitou Aoi
This is the dataset of Saitou Aoi, containing 71 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 71 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 161 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 71 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 71 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 71 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 71 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 71 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 161 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 161 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 161 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/saitou_aoi_soundeuphonium
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T05:59:40+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-25T06:00:42+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Saitou Aoi
=====================
This is the dataset of Saitou Aoi, containing 71 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |