modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
Norod78/SDXL-LofiGirl-Lora
|
Norod78
| 2023-09-19T16:31:41Z | 36 | 7 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-29T08:47:08Z |
---
license: mit
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Lofi Girl
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- stable-diffusion
- lora
- diffusers
widget:
- text: Dora the LofiGirl
- text: An alien Lofi Girl from outer space
- text: A Lofi Girl Cthulhu rising from the sea in a great storm
- text: the girl with a pearl earring the LofiGirl
inference: true
language:
- en
---
# Trigger words
Use "Lofi Girl" or "LofiGirl" in your prompts
# Examples
The girl with a pearl earring the LofiGirl

A frame from the show Doctor Who featuring a cyberman Lofi girl

|
dhanushreddy29/OnlyRealistic
|
dhanushreddy29
| 2023-09-19T16:29:33Z | 29 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-19T16:13:37Z |
---
license: creativeml-openrail-m
---
|
dhanushreddy29/LOFIv3Inpaint
|
dhanushreddy29
| 2023-09-19T16:22:08Z | 29 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-19T16:11:41Z |
---
license: creativeml-openrail-m
---
|
manalsultan/cpmc
|
manalsultan
| 2023-09-19T16:20:41Z | 0 | 0 |
open_clip
|
[
"open_clip",
"clip",
"zero-shot-image-classification",
"arxiv:1910.04867",
"arxiv:2212.07143",
"license:mit",
"region:us"
] |
zero-shot-image-classification
| 2023-09-19T15:51:47Z |
---
license: mit
widget:
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Model Card for CLIP ViT-bigG/14 - LAION-2B
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
7. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
A CLIP ViT-bigG/14 model trained with the LAION-2B English subset of LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip).
Model training was done by Mitchell Wortsman on the [stability.ai](https://stability.ai/) cluster.
The license for this model is MIT.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
In addition to the above notice, the LAION-5B dataset used to train these models has additional considerations; see below.
# Training Details
## Training Data
This model was trained with the 2 Billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).
Fine-tuning was also partially done on LAION-A, a 900M subset of LAION-2B filtered with aesthetic V2 4.5+ and phash deduplicated.
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and the handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated; collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized NSFW classifier that we trained). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning also holds there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of the benefits that come with training large-scale models, as well as of pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. However, while we provide the dataset openly, we do not recommend using it to create ready-to-go industrial products, as the basic research about the general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
The training procedure will soon be discussed in a blog post on laion.ai.
# Evaluation
Evaluation was done with the code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
Testing is performed with VTAB+ (a combination of VTAB (https://arxiv.org/abs/1910.04867) with additional robustness datasets) for classification, and with COCO and Flickr for retrieval.
**TODO** - more detail
## Results
The model achieves 80.1% zero-shot top-1 accuracy on ImageNet-1k.
An initial round of benchmarks has been performed on a wider range of datasets and will soon be visible at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb
**TODO** - create table for just this model's metrics.
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) for the compute used to train this model.
# Citation
**BibTeX:**
LAION-5B
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
OpenAI CLIP paper
```
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
OpenCLIP software
```
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
Scaling OpenCLIP paper
```
@article{cherti2022reproducible,
title={Reproducible scaling laws for contrastive language-image learning},
author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia},
journal={arXiv preprint arXiv:2212.07143},
year={2022}
}
```
# How to Get Started with the Model
Use the code below to get started with the model.
** TODO ** - Hugging Face transformers, OpenCLIP, and timm getting started snippets
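Until official snippets are added, here is a minimal OpenCLIP sketch, adapted from the generic open_clip usage pattern; it assumes this repo hosts the OpenCLIP checkpoint under its own id, and reuses the candidate labels and image name from the widget metadata above:
```python
import torch
import open_clip
from PIL import Image

# Load model, preprocessing transform, and tokenizer directly from the Hub
model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:manalsultan/cpmc")
tokenizer = open_clip.get_tokenizer("hf-hub:manalsultan/cpmc")
model.eval()

image = preprocess(Image.open("cat-dog-music.png")).unsqueeze(0)
text = tokenizer(["playing music", "playing sports"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # zero-shot probabilities over the candidate labels
```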
|
RonanMcGovern/Llama-2-7b-chat-hf-function-calling-adapters
|
RonanMcGovern
| 2023-09-19T16:20:02Z | 0 | 0 |
peft
|
[
"peft",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-08-09T12:44:07Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
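For illustration only (not part of the original card), the quantization config above roughly corresponds to loading the base model and adapters like this:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the 4-bit NF4 config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = "meta-llama/Llama-2-7b-chat-hf"
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb_config, device_map="auto")
# Attach the function-calling LoRA adapters from this repo
model = PeftModel.from_pretrained(model, "RonanMcGovern/Llama-2-7b-chat-hf-function-calling-adapters")
tokenizer = AutoTokenizer.from_pretrained(base)
```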
|
RonanMcGovern/Llama-2-7b-hf-function-calling-adapters
|
RonanMcGovern
| 2023-09-19T16:19:36Z | 0 | 0 |
peft
|
[
"peft",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-08-15T16:59:13Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
Kendong/lora-trained-xl
|
Kendong
| 2023-09-19T16:03:34Z | 4 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-09-19T13:25:17Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of bc flower vase on the groud
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Kendong/lora-trained-xl
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of bc flower vase on the groud using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
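A minimal inference sketch (not from the original card), loading the adapter together with the fp16-fix VAE mentioned above:
```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL

# Reuse the VAE that was used during training
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Kendong/lora-trained-xl")

# Prompt with the instance prompt used for DreamBooth training
image = pipe("a photo of bc flower vase on the groud").images[0]
image.save("bc_flower_vase.png")
```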
|
sauce1337/BerrySauce-L2-13b
|
sauce1337
| 2023-09-19T15:59:51Z | 1,445 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gguf",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-13T07:12:49Z |
---
license: cc-by-nc-4.0
---
ok, it's a berry.

would you role play with a berry? maybe.
would you ask a berry complicated logical questions? maybe.
use alpaca format? maybe.
✧˖°.NEW★₊˚⊹ exllama v2 https://huggingface.co/sauce1337/BerrySauce-L2-13b-exl2
> TheBloke GGUF and GPTQ:\
> https://huggingface.co/TheBloke/BerrySauce-L2-13B-GPTQ \
> https://huggingface.co/TheBloke/BerrySauce-L2-13B-GGUF
|
voyzan/poca-SoccerTwos
|
voyzan
| 2023-09-19T15:51:50Z | 90 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-09-08T19:22:42Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: voyzan/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
MnLgt/ppo-LunarLander-v2
|
MnLgt
| 2023-09-19T15:48:16Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-18T21:28:32Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.30 +/- 26.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
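A possible completion of the snippet above (the checkpoint filename is an assumption; check the repository's file list):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub and load it with PPO
checkpoint = load_from_hub(repo_id="MnLgt/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded policy for a few episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```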
|
frankjoshua/controlnet-canny-sdxl-1.0
|
frankjoshua
| 2023-09-19T15:25:43Z | 104 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-10-13T23:06:49Z |
---
license: openrail++
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
inference: false
---
# SDXL-controlnet: Canny
These are controlnet weights trained on [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) with canny conditioning. You can find some example images in the following.
prompt: a couple watching a romantic sunset, 4k photo

prompt: ultrarealistic shot of a furry blue bird

prompt: a woman, close up, detailed, beautiful, street photography, photorealistic, detailed, Kodak ektar 100, natural, candid shot

prompt: Cinematic, neoclassical table in the living room, cinematic, contour, lighting, highly detailed, winter, golden hour

prompt: a tornado hitting grass field, 1980's film grain. overcast, muted colors.

## Usage
Make sure to first install the libraries:
```bash
pip install accelerate transformers safetensors opencv-python diffusers
```
And then we're ready to go:
```python
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers.utils import load_image
from PIL import Image
import torch
import numpy as np
import cv2
prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
negative_prompt = 'low quality, bad quality, sketches'
image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png")
controlnet_conditioning_scale = 0.5 # recommended for good generalization
controlnet = ControlNetModel.from_pretrained(
"diffusers/controlnet-canny-sdxl-1.0",
torch_dtype=torch.float16
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
controlnet=controlnet,
vae=vae,
torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()
image = np.array(image)
image = cv2.Canny(image, 100, 200)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
image = Image.fromarray(image)
images = pipe(
prompt, negative_prompt=negative_prompt, image=image, controlnet_conditioning_scale=controlnet_conditioning_scale,
).images
images[0].save(f"hug_lab.png")
```

For more details, check out the official documentation of [`StableDiffusionXLControlNetPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/controlnet_sdxl).
### Training
Our training script was built on top of the official training script that we provide [here](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_sdxl.md).
#### Training data
This checkpoint was first trained for 20,000 steps on laion 6a resized to a max minimum dimension of 384.
It was then further trained for 20,000 steps on laion 6a resized to a max minimum dimension of 1024 and
then filtered to contain only minimum 1024 images. We found the further high resolution finetuning was
necessary for image quality.
#### Compute
one 8xA100 machine
#### Batch size
Data parallel with a single gpu batch size of 8 for a total batch size of 64.
#### Hyper Parameters
Constant learning rate of 1e-4 scaled by batch size for total learning rate of 64e-4
#### Mixed precision
fp16
|
aminh/squad-bloom-3b
|
aminh
| 2023-09-19T15:24:43Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-19T15:24:36Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
vasimakram01/test_ludwig
|
vasimakram01
| 2023-09-19T15:12:53Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-19T15:11:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
vasimakram01/fine_tuning_falcon
|
vasimakram01
| 2023-09-19T15:08:50Z | 2 | 0 |
peft
|
[
"peft",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
] | null | 2023-06-09T06:35:54Z |
---
library_name: peft
base_model: tiiuae/falcon-7b
---
|
sudhangshankar/llama2_cssnlp
|
sudhangshankar
| 2023-09-19T15:05:45Z | 4 | 0 |
peft
|
[
"peft",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-07-23T02:52:36Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
sudhangshankar/llama2_qlora_cssnlp
|
sudhangshankar
| 2023-09-19T15:05:27Z | 5 | 0 |
peft
|
[
"peft",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-07-23T02:05:38Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
sudhangshankar/falcon_cssnlp
|
sudhangshankar
| 2023-09-19T15:05:10Z | 5 | 0 |
peft
|
[
"peft",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
] | null | 2023-07-28T15:58:16Z |
---
library_name: peft
base_model: tiiuae/falcon-7b
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kmayeden/ppo-LunarLander-v2
|
kmayeden
| 2023-09-19T14:50:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-19T14:50:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 238.21 +/- 18.92
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
arnavgrg/codealpaca-qlora
|
arnavgrg
| 2023-09-19T14:49:19Z | 72 | 2 |
peft
|
[
"peft",
"text-generation",
"en",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-08-12T21:24:55Z |
---
language:
- en
license: apache-2.0
library_name: peft
tags:
- text-generation
widget:
- text: 'Below is an instruction that describes a task, paired with an input that
provides further context. Write a response that appropriately completes the request.
### Instruction: Generate an SQL statement to add a row in the customers table
where the columns are name, address, and city.
### Input: name = John, address = 123 Main Street, city = Winter Park
### Response:
'
inference:
parameters:
temperature: 0.1
max_new_tokens: 1024
base_model: meta-llama/Llama-2-7b-hf
---
# QLoRA weights using Llama-2-7b for the Code Alpaca Dataset
# Fine-Tuning on Predibase
This model was fine-tuned using [Predibase](https://predibase.com/), the first low-code AI platform for engineers.
I fine-tuned base Llama-2-7b using LoRA with 4 bit quantization on a single T4 GPU, which cost approximately $3 to train
on Predibase. Try out our free Predibase trial [here](https://predibase.com/free-trial).
Dataset and training parameters are borrowed from: https://github.com/sahil280114/codealpaca,
but all of these parameters including DeepSpeed can be directly used with [Ludwig](https://ludwig.ai/latest/), the open-source
toolkit for LLMs that Predibase is built on.
Co-trained by: [Infernaught](https://huggingface.co/Infernaught)
# How To Use The Model
To use these weights:
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM
# Load base model in 4 bit
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", load_in_4bit=True)
# Wrap model with pretrained model weights
config = PeftConfig.from_pretrained("arnavgrg/codealpaca-qlora")
model = PeftModel.from_pretrained(model, "arnavgrg/codealpaca-qlora")
```
Prompt Template:
```
Below is an instruction that describes a task, paired with an input
that provides further context. Write a response that appropriately
completes the request.
### Instruction: {instruction}
### Input: {input}
### Response:
```
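Putting the two snippets together, a rough generation sketch (the tokenizer choice and decoding settings are assumptions; the instruction/input pair is the widget example from the metadata above):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

instruction = ("Generate an SQL statement to add a row in the customers table "
               "where the columns are name, address, and city.")
input_text = "name = John, address = 123 Main Street, city = Winter Park"
prompt = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    f"### Instruction: {instruction}\n\n"
    f"### Input: {input_text}\n\n"
    "### Response:\n"
)

# `model` is the PeftModel loaded in the snippet above
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.1)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```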
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
royokong/prompteol-opt-13b
|
royokong
| 2023-09-19T14:42:35Z | 8 | 0 |
peft
|
[
"peft",
"base_model:facebook/opt-13b",
"base_model:adapter:facebook/opt-13b",
"region:us"
] | null | 2023-07-27T15:05:37Z |
---
library_name: peft
base_model: facebook/opt-13b
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
royokong/prompteol-opt-6.7b
|
royokong
| 2023-09-19T14:42:24Z | 2 | 0 |
peft
|
[
"peft",
"base_model:facebook/opt-6.7b",
"base_model:adapter:facebook/opt-6.7b",
"region:us"
] | null | 2023-07-27T15:04:25Z |
---
library_name: peft
base_model: facebook/opt-6.7b
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
KaraKaraWitch/MythaKiCOTlion-v2
|
KaraKaraWitch
| 2023-09-19T14:39:11Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-19T14:13:54Z |
---
{}
---
# Model Card for MythaKiCOTlion
MythaKiCOTlion is a LoRA merge of Mythalion 13B + (SuperCOT + Kimiko v2).
## Model Details
*Q: **"Why do you do this?!"***
*A: **Was bored.***
### Model Description
- **Developed by:** KaraKaraWitch (Merge), kaiokendev (SuperCOT LoRA), nRuaif (Kimiko v2 LoRA)
- **Model type:** Decoder only
- **License:** LLaMA2 (MythaKiCOTlion), SuperCOT (MIT), Kimiko v1 (CC BY-NC-SA (?))
- **Finetuned from model:** LLaMA2
### Model Sources
- [Mythalion 13B](https://huggingface.co/PygmalionAI/mythalion-13b)
- [SuperCOT LoRA](https://huggingface.co/kaiokendev/SuperCOT-LoRA)
- [Kimiko v2 LoRA](https://huggingface.co/nRuaif/Kimiko-v2-13B)
## Uses
YMMV.
### Direct Use
Since this is a merge between Mythalion 13B, SuperCOT-LoRA, and Kimiko v2, the following instruction formats should work:
Metharme:
```
<|system|>Your system prompt goes here.<|user|>Are you alive?<|model|>
```
Alpaca:
```
### Instruction:
Your instruction or question here.
### Response:
```
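A rough transformers loading sketch (not part of the original card), using the Alpaca format above:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "KaraKaraWitch/MythaKiCOTlion-v2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "### Instruction:\nSummarize what a LoRA merge is in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```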
## Bias, Risks, and Limitations
YMMV. This is untested territory.
## Training Details
N/A. Refer to the respective LoRAs and models.
|
Defalt-404/LoRA_Falcon
|
Defalt-404
| 2023-09-19T14:37:08Z | 4 | 0 |
peft
|
[
"peft",
"base_model:tiiuae/falcon-40b",
"base_model:adapter:tiiuae/falcon-40b",
"region:us"
] | null | 2023-06-13T01:41:52Z |
---
library_name: peft
base_model: tiiuae/falcon-40b
---
|
Pajri/TripleSevenRVC
|
Pajri
| 2023-09-19T14:32:43Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-09-19T14:24:55Z |
---
license: openrail
---
# TripleSevenRVC
No one asked for it, but I made it anyway.
## What is this?
THIS is an RVC2 voice model of an airplane engine (777).
## Where 2 Download?
Go to the files tab, and download the zip ;)
|
Fduv/Expense-Tracker-Llama-V2-Instruction_Fine_Tuned
|
Fduv
| 2023-09-19T14:31:38Z | 1 | 0 |
peft
|
[
"peft",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-09-15T16:41:18Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
Amrutakhangaonkar/llama-2-finetune-txttosql
|
Amrutakhangaonkar
| 2023-09-19T14:19:31Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-17T14:39:15Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
nlp-maven/peftlora_gpt2
|
nlp-maven
| 2023-09-19T14:18:57Z | 1 | 0 |
peft
|
[
"peft",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-08-07T18:21:08Z |
---
library_name: peft
base_model: gpt2
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Lauryn122300/Clover
|
Lauryn122300
| 2023-09-19T14:15:26Z | 0 | 0 |
timm
|
[
"timm",
"text-to-image",
"en",
"dataset:fashion_mnist",
"license:unknown",
"region:us"
] |
text-to-image
| 2023-09-19T13:54:28Z |
---
license: unknown
language:
- en
metrics:
- accuracy
pipeline_tag: text-to-image
datasets:
- fashion_mnist
library_name: timm
---
|
jasonvan/llama-2-13b-text2sql
|
jasonvan
| 2023-09-19T14:13:50Z | 6 | 0 |
peft
|
[
"peft",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"region:us"
] | null | 2023-07-21T06:49:09Z |
---
library_name: peft
base_model: meta-llama/Llama-2-13b-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
msullivan/65aAocmV
|
msullivan
| 2023-09-19T14:12:35Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-09-19T14:08:48Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# msullivan/65aAocmV
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("msullivan/65aAocmV")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
amiiin/methodFTs
|
amiiin
| 2023-09-19T14:11:35Z | 3 | 0 |
peft
|
[
"peft",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:adapter:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2023-08-30T12:08:06Z |
---
library_name: peft
base_model: ybelkada/falcon-7b-sharded-bf16
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
oscorrea/scores-falcon40b-sm
|
oscorrea
| 2023-09-19T14:07:34Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"RefinedWeb",
"custom_code",
"base_model:tiiuae/falcon-40b",
"base_model:adapter:tiiuae/falcon-40b",
"region:us"
] | null | 2023-08-29T03:08:38Z |
---
library_name: peft
base_model: tiiuae/falcon-40b
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
harigovind511/GPT2-Guanaco-LoRA
|
harigovind511
| 2023-09-19T14:04:26Z | 19 | 0 |
peft
|
[
"peft",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-07-09T09:20:34Z |
---
library_name: peft
base_model: gpt2
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
tendoesch/nsql-350M_2048
|
tendoesch
| 2023-09-19T13:57:47Z | 4 | 0 |
peft
|
[
"peft",
"base_model:NumbersStation/nsql-350M",
"base_model:adapter:NumbersStation/nsql-350M",
"region:us"
] | null | 2023-08-07T12:58:16Z |
---
library_name: peft
base_model: NumbersStation/nsql-350M
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
begeri/taxi-v3_vanilla_q_learning
|
begeri
| 2023-09-19T13:52:42Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-19T13:52:40Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3_vanilla_q_learning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="begeri/taxi-v3_vanilla_q_learning", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
devdatanalytics/irishpotato
|
devdatanalytics
| 2023-09-19T13:50:46Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-09-19T13:50:40Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
anindya64/falcon7b_finetuned
|
anindya64
| 2023-09-19T13:47:05Z | 2 | 0 |
peft
|
[
"peft",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
] | null | 2023-07-23T12:25:39Z |
---
library_name: peft
base_model: tiiuae/falcon-7b
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
margaretshark/Reinforce-CartPole
|
margaretshark
| 2023-09-19T13:46:59Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-19T13:46:47Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
eatingChen8059/ms_marco_llama2_v1
|
eatingChen8059
| 2023-09-19T13:45:55Z | 14 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-19T13:45:44Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
bugdaryan/WizardCoderSQL-15B-V1.0-QLoRA
|
bugdaryan
| 2023-09-19T13:42:41Z | 7 | 0 |
peft
|
[
"peft",
"code",
"sql",
"en",
"dataset:bugdaryan/spider-natsql-wikisql-instruct",
"base_model:WizardLMTeam/WizardCoder-15B-V1.0",
"base_model:adapter:WizardLMTeam/WizardCoder-15B-V1.0",
"license:openrail",
"region:us"
] | null | 2023-09-08T21:35:28Z |
---
language:
- en
license: openrail
library_name: peft
tags:
- code
- sql
datasets:
- bugdaryan/spider-natsql-wikisql-instruct
base_model: WizardLM/WizardCoder-15B-V1.0
---
# LoRA adapters for model WizardCoderSQL
## Overview
- **Model Name**: WizardCoderSQL-15B-V1.0-QLoRA
- **Repository**: [WizardLM/WizardCoder-15B-V1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0)
- **License**: [OpenRAIL-M](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
- **Fine-Tuned Model Name**: WizardCoderSQL-15B-V1.0
- **Fine-Tuned Dataset**: [bugdaryan/spider-natsql-wikisql-instruct](https://huggingface.co/datasets/bugdaryan/spider-natsql-wikisql-instruct)
## Description
This repository contains a LoRA fine-tuned version of the Wizard Coder 15B model. The LoRA attention mechanism has been customized with specific parameters to enhance model performance in certain tasks. Additionally, the fine-tuned model has been merged with custom parameters to create a specialized model for specific use cases.
## Model Details
- **Base Model**: Wizard Coder 15B
- **Fine-Tuned Model Name**: WizardCoderSQL-15B-V1.0-QLoRA
- **Fine-Tuning Parameters**:
- QLoRA Parameters:
- LoRA Attention Dimension (lora_r): 64
- LoRA Alpha Parameter (lora_alpha): 16
- LoRA Dropout Probability (lora_dropout): 0.1
- bitsandbytes Parameters:
- Use 4-bit Precision Base Model (use_4bit): True
- Compute Dtype for 4-bit Base Models (bnb_4bit_compute_dtype): float16
- Quantization Type (bnb_4bit_quant_type): nf4
- Activate Nested Quantization (use_nested_quant): False
- TrainingArguments Parameters:
- Number of Training Epochs (num_train_epochs): 1
- Enable FP16/BF16 Training (fp16/bf16): False/True
- Batch Size per GPU for Training (per_device_train_batch_size): 48
- Batch Size per GPU for Evaluation (per_device_eval_batch_size): 4
- Gradient Accumulation Steps (gradient_accumulation_steps): 1
- Enable Gradient Checkpointing (gradient_checkpointing): True
- Maximum Gradient Norm (max_grad_norm): 0.3
- Initial Learning Rate (learning_rate): 2e-4
- Weight Decay (weight_decay): 0.001
- Optimizer (optim): paged_adamw_32bit
- Learning Rate Scheduler Type (lr_scheduler_type): cosine
- Maximum Training Steps (max_steps): -1
- Warmup Ratio (warmup_ratio): 0.03
- Group Sequences into Batches with Same Length (group_by_length): True
- Save Checkpoint Every X Update Steps (save_steps): 0
- Log Every X Update Steps (logging_steps): 25
- SFT Parameters:
- Maximum Sequence Length (max_seq_length): 500
## Usage
To use this fine-tuned LoRA model and merged parameters, you can load it using the Hugging Face Transformers library in Python. Here's an example of how to use it:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel
model_name = 'WizardLM/WizardCoder-15B-V1.0'
adapter_name = 'bugdaryan/WizardCoderSQL-15B-V1.0-QLoRA'
base_model = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto')
model = PeftModel.from_pretrained(base_model, adapter_name)
model = model.merge_and_unload()
tokenizer = AutoTokenizer.from_pretrained(model_name)
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer)
tables = "CREATE TABLE sales ( sale_id number PRIMARY KEY, product_id number, customer_id number, salesperson_id number, sale_date DATE, quantity number, FOREIGN KEY (product_id) REFERENCES products(product_id), FOREIGN KEY (customer_id) REFERENCES customers(customer_id), FOREIGN KEY (salesperson_id) REFERENCES salespeople(salesperson_id)); CREATE TABLE product_suppliers ( supplier_id number PRIMARY KEY, product_id number, supply_price number, FOREIGN KEY (product_id) REFERENCES products(product_id)); CREATE TABLE customers ( customer_id number PRIMARY KEY, name text, address text ); CREATE TABLE salespeople ( salesperson_id number PRIMARY KEY, name text, region text ); CREATE TABLE product_suppliers ( supplier_id number PRIMARY KEY, product_id number, supply_price number );"
question = 'Find the salesperson who made the most sales.'
prompt = f"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: Convert text to SQLite query: {question} {tables} ### Response:"
ans = pipe(prompt, max_new_tokens=200)
print(ans[0]['generated_text'])
```
## Disclaimer
WizardCoderSQL model follows the same license as WizardCoder. The content produced by any version of WizardCoderSQL is influenced by uncontrollable variables such as randomness, and therefore, the accuracy of the output cannot be guaranteed by this project. This project does not accept any legal liability for the content of the model output, nor does it assume responsibility for any losses incurred due to the use of associated resources and output results.
|
colinglab/BureauBERTo
|
colinglab
| 2023-09-19T13:38:23Z | 283 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"bureauberto",
"administrative language",
"italian",
"it",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-13T10:46:02Z |
---
license: afl-3.0
language:
- it
widget:
- text: >-
Gli sviluppi delle prestazioni rivalutate e del valore di <mask> sono di
seguito riportati
tags:
- bureauberto
- administrative language
- italian
---
# BureauBERTo: adapting UmBERTo to the Italian bureaucratic language
<img src="https://huggingface.co/colinglab/BureauBERTo/resolve/main/bureauberto.jpg?raw=true" width="600"/>
BureauBERTo is the first transformer-based language model adapted to the Italian Public Administration (PA) and technical-bureaucratic domains. This model results from further pre-training applied to the general-purpose Italian model UmBERTo.
## Training Corpus
BureauBERTo is trained on the Bureau Corpus, a composite corpus containing PA, banking, and insurance documents. The Bureau Corpus contains 35,293,226 sentences and approximately 1B tokens, for a total amount of 6.7 GB of plain text. The input dataset is constructed by applying the BureauBERTo tokenizer to contiguous sentences from one or more documents, using the separating special token after each sentence. The BureauBERTo vocabulary is expanded with 8,305 domain-specific tokens extracted from the Bureau Corpus.
## Training Procedure
The further pre-training is applied with an MLM objective (randomly masking 15% of the tokens) on the Bureau Corpus. The model was trained for 40 epochs, resulting in 17,400 steps with a batch size of 8K on an NVIDIA A100 GPU. We used a learning rate of 5e-5 with an Adam optimizer (β1 = 0.9, β2 = 0.98), a weight decay of 0.1, and a warm-up ratio of 0.06.
## Loading BureauBERTo
```python
from transformers import AutoModel, AutoTokenizer
model_name = "colinglab/BureauBERTo"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
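For masked-token prediction, a fill-mask pipeline can be used; the example sentence below is the widget text from the metadata:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="colinglab/BureauBERTo")
predictions = fill_mask(
    "Gli sviluppi delle prestazioni rivalutate e del valore di <mask> sono di seguito riportati"
)
for p in predictions:
    print(p["token_str"], round(p["score"], 3))
```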
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
```
@inproceedings{auriemma2023bureauberto,
title = {{BureauBERTo}: adapting {UmBERTo} to the {Italian} bureaucratic language},
shorttitle = {{BureauBERTo}},
author = {Auriemma, Serena and Madeddu, Mauro and Miliani, Martina and Bondielli, Alessandro and Passaro, Lucia C and Lenci, Alessandro},
editor = {Falchi, Fabrizio and
Giannotti, Fosca and
Monreale, Anna and
Boldrini, Chiara and
Rinzivillo, Salvatore and
Colantonio, Sara},
language = {en},
booktitle = {{Proceedings of the Italia Intelligenza Artificiale - Thematic Workshops co-located with the 3rd CINI National Lab AIIS Conference on Artificial Intelligence (Ital IA 2023)}},
address = {Pisa, Italy},
series = {{CEUR} {Workshop} {Proceedings}},
volume = {3486},
pages = {240--248},
publisher = {CEUR-WS.org},
year = {2023},
url = {https://ceur-ws.org/Vol-3486/42.pdf},
}
```
|
AHMED36/lilt-en-funsd
|
AHMED36
| 2023-09-19T13:36:48Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"lilt",
"token-classification",
"generated_from_trainer",
"dataset:funsd-layoutlmv3",
"base_model:SCUT-DLVCLab/lilt-roberta-en-base",
"base_model:finetune:SCUT-DLVCLab/lilt-roberta-en-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-19T13:17:39Z |
---
license: mit
base_model: SCUT-DLVCLab/lilt-roberta-en-base
tags:
- generated_from_trainer
datasets:
- funsd-layoutlmv3
model-index:
- name: lilt-en-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-funsd
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9565
- Answer: {'precision': 0.8948004836759371, 'recall': 0.9057527539779682, 'f1': 0.9002433090024331, 'number': 817}
- Header: {'precision': 0.6868686868686869, 'recall': 0.5714285714285714, 'f1': 0.6238532110091742, 'number': 119}
- Question: {'precision': 0.8923212709620476, 'recall': 0.9387186629526463, 'f1': 0.9149321266968325, 'number': 1077}
- Overall Precision: 0.8834
- Overall Recall: 0.9036
- Overall F1: 0.8934
- Overall Accuracy: 0.8096
## Model description
More information needed
## Intended uses & limitations
More information needed
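A minimal inference sketch (not part of the auto-generated card; the example words and their 0–1000-normalized bounding boxes are made up, standing in for real OCR output):
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

repo = "AHMED36/lilt-en-funsd"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForTokenClassification.from_pretrained(repo)

# Words and word-level boxes as they would come from an OCR engine
words = ["Name:", "John", "Date:", "2023-09-19"]
boxes = [[80, 70, 170, 95], [180, 70, 260, 95], [80, 120, 170, 145], [180, 120, 300, 145]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Expand word-level boxes to token level; special tokens get a dummy box
token_boxes = [[0, 0, 0, 0] if i is None else boxes[i] for i in encoding.word_ids()]
encoding["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**encoding).logits
labels = [model.config.id2label[p] for p in logits.argmax(-1).squeeze().tolist()]
print(list(zip(tokenizer.convert_ids_to_tokens(encoding["input_ids"][0]), labels)))
```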
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.409 | 10.53 | 200 | 0.8991 | {'precision': 0.8176855895196506, 'recall': 0.9167686658506732, 'f1': 0.8643969994229659, 'number': 817} | {'precision': 0.5094339622641509, 'recall': 0.453781512605042, 'f1': 0.48, 'number': 119} | {'precision': 0.891465677179963, 'recall': 0.8922934076137419, 'f1': 0.8918793503480278, 'number': 1077} | 0.84 | 0.8763 | 0.8578 | 0.7897 |
| 0.0485 | 21.05 | 400 | 1.1875 | {'precision': 0.8504566210045662, 'recall': 0.9118727050183598, 'f1': 0.8800945067926758, 'number': 817} | {'precision': 0.5691056910569106, 'recall': 0.5882352941176471, 'f1': 0.578512396694215, 'number': 119} | {'precision': 0.8970315398886828, 'recall': 0.8978644382544104, 'f1': 0.897447795823666, 'number': 1077} | 0.8580 | 0.8852 | 0.8714 | 0.7935 |
| 0.0139 | 31.58 | 600 | 1.5032 | {'precision': 0.8455377574370709, 'recall': 0.9045287637698899, 'f1': 0.8740390301596689, 'number': 817} | {'precision': 0.6206896551724138, 'recall': 0.6050420168067226, 'f1': 0.6127659574468085, 'number': 119} | {'precision': 0.9057142857142857, 'recall': 0.883008356545961, 'f1': 0.8942172073342736, 'number': 1077} | 0.8637 | 0.8753 | 0.8695 | 0.7913 |
| 0.0083 | 42.11 | 800 | 1.4968 | {'precision': 0.8316939890710382, 'recall': 0.9314565483476133, 'f1': 0.8787528868360277, 'number': 817} | {'precision': 0.6363636363636364, 'recall': 0.47058823529411764, 'f1': 0.5410628019323671, 'number': 119} | {'precision': 0.8928909952606635, 'recall': 0.8746518105849582, 'f1': 0.8836772983114447, 'number': 1077} | 0.8547 | 0.8738 | 0.8642 | 0.8017 |
| 0.0058 | 52.63 | 1000 | 1.7837 | {'precision': 0.8385300668151447, 'recall': 0.9216646266829865, 'f1': 0.8781341107871721, 'number': 817} | {'precision': 0.6138613861386139, 'recall': 0.5210084033613446, 'f1': 0.5636363636363637, 'number': 119} | {'precision': 0.8972667295004713, 'recall': 0.8839368616527391, 'f1': 0.8905519176800748, 'number': 1077} | 0.8578 | 0.8778 | 0.8677 | 0.7914 |
| 0.008 | 63.16 | 1200 | 1.8600 | {'precision': 0.8239130434782609, 'recall': 0.9277845777233782, 'f1': 0.8727691421991941, 'number': 817} | {'precision': 0.5865384615384616, 'recall': 0.5126050420168067, 'f1': 0.5470852017937219, 'number': 119} | {'precision': 0.9037735849056604, 'recall': 0.8895078922934077, 'f1': 0.8965839962564343, 'number': 1077} | 0.8527 | 0.8828 | 0.8675 | 0.8009 |
| 0.0037 | 73.68 | 1400 | 2.8372 | {'precision': 0.8821428571428571, 'recall': 0.9069767441860465, 'f1': 0.8943874471937237, 'number': 817} | {'precision': 0.5966386554621849, 'recall': 0.5966386554621849, 'f1': 0.5966386554621849, 'number': 119} | {'precision': 0.8961748633879781, 'recall': 0.9136490250696379, 'f1': 0.9048275862068965, 'number': 1077} | 0.8731 | 0.8922 | 0.8826 | 0.7928 |
| 0.004 | 84.21 | 1600 | 2.8378 | {'precision': 0.881578947368421, 'recall': 0.9020807833537332, 'f1': 0.8917120387174834, 'number': 817} | {'precision': 0.631578947368421, 'recall': 0.6050420168067226, 'f1': 0.6180257510729613, 'number': 119} | {'precision': 0.891989198919892, 'recall': 0.9201485608170845, 'f1': 0.9058500914076782, 'number': 1077} | 0.8734 | 0.8942 | 0.8837 | 0.8079 |
| 0.0018 | 94.74 | 1800 | 3.0272 | {'precision': 0.8742655699177438, 'recall': 0.9106487148102815, 'f1': 0.8920863309352519, 'number': 817} | {'precision': 0.6759259259259259, 'recall': 0.6134453781512605, 'f1': 0.6431718061674008, 'number': 119} | {'precision': 0.89937106918239, 'recall': 0.9294336118848654, 'f1': 0.9141552511415526, 'number': 1077} | 0.8774 | 0.9031 | 0.8901 | 0.7992 |
| 0.0008 | 105.26 | 2000 | 2.9565 | {'precision': 0.8948004836759371, 'recall': 0.9057527539779682, 'f1': 0.9002433090024331, 'number': 817} | {'precision': 0.6868686868686869, 'recall': 0.5714285714285714, 'f1': 0.6238532110091742, 'number': 119} | {'precision': 0.8923212709620476, 'recall': 0.9387186629526463, 'f1': 0.9149321266968325, 'number': 1077} | 0.8834 | 0.9036 | 0.8934 | 0.8096 |
| 0.0008 | 115.79 | 2200 | 3.1429 | {'precision': 0.8411111111111111, 'recall': 0.9265605875152999, 'f1': 0.881770529994176, 'number': 817} | {'precision': 0.6666666666666666, 'recall': 0.5546218487394958, 'f1': 0.6055045871559633, 'number': 119} | {'precision': 0.9147141518275539, 'recall': 0.9062209842154132, 'f1': 0.9104477611940299, 'number': 1077} | 0.8708 | 0.8937 | 0.8821 | 0.7970 |
| 0.0005 | 126.32 | 2400 | 3.0269 | {'precision': 0.8617511520737328, 'recall': 0.9155446756425949, 'f1': 0.8878338278931751, 'number': 817} | {'precision': 0.6952380952380952, 'recall': 0.6134453781512605, 'f1': 0.6517857142857143, 'number': 119} | {'precision': 0.906871609403255, 'recall': 0.9312906220984215, 'f1': 0.9189189189189189, 'number': 1077} | 0.8773 | 0.9061 | 0.8915 | 0.7994 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dhanushreddy29/EpicRealismInpaint
|
dhanushreddy29
| 2023-09-19T13:22:12Z | 31 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-19T08:08:06Z |
---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
|
EdwardYu/llama-2-7b-MedQuAD
|
EdwardYu
| 2023-09-19T13:18:35Z | 15 | 1 |
peft
|
[
"peft",
"pytorch",
"llama-2",
"text-generation",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-07-27T14:20:20Z |
---
license: apache-2.0
library_name: peft
tags:
- pytorch
- llama-2
pipeline_tag: text-generation
base_model: meta-llama/Llama-2-7b-chat-hf
---
This model is fine-tuned on [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) using [MedQuAD](https://github.com/abachaa/MedQuAD) (Medical Question Answering Dataset).
If you are interested in how to fine-tune Llama-2 or other LLMs, see the [repo](https://github.com/yhyu/fine-tune-llm).
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

base_model = "meta-llama/Llama-2-7b-chat-hf"
adapter = 'EdwardYu/llama-2-7b-MedQuAD'
tokenizer = AutoTokenizer.from_pretrained(adapter)
model = AutoModelForCausalLM.from_pretrained(
base_model,
load_in_4bit=True,
torch_dtype=torch.bfloat16,
device_map="auto",
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type='nf4'
),
)
model = PeftModel.from_pretrained(model, adapter)
question = 'What are the side effects or risks of Glucagon?'
inputs = tokenizer(question, return_tensors="pt").to("cuda")
outputs = model.generate(inputs=inputs.input_ids, max_length=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
To run model inference faster, you can load in 16-bits without 4-bit quantization.
```python
model = AutoModelForCausalLM.from_pretrained(
base_model,
torch_dtype=torch.bfloat16,
device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)
```
|
lifeofcoding/mastermax-7b-lora-guanaco
|
lifeofcoding
| 2023-09-19T13:17:47Z | 2 | 0 |
peft
|
[
"peft",
"base_model:lifeofcoding/mastermax-7b",
"base_model:adapter:lifeofcoding/mastermax-7b",
"region:us"
] | null | 2023-06-26T02:52:10Z |
---
library_name: peft
base_model: lifeofcoding/mastermax-7b
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
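For reference, the listed values correspond roughly to the following `BitsAndBytesConfig` (a sketch inferred from the list above, not taken from the original training script):
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute, as listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```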
### Framework versions
- PEFT 0.4.0.dev0
|
Venki-ds/outputs
|
Venki-ds
| 2023-09-19T13:12:11Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-19T13:11:45Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
minstrelzxm/llama2-qlora-finetunined-prompt-test
|
minstrelzxm
| 2023-09-19T13:10:20Z | 3 | 0 |
peft
|
[
"peft",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-09-13T11:53:50Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
lapki/Llama-2-7b-panorama-QLoRA
|
lapki
| 2023-09-19T13:01:53Z | 7 | 1 |
peft
|
[
"peft",
"llama",
"llama-2",
"news",
"text-generation",
"ru",
"dataset:its5Q/panorama",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] |
text-generation
| 2023-07-28T13:24:15Z |
---
language:
- ru
library_name: peft
tags:
- llama
- llama-2
- news
datasets:
- its5Q/panorama
pipeline_tag: text-generation
base_model: meta-llama/Llama-2-7b-hf
---
# Llama 2 7B, fine-tuned on Panorama media
This repo contains the QLoRA adapter.
Prompt:
```
Write a hypothetical news story based on the given headline
### Title:
{prompt}
Text:
```
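The card does not show how to load the adapter; a minimal sketch (assuming the standard `peft`/`transformers` 4-bit loading pattern, with a placeholder headline) could look like this:
```python
import torch
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

base_model = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    ),
)
model = PeftModel.from_pretrained(model, "lapki/Llama-2-7b-panorama-QLoRA")

# Fill the prompt template from the card with a (placeholder) headline.
template = "Write a hypothetical news story based on the given headline\n### Title:\n{prompt}\nText:\n"
inputs = tokenizer(template.format(prompt="Placeholder headline"), return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```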
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
### Additional information
Thanks [its5Q](https://huggingface.co/its5Q) for dataset and help
|
Kendong/ad_dog
|
Kendong
| 2023-09-19T12:59:53Z | 4 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-19T12:48:16Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Kendong/ad_dog
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
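A minimal inference sketch with `diffusers` (not part of the original card; it assumes the LoRA weights are stored in the standard layout):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Kendong/ad_dog")  # assumes standard pytorch_lora_weights in the repo

image = pipe("a photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```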
|
checkiejan/prefix-paraphase-50-19-auto
|
checkiejan
| 2023-09-19T12:55:49Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-19T12:55:46Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
AbdelKarim95/Reinforce2
|
AbdelKarim95
| 2023-09-19T12:54:30Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-19T12:54:25Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 22.00 +/- 17.61
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
goendalf666/phi-1_5-finetuned-gsm8k-test
|
goendalf666
| 2023-09-19T12:42:14Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mixformer-sequential",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-09-19T12:10:16Z |
---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5-finetuned-gsm8k-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-gsm8k-test
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0.dev20230829+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
|
omarelsayeed/Classfier_V0
|
omarelsayeed
| 2023-09-19T12:39:13Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"generated_from_keras_callback",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-09-19T12:34:00Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: Classfier_V0
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Classfier_V0
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
jiwon65/whisper-small_korean-zeroth
|
jiwon65
| 2023-09-19T12:25:02Z | 44 | 2 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-19T10:42:19Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-korr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-korr
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3466
- Wer: 19.9610
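No usage example is given; a minimal transcription sketch with the `transformers` ASR pipeline (the audio path is a placeholder) might look like this:
```python
from transformers import pipeline

# "korean_sample.wav" is a placeholder path to a local audio file.
asr = pipeline("automatic-speech-recognition", model="jiwon65/whisper-small_korean-zeroth")
print(asr("korean_sample.wav")["text"])
```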
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3119 | 0.69 | 100 | 0.3334 | 20.6884 |
| 0.1223 | 1.39 | 200 | 0.3179 | 21.4336 |
| 0.0757 | 2.08 | 300 | 0.3234 | 20.3158 |
| 0.0349 | 2.77 | 400 | 0.3329 | 20.8481 |
| 0.0172 | 3.47 | 500 | 0.3354 | 20.1916 |
| 0.0059 | 4.16 | 600 | 0.3357 | 19.7480 |
| 0.0057 | 4.85 | 700 | 0.3396 | 19.9965 |
| 0.0046 | 5.55 | 800 | 0.3417 | 19.7658 |
| 0.0025 | 6.24 | 900 | 0.3461 | 20.0497 |
| 0.0029 | 6.93 | 1000 | 0.3466 | 19.9610 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ldos/text_shortening_model_v42
|
ldos
| 2023-09-19T12:20:35Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-xsum",
"base_model:finetune:facebook/bart-large-xsum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-19T07:49:46Z |
---
license: mit
base_model: facebook/bart-large-xsum
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: text_shortening_model_v42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_shortening_model_v42
This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2972
- Rouge1: 0.4588
- Rouge2: 0.2356
- Rougel: 0.4162
- Rougelsum: 0.4165
- Bert precision: 0.8664
- Bert recall: 0.8655
- Average word count: 8.5616
- Max word count: 16
- Min word count: 4
- Average token count: 16.1051
- % shortened texts with length > 12: 4.8048
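The card does not document the expected input format; a minimal sketch (assuming plain text in and a shortened text out via the standard seq2seq pipeline) could be:
```python
from transformers import pipeline

shortener = pipeline("summarization", model="ldos/text_shortening_model_v42")
text = "This is a placeholder sentence that is longer than it needs to be and should be shortened."
print(shortener(text, max_length=20, min_length=4)[0]["summary_text"])
```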
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------------:|:-----------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 1.1087 | 1.0 | 73 | 2.0307 | 0.4468 | 0.2283 | 0.3951 | 0.394 | 0.8582 | 0.8635 | 8.5435 | 15 | 4 | 14.6997 | 3.6036 |
| 0.6451 | 2.0 | 146 | 2.0108 | 0.4629 | 0.2419 | 0.4159 | 0.4142 | 0.8724 | 0.8668 | 8.1081 | 17 | 5 | 14.7718 | 4.2042 |
| 0.4594 | 3.0 | 219 | 1.9499 | 0.4267 | 0.229 | 0.3887 | 0.3882 | 0.8579 | 0.8575 | 8.3093 | 16 | 5 | 13.976 | 1.8018 |
| 0.4681 | 4.0 | 292 | 2.0819 | 0.4127 | 0.2049 | 0.3734 | 0.372 | 0.8549 | 0.8543 | 8.3123 | 17 | 4 | 15.3514 | 3.6036 |
| 0.334 | 5.0 | 365 | 2.1413 | 0.4302 | 0.2184 | 0.3885 | 0.3886 | 0.857 | 0.8595 | 8.8589 | 15 | 4 | 14.5285 | 3.6036 |
| 0.296 | 6.0 | 438 | 2.0881 | 0.4716 | 0.2349 | 0.4216 | 0.4217 | 0.8684 | 0.8706 | 8.7928 | 16 | 5 | 15.0841 | 6.006 |
| 0.2588 | 7.0 | 511 | 2.2671 | 0.4517 | 0.2262 | 0.4085 | 0.4079 | 0.8654 | 0.8632 | 8.4985 | 14 | 4 | 14.8258 | 3.3033 |
| 0.1883 | 8.0 | 584 | 2.4313 | 0.4572 | 0.2369 | 0.409 | 0.4099 | 0.8646 | 0.867 | 8.7207 | 16 | 5 | 14.2192 | 4.2042 |
| 0.1822 | 9.0 | 657 | 2.3293 | 0.4413 | 0.2154 | 0.3943 | 0.3936 | 0.857 | 0.8619 | 8.8318 | 16 | 4 | 16.2973 | 6.006 |
| 0.1298 | 10.0 | 730 | 2.4037 | 0.4614 | 0.2303 | 0.4145 | 0.4144 | 0.8668 | 0.866 | 8.4715 | 18 | 4 | 15.8348 | 6.3063 |
| 0.1413 | 11.0 | 803 | 2.7031 | 0.4533 | 0.2337 | 0.4099 | 0.4095 | 0.8656 | 0.8637 | 8.2943 | 16 | 4 | 15.9009 | 4.2042 |
| 0.0786 | 12.0 | 876 | 2.5766 | 0.441 | 0.2218 | 0.3982 | 0.3982 | 0.8609 | 0.8613 | 8.5916 | 16 | 4 | 15.8228 | 3.6036 |
| 0.0662 | 13.0 | 949 | 2.8013 | 0.4408 | 0.2177 | 0.3989 | 0.3984 | 0.8573 | 0.8596 | 8.5946 | 15 | 4 | 16.4204 | 4.2042 |
| 0.0635 | 14.0 | 1022 | 2.8125 | 0.44 | 0.2265 | 0.3974 | 0.3975 | 0.8591 | 0.8618 | 8.8919 | 17 | 4 | 16.7898 | 4.5045 |
| 0.0648 | 15.0 | 1095 | 2.7665 | 0.4642 | 0.2371 | 0.42 | 0.4197 | 0.8662 | 0.8675 | 8.7477 | 16 | 4 | 15.6186 | 4.8048 |
| 0.0446 | 16.0 | 1168 | 3.1244 | 0.4599 | 0.2327 | 0.4211 | 0.4205 | 0.8656 | 0.8667 | 8.6396 | 16 | 4 | 16.1351 | 5.7057 |
| 0.0475 | 17.0 | 1241 | 3.3107 | 0.4626 | 0.24 | 0.422 | 0.4221 | 0.8673 | 0.8696 | 8.7027 | 16 | 5 | 16.3934 | 5.4054 |
| 0.0332 | 18.0 | 1314 | 3.1808 | 0.465 | 0.2413 | 0.4231 | 0.4231 | 0.8672 | 0.867 | 8.5315 | 16 | 5 | 16.048 | 5.1051 |
| 0.0252 | 19.0 | 1387 | 3.2446 | 0.4587 | 0.2315 | 0.4142 | 0.4143 | 0.866 | 0.8655 | 8.5586 | 16 | 4 | 16.012 | 4.8048 |
| 0.0294 | 20.0 | 1460 | 3.2972 | 0.4588 | 0.2356 | 0.4162 | 0.4165 | 0.8664 | 0.8655 | 8.5616 | 16 | 4 | 16.1051 | 4.8048 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
lyimo/potato
|
lyimo
| 2023-09-19T12:11:41Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-09-19T12:11:34Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
muzammil-eds/Llama-2-13b-chat-hf
|
muzammil-eds
| 2023-09-19T12:08:33Z | 4 | 0 |
peft
|
[
"peft",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-13b-chat-hf",
"region:us"
] | null | 2023-08-28T07:19:40Z |
---
library_name: peft
base_model: meta-llama/Llama-2-13b-chat-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
Gustrd/open-llama-13b-cabra-gtpq-lora-adapter
|
Gustrd
| 2023-09-19T12:08:31Z | 3 | 0 |
peft
|
[
"peft",
"base_model:Gustrd/open-llama-13b-4bit-128g-GPTQ",
"base_model:adapter:Gustrd/open-llama-13b-4bit-128g-GPTQ",
"region:us"
] | null | 2023-07-17T21:03:28Z |
---
library_name: peft
base_model: Gustrd/open-llama-13b-4bit-128g-GPTQ
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Gustrd/mpt-7b-lora-cabra3-adapter
|
Gustrd
| 2023-09-19T12:08:13Z | 9 | 1 |
peft
|
[
"peft",
"pt",
"dataset:dominguesm/wikipedia-ptbr-20230601",
"base_model:HachiML/mpt-7b-instruct-for-peft",
"base_model:adapter:HachiML/mpt-7b-instruct-for-peft",
"license:cc-by-3.0",
"region:us"
] | null | 2023-08-21T11:41:19Z |
---
language:
- pt
license: cc-by-3.0
library_name: peft
datasets:
- dominguesm/wikipedia-ptbr-20230601
base_model: HachiML/mpt-7b-instruct-for-peft
---
## Cabra: A Portuguese instruction-finetuned model for commercial use
LoRA adapter created with the procedures detailed at the GitHub repository: https://github.com/gustrd/cabra .
Training was done for 1 epoch on a P100 at Kaggle, taking around 11 hours, on a random slice of the dataset.
This LoRA adapter was created following the procedure:
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
Gustrd/mpt-7b-lora-cabra-adapter
|
Gustrd
| 2023-09-19T12:07:57Z | 8 | 0 |
peft
|
[
"peft",
"pt",
"dataset:Gustrd/dolly-15k-hippo-translated-pt-12k",
"base_model:HachiML/mpt-7b-instruct-for-peft",
"base_model:adapter:HachiML/mpt-7b-instruct-for-peft",
"license:cc-by-3.0",
"region:us"
] | null | 2023-08-17T18:41:06Z |
---
language:
- pt
license: cc-by-3.0
library_name: peft
datasets:
- Gustrd/dolly-15k-hippo-translated-pt-12k
base_model: HachiML/mpt-7b-instruct-for-peft
---
### Cabra: A Portuguese instruction-finetuned Open-LLaMA
LoRA adapter created with the procedures detailed at the GitHub repository: https://github.com/gustrd/cabra .
Training was done for 2 epochs using two T4s at Kaggle.
This LoRA adapter was created following the procedure:
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
Xilabs/llama-2-7B-Guanaco-QLoRA
|
Xilabs
| 2023-09-19T12:00:55Z | 4 | 0 |
peft
|
[
"peft",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-07-23T17:46:00Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Undi95/Storytelling-v1-13B-lora
|
Undi95
| 2023-09-19T11:57:46Z | 15 | 6 |
peft
|
[
"peft",
"base_model:TheBloke/Llama-2-13B-fp16",
"base_model:adapter:TheBloke/Llama-2-13B-fp16",
"license:other",
"region:us"
] | null | 2023-09-07T23:39:30Z |
---
license: other
library_name: peft
base_model: TheBloke/Llama-2-13B-fp16
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
I'm NOT the author of this work.
I cite anon:
```shell
Well, here it is. Storytelling Qlora. Trained on base llama2 13B but works flawlessly on other 13Bs. Idk about other sizes.
25MB of nsfw books, 60MB of sfwish ones.
No special formatting other than *** between chapters and ⁂ between books. Takes some text to get going but once you have some context filled, it feels way better for prose than raw llama or instruct models, imho.
Do whatever you want with it, I can't be bothered to maintain a HF page. WTFPL.
It's just shit from nai's archive
```
Credit to "anon49"
|
matttvpl/model_v1
|
matttvpl
| 2023-09-19T11:56:58Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:poquad",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-04-25T15:23:44Z |
---
tags:
- generated_from_trainer
datasets:
- poquad
model-index:
- name: model_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_v1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 334 | 1.4651 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Ioana23/distilbert-base-uncased-finetuned-imdb
|
Ioana23
| 2023-09-19T11:47:28Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-19T11:33:04Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.10.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
benji1a/openllama-3b-pelt-squad_v2
|
benji1a
| 2023-09-19T11:43:19Z | 1 | 0 |
peft
|
[
"peft",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:adapter:openlm-research/open_llama_3b_v2",
"region:us"
] | null | 2023-08-24T17:37:45Z |
---
library_name: peft
base_model: openlm-research/open_llama_3b_v2
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
derguene/saytutension-xlmroberta-v1
|
derguene
| 2023-09-19T11:32:38Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-09-19T11:31:47Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# derguene/saytutension-xlmroberta-v1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("derguene/saytutension-xlmroberta-v1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
NbAiLab/nb-alpaca-lora-7b
|
NbAiLab
| 2023-09-19T11:32:00Z | 10 | 5 |
peft
|
[
"peft",
"safetensors",
"text-generation",
"no",
"nb",
"dataset:NbAiLab/norwegian-alpaca",
"license:openrail",
"region:us"
] |
text-generation
| 2023-03-27T11:28:50Z |
---
language:
- 'no'
- nb
license: openrail
library_name: peft
datasets:
- NbAiLab/norwegian-alpaca
pipeline_tag: text-generation
base_model: decapoda-research/llama-7b-hf
---
# NB-Alpaca-LoRA 7B
This is a Norwegian adapter generated by fine-tuning LLaMA-7B on a [Norwegian Alpaca](https://huggingface.co/datasets/NbAiLab/norwegian-alpaca) dataset.
## Usage
```python
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM
base_model = "decapoda-research/llama-7b-hf"
tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
base_model,
load_in_8bit=True,
device_map="auto",
)
model = PeftModel.from_pretrained(model, "NbAiLab/nb-alpaca-lora-7b")
```
For generation, the prompt still needs the English template:
```python
from transformers import pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
instruction = "Skriv en e-post der du ønsker velkommen til en ny medarbeider ved navn Svein"
pipe(f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
""")
# Kjære Svein,
#
# Velkommen til vårt team! Vi er så glade for å ha deg med oss. Vi ser frem til å hjelpe deg med å nå dine mål og oppnå dine drømmer.
#
# Vi er alltid tilgjengelige hvis du har noen spørsmål eller ønsker å diskutere noen av våre prosjekter.
#
# Vi ser frem til å jobbe sammen med deg!
#
# Med vennlig
```
## Data
The dataset is a translation to Norwegian Bokmål of [alpaca_data_cleaned.json](https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_cleaned.json) (a clean version of the [Alpaca dataset made at Stanford](https://huggingface.co/datasets/tatsu-lab/alpaca)) using OpenAI's `gpt-3.5-turbo` model. We translated using a full-sample prompt instead of per-string prompts, which resulted in more coherent `(instruction, input, output)` tuples and cost around $60.
**This dataset cannot be used to create models that compete in any way with OpenAI.**
## Finetuning
To fine-tune the LLaMA model we used the code available in [Alpaca Lora](https://github.com/tloen/alpaca-lora), which provides code to fine-tune the LLaMA model using PEFT from Hugging Face. We ran fine-tuning for 3 epochs with a sequence length of 512 and no gradient accumulation, on top of LLaMA-7B, for 12 hours on a single A6000 with 48GB VRAM.
## Example outputs
Stanford Alpaca:
```
Instruction: Kom med en kreativ unnskyldning for å si at jeg ikke trenger å gå på festen.
Response: I'm sorry, but I don't need to go to the party.
```
NB-Alpaca-LoRA-7B:
```
Instruction: Kom med en kreativ unnskyldning for å si at jeg ikke trenger å gå på festen.
Response: Jeg beklager at jeg ikke kan gå på festen, men jeg tror at det er bedre at jeg ikke gjør det.
```
[NB-GTP-J-6B-Alpaca](https://huggingface.co/NbAiLab/nb-gpt-j-6B-alpaca):
```
Instruction: Kom med en kreativ unnskyldning for å si at jeg ikke trenger å gå på festen.
Response: Jeg beklager, men jeg har en avtale jeg ikke kan avlyse.
```
Stanford Alpaca:
```
Instruction: Hvordan kan jeg starte en karriere som dataforsker? Skriv i listeform.
Response:
Starting a career as a data scientist requires a combination of education, experience, and skills.
1. Obtain a Bachelor's or Master's degree in computer science, statistics, mathematics, or a related field.
2. Gain practical experience through internships or part-time jobs.
3. Learn programming languages such as Python, R, and SQL.
4. Develop strong analytical and problem-solving skills.
5. Build a portfolio of projects to showcase your work.
6. Network with other data scientists to learn
```
NB-Alpaca-LoRA-7B:
```
Instruction: Hvordan kan jeg starte en karriere som dataforsker? Skriv i listeform.
Response:
1. Søk etter en universitetsgrad i datavitenskap, matematikk, informatikk eller en relatert fag.
2. Søk etter en jobb som dataforsker eller dataanalytiker.
3. Utvikle dine ferdigheter i datavitenskap, matematikk, informatikk eller en relatert fag.
4. Utvikle dine ferdigheter innen dataanalyse.
```
[NB-GTP-J-6B-Alpaca](https://huggingface.co/NbAiLab/nb-gpt-j-6B-alpaca):
```
Instruction: Hvordan kan jeg starte en karriere som dataforsker? Skriv i listeform.
Response:
1. Lær programmeringsspråk som Python og Java.
2. Skaff deg en grad i informatikk, datavitenskap eller relaterte felt.
3. Bli sertifisert innen datamaskinsyn, maskinlæring eller andre relaterte felt.
4. Bygg nettverk med andre dataforskere.
5. Delta på konferanser og kurs for å holde deg oppdatert på de siste utviklingene innen feltet.
```
You can test it using the eval notebook [here](https://colab.research.google.com/github/22-hours/cabrita/blob/main/notebooks/cabrita-lora.ipynb).
## References
- [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
- [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Norwegian Alpaca](https://huggingface.co/datasets/NbAiLab/norwegian-alpaca)
- [Alpaca LoRA](https://github.com/tloen/alpaca-lora)
- [ChatGPT](https://openai.com/blog/chatgpt)
- [Hugging Face](https://huggingface.co/)
## Hardware Requirements
For training we have used an A6000 48GB VRAM Nvidia GPU. For eval, you can use a T4.
|
radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram
|
radiogroup-crits
| 2023-09-19T11:21:43Z | 89 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"it",
"mozilla-foundation/common_voice_8_0",
"speech",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-29T08:31:46Z |
---
language:
- it
license: apache-2.0
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
- it
- mozilla-foundation/common_voice_8_0
- speech
- wav2vec2
model-index:
- name: XLS-R Wav2Vec2 Italian by radiogroup crits
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0 italian
type: mozilla-foundation/common_voice_8_0
args: it
metrics:
- name: Test WER
type: wer
value: 9.04
- name: Test CER
type: cer
value: 2.2
- name: Test WER (+LM)
type: wer
value: 6.24
- name: Test CER (+LM)
type: cer
value: 1.67
---
# XLS-R-1B-ITALIAN-DOC4LM-5GRAM
## Fine-tuned XLS-R 1B model for speech recognition in Italian
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on Italian using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [Multilingual TEDx](http://www.openslr.org/100), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).
When using this model, make sure that your speech input is sampled at 16kHz.
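A minimal transcription sketch (not from the original card) that resamples the input to 16 kHz before decoding; the audio path is a placeholder:
```python
from datasets import Dataset, Audio
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram",
)

# Resample the (placeholder) audio file to 16 kHz, as required by the model.
ds = Dataset.from_dict({"audio": ["audio.wav"]}).cast_column("audio", Audio(sampling_rate=16_000))
sample = ds[0]["audio"]
print(asr({"raw": sample["array"], "sampling_rate": sample["sampling_rate"]})["text"])
```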
## Language model information
Our language model was generated using a dataset of Italian Wikipedia articles and manual transcriptions of radio news and television programs.
## Download CommonVoice8.0 dataset for italian language
```python
from datasets import load_dataset
dataset = load_dataset("mozilla-foundation/common_voice_8_0", "it", use_auth_token=True)
```
## Evaluation Commands
To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`:
```bash
python eval.py --model_id radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram --dataset mozilla-foundation/common_voice_8_0 --config it --split test --log_outputs --greedy
mv log_mozilla-foundation_common_voice_8_0_it_test_predictions.txt log_mozilla-foundation_common_voice_8_0_it_test_predictions_greedy.txt
mv log_mozilla-foundation_common_voice_8_0_it_test_targets.txt log_mozilla-foundation_common_voice_8_0_it_test_targets_greedy.txt
mv mozilla-foundation_common_voice_8_0_it_test_eval_results.txt mozilla-foundation_common_voice_8_0_it_test_eval_results_greedy.txt
python eval.py --model_id radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram --dataset mozilla-foundation/common_voice_8_0 --config it --split test --log_outputs
mv log_mozilla-foundation_common_voice_8_0_it_test_predictions.txt log_mozilla-foundation_common_voice_8_0_it_test_predictions_lm.txt
mv log_mozilla-foundation_common_voice_8_0_it_test_targets.txt log_mozilla-foundation_common_voice_8_0_it_test_targets_lm.txt
mv mozilla-foundation_common_voice_8_0_it_test_eval_results.txt mozilla-foundation_common_voice_8_0_it_test_eval_results_lm.txt
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{crits2022wav2vec2-xls-r-1b-italian-doc4lm-5gram,
title={XLS-R Wav2Vec2 Italian by radiogroup crits},
author={Teraoni Prioletti Raffaele, Casagranda Paolo and Russo Francesco},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram}},
year={2022}
}
```
|
0sunfire0/Llama_7B_Test08
|
0sunfire0
| 2023-09-19T11:17:26Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-04T10:28:18Z |
---
library_name: peft
base_model: decapoda-research/llama-7b-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
ANKITA8/GENA1
|
ANKITA8
| 2023-09-19T11:17:19Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-09-19T11:17:19Z |
---
license: cc-by-nc-sa-4.0
---
|
Helsinki-NLP/opus-tatoeba-en-ja
|
Helsinki-NLP
| 2023-09-19T11:15:18Z | 4,272 | 13 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- ja
tags:
- translation
license: apache-2.0
---
### en-ja
* source group: English
* target group: Japanese
* OPUS readme: [eng-jpn](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-jpn/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): jpn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus+bt-2021-04-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.zip)
* test set translations: [opus+bt-2021-04-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.test.txt)
* test set scores: [opus+bt-2021-04-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| Tatoeba-test.eng-jpn | 15.2 | 0.258 | 10000 | 99206 | 1.000 |
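A minimal usage sketch with the `transformers` translation pipeline (not part of the original card; the example sentence is illustrative):
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-tatoeba-en-ja")
print(translator("My name is Sarah and I live in London.")[0]["translation_text"])
```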
### System Info:
- hf_name: en-ja
- source_languages: eng
- target_languages: jpn
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-jpn/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ja']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('Japanese', {'jpn', 'jpn_Latn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hira', 'jpn_Hang', 'jpn_Bopo', 'jpn_Hani'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-jpn
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.test.txt
- src_alpha3: eng
- tgt_alpha3: jpn
- chrF2_score: 0.258
- bleu: 15.2
- src_name: English
- tgt_name: Japanese
- train_date: 2021-04-10 00:00:00
- src_alpha2: en
- tgt_alpha2: ja
- prefer_old: False
- short_pair: en-ja
- helsinki_git_sha: 70b0a9621f054ef1d8ea81f7d55595d7f64d19ff
- transformers_git_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
- port_machine: LM0-400-22516.local
- port_time: 2021-10-12-11:13
|
SHENMU007/neunit_BASE_V9.5.13
|
SHENMU007
| 2023-09-19T11:14:38Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-09-19T09:49:48Z |
---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
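No usage example is provided; a sketch following the standard SpeechT5 inference pattern (assuming the checkpoint ships with a processor; the speaker-embedding dataset and sample text are illustrative only):
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "SHENMU007/neunit_BASE_V9.5.13"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Illustrative speaker embedding taken from a public x-vector dataset.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="你好,欢迎使用语音合成模型。", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```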
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ThepineS/bert-finetuned-ner
|
ThepineS
| 2023-09-19T11:09:23Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-19T08:09:38Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9298303409652446
- name: Recall
type: recall
value: 0.9500168293503871
- name: F1
type: f1
value: 0.9398152001997837
- name: Accuracy
type: accuracy
value: 0.9862836286572084
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0586
- Precision: 0.9298
- Recall: 0.9500
- F1: 0.9398
- Accuracy: 0.9863
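A minimal usage sketch with the token-classification pipeline (not part of the original card; the sample sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ThepineS/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face Inc. is a company based in New York City."))
```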
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0768 | 1.0 | 1756 | 0.0700 | 0.9136 | 0.9376 | 0.9254 | 0.9808 |
| 0.0415 | 2.0 | 3512 | 0.0572 | 0.9239 | 0.9482 | 0.9359 | 0.9856 |
| 0.0266 | 3.0 | 5268 | 0.0586 | 0.9298 | 0.9500 | 0.9398 | 0.9863 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
dbecker1/test_lora_mdl3
|
dbecker1
| 2023-09-19T11:08:35Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-19T10:30:03Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
dataset: None
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - dbecker1/test_lora_mdl3
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on the None dataset. You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
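A minimal inference sketch (not part of the original card) that loads the SDXL base model with the fp16-fix VAE mentioned above and applies these LoRA weights; the prompt is a placeholder:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("dbecker1/test_lora_mdl3")  # assumes standard LoRA weight layout

image = pipe("a placeholder prompt in the style of the fine-tuning data", num_inference_steps=30).images[0]
image.save("sample.png")
```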
|
aaaaaaaqdqd/summary_tech
|
aaaaaaaqdqd
| 2023-09-19T11:07:11Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-19T09:09:58Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Phoenix10062002/llama2-faq-chatbot
|
Phoenix10062002
| 2023-09-19T11:06:33Z | 5 | 0 |
peft
|
[
"peft",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2023-08-04T14:36:15Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
MattStammers/appo-mujoco-Standup
|
MattStammers
| 2023-09-19T11:06:26Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-19T11:06:21Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: mujoco_standup
type: mujoco_standup
metrics:
- type: mean_reward
value: 160842.81 +/- 49335.32
name: mean_reward
verified: false
---
An **APPO** model trained on the **mujoco_standup** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r MattStammers/appo-mujoco-Standup
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.mujoco.enjoy_mujoco --algo=APPO --env=mujoco_standup --train_dir=./train_dir --experiment=appo-mujoco-Standup
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.mujoco.train_mujoco --algo=APPO --env=mujoco_standup --train_dir=./train_dir --experiment=appo-mujoco-Standup --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
mhenrichsen/context-aware-splitter-7b
|
mhenrichsen
| 2023-09-19T10:58:01Z | 10 | 9 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"da",
"dataset:mhenrichsen/context-aware-splits",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-18T19:17:46Z |
---
license: apache-2.0
datasets:
- mhenrichsen/context-aware-splits
language:
- da
---
# Context Aware Splitter
1b model available [here](https://huggingface.co/mhenrichsen/context-aware-splitter-1b).
CAS is a text splitter for Retrieval Augmented Generation.
It is trained on 12.3k Danish texts with a total token count of 13.4M.
## What does it do?
CAS takes a text (str), reads and understands its context, and then provides the best splits based on a defined word count.
It returns a dict with the keys:
- splits: list[str]
- topic: str
## Code example
```python
from transformers import AutoTokenizer, TextStreamer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("mhenrichsen/context-aware-splitter-7b")
tokenizer = AutoTokenizer.from_pretrained("mhenrichsen/context-aware-splitter-7b")
streamer = TextStreamer(tokenizer, skip_special_tokens=True)
WORD_SPLIT_COUNT = 50
prompt_template = """### Instruction:
Din opgave er at segmentere en given tekst i separate dele, så hver del giver mening og kan læses uafhængigt af de andre. Hvis det giver mening, må der kan være et overlap mellem delene. Hver del skal ideelt indeholde {word_count} ord.
### Input:
{text}
### Response:
"""
artikel = """Kina er stærkt utilfreds med, at Tysklands udenrigsminister, Annalena Baerbock, har omtalt den kinesiske præsident Xi Jinping som en diktator.
- Bemærkningerne fra Tyskland er ekstremt absurde, krænker Kinas politiske værdighed alvorligt og er en åben politisk provokation, udtalte talsperson fra det kinesiske udenrigsministerium Mao Ning i går ifølge CNN.
Bemærkningen fra udenrigsminister Annalena Baerbock faldt i et interview om krigen i Ukraine med Fox News i sidste uge.
- Hvis Putin skulle vinde denne krig, hvilket signal ville det så sende til andre diktatorer i verden, som Xi, som den kinesiske præsident?, sagde hun.
Tysklands ambassadør i Kina, Patricia Flor, har som konsekvens af udtalelsen været til en kammeratlig samtale, oplyser det tyske udenrigsministerium til CNN."""
tokens = tokenizer(
prompt_template.format(text=artikel, word_count=WORD_SPLIT_COUNT),
return_tensors='pt'
)['input_ids']
# Generate output
generation_output = model.generate(
tokens,
streamer=streamer,
max_length = 8194,
eos_token_id = 29913
)
```
Example:
```
### Instruction:
Din opgave er at segmentere en given tekst i separate dele, så hver del giver mening og kan læses uafhængigt af de andre. Hvis det giver mening, må der kan være et overlap mellem delene. Hver del skal ideelt indeholde 50 ord.
### Input:
Munkebjerg er et overvejende middelklassekvarter beliggende i det centrale Odense Munkebjerg grænser op til Hunderup i vest, hvor det afgrænses af Hjallesevej, og byens centrum i nord. Kvarteret har status som et familievenligt boligkvarter med både lejligheder (i området omkring H.C Andersensgade) og parcelhuse som på og omkring Munkebjergvej og Munkebjergskolen. Socialdemokratiet står traditionelt set stærkt i området, som det også ses på resultaterne af stemmer afgivet ved valgstedet Munkebjergskolen fra folketingsvalget i 2011, hvor partiet fik 24,8% af stemmerne. Dog vinder partiet Venstre samt Det Radikale Venstre også bred opbakning i kvarteret med henholdsvis 20,7 og 12,6% af stemmerne ligeledes fra valget i 2011. De fleste af kvarterets børn går på den lokale Munkebjergskolen, mens enkelte går på Odense Friskole og/eller Giersings Realskole. Munkebjergkvarteret er desuden hjemsted for fodboldklubben OKS. Munkebjergkvarteret kaldes i dagligtale for "Munken".
### Response:
```
This returns the following dictionary:
```
{'splits': ['Munkebjerg er et overvejende middelklassekvarter beliggende i det centrale Odense. Munkebjerg grænser op til Hunderup i vest, hvor det afgrænses af Hjallesevej, og byens centrum i nord. Kvarteret har status som et familievenligt boligkvarter med både lejligheder (i området omkring H.C Andersensgade) og parcelhuse som på og omkring Munkebjergvej og Munkebjergskolen.', 'Socialdemokratiet står traditionelt set stærkt i området, som det også ses på resultaterne af stemmer afgivet ved valgstedet Munkebjergskolen fra folketingsvalget i 2011, hvor partiet fik 24,8% af stemmerne. Dog vinder partiet Venstre samt Det Radikale Venstre også bred opbakning i kvarteret med henholdsvis 20,7 og 12,6% af stemmerne ligeledes fra valget i 2011.', "De fleste af kvarterets børn går på den lokale Munkebjergskolen, mens enkelte går på Odense Friskole og/eller Giersings Realskole. Munkebjergkvarteret er desuden hjemsted for fodboldklubben OKS. Munkebjergkvarteret kaldes i dagligtale for 'Munken'."], 'topic': 'Beskrivelse af Munkebjergkvarteret i Odense.'}
```
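Continuing the code example above, the generated text can be turned into a Python object. A small sketch (an assumption: the output repeats the prompt, and the dict literal follows the `### Response:` marker as in the example):
```python
import ast

# Decode the full generation, then keep only what follows the response marker
full_text = tokenizer.decode(generation_output[0], skip_special_tokens=True)
response = full_text.split("### Response:")[-1].strip()

result = ast.literal_eval(response)  # the model emits a Python-style dict literal
print(result["topic"])
for part in result["splits"]:
    print("-", part)
```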
## Prompt format
The model follows alpaca format.
```
### Instruction:
Din opgave er at segmentere en given tekst i separate dele, så hver del giver mening og kan læses uafhængigt af de andre. Hvis det giver mening, må der kan være et overlap mellem delene. Hver del skal ideelt indeholde {WORD_COUNT} ord.
### Input:
{TEXT}
### Response:
```
|
pszemraj/BL-pythia-31m-simpleRW-lite-2048-scratch
|
pszemraj
| 2023-09-19T10:54:58Z | 171 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"en",
"dataset:pszemraj/simpleRW-lite",
"base_model:EleutherAI/pythia-31m",
"base_model:finetune:EleutherAI/pythia-31m",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-15T13:12:34Z |
---
base_model: EleutherAI/pythia-31m
tags:
- generated_from_trainer
metrics:
- accuracy
inference:
parameters:
max_new_tokens: 64
do_sample: true
repetition_penalty: 1.1
no_repeat_ngram_size: 5
eta_cutoff: 0.001
widget:
- text: My name is El Microondas the Wise and
example_title: El Microondas
- text: Kennesaw State University is a public
example_title: Kennesaw State University
- text: Bungie Studios is an American video game developer. They are most famous for developing the award winning Halo series of video games. They also made Destiny. The studio was founded
example_title: Bungie
- text: The Mona Lisa is a world-renowned painting created by
example_title: Mona Lisa
- text: >-
The Harry Potter series, written by J.K. Rowling, begins with the book titled
example_title: Harry Potter Series
- text: >-
Question: I have cities, but no houses. I have mountains, but no trees.
I have water, but no fish. What am I?
Answer:
example_title: Riddle
- text: The process of photosynthesis involves the conversion of
example_title: Photosynthesis
- text: >-
Jane went to the store to buy some groceries. She picked up apples, oranges, and a loaf of bread. When she got home, she realized she forgot
example_title: Story Continuation
- text: >-
Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph,
and another train leaves Station B at 10:00 AM and travels at 80 mph,
when will they meet if the distance between the stations is 300 miles?
To determine
example_title: Math Problem
- text: >-
In the context of computer programming, an algorithm is
example_title: Algorithm Definition
pipeline_tag: text-generation
license: apache-2.0
language:
- en
datasets:
- pszemraj/simpleRW-lite
---
# BL-pythia-31m-simpleRW-lite-2048-scratch
This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on the [pszemraj/simpleRW-lite](https://huggingface.co/datasets/pszemraj/simpleRW-lite) dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7136
- Accuracy: 0.2662
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
```
***** eval metrics *****
  epoch                   =        3.0
  eval_accuracy           =     0.2668
  eval_loss               =     4.7076
  eval_runtime            = 0:00:21.04
  eval_samples            =        500
  eval_samples_per_second =     23.759
  eval_steps_per_second   =      11.88
  perplexity              =   110.7897
```
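The reported perplexity is simply the exponential of the evaluation cross-entropy loss; a one-line check (not part of the original card):
```python
import math

print(math.exp(4.7076))  # ≈ 110.79, matching the perplexity reported above
```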
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 80085
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 7.0159 | 0.13 | 100 | 7.1022 | 0.1180 |
| 6.2257 | 0.27 | 200 | 6.3526 | 0.1508 |
| 5.8611 | 0.4 | 300 | 5.9888 | 0.1735 |
| 5.5514 | 0.54 | 400 | 5.7552 | 0.1855 |
| 5.3824 | 0.67 | 500 | 5.5883 | 0.1948 |
| 5.344 | 0.81 | 600 | 5.4697 | 0.2017 |
| 5.1925 | 0.94 | 700 | 5.3717 | 0.2073 |
| 5.0814 | 1.08 | 800 | 5.2932 | 0.2121 |
| 5.0865 | 1.21 | 900 | 5.2280 | 0.2162 |
| 4.9602 | 1.35 | 1000 | 5.1672 | 0.2207 |
| 4.957 | 1.48 | 1100 | 5.1144 | 0.2247 |
| 4.8489 | 1.62 | 1200 | 5.0617 | 0.2299 |
| 4.79 | 1.75 | 1300 | 5.0122 | 0.2349 |
| 4.8005 | 1.89 | 1400 | 4.9637 | 0.2400 |
| 4.7409 | 2.02 | 1500 | 4.9216 | 0.2448 |
| 4.6674 | 2.16 | 1600 | 4.8815 | 0.2488 |
| 4.6729 | 2.29 | 1700 | 4.8475 | 0.2526 |
| 4.7071 | 2.43 | 1800 | 4.8156 | 0.2555 |
| 4.4937 | 2.56 | 1900 | 4.7841 | 0.2588 |
| 4.5153 | 2.7 | 2000 | 4.7573 | 0.2615 |
| 4.5512 | 2.83 | 2100 | 4.7345 | 0.2637 |
| 4.5153 | 2.96 | 2200 | 4.7136 | 0.2662 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.2.0.dev20230915+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
legacy107/flan-t5-large-bottleneck-adapter-cpgQA-unique
|
legacy107
| 2023-09-19T10:54:04Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-01T10:46:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: google/flan-t5-large
model-index:
- name: flan-t5-large-bottleneck-adapter-cpgQA-unique
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-bottleneck-adapter-cpgQA-unique
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Luciano/lora-4bit-Llama-2-7b-chat-hf-lener_br
|
Luciano
| 2023-09-19T10:52:00Z | 3 | 0 |
peft
|
[
"peft",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-08-21T11:43:38Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
mhenrichsen/context-aware-splitter-1b
|
mhenrichsen
| 2023-09-19T10:45:24Z | 182 | 5 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"da",
"dataset:mhenrichsen/context-aware-splits",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-18T08:26:42Z |
---
license: apache-2.0
datasets:
- mhenrichsen/context-aware-splits
language:
- da
---
# Context Aware Splitter
7b model available [here](https://huggingface.co/mhenrichsen/context-aware-splitter-7b).
CAS is a text splitter for Retrieval Augmented Generation.
It is trained on 12.3k Danish texts with a total token count of 13.4M.
## What does it do?
CAS takes a text (str), reads and understands its context, and then provides the best splits based on a defined word count.
It returns a dict with the keys:
- splits: list[str]
- topic: str
## Code example
```python
from transformers import AutoTokenizer, TextStreamer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("mhenrichsen/context-aware-splitter-1b")
tokenizer = AutoTokenizer.from_pretrained("mhenrichsen/context-aware-splitter-1b")
streamer = TextStreamer(tokenizer, skip_special_tokens=True)
WORD_SPLIT_COUNT = 50
prompt_template = """### Instruction:
Din opgave er at segmentere en given tekst i separate dele, så hver del giver mening og kan læses uafhængigt af de andre. Hvis det giver mening, må der kan være et overlap mellem delene. Hver del skal ideelt indeholde {word_count} ord.
### Input:
{text}
### Response:
"""
artikel = """Kina er stærkt utilfreds med, at Tysklands udenrigsminister, Annalena Baerbock, har omtalt den kinesiske præsident Xi Jinping som en diktator.
- Bemærkningerne fra Tyskland er ekstremt absurde, krænker Kinas politiske værdighed alvorligt og er en åben politisk provokation, udtalte talsperson fra det kinesiske udenrigsministerium Mao Ning i går ifølge CNN.
Bemærkningen fra udenrigsminister Annalena Baerbock faldt i et interview om krigen i Ukraine med Fox News i sidste uge.
- Hvis Putin skulle vinde denne krig, hvilket signal ville det så sende til andre diktatorer i verden, som Xi, som den kinesiske præsident?, sagde hun.
Tysklands ambassadør i Kina, Patricia Flor, har som konsekvens af udtalelsen været til en kammeratlig samtale, oplyser det tyske udenrigsministerium til CNN."""
tokens = tokenizer(
prompt_template.format(text=artikel, word_count=WORD_SPLIT_COUNT),
return_tensors='pt'
)['input_ids']
# Generate output
generation_output = model.generate(
tokens,
streamer=streamer,
max_length = 8194,
eos_token_id = 29913
)
```
Example:
```
### Instruction:
Din opgave er at segmentere en given tekst i separate dele, så hver del giver mening og kan læses uafhængigt af de andre. Hvis det giver mening, må der kan være et overlap mellem delene. Hver del skal ideelt indeholde 50 ord.
### Input:
Munkebjerg er et overvejende middelklassekvarter beliggende i det centrale Odense Munkebjerg grænser op til Hunderup i vest, hvor det afgrænses af Hjallesevej, og byens centrum i nord. Kvarteret har status som et familievenligt boligkvarter med både lejligheder (i området omkring H.C Andersensgade) og parcelhuse som på og omkring Munkebjergvej og Munkebjergskolen. Socialdemokratiet står traditionelt set stærkt i området, som det også ses på resultaterne af stemmer afgivet ved valgstedet Munkebjergskolen fra folketingsvalget i 2011, hvor partiet fik 24,8% af stemmerne. Dog vinder partiet Venstre samt Det Radikale Venstre også bred opbakning i kvarteret med henholdsvis 20,7 og 12,6% af stemmerne ligeledes fra valget i 2011. De fleste af kvarterets børn går på den lokale Munkebjergskolen, mens enkelte går på Odense Friskole og/eller Giersings Realskole. Munkebjergkvarteret er desuden hjemsted for fodboldklubben OKS. Munkebjergkvarteret kaldes i dagligtale for "Munken".
### Response:
```
This returns the following dictionary:
```
{'splits': ['Munkebjerg er et overvejende middelklassekvarter beliggende i det centrale Odense. Munkebjerg grænser op til Hunderup i vest, hvor det afgrænses af Hjallesevej, og byens centrum i nord. Kvarteret har status som et familievenligt boligkvarter med både lejligheder (i området omkring H.C Andersensgade) og parcelhuse som på og omkring Munkebjergvej og Munkebjergskolen.', 'Socialdemokratiet står traditionelt set stærkt i området, som det også ses på resultaterne af stemmer afgivet ved valgstedet Munkebjergskolen fra folketingsvalget i 2011, hvor partiet fik 24,8% af stemmerne. Dog vinder partiet Venstre samt Det Radikale Venstre også bred opbakning i kvarteret med henholdsvis 20,7 og 12,6% af stemmerne ligeledes fra valget i 2011.', "De fleste af kvarterets børn går på den lokale Munkebjergskolen, mens enkelte går på Odense Friskole og/eller Giersings Realskole. Munkebjergkvarteret er desuden hjemsted for fodboldklubben OKS. Munkebjergkvarteret kaldes i dagligtale for 'Munken'."], 'topic': 'Beskrivelse af Munkebjergkvarteret i Odense.'}
```
## Prompt format
The model follows alpaca format.
```
### Instruction:
Din opgave er at segmentere en given tekst i separate dele, så hver del giver mening og kan læses uafhængigt af de andre. Hvis det giver mening, må der kan være et overlap mellem delene. Hver del skal ideelt indeholde {WORD_COUNT} ord.
### Input:
{TEXT}
### Response:
```
|
monsterapi/Gptj-6b_alpaca-gpt4
|
monsterapi
| 2023-09-19T10:45:10Z | 14 | 0 |
peft
|
[
"peft",
"gptj-6b",
"instruct",
"instruct-alpaca",
"alpaca",
"gpt4",
"dataset:vicgalle/alpaca-gpt4",
"base_model:EleutherAI/gpt-j-6b",
"base_model:adapter:EleutherAI/gpt-j-6b",
"region:us"
] | null | 2023-06-28T06:44:13Z |
---
library_name: peft
tags:
- gptj-6b
- instruct
- instruct-alpaca
- alpaca
- gpt4
datasets:
- vicgalle/alpaca-gpt4
base_model: EleutherAI/gpt-j-6b
---
We finetuned GPT-J-6B on the Alpaca-GPT4 instruct dataset (vicgalle/alpaca-gpt4) for 10 epochs (~50,000 steps) using [MonsterAPI](https://monsterapi.ai)'s no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
The dataset is the unfiltered vicgalle/alpaca-gpt4.
The finetuning session completed in 7 hours and cost us only `$25` for the entire run!
#### Hyperparameters & Run details:
- Model Path: EleutherAI/gpt-j-6b
- Dataset: vicgalle/alpaca-gpt4
- Learning rate: 0.0003
- Number of epochs: 5
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
---
license: apache-2.0
---
|
monsterapi/llama2-code-generation
|
monsterapi
| 2023-09-19T10:44:43Z | 9 | 10 |
peft
|
[
"peft",
"llama2",
"llama2-7b",
"code generation",
"code-generation",
"code",
"instruct",
"instruct-code",
"code-alpaca",
"alpaca-instruct",
"alpaca",
"llama7b",
"gpt2",
"dataset:nampdn-ai/tiny-codes",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:apache-2.0",
"region:us"
] | null | 2023-08-16T04:39:54Z |
---
license: apache-2.0
library_name: peft
tags:
- llama2
- llama2-7b
- code generation
- code-generation
- code
- instruct
- instruct-code
- code-alpaca
- alpaca-instruct
- alpaca
- llama7b
- gpt2
datasets:
- nampdn-ai/tiny-codes
base_model: meta-llama/Llama-2-7b-hf
---
## Training procedure
We finetuned the [Llama 2 7B model](https://huggingface.co/meta-llama/Llama-2-7b-hf) from Meta on [nampdn-ai/tiny-codes](https://huggingface.co/datasets/nampdn-ai/tiny-codes) for ~10,000 steps using [MonsterAPI](https://monsterapi.ai)'s no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
This dataset contains **1.63 million rows** and is a collection of short and clear code snippets that can help LLMs learn how to reason with both natural and programming languages. The dataset covers a wide range of programming languages, such as Python, TypeScript, JavaScript, Ruby, Julia, Rust, C++, Bash, Java, C#, and Go. It also includes two database languages, Cypher (for graph databases) and SQL (for relational databases), to cover reasoning about relationships between entities.
The finetuning session completed in 53 hours and cost us ~`$125` for the entire run!
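The adapter can be attached to the base model with PEFT for inference. A minimal sketch (not from the original card; the base model is gated and requires accepted access, and the free-form prompt below is an assumption since the card does not document a prompt format):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"              # gated: requires an accepted license / access token
adapter_id = "monsterapi/llama2-code-generation"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Write a Python function that reverses a string."   # assumed free-form prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```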
#### Hyperparameters & Run details:
- Model Path: meta-llama/Llama-2-7b-hf
- Dataset: nampdn-ai/tiny-codes
- Learning rate: 0.0002
- Number of epochs: 1 (10k steps)
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
### Framework versions
- PEFT 0.4.0
### Loss metrics:

|
Carve/tracer_b7
|
Carve
| 2023-09-19T10:31:03Z | 0 | 12 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-09-01T15:43:56Z |
---
license: apache-2.0
---
`tracer-b7.pth` - Pretrained TRACER with EfficientNet v1 b7 encoder.
`tracer-b7-carveset-finetuned.pth` - The TRACER b7 model finetuned on the CarveSet dataset. It achieves an average F-Beta score of 96.2% on the test set.
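To inspect a downloaded checkpoint before wiring it into the TRACER code base, a plain PyTorch load is enough. A sketch (whether the file stores a raw `state_dict` or a wrapper dict is not documented here, so both cases are handled):
```python
import torch

ckpt = torch.load("tracer-b7-carveset-finetuned.pth", map_location="cpu")
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

print(type(ckpt))        # dict for a raw state_dict or a wrapped checkpoint
print(len(state))        # number of entries (parameter tensors, if a state_dict)
print(list(state)[:5])   # first few parameter names
```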
|
CyberHarem/ooishi_izumi_idolmastercinderellagirls
|
CyberHarem
| 2023-09-19T10:25:51Z | 0 | 1 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/ooishi_izumi_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-19T10:08:09Z |
---
license: mit
datasets:
- CyberHarem/ooishi_izumi_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of ooishi_izumi_idolmastercinderellagirls
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the automated training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5040, you need to download `5040/ooishi_izumi_idolmastercinderellagirls.pt` as the embedding and `5040/ooishi_izumi_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
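A minimal `diffusers` sketch of that pairing (not from the original card; whether HCP-Diffusion-trained files load directly with these helpers is an assumption, and an A1111-style UI or a format conversion may be needed instead):
```python
import torch
from diffusers import StableDiffusionPipeline

# The card uses Meina/MeinaMix_V11 for its preview images
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# pt file -> textual-inversion embedding, safetensors file -> LoRA weights (step 5040)
pipe.load_textual_inversion(
    "5040/ooishi_izumi_idolmastercinderellagirls.pt",
    token="ooishi_izumi_idolmastercinderellagirls",
)
pipe.load_lora_weights("5040", weight_name="ooishi_izumi_idolmastercinderellagirls.safetensors")

image = pipe("ooishi_izumi_idolmastercinderellagirls, long_hair, smile").images[0]
image.save("preview.png")
```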
**The best step we recommend is 5040**, with a score of 0.949. The trigger words are:
1. `ooishi_izumi_idolmastercinderellagirls`
2. `long_hair, brown_eyes, blush, black_hair, breasts, smile, bangs, medium_breasts, hair_between_eyes`
For the following groups, the use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations in order to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-----------------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5400 | 0.929 | [Download](5400/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](5400/previews/pattern_2.png) |  |  | [<NSFW, click to see>](5400/previews/pattern_5.png) | [<NSFW, click to see>](5400/previews/pattern_6.png) | [<NSFW, click to see>](5400/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](5400/previews/pattern_11.png) | [<NSFW, click to see>](5400/previews/bikini.png) | [<NSFW, click to see>](5400/previews/bondage.png) | [<NSFW, click to see>](5400/previews/free.png) |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| **5040** | **0.949** | [**Download**](5040/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](5040/previews/pattern_2.png) |  |  | [<NSFW, click to see>](5040/previews/pattern_5.png) | [<NSFW, click to see>](5040/previews/pattern_6.png) | [<NSFW, click to see>](5040/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](5040/previews/pattern_11.png) | [<NSFW, click to see>](5040/previews/bikini.png) | [<NSFW, click to see>](5040/previews/bondage.png) | [<NSFW, click to see>](5040/previews/free.png) |  |  | [<NSFW, click to see>](5040/previews/nude.png) | [<NSFW, click to see>](5040/previews/nude2.png) |  |  |
| 4680 | 0.870 | [Download](4680/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4680/previews/pattern_2.png) |  |  | [<NSFW, click to see>](4680/previews/pattern_5.png) | [<NSFW, click to see>](4680/previews/pattern_6.png) | [<NSFW, click to see>](4680/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](4680/previews/pattern_11.png) | [<NSFW, click to see>](4680/previews/bikini.png) | [<NSFW, click to see>](4680/previews/bondage.png) | [<NSFW, click to see>](4680/previews/free.png) |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4320 | 0.866 | [Download](4320/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4320/previews/pattern_2.png) |  |  | [<NSFW, click to see>](4320/previews/pattern_5.png) | [<NSFW, click to see>](4320/previews/pattern_6.png) | [<NSFW, click to see>](4320/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](4320/previews/pattern_11.png) | [<NSFW, click to see>](4320/previews/bikini.png) | [<NSFW, click to see>](4320/previews/bondage.png) | [<NSFW, click to see>](4320/previews/free.png) |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3960 | 0.865 | [Download](3960/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3960/previews/pattern_2.png) |  |  | [<NSFW, click to see>](3960/previews/pattern_5.png) | [<NSFW, click to see>](3960/previews/pattern_6.png) | [<NSFW, click to see>](3960/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](3960/previews/pattern_11.png) | [<NSFW, click to see>](3960/previews/bikini.png) | [<NSFW, click to see>](3960/previews/bondage.png) | [<NSFW, click to see>](3960/previews/free.png) |  |  | [<NSFW, click to see>](3960/previews/nude.png) | [<NSFW, click to see>](3960/previews/nude2.png) |  |  |
| 3600 | 0.904 | [Download](3600/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3600/previews/pattern_2.png) |  |  | [<NSFW, click to see>](3600/previews/pattern_5.png) | [<NSFW, click to see>](3600/previews/pattern_6.png) | [<NSFW, click to see>](3600/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](3600/previews/pattern_11.png) | [<NSFW, click to see>](3600/previews/bikini.png) | [<NSFW, click to see>](3600/previews/bondage.png) | [<NSFW, click to see>](3600/previews/free.png) |  |  | [<NSFW, click to see>](3600/previews/nude.png) | [<NSFW, click to see>](3600/previews/nude2.png) |  |  |
| 3240 | 0.940 | [Download](3240/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3240/previews/pattern_2.png) |  |  | [<NSFW, click to see>](3240/previews/pattern_5.png) | [<NSFW, click to see>](3240/previews/pattern_6.png) | [<NSFW, click to see>](3240/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](3240/previews/pattern_11.png) | [<NSFW, click to see>](3240/previews/bikini.png) | [<NSFW, click to see>](3240/previews/bondage.png) | [<NSFW, click to see>](3240/previews/free.png) |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2880 | 0.903 | [Download](2880/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2880/previews/pattern_2.png) |  |  | [<NSFW, click to see>](2880/previews/pattern_5.png) | [<NSFW, click to see>](2880/previews/pattern_6.png) | [<NSFW, click to see>](2880/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](2880/previews/pattern_11.png) | [<NSFW, click to see>](2880/previews/bikini.png) | [<NSFW, click to see>](2880/previews/bondage.png) | [<NSFW, click to see>](2880/previews/free.png) |  |  | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) |  |  |
| 2520 | 0.922 | [Download](2520/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2520/previews/pattern_2.png) |  |  | [<NSFW, click to see>](2520/previews/pattern_5.png) | [<NSFW, click to see>](2520/previews/pattern_6.png) | [<NSFW, click to see>](2520/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](2520/previews/pattern_11.png) | [<NSFW, click to see>](2520/previews/bikini.png) | [<NSFW, click to see>](2520/previews/bondage.png) | [<NSFW, click to see>](2520/previews/free.png) |  |  | [<NSFW, click to see>](2520/previews/nude.png) | [<NSFW, click to see>](2520/previews/nude2.png) |  |  |
| 2160 | 0.854 | [Download](2160/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2160/previews/pattern_2.png) |  |  | [<NSFW, click to see>](2160/previews/pattern_5.png) | [<NSFW, click to see>](2160/previews/pattern_6.png) | [<NSFW, click to see>](2160/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](2160/previews/pattern_11.png) | [<NSFW, click to see>](2160/previews/bikini.png) | [<NSFW, click to see>](2160/previews/bondage.png) | [<NSFW, click to see>](2160/previews/free.png) |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1800 | 0.801 | [Download](1800/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1800/previews/pattern_2.png) |  |  | [<NSFW, click to see>](1800/previews/pattern_5.png) | [<NSFW, click to see>](1800/previews/pattern_6.png) | [<NSFW, click to see>](1800/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](1800/previews/pattern_11.png) | [<NSFW, click to see>](1800/previews/bikini.png) | [<NSFW, click to see>](1800/previews/bondage.png) | [<NSFW, click to see>](1800/previews/free.png) |  |  | [<NSFW, click to see>](1800/previews/nude.png) | [<NSFW, click to see>](1800/previews/nude2.png) |  |  |
| 1440 | 0.803 | [Download](1440/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1440/previews/pattern_2.png) |  |  | [<NSFW, click to see>](1440/previews/pattern_5.png) | [<NSFW, click to see>](1440/previews/pattern_6.png) | [<NSFW, click to see>](1440/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](1440/previews/pattern_11.png) | [<NSFW, click to see>](1440/previews/bikini.png) | [<NSFW, click to see>](1440/previews/bondage.png) | [<NSFW, click to see>](1440/previews/free.png) |  |  | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) |  |  |
| 1080 | 0.769 | [Download](1080/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1080/previews/pattern_2.png) |  |  | [<NSFW, click to see>](1080/previews/pattern_5.png) | [<NSFW, click to see>](1080/previews/pattern_6.png) | [<NSFW, click to see>](1080/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](1080/previews/pattern_11.png) | [<NSFW, click to see>](1080/previews/bikini.png) | [<NSFW, click to see>](1080/previews/bondage.png) | [<NSFW, click to see>](1080/previews/free.png) |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 720 | 0.541 | [Download](720/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](720/previews/pattern_2.png) |  |  | [<NSFW, click to see>](720/previews/pattern_5.png) | [<NSFW, click to see>](720/previews/pattern_6.png) | [<NSFW, click to see>](720/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](720/previews/pattern_11.png) | [<NSFW, click to see>](720/previews/bikini.png) | [<NSFW, click to see>](720/previews/bondage.png) | [<NSFW, click to see>](720/previews/free.png) |  |  | [<NSFW, click to see>](720/previews/nude.png) | [<NSFW, click to see>](720/previews/nude2.png) |  |  |
| 360 | 0.621 | [Download](360/ooishi_izumi_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](360/previews/pattern_2.png) |  |  | [<NSFW, click to see>](360/previews/pattern_5.png) | [<NSFW, click to see>](360/previews/pattern_6.png) | [<NSFW, click to see>](360/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](360/previews/pattern_11.png) | [<NSFW, click to see>](360/previews/bikini.png) | [<NSFW, click to see>](360/previews/bondage.png) | [<NSFW, click to see>](360/previews/free.png) |  |  | [<NSFW, click to see>](360/previews/nude.png) | [<NSFW, click to see>](360/previews/nude2.png) |  |  |
|
trieudemo11/llama_7b_attrb_cate_4m_2
|
trieudemo11
| 2023-09-19T10:25:07Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-19T10:24:53Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
royokong/prompteol-opt-2.7b
|
royokong
| 2023-09-19T10:21:14Z | 390 | 0 |
peft
|
[
"peft",
"base_model:facebook/opt-2.7b",
"base_model:adapter:facebook/opt-2.7b",
"region:us"
] | null | 2023-07-27T15:02:56Z |
---
library_name: peft
base_model: facebook/opt-2.7b
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
Wariano/bsc-bio-ehr-es-vih-juicio_anam_urgen
|
Wariano
| 2023-09-19T10:13:27Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-18T06:38:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bsc-bio-ehr-es-vih-juicio_anam_urgen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bsc-bio-ehr-es-vih-juicio_anam_urgen
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on an unspecified dataset.
It achieves the following results on the evaluation set (a quick arithmetic check follows the list):
- Loss: 0.0364
- Positives Preds: 1040
- Negative Preds: 208738
- Positives Refs: 1961
- Negative Refs: 207817
- Tp: 826
- Fn: 1135
- Fp: 214
- Tn: 207603
- Accuracy: 0.9936
- Precision: 0.7942
- Recall: 0.4212
- F1: 0.5505
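The headline metrics above follow directly from the confusion-matrix counts; a quick check (not part of the original card):
```python
tp, fn, fp, tn = 826, 1135, 214, 207603

precision = tp / (tp + fp)                                  # 826 / 1040  ≈ 0.7942
recall    = tp / (tp + fn)                                  # 826 / 1961  ≈ 0.4212
f1        = 2 * precision * recall / (precision + recall)   # ≈ 0.5505
accuracy  = (tp + tn) / (tp + fn + fp + tn)                 # ≈ 0.9936

print(round(precision, 4), round(recall, 4), round(f1, 4), round(accuracy, 4))
```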
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Positives Preds | Negative Preds | Positives Refs | Negative Refs | Tp | Fn | Fp | Tn | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:------:|:---------------:|:---------------:|:--------------:|:--------------:|:-------------:|:---:|:----:|:---:|:------:|:--------:|:---------:|:------:|:------:|
| 0.0372 | 1.0 | 26223 | 0.0358 | 1276 | 208502 | 1961 | 207817 | 888 | 1073 | 388 | 207429 | 0.9930 | 0.6959 | 0.4528 | 0.5487 |
| 0.04 | 2.0 | 52446 | 0.0364 | 1223 | 208555 | 1961 | 207817 | 873 | 1088 | 350 | 207467 | 0.9931 | 0.7138 | 0.4452 | 0.5484 |
| 0.037 | 3.0 | 78669 | 0.0362 | 1251 | 208527 | 1961 | 207817 | 870 | 1091 | 381 | 207436 | 0.9930 | 0.6954 | 0.4437 | 0.5417 |
| 0.0368 | 4.0 | 104892 | 0.0361 | 1125 | 208653 | 1961 | 207817 | 848 | 1113 | 277 | 207540 | 0.9934 | 0.7538 | 0.4324 | 0.5496 |
| 0.0367 | 5.0 | 131115 | 0.0364 | 1040 | 208738 | 1961 | 207817 | 826 | 1135 | 214 | 207603 | 0.9936 | 0.7942 | 0.4212 | 0.5505 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MattStammers/appo-Humanoid
|
MattStammers
| 2023-09-19T10:11:16Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-19T10:11:11Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: mujoco_humanoid
type: mujoco_humanoid
metrics:
- type: mean_reward
value: 6743.15 +/- 2083.46
name: mean_reward
verified: false
---
An **APPO** model trained on the **mujoco_humanoid** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r MattStammers/appo-Humanoid
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.mujoco.enjoy_mujoco --algo=APPO --env=mujoco_humanoid --train_dir=./train_dir --experiment=appo-Humanoid
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.mujoco.train_mujoco --algo=APPO --env=mujoco_humanoid --train_dir=./train_dir --experiment=appo-Humanoid --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
minhbui/viettel_v2
|
minhbui
| 2023-09-19T10:04:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-19T09:54:17Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
mychen76/donut-sroie2019
|
mychen76
| 2023-09-19T09:54:09Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-09-08T00:46:57Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-sroie2019
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-sroie2019
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
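Since the card does not yet document usage, here is a minimal, hedged inference sketch for a Donut-style document parser (the `<s>` task prompt below is a placeholder assumption, not taken from this repository; check the tokenizer's special tokens for the real start token):
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("mychen76/donut-sroie2019")
model = VisionEncoderDecoderModel.from_pretrained("mychen76/donut-sroie2019")

image = Image.open("receipt.png").convert("RGB")   # hypothetical input image
pixel_values = processor(image, return_tensors="pt").pixel_values

# Donut fine-tunes usually start decoding from a task-specific token; "<s>" is an assumption here
decoder_input_ids = processor.tokenizer(
    "<s>", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```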
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
HumanCompatibleAI/sac-seals-Hopper-v1
|
HumanCompatibleAI
| 2023-09-19T09:52:21Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"seals/Hopper-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-19T09:51:24Z |
---
library_name: stable-baselines3
tags:
- seals/Hopper-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Hopper-v1
type: seals/Hopper-v1
metrics:
- type: mean_reward
value: 2279.30 +/- 124.09
name: mean_reward
verified: false
---
# **SAC** Agent playing **seals/Hopper-v1**
This is a trained model of a **SAC** agent playing **seals/Hopper-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env seals/Hopper-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Hopper-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo sac --env seals/Hopper-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Hopper-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo sac --env seals/Hopper-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env seals/Hopper-v1 -f logs/ -orga HumanCompatibleAI
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 100000),
('gamma', 0.98),
('learning_rate', 0.001709807687567946),
('learning_starts', 1000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'log_std_init': -1.6829391077276037,
'net_arch': [256, 256],
'use_sde': False}),
('tau', 0.08),
('train_freq', 32),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
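If you prefer to bypass the RL Zoo scripts, the saved policy can also be loaded directly with Stable-Baselines3. A sketch; the path below follows the usual RL Zoo `logs/` layout after the download command above, but the exact folder name is an assumption:
```python
from stable_baselines3 import SAC

# Hypothetical path -- adjust to wherever rl_zoo3 placed the downloaded .zip
model = SAC.load("logs/sac/seals-Hopper-v1_1/seals-Hopper-v1.zip", device="cpu")
print(model.policy)
```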
|
HumanCompatibleAI/sac-seals-Walker2d-v1
|
HumanCompatibleAI
| 2023-09-19T09:51:05Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"seals/Walker2d-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-19T09:50:01Z |
---
library_name: stable-baselines3
tags:
- seals/Walker2d-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Walker2d-v1
type: seals/Walker2d-v1
metrics:
- type: mean_reward
value: 5665.26 +/- 225.00
name: mean_reward
verified: false
---
# **SAC** Agent playing **seals/Walker2d-v1**
This is a trained model of a **SAC** agent playing **seals/Walker2d-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env seals/Walker2d-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Walker2d-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo sac --env seals/Walker2d-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Walker2d-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo sac --env seals/Walker2d-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env seals/Walker2d-v1 -f logs/ -orga HumanCompatibleAI
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 100000),
('gamma', 0.99),
('learning_rate', 0.0005845844772048097),
('learning_starts', 1000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'log_std_init': 0.1955317469998743,
'net_arch': [400, 300],
'use_sde': False}),
('tau', 0.02),
('train_freq', 1),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
HumanCompatibleAI/ppo-seals-Humanoid-v1
|
HumanCompatibleAI
| 2023-09-19T09:47:36Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"seals/Humanoid-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-19T09:46:15Z |
---
library_name: stable-baselines3
tags:
- seals/Humanoid-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Humanoid-v1
type: seals/Humanoid-v1
metrics:
- type: mean_reward
value: 3224.12 +/- 925.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **seals/Humanoid-v1**
This is a trained model of a **PPO** agent playing **seals/Humanoid-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Humanoid-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Humanoid-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Humanoid-v1 -orga HumanCompatibleAI -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Humanoid-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env seals/Humanoid-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/Humanoid-v1 -f logs/ -orga HumanCompatibleAI
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 0.2),
('ent_coef', 2.0745206045994986e-05),
('gae_lambda', 0.92),
('gamma', 0.999),
('learning_rate', 2.0309225666232827e-05),
('max_grad_norm', 0.5),
('n_envs', 1),
('n_epochs', 20),
('n_steps', 2048),
('n_timesteps', 10000000.0),
('normalize',
{'gamma': 0.999, 'norm_obs': False, 'norm_reward': True}),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'activation_fn': <class 'torch.nn.modules.activation.ReLU'>,
'features_extractor_class': <class 'imitation.policies.base.NormalizeFeaturesExtractor'>,
'net_arch': [{'pi': [256, 256], 'vf': [256, 256]}]}),
('vf_coef', 0.819262464558427),
('normalize_kwargs',
{'norm_obs': {'gamma': 0.999,
'norm_obs': False,
'norm_reward': True},
'norm_reward': False})])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
CyberHarem/tsujino_akari_idolmastercinderellagirls
|
CyberHarem
| 2023-09-19T09:34:28Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/tsujino_akari_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-19T09:18:12Z |
---
license: mit
datasets:
- CyberHarem/tsujino_akari_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of tsujino_akari_idolmastercinderellagirls
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the automated training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7020, you need to download `7020/tsujino_akari_idolmastercinderellagirls.pt` as the embedding and `7020/tsujino_akari_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7020**, with a score of 0.968. The trigger words are:
1. `tsujino_akari_idolmastercinderellagirls`
2. `brown_hair, long_hair, blush, antenna_hair, smile, open_mouth, red_eyes`
For the following groups, the use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations in order to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8100 | 0.951 | [Download](8100/tsujino_akari_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](8100/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](8100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) |  |  |
| 7560 | 0.921 | [Download](7560/tsujino_akari_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](7560/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](7560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) |  |  |
| **7020** | **0.968** | [**Download**](7020/tsujino_akari_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](7020/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](7020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| 6480 | 0.932 | [Download](6480/tsujino_akari_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](6480/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](6480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) |  |  |
| 5940 | 0.923 | [Download](5940/tsujino_akari_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| 5400 | 0.926 | [Download](5400/tsujino_akari_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4860 | 0.925 | [Download](4860/tsujino_akari_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4860/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](4860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) |  |  |
| 4320 | 0.896 | [Download](4320/tsujino_akari_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3780 | 0.922 | [Download](3780/tsujino_akari_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| 3240 | 0.905 | [Download](3240/tsujino_akari_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](3240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2700 | 0.893 | [Download](2700/tsujino_akari_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2700/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](2700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) |  |  |
| 2160 | 0.870 | [Download](2160/tsujino_akari_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](2160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1620 | 0.858 | [Download](1620/tsujino_akari_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1620/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](1620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) |  |  |
| 1080 | 0.834 | [Download](1080/tsujino_akari_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](1080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 540 | 0.799 | [Download](540/tsujino_akari_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](540/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) |  |  |
|
newronai/clma2-13b-Chat-Adapter-text2sql-3epoch
|
newronai
| 2023-09-19T09:25:32Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-19T09:25:24Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a usage sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
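
Below is a minimal sketch, not part of the original card, of how the listed 8-bit settings could be reproduced with `transformers` + `bitsandbytes` and the adapter attached with PEFT. The base model ID is an assumption (the card does not name it); the adapter repo is the one above. The `bnb_4bit_*` entries in the list are defaults that have no effect when `load_in_4bit` is False, so they are omitted here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Reproduce the 8-bit quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,                       # load_in_8bit: True
    llm_int8_threshold=6.0,                  # llm_int8_threshold: 6.0
    llm_int8_skip_modules=None,              # llm_int8_skip_modules: None
    llm_int8_enable_fp32_cpu_offload=False,  # llm_int8_enable_fp32_cpu_offload: False
    llm_int8_has_fp16_weight=False,          # llm_int8_has_fp16_weight: False
)

base_model_id = "meta-llama/Llama-2-13b-chat-hf"  # assumed base model, not stated in the card
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
# Attach the text-to-SQL adapter from this repository.
model = PeftModel.from_pretrained(base, "newronai/clma2-13b-Chat-Adapter-text2sql-3epoch")
```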
### Framework versions
- PEFT 0.6.0.dev0
|
adhishezio/model
|
adhishezio
| 2023-09-19T09:07:01Z | 29 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-19T08:02:47Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - adhishezio/model
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth training for the text encoder was not enabled.
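
A hedged inference sketch follows, assuming the repository hosts a full Stable Diffusion pipeline (the default output of the `diffusers` DreamBooth script); the prompt reuses the instance prompt from the metadata above.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-finetuned pipeline from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "adhishezio/model",
    torch_dtype=torch.float16,
).to("cuda")

# Instance prompt from the card metadata.
image = pipe("a photo of sks dog", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```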
|
YaminiMahesh/llma2-7b-text-to-sql
|
YaminiMahesh
| 2023-09-19T09:05:51Z | 21 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-19T08:13:38Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a usage sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
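
The sketch below, not part of the original card, reproduces the 4-bit NF4 configuration listed above and attaches the adapter with PEFT. The base model ID is an assumption; the card does not name it.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Reproduce the 4-bit quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)

base_model_id = "meta-llama/Llama-2-7b-hf"  # assumed base model, not stated in the card
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
# Attach the text-to-SQL adapter from this repository.
model = PeftModel.from_pretrained(base, "YaminiMahesh/llma2-7b-text-to-sql")
```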
### Framework versions
- PEFT 0.6.0.dev0
|
bongo2112/sdxl-db-mwijaku-headshot
|
bongo2112
| 2023-09-19T09:05:29Z | 2 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-19T09:01:16Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of mwijakudc man
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
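
A hedged usage sketch: AutoTrain DreamBooth runs on SDXL usually publish LoRA weights, so this assumes the repository can be attached to the SDXL base model via `load_lora_weights`; adjust if the repo instead contains a full pipeline. The prompt reuses the instance prompt from the metadata above.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model named in the card metadata.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Attach the DreamBooth LoRA weights (assumed format for AutoTrain SDXL runs).
pipe.load_lora_weights("bongo2112/sdxl-db-mwijaku-headshot")

# Instance prompt from the card metadata.
image = pipe("photo of mwijakudc man", num_inference_steps=30).images[0]
image.save("mwijakudc.png")
```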
|