modelId string (5–139 chars) | author string (2–42 chars) | last_modified timestamp[us, tz=UTC] (2020-02-15 11:33:14 – 2025-09-04 12:28:55) | downloads int64 (0–223M) | likes int64 (0–11.7k) | library_name string (539 classes) | tags list (1–4.05k items) | pipeline_tag string (55 classes) | createdAt timestamp[us, tz=UTC] (2022-03-02 23:29:04 – 2025-09-04 12:28:29) | card string (11–1.01M chars)
---|---|---|---|---|---|---|---|---|---|
nakodanei/Blue-Orchid-2x7b_GGUF
|
nakodanei
| 2024-02-02T22:30:15Z | 3,254 | 17 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-02T15:24:36Z |
---
license: apache-2.0
---
GGUF version of: https://huggingface.co/nakodanei/Blue-Orchid-2x7b
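The card gives no usage snippet; a minimal loading sketch with `llama-cpp-python` is shown below. The GGUF filename is an assumption for illustration only, not an actual file listing from this repo.
```python
# Sketch only: load a locally downloaded GGUF quant with llama-cpp-python.
# The model_path is a placeholder -- substitute the file you actually downloaded.
from llama_cpp import Llama

llm = Llama(model_path="blue-orchid-2x7b.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a short opening scene for a fantasy story.", max_tokens=128)
print(out["choices"][0]["text"])
```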
|
Katelie/PixelcopterEnv
|
Katelie
| 2024-02-02T22:27:01Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T18:39:05Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PixelcopterEnv
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 39.40 +/- 30.18
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
zw1429/wb_ds_interview_fns
|
zw1429
| 2024-02-02T22:18:15Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T22:07:57Z |
This model is fine-tuned from OpenAI's GPT-2 on 42 World Bank Group documents, including various types of project assessments related to Food and Nutrition Security in Africa for the year 2018. The aim is to produce the "results narratives" for Scorecard outcome area 7 – Sustainable Food Systems.
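The card stops here without a usage snippet; a minimal sketch with the generic 🤗 transformers pipeline, assuming the checkpoint loads as a standard GPT-2 text-generation model:
```python
# Sketch only: generate a "results narrative" style continuation with the fine-tuned GPT-2.
from transformers import pipeline

generator = pipeline("text-generation", model="zw1429/wb_ds_interview_fns")
prompt = "Results narrative for Sustainable Food Systems:"
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```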
|
sbulut/finetuned-kde4-en-to-tr
|
sbulut
| 2024-02-02T21:57:41Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-tc-big-tr-en",
"base_model:finetune:Helsinki-NLP/opus-mt-tc-big-tr-en",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-02-02T19:53:18Z |
---
license: cc-by-4.0
base_model: Helsinki-NLP/opus-mt-tc-big-tr-en
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-tr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-tr
split: train
args: en-tr
metrics:
- name: Bleu
type: bleu
value: 29.832961482999476
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-tr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-big-tr-en](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-tr-en) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0990
- Bleu: 29.8330
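The card itself includes no usage snippet; given the `translation` pipeline tag, a minimal sketch with the standard 🤗 transformers API (the repo id comes from this row, everything else is an assumption):
```python
# Sketch only: English -> Turkish translation with the fine-tuned Marian checkpoint.
from transformers import pipeline

translator = pipeline("translation", model="sbulut/finetuned-kde4-en-to-tr")
print(translator("Open the file manager and select a folder.")[0]["translation_text"])
```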
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Taller3g1/poyecto_grupal
|
Taller3g1
| 2024-02-02T21:52:42Z | 3 | 0 |
keras
|
[
"keras",
"tf-keras",
"clip",
"region:us"
] | null | 2024-01-29T05:36:12Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
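As an illustration of how the table above maps onto code (this is a reconstruction, not code shipped with the model), the same Adam settings could be written in `tf.keras` as:
```python
# Illustrative reconstruction of the optimizer settings listed in the table above.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.001,  # reported as 0.0010000000474974513 (float32 rounding)
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```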
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
cvlab/pix2gestalt-weights
|
cvlab
| 2024-02-02T21:47:42Z | 0 | 5 | null |
[
"arxiv:2401.14398",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-26T16:24:59Z |
---
license: cc-by-nc-4.0
---
# pix2gestalt Model Weights
[Code](https://github.com/cvlab-columbia/pix2gestalt), [Website](https://gestalt.cs.columbia.edu/), [arXiv](https://arxiv.org/abs/2401.14398)
[pix2gestalt: Amodal Segmentation by Synthesizing Wholes](https://gestalt.cs.columbia.edu/)
[Ege Ozguroglu](https://egeozguroglu.github.io/)<sup>1</sup>, [Ruoshi Liu](https://ruoshiliu.github.io/)<sup>1</sup>, [Dídac Surís](https://www.didacsuris.com/)<sup>1</sup>, [Dian Chen](https://scholar.google.com/citations?user=zdAyna8AAAAJ&hl=en)<sup>2</sup>, [Achal Dave](https://www.achaldave.com/)<sup>2</sup>, [Pavel Tokmakov](https://pvtokmakov.github.io/home/)<sup>2</sup>, [Carl Vondrick](https://www.cs.columbia.edu/~vondrick/)<sup>1</sup> <br>
<sup>1</sup>Columbia University, <sup>2</sup>Toyota Research Institute
<div align="left">
<a href="https://gestalt.cs.columbia.edu/"><img height="80%" alt="pix2gestalt" src="https://gestalt.cs.columbia.edu/static/images/teaser/%20pix2gestalt_teaser.jpg"></a>
</div>
<b>pix2gestalt</b> synthesizes whole objects from only partially visible ones, enabling amodal segmentation, recognition, and 3D reconstruction of occluded objects.
## Citation
```
@misc{ozguroglu2024pix2gestalt,
title={pix2gestalt: Amodal Segmentation by Synthesizing Wholes},
author={Ege Ozguroglu and Ruoshi Liu and Dídac Surís and Dian Chen and Achal Dave and Pavel Tokmakov and Carl Vondrick},
year={2024},
eprint={2401.14398},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Acknowledgement
This research is based on work partially supported by the Toyota Research Institute, the DARPA MCS program under Federal Agreement No. N660011924032, the NSF NRI Award \#1925157, and the NSF AI Institute for Artificial and Natural Intelligence Award \#2229929. DS is supported by the Microsoft PhD Fellowship.
|
jbuch808/a2c-PandaReachDense-v3
|
jbuch808
| 2024-02-02T21:46:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T21:41:59Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.23 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
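Until the author fills in the TODO above, a minimal loading sketch with `huggingface_sb3` might look like the following; the checkpoint filename is a guess based on the usual SB3 naming convention, so check the repo's Files tab for the actual name.
```python
# Sketch only: download the checkpoint from the Hub and load it with SB3.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="jbuch808/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```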
|
jlbaker361/dcgan-lazy-wikiart1000-resized
|
jlbaker361
| 2024-02-02T21:37:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-02-01T14:17:05Z |
---
{}
---
Creative Adversarial Network
epochs: 2
dataset: jlbaker361/wikiart-balanced1000
n_classes: 27
batch_size: 32
images were resized to 768 and then center cropped to 512
used clip=False
discriminator parameters:
init_dim: 32
final_dim: 512
generator parameters:
input noise_dim: 100
|
CLMBR/full-transformer-4
|
CLMBR
| 2024-02-02T21:36:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-26T10:07:15Z |
---
tags:
- generated_from_trainer
model-index:
- name: full2-transformer-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# full2-transformer-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2223 | 0.03 | 76320 | 4.1935 |
| 4.0184 | 1.03 | 152640 | 4.0257 |
| 3.9091 | 0.03 | 228960 | 3.9515 |
| 3.845 | 1.03 | 305280 | 3.9101 |
| 3.7943 | 0.03 | 381600 | 3.8851 |
| 3.7537 | 0.03 | 457920 | 3.8688 |
| 3.7243 | 1.03 | 534240 | 3.8585 |
| 3.6946 | 0.03 | 610560 | 3.8522 |
| 3.6634 | 1.03 | 686880 | 3.8472 |
| 3.6406 | 0.03 | 763200 | 3.8446 |
| 3.6184 | 1.03 | 839520 | 3.8431 |
| 3.5959 | 0.03 | 915840 | 3.8432 |
| 3.5817 | 1.03 | 992160 | 3.8423 |
| 3.5621 | 0.03 | 1068480 | 3.8429 |
| 3.5438 | 1.03 | 1144800 | 3.8439 |
| 3.5273 | 0.03 | 1221120 | 3.8440 |
| 3.5096 | 1.03 | 1297440 | 3.8458 |
| 3.4966 | 0.03 | 1373760 | 3.8464 |
| 3.4822 | 1.03 | 1450080 | 3.8478 |
| 3.4746 | 0.03 | 1526400 | 3.8491 |
| 3.4649 | 1.03 | 1602720 | 3.8508 |
| 3.4573 | 0.03 | 1679040 | 3.8530 |
| 3.4517 | 1.03 | 1755360 | 3.8537 |
| 3.4416 | 0.03 | 1831680 | 3.8544 |
| 3.4297 | 1.03 | 1908000 | 3.8557 |
| 3.4193 | 0.03 | 1984320 | 3.8570 |
| 3.4087 | 1.03 | 2060640 | 3.8579 |
| 3.3961 | 0.03 | 2136960 | 3.8595 |
| 3.3885 | 1.03 | 2213280 | 3.8609 |
| 3.3768 | 0.03 | 2289600 | 3.8616 |
| 3.3645 | 1.03 | 2365920 | 3.8617 |
| 3.3515 | 0.03 | 2442240 | 3.8626 |
| 3.337 | 0.03 | 2518560 | 3.8631 |
| 3.3292 | 0.03 | 2594880 | 3.8627 |
| 3.3153 | 1.03 | 2671200 | 3.8646 |
| 3.3131 | 0.03 | 2747520 | 3.8646 |
| 3.3088 | 0.03 | 2823840 | 3.8638 |
| 3.3024 | 1.03 | 2900160 | 3.8636 |
| 3.3024 | 0.03 | 2976480 | 3.8629 |
| 3.2966 | 0.02 | 3052726 | 3.8620 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
KreigerNadir/LavLora
|
KreigerNadir
| 2024-02-02T21:35:58Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-02-02T21:29:23Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
((blood, dismemberment, disgust)), girl, (the pentagram), curved demonic
horns, gothic dress, (red tone, fire in the background), slate atmosphere,
cinematic, dimmed colors, dark shot, muted colors, film grainy, lut, spooky
<lora:Lav_Lune-Harriet_Cains-000001:1>
<lora:Lav_Lune-Harriet_Cains-000002:1>
parameters:
negative_prompt: >-
(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong
anatomy, extra limb, missing limb, floating limbs, (mutated hands and
fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting,
blurry, amputation, lots of navels, lots of ears
output:
url: images/00008-4004957634.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# First Test Lora
<Gallery />
## Model description

## Download model
Weights for this model are available in Safetensors format.
[Download](/KreigerNadir/LavLora/tree/main) them in the Files & versions tab.
|
pleasefill/mesolo
|
pleasefill
| 2024-02-02T21:34:40Z | 0 | 0 |
mlx
|
[
"mlx",
"music",
"robotics",
"an",
"dataset:HuggingFaceM4/WebSight",
"license:bigscience-bloom-rail-1.0",
"region:us"
] |
robotics
| 2024-02-02T21:32:11Z |
---
license: bigscience-bloom-rail-1.0
datasets:
- HuggingFaceM4/WebSight
language:
- an
metrics:
- character
library_name: mlx
pipeline_tag: robotics
tags:
- music
---
|
CLMBR/full-lstm-4
|
CLMBR
| 2024-02-02T21:30:58Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-26T10:08:19Z |
---
tags:
- generated_from_trainer
model-index:
- name: full2-lstm-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# full2-lstm-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9723
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7959 | 0.03 | 76320 | 4.7638 |
| 4.5079 | 1.03 | 152640 | 4.4815 |
| 4.3625 | 0.03 | 228960 | 4.3450 |
| 4.2769 | 1.03 | 305280 | 4.2616 |
| 4.2133 | 0.03 | 381600 | 4.2046 |
| 4.167 | 0.03 | 457920 | 4.1638 |
| 4.1327 | 0.03 | 534240 | 4.1323 |
| 4.1009 | 1.03 | 610560 | 4.1073 |
| 4.0712 | 0.03 | 686880 | 4.0878 |
| 4.0477 | 1.03 | 763200 | 4.0715 |
| 4.0282 | 0.03 | 839520 | 4.0582 |
| 4.0086 | 1.03 | 915840 | 4.0472 |
| 3.9979 | 0.03 | 992160 | 4.0375 |
| 3.9819 | 1.03 | 1068480 | 4.0296 |
| 3.9663 | 0.03 | 1144800 | 4.0231 |
| 3.9521 | 1.03 | 1221120 | 4.0175 |
| 3.9375 | 0.03 | 1297440 | 4.0120 |
| 3.929 | 1.03 | 1373760 | 4.0072 |
| 3.9164 | 0.03 | 1450080 | 4.0034 |
| 3.9149 | 1.03 | 1526400 | 3.9997 |
| 3.9088 | 0.03 | 1602720 | 3.9969 |
| 3.9042 | 1.03 | 1679040 | 3.9941 |
| 3.9046 | 0.03 | 1755360 | 3.9915 |
| 3.8983 | 1.03 | 1831680 | 3.9891 |
| 3.8929 | 0.03 | 1908000 | 3.9868 |
| 3.8857 | 1.03 | 1984320 | 3.9849 |
| 3.8807 | 0.03 | 2060640 | 3.9831 |
| 3.8737 | 0.03 | 2136960 | 3.9818 |
| 3.8729 | 1.03 | 2213280 | 3.9802 |
| 3.8669 | 0.03 | 2289600 | 3.9789 |
| 3.8603 | 0.03 | 2365920 | 3.9781 |
| 3.8562 | 1.03 | 2442240 | 3.9771 |
| 3.8473 | 0.03 | 2518560 | 3.9763 |
| 3.8437 | 1.03 | 2594880 | 3.9755 |
| 3.8393 | 0.03 | 2671200 | 3.9749 |
| 3.8432 | 0.03 | 2747520 | 3.9742 |
| 3.8431 | 1.03 | 2823840 | 3.9735 |
| 3.8439 | 0.03 | 2900160 | 3.9730 |
| 3.8476 | 1.03 | 2976480 | 3.9726 |
| 3.8466 | 0.02 | 3052726 | 3.9723 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Doctor-Shotgun/TinyLlama-1.1B-32k
|
Doctor-Shotgun
| 2024-02-02T21:25:35Z | 116 | 28 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"llama 2",
"en",
"dataset:togethercomputer/RedPajama-Data-1T-Sample",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-29T05:19:34Z |
---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T-Sample
language:
- en
tags:
- llama
- llama 2
---
# TinyLlama-1.1B-32k
32k context finetune of TinyLlama-1.1B using increased rope theta (rope frequency base) meant to serve as a long-context speculative decoding model.
Created using [TinyLlama-1.1B](https://huggingface.co/TinyLlama/tinyLlama-intermediate-checkpoints-after-1T-token) and further pretraining at 32768 context length on [togethercomputer/RedPajama-Data-1T-Sample](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample).
Of note, the base checkpoint used was from commit "final model" fad4f1a5cd0563ac41349b8fec2e6e51156568a0 which was subsequently reverted, and not the current main branch 3T checkpoint of TinyLlama-1.1B.
[EXL2 Quants by turboderp](https://huggingface.co/turboderp/TinyLlama-1B-32k-exl2)
The quantized model fits alongside a 4.25bpw 70B model at 32k sequence length on a single A6000 and provides noticeable speed-up with speculative decoding.
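The card does not include a code snippet; a minimal sketch of using this checkpoint as the draft model for 🤗 transformers assisted generation is shown below. The target model id is purely a placeholder, and the generation settings are assumptions.
```python
# Sketch only: speculative decoding with this model as the assistant (draft) model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "meta-llama/Llama-2-70b-hf"  # placeholder: any large Llama-family model
draft = AutoModelForCausalLM.from_pretrained("Doctor-Shotgun/TinyLlama-1.1B-32k",
                                             torch_dtype=torch.float16, device_map="auto")
target = AutoModelForCausalLM.from_pretrained(target_id,
                                              torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(target_id)

inputs = tokenizer("Summarize the following document:", return_tensors="pt").to(target.device)
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```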
### Wikitext (wikitext-2-raw-v1_train) Perplexity (64 rows) as evaluated via [exllamav2](https://github.com/turboderp/exllamav2):
| Model | 2048 | 4096 | 8192 | 16384 | 32768 |
| ---------------------- | ---------- | ---------- | ---------- | ---------- | ---------- |
| TinyLlama-1.1B | **8.5633** | 208.3586 | 863.7507 | 1600.5021 | 6981.9021 |
| **TinyLlama-1.1B-32k** | 8.6548 | **7.8339** | **7.4904** | **7.3674** | **7.1338** |
### Evaluation on HumanEval by [turboderp](https://huggingface.co/turboderp):
| Model | Pass@1 | Pass@10 |
| -------------------------------------- | --------------- | ----------- |
| TinyLlama-1.1B | **0.0841** | **0.1524** |
| TinyLlama-1.1B (NTK alpha=7.7) | 0.0598 | 0.1098 |
| TinyLlama-1.1B-32k-ckpt-554 | 0.0732 | 0.1402 |
| **TinyLlama-1.1B-32k** | 0.0829 | **0.1524** |
|
stablediffusionapi/ae-sdxl-v4
|
stablediffusionapi
| 2024-02-02T21:15:14Z | 1 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-02-02T21:13:19Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# AE-SDXL-v4 API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "ae-sdxl-v4".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/ae-sdxl-v4)
Model link: [View model](https://modelslab.com/models/ae-sdxl-v4)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "ae-sdxl-v4",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
ameerazam08/Real3DPortrait
|
ameerazam08
| 2024-02-02T21:11:19Z | 0 | 6 | null |
[
"tflite",
"arxiv:2401.08503",
"arxiv:2305.00787",
"arxiv:2301.13430",
"region:us"
] | null | 2024-02-02T21:10:08Z |
# Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis | ICLR 2024 Spotlight
[Paper](https://arxiv.org/abs/2401.08503) | [Code](https://github.com/yerfor/Real3DPortrait) | [Chinese README](./README-zh.md)
This is the official repo of Real3D-Portrait, with a PyTorch implementation, for one-shot, highly realistic talking portrait video synthesis. You can visit our [Demo Page](https://real3dportrait.github.io/) for watching demo videos, and read our [Paper](https://arxiv.org/pdf/2401.08503.pdf) for technical details.
<p align="center">
<br>
<img src="assets/real3dportrait.png" width="100%"/>
<br>
</p>
# Quick Start!
## Environment Installation
Please refer to the [Installation Guide](docs/prepare_env/install_guide.md) to prepare a Conda environment named `real3dportrait`.
## Download Pre-trained & Third-Party Models
### 3DMM BFM Model
Download the 3DMM BFM Model from [Google Drive](https://drive.google.com/drive/folders/1o4t5YIw7w4cMUN4bgU9nPf6IyWVG1bEk?usp=sharing) or [BaiduYun Disk](https://pan.baidu.com/s/1aqv1z_qZ23Vp2VP4uxxblQ?pwd=m9q5) with password m9q5.
Put all the files in `deep_3drecon/BFM`; the file structure should look like this:
```
deep_3drecon/BFM/
├── 01_MorphableModel.mat
├── BFM_exp_idx.mat
├── BFM_front_idx.mat
├── BFM_model_front.mat
├── Exp_Pca.bin
├── facemodel_info.mat
├── index_mp468_from_mesh35709.npy
├── mediapipe_in_bfm53201.npy
└── std_exp.txt
```
### Pre-trained Real3D-Portrait
Download the pre-trained Real3D-Portrait: [Google Drive](https://drive.google.com/drive/folders/1MAveJf7RvJ-Opg1f5qhLdoRoC_Gc6nD9?usp=sharing) or [BaiduYun Disk](https://pan.baidu.com/s/1Mjmbn0UtA1Zm9owZ7zWNgQ?pwd=6x4f) with password 6x4f.
Put the zip files in `checkpoints` and unzip them; the file structure should look like this:
```
checkpoints/
├── 240126_real3dportrait_orig
│ ├── audio2secc_vae
│ │ ├── config.yaml
│ │ └── model_ckpt_steps_400000.ckpt
│ └── secc2plane_torso_orig
│ ├── config.yaml
│ └── model_ckpt_steps_100000.ckpt
└── pretrained_ckpts
└── mit_b0.pth
```
## Inference
Currently, we provide **CLI** and **Gradio WebUI** for inference, and Google Colab will be provided in the future. We support both Audio-Driven and Video-Driven methods:
- For audio-driven, at least prepare `source image` and `driving audio`
- For video-driven, at least prepare `source image` and `driving expression video`
### Gradio WebUI
Run the Gradio WebUI demo, upload resources in the web page, then click the `Generate` button to run inference:
```bash
python inference/app_real3dportrait.py
```
### CLI Inference
First, switch to the project folder and activate the conda environment:
```bash
cd <Real3DPortraitRoot>
conda activate real3dportrait
export PYTHON_PATH=./
```
For audio-driven synthesis, provide a source image and driving audio:
```bash
python inference/real3d_infer.py \
--src_img <PATH_TO_SOURCE_IMAGE> \
--drv_aud <PATH_TO_AUDIO> \
--drv_pose <PATH_TO_POSE_VIDEO, OPTIONAL> \
--bg_img <PATH_TO_BACKGROUND_IMAGE, OPTIONAL> \
--out_name <PATH_TO_OUTPUT_VIDEO, OPTIONAL>
```
For video-driven synthesis, provide a source image and a driving expression video (passed as the `--drv_aud` parameter):
```bash
python inference/real3d_infer.py \
--src_img <PATH_TO_SOURCE_IMAGE> \
--drv_aud <PATH_TO_EXP_VIDEO> \
--drv_pose <PATH_TO_POSE_VIDEO, OPTIONAL> \
--bg_img <PATH_TO_BACKGROUND_IMAGE, OPTIONAL> \
--out_name <PATH_TO_OUTPUT_VIDEO, OPTIONAL>
```
Some optional parameters:
- `--drv_pose` provides driving pose information; defaults to static poses
- `--bg_img` provides the background image; defaults to the background extracted from the source image
- `--mouth_amp` mouth amplitude; a higher value leads to a wider mouth
- `--map_to_init_pose` when set to `True`, the initial pose will be mapped to the source pose, and other poses will be transformed accordingly
- `--temperature` the sampling temperature of audio2motion; higher gives more diverse results at the expense of lower accuracy
- `--out_name` when not assigned, the results will be stored at `infer_out/tmp/`
- `--out_mode` when `final`, only the final result is output; when `concat_debug`, visualizations of several intermediate steps are also output
Commandline example:
```bash
python inference/real3d_infer.py \
--src_img data/raw/examples/Macron.png \
--drv_aud data/raw/examples/Obama_5s.wav \
--drv_pose data/raw/examples/May_5s.mp4 \
--bg_img data/raw/examples/bg.png \
--out_name output.mp4 \
--out_mode concat_debug
```
# ToDo
- [x] **Release Pre-trained weights of Real3D-Portrait.**
- [x] **Release Inference Code of Real3D-Portrait.**
- [x] **Release Gradio Demo of Real3D-Portrait.**
- [ ] **Release Google Colab of Real3D-Portrait.**
- [ ] **Release Training Code of Real3D-Portrait.**
# Citation
If you found this repo helpful to your work, please consider citing us:
```
@article{ye2024real3d,
title={Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis},
author={Ye, Zhenhui and Zhong, Tianyun and Ren, Yi and Yang, Jiaqi and Li, Weichuang and Huang, Jiawei and Jiang, Ziyue and He, Jinzheng and Huang, Rongjie and Liu, Jinglin and others},
journal={arXiv preprint arXiv:2401.08503},
year={2024}
}
@article{ye2023geneface++,
title={GeneFace++: Generalized and Stable Real-Time Audio-Driven 3D Talking Face Generation},
author={Ye, Zhenhui and He, Jinzheng and Jiang, Ziyue and Huang, Rongjie and Huang, Jiawei and Liu, Jinglin and Ren, Yi and Yin, Xiang and Ma, Zejun and Zhao, Zhou},
journal={arXiv preprint arXiv:2305.00787},
year={2023}
}
@article{ye2023geneface,
title={GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis},
author={Ye, Zhenhui and Jiang, Ziyue and Ren, Yi and Liu, Jinglin and He, Jinzheng and Zhao, Zhou},
journal={arXiv preprint arXiv:2301.13430},
year={2023}
}
```
|
something-else/HF-rwkv-3B-48-2048-ctx16384
|
something-else
| 2024-02-02T21:00:29Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"rwkv5",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T20:51:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
test123nananan/detr-resnet-50_finetuned_cppe5
|
test123nananan
| 2024-02-02T20:59:21Z | 35 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-02-02T20:25:25Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset.
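No usage snippet is included; a minimal inference sketch with the 🤗 transformers object-detection pipeline (the image path is a placeholder):
```python
# Sketch only: run object detection with the fine-tuned DETR checkpoint.
from transformers import pipeline

detector = pipeline("object-detection", model="test123nananan/detr-resnet-50_finetuned_cppe5")
for det in detector("example.jpg"):  # placeholder image path
    print(det["label"], round(det["score"], 3), det["box"])
```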
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
tom-beer/my_awesome_food_model
|
tom-beer
| 2024-02-02T20:47:12Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-base-patch4-window12-384",
"base_model:finetune:microsoft/swin-base-patch4-window12-384",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-02T20:30:40Z |
---
license: apache-2.0
base_model: microsoft/swin-base-patch4-window12-384
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [microsoft/swin-base-patch4-window12-384](https://huggingface.co/microsoft/swin-base-patch4-window12-384) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2147
- Accuracy: 0.921
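No usage snippet is included; a minimal inference sketch with the 🤗 transformers image-classification pipeline (the image path is a placeholder):
```python
# Sketch only: classify an image with the fine-tuned Swin checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="tom-beer/my_awesome_food_model")
print(classifier("example_dish.jpg", top_k=3))  # placeholder image path
```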
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2326 | 0.99 | 62 | 0.2147 | 0.921 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
pharaouk/fusedyi
|
pharaouk
| 2024-02-02T20:40:54Z | 50 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T20:19:22Z |
---
tags:
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
model-index:
- name: fusedYi
results: []
license: apache-2.0
language:
- en
---
# Model Card for FusedYi
<!-- Provide a quick summary of what the model is/does. -->
This is fused Yi-6B.
I took a Yi, merged it with another Yi, and got a new Yi out of the Yis.
Yi-ceptionized.
## Model Details
1.9 x Yi-6B
|
m7n/rijks-sdxl-lora-001
|
m7n
| 2024-02-02T20:34:33Z | 6 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-02-02T19:07:17Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'An oil painting in the style of <s0><s1> still life of a skull made of cauliflower'
output:
url:
"image_0.png"
- text: 'An oil painting in the style of <s0><s1> still life of a skull made of cauliflower'
output:
url:
"image_1.png"
- text: 'An oil painting in the style of <s0><s1> still life of a skull made of cauliflower'
output:
url:
"image_2.png"
- text: 'An oil painting in the style of <s0><s1> still life of a skull made of cauliflower'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: oil painting in the style of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - m7n/rijks-sdxl-lora-001
<Gallery />
## Model description
### These are m7n/rijks-sdxl-lora-001 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`rijks-sdxl-lora-001.safetensors` here 💾](/m7n/rijks-sdxl-lora-001/blob/main/rijks-sdxl-lora-001.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:rijks-sdxl-lora-001:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`rijks-sdxl-lora-001_emb.safetensors` here 💾](/m7n/rijks-sdxl-lora-001/blob/main/rijks-sdxl-lora-001_emb.safetensors)**.
- Place it in your `embeddings` folder
- Use it by adding `rijks-sdxl-lora-001_emb` to your prompt. For example, `oil painting in the style of rijks-sdxl-lora-001_emb`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('m7n/rijks-sdxl-lora-001', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='m7n/rijks-sdxl-lora-001', filename='rijks-sdxl-lora-001_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('An oil painting in the style of <s0><s1> still life of a skull made of cauliflower').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/m7n/rijks-sdxl-lora-001/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
janhq/stealth-rag-v1.1-GGUF
|
janhq
| 2024-02-02T20:31:28Z | 0 | 0 | null |
[
"gguf",
"alignment-handbook",
"generated_from_trainer",
"trl",
"sft",
"dataset:jan-hq/bagel_sft_binarized",
"dataset:jan-hq/dolphin_binarized",
"dataset:jan-hq/openhermes_binarized",
"base_model:jan-hq/stealth-rag-v1.1",
"base_model:quantized:jan-hq/stealth-rag-v1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-02T20:17:12Z |
---
license: apache-2.0
base_model: jan-hq/stealth-rag-v1.1
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- jan-hq/bagel_sft_binarized
- jan-hq/dolphin_binarized
- jan-hq/openhermes_binarized
model-index:
- name: LlamaCorn-sft-adapter
results: []
model_creator: jan-hq
model_name: stealth-rag-v1.1
quantized_by: JanHQ
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This is a GGUF version of [jan-hq/stealth-rag-v1.1](https://huggingface.co/jan-hq/stealth-rag-v1.1)
- Model creator: [jan-hq](https://huggingface.co/jan-hq)
- Original model: [stealth-rag-v1.1](https://huggingface.co/jan-hq/stealth-rag-v1.1)
- Model description: [Readme](https://huggingface.co/jan-hq/stealth-rag-v1.1/blob/main/README.md)
# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
# Jan Model Converter
This is a repository for the [open-source converter](https://github.com/janhq/model-converter). We would be grateful if the community could contribute to and strengthen this repository. We aim to expand the repo so it can convert into various formats.
|
Patcas/codet5-no-doc-new-v3
|
Patcas
| 2024-02-02T20:24:02Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Salesforce/codet5-base",
"base_model:finetune:Salesforce/codet5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T18:58:35Z |
---
license: apache-2.0
base_model: Salesforce/codet5-base
tags:
- generated_from_trainer
model-index:
- name: codet5-no-doc-new-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codet5-no-doc-new-v3
This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1403
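The card does not document the expected input format; as a generic sketch, the checkpoint can presumably be loaded like any other CodeT5 seq2seq model (the example input is a placeholder):
```python
# Sketch only: generic seq2seq inference with the fine-tuned CodeT5 checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Patcas/codet5-no-doc-new-v3")
model = AutoModelForSeq2SeqLM.from_pretrained("Patcas/codet5-no-doc-new-v3")

inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")  # placeholder input
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```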
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 230 | 1.6722 |
| No log | 2.0 | 460 | 1.4156 |
| 2.1009 | 3.0 | 690 | 1.2881 |
| 2.1009 | 4.0 | 920 | 1.2215 |
| 1.108 | 5.0 | 1150 | 1.1894 |
| 1.108 | 6.0 | 1380 | 1.1622 |
| 0.841 | 7.0 | 1610 | 1.1551 |
| 0.841 | 8.0 | 1840 | 1.1417 |
| 0.6694 | 9.0 | 2070 | 1.1381 |
| 0.6694 | 10.0 | 2300 | 1.1403 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
cashewEnthusiast/Taxi-v3-attempt1
|
cashewEnthusiast
| 2024-02-02T20:23:00Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T20:22:58Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-attempt1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.42 +/- 2.82
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="cashewEnthusiast/Taxi-v3-attempt1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
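`load_from_hub` above is the helper defined in the Deep RL course notebook. As a rough stand-in (assuming the uploaded artifact is a plain pickle, which is how the course saves it), the same download can be done with `huggingface_hub` directly:
```python
# Rough stand-in for load_from_hub (assumption: the artifact is a plain pickle file).
import pickle
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="cashewEnthusiast/Taxi-v3-attempt1", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)
print(model.keys())  # typically the Q-table, env_id, and training hyperparameters
```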
|
LunaticTanuki/oop-de-qg-flan-t5-base-v5
|
LunaticTanuki
| 2024-02-02T20:14:29Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T11:16:41Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: oop-de-qg-flan-t5-base-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# oop-de-qg-flan-t5-base-v5
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8305
- Rouge1: 60.2858
- Rouge2: 47.0551
- Rougel: 58.5541
- Rougelsum: 58.5986
- Gen Len: 14.6254
- Bleu: 0.3585
- Precisions: [0.6612685560053981, 0.4800607671857197, 0.39139878366637704, 0.3257229832572298]
- Brevity Penalty: 0.7993
- Length Ratio: 0.8170
- Translation Length: 2964
- Reference Length: 3628
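As a sanity check, the reported brevity penalty is consistent with the listed lengths: since the translation length (2964) is shorter than the reference length (3628), BP = exp(1 - 3628/2964) ≈ exp(-0.224) ≈ 0.7993, which matches the value above.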
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:------:|:-----------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|
| No log | 0.99 | 72 | 0.9838 | 58.281 | 44.4811 | 56.6252 | 56.6047 | 14.6042 | 0.3304 | [0.6428324697754749, 0.4543681747269891, 0.367666815942678, 0.30546792849631965] | 0.7763 | 0.7980 | 2895 | 3628 |
| No log | 1.99 | 145 | 0.9010 | 55.8534 | 42.0605 | 54.3596 | 54.3148 | 14.6586 | 0.3076 | [0.6021433355659745, 0.41167608286252355, 0.3253012048192771, 0.26241846462619167] | 0.8065 | 0.8230 | 2986 | 3628 |
| No log | 3.0 | 218 | 0.8767 | 57.7174 | 44.1283 | 56.4402 | 56.3292 | 14.5136 | 0.3323 | [0.6361781706902414, 0.4509578544061303, 0.36287845546292236, 0.2982546201232033] | 0.7917 | 0.8106 | 2941 | 3628 |
| No log | 4.0 | 291 | 0.8583 | 60.2113 | 47.3135 | 58.8257 | 58.7408 | 14.3233 | 0.3580 | [0.6711758584807492, 0.49490595611285265, 0.4074741107609185, 0.3412698412698413] | 0.7723 | 0.7947 | 2883 | 3628 |
| No log | 4.99 | 363 | 0.8396 | 59.8588 | 46.8718 | 58.3234 | 58.2478 | 14.4894 | 0.3539 | [0.6580469547465124, 0.47929447852760737, 0.39042599912165127, 0.32528263103802674] | 0.7910 | 0.8101 | 2939 | 3628 |
| No log | 5.99 | 436 | 0.8316 | 59.7653 | 46.5459 | 58.066 | 58.1354 | 14.4804 | 0.3548 | [0.6613342409802587, 0.4798619102416571, 0.3914762741652021, 0.3264781491002571] | 0.7907 | 0.8098 | 2938 | 3628 |
| 0.9411 | 7.0 | 509 | 0.8305 | 60.2858 | 47.0551 | 58.5541 | 58.5986 | 14.6254 | 0.3585 | [0.6612685560053981, 0.4800607671857197, 0.39139878366637704, 0.3257229832572298] | 0.7993 | 0.8170 | 2964 | 3628 |
| 0.9411 | 7.92 | 576 | 0.8309 | 60.2226 | 47.1068 | 58.611 | 58.5902 | 14.6526 | 0.3605 | [0.6590450571620713, 0.4801362088535755, 0.39273356401384085, 0.3276123170116103] | 0.8026 | 0.8197 | 2974 | 3628 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
jak414/liedetect_fold2
|
jak414
| 2024-02-02T20:03:45Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-12-11T22:38:01Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
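As an illustration only (the base model id is not given in this card), the same quantization settings map onto `transformers`' `BitsAndBytesConfig` roughly as follows:
```python
# Illustrative reconstruction of the bitsandbytes settings listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
    llm_int8_threshold=6.0,
)
```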
### Framework versions
- PEFT 0.4.0
|
Bharath924/LunarLander
|
Bharath924
| 2024-02-02T20:01:37Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T19:47:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 233.04 +/- 84.74
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
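Until the TODO above is filled in, here is a minimal sketch that downloads the checkpoint and evaluates it locally; the filename is a guess from the usual SB3 naming, and `gymnasium` with Box2D installed is assumed.
```python
# Sketch only: load the PPO checkpoint from the Hub and evaluate it.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="Bharath924/LunarLander",
                           filename="ppo-LunarLander-v2.zip")  # assumed filename
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```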
|
ameerazam08/DiffSynth-Studio
|
ameerazam08
| 2024-02-02T20:00:54Z | 0 | 8 | null |
[
"arxiv:2401.16224",
"region:us"
] | null | 2024-02-02T19:55:39Z |
# DiffSynth Studio
## Introduction
DiffSynth is a new Diffusion engine. We have restructured architectures including Text Encoder, UNet, VAE, among others, maintaining compatibility with models from the open-source community while enhancing computational performance. This version is currently in its initial stage, supporting SD and SDXL architectures. In the future, we plan to develop more interesting features based on this new codebase.
## Installation
Create Python environment:
```
conda env create -f environment.yml
```
We find that sometimes `conda` cannot install `cupy` correctly; if so, please install it manually. See [this document](https://docs.cupy.dev/en/stable/install.html) for more details.
Enter the Python environment:
```
conda activate DiffSynthStudio
```
## Usage (in WebUI)
```
python -m streamlit run Diffsynth_Studio.py
```
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/93085557-73f3-4eee-a205-9829591ef954
## Usage (in Python code)
### Example 1: Stable Diffusion
We can generate images with very high resolution. Please see `examples/sd_text_to_image.py` for more details.
(Example images at 512*512, 1024*1024, 2048*2048 and 4096*4096 resolution are omitted here.)
### Example 2: Stable Diffusion XL
Generate images with Stable Diffusion XL. Please see `examples/sdxl_text_to_image.py` for more details.
(Example images at 1024*1024 and 2048*2048 resolution are omitted here.)
### Example 3: Stable Diffusion XL Turbo
Generate images with Stable Diffusion XL Turbo. You can see `examples/sdxl_turbo.py` for more details, but we highly recommend using it in the WebUI.
|"black car"|"red car"|
|-|-|
|||
### Example 4: Toon Shading (Diffutoon)
This example is implemented based on [Diffutoon](https://arxiv.org/abs/2401.16224). This approach is well suited to rendering high-resolution videos with rapid motion. You can easily modify the parameters in the config dict. See `examples/diffutoon_toon_shading.py`.
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/b54c05c5-d747-4709-be5e-b39af82404dd
### Example 5: Toon Shading with Editing Signals (Diffutoon)
Coming soon.
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/20528af5-5100-474a-8cdc-440b9efdd86c
### Example 6: Toon Shading (in native Python code)
This example is provided for developers. If you don't want to use the config to manage parameters, you can see `examples/sd_toon_shading.py` to learn how to use it in native Python code.
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/607c199b-6140-410b-a111-3e4ffb01142c
### Example 7: Text to Video
Given a prompt, DiffSynth Studio can generate a video using a Stable Diffusion model and an AnimateDiff model. We can break the limitation on the number of frames! See `examples/sd_text_to_video.py`.
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/8f556355-4079-4445-9b48-e9da77699437
### Example 8: Video Stylization
We provide an example for video stylization. In this pipeline, the rendered video is completely different from the original video, thus we need a powerful deflickering algorithm. We use FastBlend to implement the deflickering module. Please see `examples/sd_video_rerender.py` for more details.
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/59fb2f7b-8de0-4481-b79f-0c3a7361a1ea
### Example 9: Prompt Processing
If you are not a native English user, we provide a translation service for you. Our prompter can translate other languages to English and refine them using "BeautifulPrompt" models. Please see `examples/sd_prompt_refining.py` for more details.
Prompt: "一个漂亮的女孩" ("a beautiful girl"). The [translation model](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) will translate it to English.
(Example images for seed=0 through seed=3 are omitted here.)
Prompt: "一个漂亮的女孩". The [translation model](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) will translate it to English. Then the [refining model](https://huggingface.co/alibaba-pai/pai-bloom-1b1-text2prompt-sd) will refine the translated prompt for better visual quality.
(Example images for seed=0 through seed=3 are omitted here.)
|
Dhanraj1503/PixelCopter
|
Dhanraj1503
| 2024-02-02T20:00:04Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-18T11:34:28Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 43.70 +/- 30.74
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jlbaker361/dcgan-lazy-wikiart500-clip-resized-cond
|
jlbaker361
| 2024-02-02T19:53:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-02-01T19:53:41Z |
---
{}
---
Creative Adversarial Network
epochs: 2
dataset: jlbaker361/wikiart-balanced500
n_classes: 27
batch_size: 4
images were resized to 768 and then center cropped to 512
used clip=True
conditional=True
discriminator parameters:
init_dim: 32
final_dim: 512
generator parameters:
input noise_dim: 100
|
mjm4dl/ps_intent_7b_r16
|
mjm4dl
| 2024-02-02T19:52:28Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T19:48:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
joshberg65/mistral_7b_Rassle
|
joshberg65
| 2024-02-02T19:48:21Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-31T21:29:09Z |
---
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
More information coming soon! I've trained this model on pro wrestling results and information, on top of the base Mistral 7B model and the Guanaco dataset. The final version of this model will be a pro wrestling and sports entertainment guru!
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
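Until official usage code is added, a minimal loading sketch (assuming a standard `transformers` causal-LM checkpoint under this repo id; the prompt and generation settings are illustrative only) could look like:
```python
# Minimal sketch: load the fine-tuned checkpoint and generate a completion.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "joshberg65/mistral_7b_Rassle"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Who won the first Royal Rumble match?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```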
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mtc/mistralai-Mistral-7B-v0.1-7b-xsum-with-all-explanations-5-epochs-full-dataset-lora-full
|
mtc
| 2024-02-02T19:39:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-02-02T19:39:25Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
nrivkin/sd-class-butterflies-32
|
nrivkin
| 2024-02-02T19:33:45Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-02-02T19:33:38Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('nrivkin/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
esahit/t5-medical-text-simplification
|
esahit
| 2024-02-02T19:32:04Z | 22 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:mrm8488/t5-small-finetuned-text-simplification",
"base_model:finetune:mrm8488/t5-small-finetuned-text-simplification",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T18:15:55Z |
---
license: apache-2.0
base_model: mrm8488/t5-small-finetuned-text-simplification
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-medical-text-simplification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-medical-text-simplification
This model is a fine-tuned version of [mrm8488/t5-small-finetuned-text-simplification](https://huggingface.co/mrm8488/t5-small-finetuned-text-simplification) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4158
- Bleu: {'bleu': 0.24913061085239344, 'precisions': [0.6300697552884507, 0.46170603353322726, 0.3783389479827051, 0.3190805662507599], 'brevity_penalty': 0.5754971743889961, 'length_ratio': 0.6441136869219061, 'translation_length': 44011, 'reference_length': 68328}
- Sari: {'sari': 21.772869578730884}
- Fkgl: 10.2474
## Model description
More information needed
## Intended uses & limitations
More information needed
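In the meantime, a minimal usage sketch (assuming a standard seq2seq T5 checkpoint, which matches the tags on this card; the example sentence and generation settings are illustrative) could be:
```python
# Minimal sketch: simplify a medical sentence with the fine-tuned T5 model.
# Note: the base model may expect a task prefix; check the training setup.
from transformers import pipeline

simplifier = pipeline("text2text-generation", model="esahit/t5-medical-text-simplification")
text = (
    "The patient presented with acute myocardial infarction and underwent "
    "percutaneous coronary intervention."
)
print(simplifier(text, max_new_tokens=64)[0]["generated_text"])
```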
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Sari | Fkgl |
|:-------------:|:-----:|:----:|:---------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:----------------------------:|:-------:|
| 1.5524 | 1.0 | 1578 | 1.4317 | {'bleu': 0.24854970426705067, 'precisions': [0.626776178839714, 0.45794346978557504, 0.37443247809101465, 0.3154227136604469], 'brevity_penalty': 0.5792493345645447, 'length_ratio': 0.646821215314366, 'translation_length': 44196, 'reference_length': 68328} | {'sari': 21.542679628603977} | 10.2949 |
| 1.5282 | 2.0 | 3156 | 1.4249 | {'bleu': 0.24886563197246125, 'precisions': [0.6285792076961474, 0.4604086221222934, 0.3770192256766061, 0.3176616771658094], 'brevity_penalty': 0.5767757332645675, 'length_ratio': 0.6450357101042032, 'translation_length': 44074, 'reference_length': 68328} | {'sari': 21.665573517166536} | 10.2937 |
| 1.4997 | 3.0 | 4734 | 1.4176 | {'bleu': 0.24852094682922746, 'precisions': [0.629403208945048, 0.4605591734808794, 0.377421066595914, 0.3182660566398332], 'brevity_penalty': 0.5753144561890373, 'length_ratio': 0.6439819693244351, 'translation_length': 44002, 'reference_length': 68328} | {'sari': 21.700716936778782} | 10.2544 |
| 1.5028 | 4.0 | 6312 | 1.4176 | {'bleu': 0.24876653336273433, 'precisions': [0.6299538437052363, 0.4615309246785058, 0.37816241471767237, 0.3188943296728769], 'brevity_penalty': 0.5748880487421792, 'length_ratio': 0.6436746282636694, 'translation_length': 43981, 'reference_length': 68328} | {'sari': 21.750120178010484} | 10.2531 |
| 1.4976 | 5.0 | 7890 | 1.4158 | {'bleu': 0.24913061085239344, 'precisions': [0.6300697552884507, 0.46170603353322726, 0.3783389479827051, 0.3190805662507599], 'brevity_penalty': 0.5754971743889961, 'length_ratio': 0.6441136869219061, 'translation_length': 44011, 'reference_length': 68328} | {'sari': 21.772869578730884} | 10.2474 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
sbulut/distilbert-base-uncased-finetuned-imdb
|
sbulut
| 2024-02-02T19:29:15Z | 4 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-02-02T19:25:36Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4118
## Model description
More information needed
## Intended uses & limitations
More information needed
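In the meantime, a minimal fill-mask sketch (matching the pipeline tag of this checkpoint; the example sentence is illustrative) could be:
```python
# Minimal sketch: query the domain-adapted masked language model.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="sbulut/distilbert-base-uncased-finetuned-imdb")
for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```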
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7024 | 1.0 | 157 | 2.4965 |
| 2.5792 | 2.0 | 314 | 2.4280 |
| 2.5354 | 3.0 | 471 | 2.4508 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
meetplace1/bertsmallclassifier100
|
meetplace1
| 2024-02-02T19:28:19Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-02T18:59:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
provetgrizzner/rare-puppers
|
provetgrizzner
| 2024-02-02T19:06:04Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-02T19:05:57Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9402984976768494
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
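To try the classifier directly, here is a minimal sketch (assuming the standard ViT image-classification head that HuggingPics exports; the image path is illustrative):
```python
# Minimal sketch: classify an image with the exported checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="provetgrizzner/rare-puppers")
print(classifier("corgi.jpg"))  # any local file path or image URL works here
```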
## Example Images
#### corgi

#### samoyed

#### shiba inu

|
minchyeom/MemGPT-DPO-MoE-test
|
minchyeom
| 2024-02-02T18:59:57Z | 11 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"MemGPT",
"function",
"function calling",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-31T18:44:26Z |
---
library_name: transformers
license: apache-2.0
language:
- en
tags:
- MemGPT
- function
- function calling
---
This is a test release of DPO version of [MemGPT](https://github.com/cpacker/MemGPT) Language Model.
# Model Description
This repository contains a MoE (Mixture of Experts) model of [Mistral 7B Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). It has 2 experts per token. This model is specifically designed for function calling in MemGPT. It demonstrates performance comparable to GPT-4 when working with MemGPT.
# Key Features
* Function calling
* Dedicated to working with MemGPT
* Supports medium-length context, up to sequences of 8,192 tokens
# Prompt Format
This model uses **ChatML** prompt format:
```
<|im_start|>system
{system_instruction}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
```
# Usage
This model is designed to be run on multiple backends, such as [oobabooga's textgen WebUI](https://github.com/oobabooga/text-generation-webui).
Simply install your preferred backend, and then load up this model.
Then, configure MemGPT using `memgpt configure`, and chat with MemGPT via the `memgpt run` command!
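If you prefer to call the model directly from Python instead of a chat backend, a minimal sketch (assuming the tokenizer ships the ChatML chat template shown above; otherwise format the prompt manually) might be:
```python
# Minimal sketch: apply the ChatML chat template and generate a reply.
# Repo id is taken from this card; generation settings are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minchyeom/MemGPT-DPO-MoE-test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are MemGPT, an assistant with function calling."},
    {"role": "user", "content": "Store a note that my favourite colour is blue."},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```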
# Model Details
* Developed by: @starsnatched
* Model type: This repo contains a language model based on the transformer decoder architecture.
* Language: English
* Contact: For any questions, concerns or comments about this model, please contact me on Discord: @starsnatched.
# Training Infrastructure
* Hardware: The model in this repo was trained on 2x A100 80GB GPUs.
# Intended Use
The model is designed to be used as the base model for MemGPT agents.
# Limitations and Risks
The model may exhibit unreliable, unsafe, or biased behaviours. Please double check the results this model may produce.
|
LoneStriker/Synatra-Mixtral-8x7B-GGUF
|
LoneStriker
| 2024-02-02T18:57:17Z | 1 | 1 | null |
[
"gguf",
"moe",
"ko",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-02T16:57:59Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- moe
---
# **Synatra-Mixtral-8x7B**
<img src="./Synatra-Mixtral.png" alt="Synatra-Mixtral-8x7B" width="512"/>
**Synatra-Mixtral-8x7B** is a fine-tuned version of the Mixtral-8x7B-Instruct-v0.1 model using **Korean** datasets.
This model features overwhelmingly superior comprehension and inference capabilities and is licensed under apache-2.0.
# **Join Our Discord**
[Server Link](https://discord.gg/MrBt3PXdXc)
# **License**
**OPEN**, Apache-2.0.
# **Model Details**
**Base Model**
[mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
**Trained On**
A100 80GB * 6
**Instruction format**
It follows **Alpaca** format.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{input}
### Response:
{output}
```
# **Model Benchmark**
TBD
# **Implementation Code**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-Mixtral-8x7B")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-Mixtral-8x7B")
messages = [
{"role": "user", "content": "아인슈타인의 상대성이론에 대해서 자세히 설명해줘."},
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
# **Author's Message**
This model's training was not sponsored by anyone; it was made possible by the support of people around the world.
[Support Me](https://www.buymeacoffee.com/mwell)
Contact Me on Discord - **is.maywell**
Follow me on twitter: https://twitter.com/stablefluffy
|
Katelie/Cartpole-v1
|
Katelie
| 2024-02-02T18:49:58Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T18:39:05Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kviai/KviGPT-7b-Chat
|
kviai
| 2024-02-02T18:49:49Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"LLM",
"Chat",
"KVIGPT",
"Llama",
"Lora",
"KVIAI",
"en",
"ru",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T16:39:18Z |
---
license: cc-by-sa-4.0
language:
- en
- ru
pipeline_tag: text-generation
tags:
- LLM
- Chat
- KVIGPT
- Llama
- Lora
- KVIAI
library_name: transformers
---
# KviGPT 7b
KviGPT is a powerful text-generation LLM.
## Usage
You can use KviGPT with the transformers library; here is how:
```Python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("kviai/KviGPT-7b-Chat")
model = AutoModelForCausalLM.from_pretrained("kviai/KviGPT-7b-Chat")

# Build a text-generation pipeline from the loaded model and tokenizer
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "Hi, what do you know about TON coin?"
output = generator(prompt, max_new_tokens=128)
print(output[0]["generated_text"])
```
## Model Details
You can train it using Amazon SageMaker or Auto Train
## Credits
- **Developed by:** KviAI
- **Funded by:** Katsyka Vasiliy
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **License:** Creative Commons Attribution Share Alike 4.0
## Demo
- **Demo:** [https://hf.co/spaces/kviai/kvigpt](https://hf.co/spaces/kviai/kvigpt)
|
APaul1/microsoft-layoutlm-FUNDS
|
APaul1
| 2024-02-02T18:46:27Z | 0 | 0 |
transformers, peft
|
[
"transformers, peft",
"safetensors",
"dataset:nielsr/funsd",
"region:us"
] | null | 2024-02-01T19:29:21Z |
---
library_name: transformers, peft
datasets:
- nielsr/funsd
---
# Model Card for Model ID
This model is based on the notebook provided [here](https://huggingface.co/spaces/PEFT/token-classification/blob/main/peft_lora_token_cls.ipynb)
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model learns to understand the different sections of a form in noisy scanned documents. An example is shown below

It is based on a model developed by Microsoft Research for language understanding and generation of scanned documents.
- **Developed by:** Microsoft Research
- **Model type:** Multimodal
- **Language(s) (NLP):** [More Information Needed]
- **Finetuned from model:** microsoft/layoutlm-base-uncased
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
|
bartowski/CodeMate-v0.1-exl2
|
bartowski
| 2024-02-02T18:45:08Z | 11 | 0 |
transformers
|
[
"transformers",
"CodeMate",
"Code",
"text-generation",
"en",
"license:llama2",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-31T05:23:14Z |
---
license: llama2
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- CodeMate
- Code
quantized_by: bartowski
---
## Exllama v2 Quantizations of CodeMate-v0.1
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.
## The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Conversion was done using the default calibration dataset.
Default arguments used except when the bits per weight is above 6.0, at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
Original model: https://huggingface.co/codemateai/CodeMate-v0.1
<a href="https://huggingface.co/bartowski/CodeMate-v0.1-exl2/tree/6_5">6.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/CodeMate-v0.1-exl2/tree/4_25">4.25 bits per weight</a>
<a href="https://huggingface.co/bartowski/CodeMate-v0.1-exl2/tree/3_5">3.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/CodeMate-v0.1-exl2/tree/3_0">3.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/CodeMate-v0.1-exl2/tree/2_4">2.4 bits per weight</a>
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/CodeMate-v0.1-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `CodeMate-v0.1-exl2`:
```shell
mkdir CodeMate-v0.1-exl2
huggingface-cli download bartowski/CodeMate-v0.1-exl2 --local-dir CodeMate-v0.1-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir CodeMate-v0.1-exl2-6_5
huggingface-cli download bartowski/CodeMate-v0.1-exl2 --revision 6_5 --local-dir CodeMate-v0.1-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir CodeMate-v0.1-exl2-6.5
huggingface-cli download bartowski/CodeMate-v0.1-exl2 --revision 6_5 --local-dir CodeMate-v0.1-exl2-6.5 --local-dir-use-symlinks False
```
|
jan-hq/stealth-rag-v1.1
|
jan-hq
| 2024-02-02T18:25:24Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:jan-hq/bagel_sft_binarized",
"dataset:jan-hq/dolphin_binarized",
"dataset:jan-hq/openhermes_binarized",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T18:23:00Z |
---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- jan-hq/bagel_sft_binarized
- jan-hq/dolphin_binarized
- jan-hq/openhermes_binarized
model-index:
- name: LlamaCorn-sft-adapter
results: []
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto"
>
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner"
style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a
>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Prompt template
ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open source, ChatGPT alternative that is:
- 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints
- 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
|
Patcas/plbartAssert-doc-new-v3
|
Patcas
| 2024-02-02T18:07:16Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"plbart",
"text2text-generation",
"generated_from_trainer",
"base_model:Patcas/my_awesome-assert-new",
"base_model:finetune:Patcas/my_awesome-assert-new",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T17:11:02Z |
---
base_model: Patcas/my_awesome-assert-new
tags:
- generated_from_trainer
model-index:
- name: plbartAssert-doc-new-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plbartAssert-doc-new-v3
This model is a fine-tuned version of [Patcas/my_awesome-assert-new](https://huggingface.co/Patcas/my_awesome-assert-new) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 230 | 1.1462 |
| No log | 2.0 | 460 | 0.9920 |
| 1.3808 | 3.0 | 690 | 0.9736 |
| 1.3808 | 4.0 | 920 | 0.9924 |
| 0.4648 | 5.0 | 1150 | 0.9777 |
| 0.4648 | 6.0 | 1380 | 0.9835 |
| 0.2359 | 7.0 | 1610 | 0.9949 |
| 0.2359 | 8.0 | 1840 | 0.9979 |
| 0.1429 | 9.0 | 2070 | 1.0030 |
| 0.1429 | 10.0 | 2300 | 1.0066 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
anasselhoud/ppo-V1-LunarLander
|
anasselhoud
| 2024-02-02T18:06:33Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-01T20:49:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 287.31 +/- 15.83
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
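A minimal sketch of what that usage could look like (the checkpoint filename inside the repo and the gymnasium import are assumptions; check the repository's file list and your stable-baselines3 version):
```python
# Minimal sketch: download the checkpoint from the Hub and evaluate it.
import gymnasium as gym  # use `gym` instead for older stable-baselines3 versions
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename below is an assumption; replace it with the actual .zip in the repo.
checkpoint = load_from_hub("anasselhoud/ppo-V1-LunarLander", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```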
|
Patcas/plbartAssert-doc-new-v2
|
Patcas
| 2024-02-02T18:05:57Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"plbart",
"text2text-generation",
"generated_from_trainer",
"base_model:Patcas/my_awesome-assert-new",
"base_model:finetune:Patcas/my_awesome-assert-new",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T16:54:19Z |
---
base_model: Patcas/my_awesome-assert-new
tags:
- generated_from_trainer
model-index:
- name: plbartAssert-doc-new-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plbartAssert-doc-new-v2
This model is a fine-tuned version of [Patcas/my_awesome-assert-new](https://huggingface.co/Patcas/my_awesome-assert-new) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 230 | 1.2323 |
| No log | 2.0 | 460 | 1.1191 |
| 1.3803 | 3.0 | 690 | 1.0838 |
| 1.3803 | 4.0 | 920 | 1.1141 |
| 0.4603 | 5.0 | 1150 | 1.1005 |
| 0.4603 | 6.0 | 1380 | 1.1077 |
| 0.2338 | 7.0 | 1610 | 1.1242 |
| 0.2338 | 8.0 | 1840 | 1.1391 |
| 0.1476 | 9.0 | 2070 | 1.1341 |
| 0.1476 | 10.0 | 2300 | 1.1342 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Trubnik1967/Trubnik1967
|
Trubnik1967
| 2024-02-02T17:56:25Z | 3 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2024-02-01T17:58:58Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# Trubnik1967
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("Trubnik1967/Trubnik1967")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 18
* Number of training documents: 930
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| 0 | measurement - material - researcher - new - used | 396 | 0_measurement_material_researcher_new |
| 1 | security - information - system - federal - control | 146 | 1_security_information_system_federal |
| 2 | face - recognition - algorithm - study - data | 51 | 2_face_recognition_algorithm_study |
| 3 | yous - standardization - international - program - trade | 32 | 3_yous_standardization_international_program |
| 4 | framework - cybersecurity - critical - infrastructure - organization | 35 | 4_framework_cybersecurity_critical_infrastructure |
| 5 | online - pilot - service - secure - solution | 25 | 5_online_pilot_service_secure |
| 6 | data - environment - communication - model - measurement | 36 | 6_data_environment_communication_model |
| 7 | cloud - computing - working - government - service | 21 | 7_cloud_computing_working_government |
| 8 | cybersecurity - education - training - job - worker | 23 | 8_cybersecurity_education_training_job |
| 9 | algorithm - key - public - comment - document | 19 | 9_algorithm_key_public_comment |
| 10 | data - digital - function - system - algorithm | 28 | 10_data_digital_function_system |
| 11 | building - code - report - recommendation - community | 32 | 11_building_code_report_recommendation |
| 12 | mobile - security - device - organization - guide | 17 | 12_mobile_security_device_organization |
| 13 | health - information - patient - security - electronic | 16 | 13_health_information_patient_security |
| 14 | card - personal - federal - employee - publication | 14 | 14_card_personal_federal_employee |
| 15 | smart - interoperability - framework - cyber - network | 13 | 15_smart_interoperability_framework_cyber |
| 16 | safety - public - communication - network - emergency | 13 | 16_safety_public_communication_network |
| 17 | comment - system - draft - guideline - yous | 13 | 17_comment_system_draft_guideline |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.23.5
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.3.1
* Transformers: 4.35.2
* Numba: 0.58.1
* Plotly: 5.15.0
* Python: 3.10.12
|
CLMBR/rel-cl-transformer-1
|
CLMBR
| 2024-02-02T17:55:07Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-26T10:19:49Z |
---
tags:
- generated_from_trainer
model-index:
- name: rel-cl2-transformer-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rel-cl2-transformer-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8695
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2329 | 0.03 | 76320 | 4.2009 |
| 4.0289 | 1.03 | 152640 | 4.0307 |
| 3.9231 | 0.03 | 228960 | 3.9576 |
| 3.8535 | 1.03 | 305280 | 3.9171 |
| 3.804 | 0.03 | 381600 | 3.8918 |
| 3.7641 | 1.03 | 457920 | 3.8763 |
| 3.7275 | 0.03 | 534240 | 3.8658 |
| 3.696 | 0.03 | 610560 | 3.8590 |
| 3.6663 | 1.03 | 686880 | 3.8544 |
| 3.6421 | 0.03 | 763200 | 3.8515 |
| 3.6169 | 1.03 | 839520 | 3.8503 |
| 3.5968 | 0.03 | 915840 | 3.8493 |
| 3.5769 | 1.03 | 992160 | 3.8496 |
| 3.558 | 0.03 | 1068480 | 3.8508 |
| 3.547 | 1.03 | 1144800 | 3.8513 |
| 3.5347 | 0.03 | 1221120 | 3.8519 |
| 3.5203 | 0.03 | 1297440 | 3.8537 |
| 3.5052 | 1.03 | 1373760 | 3.8551 |
| 3.4959 | 0.03 | 1450080 | 3.8548 |
| 3.4838 | 0.03 | 1526400 | 3.8566 |
| 3.4748 | 1.03 | 1602720 | 3.8588 |
| 3.4668 | 0.03 | 1679040 | 3.8602 |
| 3.4557 | 1.03 | 1755360 | 3.8608 |
| 3.4437 | 0.03 | 1831680 | 3.8632 |
| 3.4302 | 1.03 | 1908000 | 3.8629 |
| 3.4168 | 0.03 | 1984320 | 3.8652 |
| 3.4053 | 1.03 | 2060640 | 3.8663 |
| 3.3953 | 0.03 | 2136960 | 3.8667 |
| 3.3831 | 1.03 | 2213280 | 3.8682 |
| 3.371 | 0.03 | 2289600 | 3.8691 |
| 3.3647 | 1.03 | 2365920 | 3.8695 |
| 3.3613 | 0.03 | 2442240 | 3.8700 |
| 3.3501 | 0.03 | 2518560 | 3.8709 |
| 3.3403 | 1.03 | 2594880 | 3.8718 |
| 3.3309 | 0.03 | 2671200 | 3.8718 |
| 3.3237 | 1.03 | 2747520 | 3.8718 |
| 3.3183 | 0.03 | 2823840 | 3.8718 |
| 3.3119 | 1.03 | 2900160 | 3.8714 |
| 3.3057 | 0.03 | 2976480 | 3.8705 |
| 3.2965 | 1.02 | 3052726 | 3.8695 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
golesheed/whisper-native-children-9-dutch
|
golesheed
| 2024-02-02T17:53:21Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"nl",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-02T13:49:12Z |
---
language:
- nl
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Large V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large V2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1495
- Wer: 6.1057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4234 | 0.38 | 30 | 0.1891 | 7.6341 |
| 0.1849 | 0.75 | 60 | 0.1619 | 6.4918 |
| 0.1234 | 1.12 | 90 | 0.1579 | 6.2475 |
| 0.0766 | 1.5 | 120 | 0.1490 | 6.1136 |
| 0.0769 | 1.88 | 150 | 0.1415 | 6.0191 |
| 0.049 | 2.25 | 180 | 0.1418 | 6.0112 |
| 0.0336 | 2.62 | 210 | 0.1412 | 5.8773 |
| 0.0333 | 3.0 | 240 | 0.1389 | 6.1372 |
| 0.0163 | 3.38 | 270 | 0.1513 | 6.2081 |
| 0.016 | 3.75 | 300 | 0.1410 | 5.4439 |
| 0.011 | 4.12 | 330 | 0.1442 | 5.4833 |
| 0.0081 | 4.5 | 360 | 0.1489 | 5.9797 |
| 0.0066 | 4.88 | 390 | 0.1495 | 6.1057 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
|
fsicoli/whisper-large-v3-pt-cv16
|
fsicoli
| 2024-02-02T17:47:28Z | 7 | 2 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"pt",
"dataset:fsicoli/common_voice_16_0",
"arxiv:1910.09700",
"license:cc",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-12-26T14:22:35Z |
---
license: cc
datasets:
- fsicoli/common_voice_16_0
language:
- pt
metrics:
- wer 0.1076
---
# Whisper Large v3 Portuguese
<!-- Provide a quick summary of what the model is/does. -->
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on Portuguese, using the train and validation splits of [Common Voice 16](https://huggingface.co/datasets/fsicoli/common_voice_16_0).
## Usage
```
from transformers import pipeline
transcriber = pipeline(
"automatic-speech-recognition",
model="fsicoli/whisper-large-v3-pt-cv16"
)
transcriber.model.config.forced_decoder_ids = (
transcriber.tokenizer.get_decoder_prompt_ids(
language="pt",
task="transcribe"
)
)
transcription = transcriber("path/to/my_audio.wav")
```
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
Test split of [Common Voice 16](https://huggingface.co/datasets/fsicoli/common_voice_16_0).
[More Information Needed]
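For reference, a minimal sketch of how WER could be computed on a slice of the test split (the sample size and the lack of text normalization are assumptions, not the exact evaluation protocol behind the reported score; column and config names follow the standard Common Voice layout):

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

# Small slice for illustration; the reported WER would use the full test split.
test_set = load_dataset("fsicoli/common_voice_16_0", "pt", split="test[:50]")

transcriber = pipeline(
    "automatic-speech-recognition",
    model="fsicoli/whisper-large-v3-pt-cv16"
)
# Same language/task forcing as in the Usage section above.
transcriber.model.config.forced_decoder_ids = transcriber.tokenizer.get_decoder_prompt_ids(
    language="pt", task="transcribe"
)

predictions = [transcriber(sample["audio"])["text"] for sample in test_set]
references = [sample["sentence"] for sample in test_set]

wer = evaluate.load("wer")
print(wer.compute(predictions=predictions, references=references))
```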
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
indischepartij/MiaLatte-Indo-Mistral-7b-GGUF
|
indischepartij
| 2024-02-02T17:46:39Z | 0 | 0 | null |
[
"gguf",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-02T10:22:03Z |
---
license: cc-by-nc-4.0
---
Some GGUF quantized versions of [OpenMia](https://huggingface.co/indischepartij/MiaLatte-Indo-Mistral-7b).
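A minimal sketch for running one of the quantized files with llama-cpp-python (the exact GGUF filename is not listed in this card, so the path below is a placeholder):

```python
from llama_cpp import Llama

# Placeholder filename; use the actual .gguf file downloaded from this repository.
llm = Llama(model_path="MiaLatte-Indo-Mistral-7b.Q4_K_M.gguf", n_ctx=2048)

output = llm("Halo, apa kabar?", max_tokens=64)
print(output["choices"][0]["text"])
```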
|
aws-neuron/zephyr-7b-beta-neuron
|
aws-neuron
| 2024-02-02T17:40:01Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"generated_from_trainer",
"inferentia2",
"neuron",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:finetune:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-06T04:38:05Z |
---
base_model: HuggingFaceH4/zephyr-7b-beta
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
license: mit
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- generated_from_trainer
- inferentia2
- neuron
model-index:
- name: zephyr-7b-beta
results: []
model_creator: Hugging Face H4
model_name: Zephyr 7B Beta
model_type: mistral
prompt_template: '<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
'
---
# Please read
Zephyr is now supported by optimum. See [aws-neuron/zephyr-7b-seqlen-2048-bs-4-cores-2](https://huggingface.co/aws-neuron/zephyr-7b-seqlen-2048-bs-4-cores-2) for an updated model.
# Neuronx model for Zephyr-7b-beta
This repository contains [AWS Inferentia2](https://aws.amazon.com/ec2/instance-types/inf2/) and [neuronx](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/) compatible checkpoints for [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta).
However, this file includes an example of how to compile various versions of Zephyr. Support isn’t available yet (as of 1/9/2024) in the [optimum neuron](https://huggingface.co/docs/optimum-neuron/index) framework, so we use the base transformers library.
These instructions closely follow the [Developer Guide](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/transformers-neuronx/transformers-neuronx-developer-guide.html#grouped-query-attention-gqa-support-beta). Look there for more detailed explanations, especially for the GQA settings.
This model has been compiled to run on an inf2.xlarge (the smallest Inferentia2 instance). You can run it on a bigger instance, but it will only use two cores no matter how many are available, unless you change the core number available in compilation. Remember that each Neuron processor has two cores.
## Set up the environment
First, use the [DLAMI image from Hugging Face](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2). It has most of the utilities and drivers preinstalled. However, you will need to update transformers-neuronx from source to get Mistral/Zephyr support.
```
python -m pip install git+https://github.com/aws-neuron/transformers-neuronx.git
```
## Running inference from this repository
If you want to run a quick test or if the exact model you want to use is [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), you can run it directly using the steps below. Otherwise, jump to the Compilation of other Mistral/Zephyr versions section.
First, you will need a local copy of the model. This is because one of the nice things that the Hugging Face optimum library does is abstract local loads from repository loads; however, Mistral/Zephyr inference isn't supported there yet, so you have to download the files yourself.
```
# To speed up downloads we can use hf_transfer
pip install hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
# use huggingface-cli to download model to local dir
huggingface-cli download aws-neuron/zephyr-7b-beta-neuron --local-dir zephyr-7b-beta-neuron
```
This should put a local copy in zephyr-7b-beta-neuron. This process should take 5-10 minutes. If it completes in a few seconds the first time you run it, you are likely having problems with git-lfs. You can see this by using ls -al to check the size of the files downloaded. You will also notice it later when you get parsing errors.
Next, load the model and neff files from disk into the Neuron processors:
```
import torch
from transformers_neuronx import constants
from transformers_neuronx.mistral.model import MistralForSampling
from transformers_neuronx.config import NeuronConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
# Set sharding strategy for GQA to be shard over heads
neuron_config = NeuronConfig(
    grouped_query_attention=constants.GQA.SHARD_OVER_HEADS
)

# define the model. These are the settings used in compilation.
# If you want to change these settings, skip to "Compilation of other Mistral versions"
model_neuron = MistralForSampling.from_pretrained(
    "zephyr-7b-beta-neuron", batch_size=1,
    tp_degree=2, n_positions=256, amp='bf16', neuron_config=neuron_config
)
# load the neff files from the local directory instead of compiling
model_neuron.load("zephyr-7b-beta-neuron")
# load the neff files into the neuron processors.
# you can see this process happening if you run neuron-top from the command line in another console.
# if you didn't do the previous load command, this will also compile the neff files
model_neuron.to_neuron()
```
## Inference example
This points to the original model for the tokenizer because the tokenizer is the same.
If you are compiling your own and want to have a single reference for everything, you can copy the special_tokens_map.json and tokenizer* from the original model to your local copy.
```
# Get a tokenizer and example input. This points to original tokenizer.
# tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-alpha")
# this refers to tokenizer from local copy
tokenizer = AutoTokenizer.from_pretrained("zephyr-7b-beta-neuron")
text = "[INST] What is your favourite condiment? [/INST]"
encoded_input = tokenizer(text, return_tensors='pt')
# Run inference
with torch.inference_mode():
    generated_sequence = model_neuron.sample(encoded_input.input_ids, sequence_length=256, start_ids=None)

print([tokenizer.decode(tok) for tok in generated_sequence])
```
Example output:
```
["<s> [INST] What is your favourite condiment? [/INST]\nHere's a little script to test people's favorite condiment.\n\nYou can do this with paper cones and have people guess what's in it, but they need to write their guess on a piece of of paper and put it in a jar before they take a bite.\n\nIn this version, we have ketchup, mustard,mayonnaise,bbq sauce, and relish.\n\nThe script is straightforward, so as long as your bottle isn’t too tiny, you can add to the bottom of the script,or re-shape the form of the script a bit.\n\nIf you put their guesses in a jar before they take a bite,you can put all their guesses in the jar as soon as they're done,and show the container as they guess.\nAs for removing lines from the script,you'll probably be removing the ones from the bottom of the script,or adding lines to the top of of the script.\nIf for no matter reason your bottle is too tiny to set all the guesses in,you can write their guesses on cards or bits of paper,and set"]
```
## Compilation of other Mistral versions
If you want to use a different version of Mistral or Zephyr from Hugging Face, use the slightly modified code below. It essentially removes the “load” command. When the “to_neuron()” command sees that the model object doesn’t include the neff files, it will kick off the recompile. You can save them at the end so you only have to do the compilation process once. After that, you can use the code above to load a model and the neff files from the local directory.
```
import torch
from transformers_neuronx import constants
from transformers_neuronx.mistral.model import MistralForSampling
from transformers_neuronx.module import save_pretrained_split
from transformers_neuronx.config import NeuronConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id="HuggingFaceH4/zephyr-7b-beta"
# Load and save the CPU model with bfloat16 casting. This also gives us a local copy
# change the Hugging Face model name (HuggingFaceH4/zephyr-7b-beta) below to what you want
# You can update the other model names if you want, but they just reference a directory on the local disk.
model_cpu = AutoModelForCausalLM.from_pretrained(model_id)
save_pretrained_split(model_cpu, model_id)
# Set sharding strategy for GQA to be shard over heads
neuron_config = NeuronConfig(
    grouped_query_attention=constants.GQA.SHARD_OVER_HEADS
)

# Create and compile the Neuron model
model_neuron = MistralForSampling.from_pretrained(
    model_id, batch_size=1,
    tp_degree=2, n_positions=256, amp='bf16', neuron_config=neuron_config
)
model_neuron.to_neuron()
#save compiled neff files out to the same directory
model_neuron.save("HuggingFaceH4/zephyr-7b-beta")
```
## Arguments passed during compilation
The settings used in compilation are the same as shown above in the code. If you want to change these, you will need to recompile. If you don’t want to pass them in each time, you could update the config.json file. This is another nice thing the Hugging Face optimum neuron framework does for us. You can see an example of the format by looking at one of the Llama model config.json files. For [example](https://huggingface.co/aws-neuron/Llama-2-7b-hf-neuron-latency/blob/main/config.json).
```
neuron_config = NeuronConfig(
grouped_query_attention=constants.GQA.SHARD_OVER_HEADS
)
("zephyr-7b-beta-neuron", batch_size=1, tp_degree=2, n_positions=256, amp='bf16', neuron_config=neuron_config)
```
|
FilippoLampa/temp_checkpoints
|
FilippoLampa
| 2024-02-02T17:30:02Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-01T18:05:51Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: temp_checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# temp_checkpoints
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9559
- Accuracy: 0.7167
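As a quick sanity check, the checkpoint can be loaded with the standard text-classification pipeline (a minimal sketch; the label names and the intended input domain are not documented in this card):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="FilippoLampa/temp_checkpoints")
print(classifier("Example input sentence."))
```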
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 495 | 1.1216 | 0.6909 |
| 1.8026 | 2.0 | 990 | 0.9559 | 0.7167 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.2.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1
|
AKILESH18/lamam1
|
AKILESH18
| 2024-02-02T17:16:47Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-02T17:05:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
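A minimal loading sketch (this assumes the repository ships a usable tokenizer and a bitsandbytes 4-bit quantization config, which this card does not confirm; `bitsandbytes` must be installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AKILESH18/lamam1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the (pre-quantized) weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```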
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
panos-span/ppo-LunarLander-v2
|
panos-span
| 2024-02-02T17:07:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T17:06:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 240.65 +/- 17.30
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
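Until the author fills in the snippet above, here is a minimal sketch of the usual SB3 + huggingface_sb3 loading pattern (the checkpoint filename follows the common `ppo-LunarLander-v2.zip` convention and is an assumption, not taken from this card):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; check the repository's file list if it does not match.
checkpoint = load_from_hub(repo_id="panos-span/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```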
|
HeydarS/flan-t5-base_peft_v4
|
HeydarS
| 2024-02-02T17:02:00Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/flan-t5-base",
"base_model:adapter:google/flan-t5-base",
"region:us"
] | null | 2024-02-02T17:01:58Z |
---
library_name: peft
base_model: google/flan-t5-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
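A minimal sketch for attaching this adapter to the declared base model with PEFT (the prompt is illustrative only; the adapter's training task is not documented in this card):

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

# Load the adapter weights from this repository on top of the base model.
model = PeftModel.from_pretrained(base_model, "HeydarS/flan-t5-base_peft_v4")

inputs = tokenizer("Answer the question: What is the capital of France?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```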
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
pierre-pessarossi/wikiGPT
|
pierre-pessarossi
| 2024-02-02T16:52:08Z | 0 | 0 | null |
[
"text-generation-inference",
"wikipedia",
"en",
"dataset:wikimedia/wikipedia",
"license:mit",
"region:us"
] | null | 2024-02-02T16:39:46Z |
---
license: mit
datasets:
- wikimedia/wikipedia
language:
- en
tags:
- text-generation-inference
- wikipedia
---
|
abdulmatinomotoso/pegasus-samsum
|
abdulmatinomotoso
| 2024-02-02T16:50:11Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-multi_news",
"base_model:finetune:google/pegasus-multi_news",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T16:48:50Z |
---
base_model: google/pegasus-multi_news
tags:
- generated_from_trainer
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6418
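A minimal usage sketch (the repository name suggests SAMSum-style dialogue summarization, but the training data is not documented here, so the example input is only illustrative):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="abdulmatinomotoso/pegasus-samsum")

dialogue = (
    "Anna: Are we still meeting at 6?\n"
    "Ben: Yes, see you at the cafe.\n"
    "Anna: Great, I'll bring the report."
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```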
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5784 | 0.16 | 500 | 0.9738 |
| 0.8156 | 0.32 | 1000 | 0.7155 |
| 0.7658 | 0.48 | 1500 | 0.6780 |
| 0.7112 | 0.64 | 2000 | 0.6582 |
| 0.6611 | 0.81 | 2500 | 0.6469 |
| 0.7505 | 0.97 | 3000 | 0.6418 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
ybzz/detr-finetuned-pothole-v2
|
ybzz
| 2024-02-02T16:47:20Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-02-01T17:41:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
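A minimal sketch using the object-detection pipeline (the repository name suggests pothole detection, but the label names are not documented here; the image path is a placeholder):

```python
from transformers import pipeline

detector = pipeline("object-detection", model="ybzz/detr-finetuned-pothole-v2")

# Placeholder image path; use a real road-scene photo.
for detection in detector("road.jpg"):
    print(detection["label"], round(detection["score"], 3), detection["box"])
```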
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Patcas/plbartAssert-docnew-v2
|
Patcas
| 2024-02-02T16:47:18Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"plbart",
"text2text-generation",
"generated_from_trainer",
"base_model:Patcas/my_awesome-assert-new",
"base_model:finetune:Patcas/my_awesome-assert-new",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T15:50:09Z |
---
base_model: Patcas/my_awesome-assert-new
tags:
- generated_from_trainer
model-index:
- name: plbartAssert-docnew-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plbartAssert-docnew-v2
This model is a fine-tuned version of [Patcas/my_awesome-assert-new](https://huggingface.co/Patcas/my_awesome-assert-new) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 230 | 1.1573 |
| No log | 2.0 | 460 | 0.9996 |
| 1.3727 | 3.0 | 690 | 0.9757 |
| 1.3727 | 4.0 | 920 | 0.9605 |
| 0.4516 | 5.0 | 1150 | 0.9732 |
| 0.4516 | 6.0 | 1380 | 0.9845 |
| 0.2234 | 7.0 | 1610 | 0.9685 |
| 0.2234 | 8.0 | 1840 | 0.9745 |
| 0.136 | 9.0 | 2070 | 0.9773 |
| 0.136 | 10.0 | 2300 | 0.9764 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
CLMBR/pp-mod-subj-transformer-1
|
CLMBR
| 2024-02-02T16:35:15Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-26T10:07:42Z |
---
tags:
- generated_from_trainer
model-index:
- name: pp-mod-subj2-transformer-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pp-mod-subj2-transformer-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2239 | 0.03 | 76320 | 4.2376 |
| 4.0224 | 1.03 | 152640 | 4.0686 |
| 3.9119 | 0.03 | 228960 | 3.9972 |
| 3.844 | 1.03 | 305280 | 3.9558 |
| 3.7955 | 0.03 | 381600 | 3.9310 |
| 3.7466 | 1.03 | 457920 | 3.9171 |
| 3.7114 | 0.03 | 534240 | 3.9071 |
| 3.678 | 1.03 | 610560 | 3.9014 |
| 3.6505 | 0.03 | 686880 | 3.8976 |
| 3.6283 | 1.03 | 763200 | 3.8972 |
| 3.604 | 0.03 | 839520 | 3.8951 |
| 3.5821 | 1.03 | 915840 | 3.8951 |
| 3.5663 | 0.03 | 992160 | 3.8951 |
| 3.5503 | 1.03 | 1068480 | 3.8963 |
| 3.5299 | 0.03 | 1144800 | 3.8980 |
| 3.5172 | 1.03 | 1221120 | 3.8993 |
| 3.5033 | 0.03 | 1297440 | 3.9014 |
| 3.4932 | 1.03 | 1373760 | 3.9026 |
| 3.4776 | 0.03 | 1450080 | 3.9043 |
| 3.4675 | 1.03 | 1526400 | 3.9066 |
| 3.463 | 0.03 | 1602720 | 3.9080 |
| 3.4455 | 1.03 | 1679040 | 3.9105 |
| 3.4337 | 0.03 | 1755360 | 3.9127 |
| 3.4207 | 0.03 | 1831680 | 3.9144 |
| 3.4094 | 1.03 | 1908000 | 3.9159 |
| 3.4 | 0.03 | 1984320 | 3.9178 |
| 3.3854 | 1.03 | 2060640 | 3.9192 |
| 3.3739 | 0.03 | 2136960 | 3.9209 |
| 3.366 | 1.03 | 2213280 | 3.9216 |
| 3.3551 | 0.03 | 2289600 | 3.9244 |
| 3.3427 | 1.03 | 2365920 | 3.9252 |
| 3.3348 | 0.03 | 2442240 | 3.9254 |
| 3.3254 | 1.03 | 2518560 | 3.9270 |
| 3.3187 | 0.03 | 2594880 | 3.9282 |
| 3.3089 | 1.03 | 2671200 | 3.9282 |
| 3.3021 | 0.03 | 2747520 | 3.9279 |
| 3.2982 | 1.03 | 2823840 | 3.9286 |
| 3.2864 | 0.03 | 2900160 | 3.9283 |
| 3.279 | 1.03 | 2976480 | 3.9276 |
| 3.2717 | 0.02 | 3052726 | 3.9262 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
thugCodeNinja/SpoofDetection
|
thugCodeNinja
| 2024-02-02T16:30:52Z | 0 | 0 |
keras
|
[
"keras",
"audio-classification",
"dataset:DynamicSuperb/SpoofDetection_ASVspoof2015",
"license:mit",
"region:us"
] |
audio-classification
| 2024-01-22T10:44:14Z |
---
license: mit
datasets:
- DynamicSuperb/SpoofDetection_ASVspoof2015
library_name: keras
pipeline_tag: audio-classification
---
|
CLMBR/full-lstm-3
|
CLMBR
| 2024-02-02T16:22:49Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-26T10:08:19Z |
---
tags:
- generated_from_trainer
model-index:
- name: full2-lstm-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# full2-lstm-3
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7949 | 0.03 | 76320 | 4.7623 |
| 4.5075 | 1.03 | 152640 | 4.4794 |
| 4.3602 | 0.03 | 228960 | 4.3429 |
| 4.2766 | 1.03 | 305280 | 4.2588 |
| 4.2113 | 0.03 | 381600 | 4.2022 |
| 4.1648 | 1.03 | 457920 | 4.1606 |
| 4.1336 | 0.03 | 534240 | 4.1292 |
| 4.1008 | 1.03 | 610560 | 4.1050 |
| 4.0722 | 0.03 | 686880 | 4.0849 |
| 4.0491 | 1.03 | 763200 | 4.0689 |
| 4.0263 | 0.03 | 839520 | 4.0557 |
| 4.0088 | 1.03 | 915840 | 4.0452 |
| 3.9954 | 0.03 | 992160 | 4.0355 |
| 3.9784 | 1.03 | 1068480 | 4.0274 |
| 3.9641 | 0.03 | 1144800 | 4.0212 |
| 3.9491 | 1.03 | 1221120 | 4.0152 |
| 3.9347 | 0.03 | 1297440 | 4.0090 |
| 3.9257 | 1.03 | 1373760 | 4.0047 |
| 3.9144 | 0.03 | 1450080 | 4.0009 |
| 3.9137 | 1.03 | 1526400 | 3.9975 |
| 3.9061 | 0.03 | 1602720 | 3.9940 |
| 3.9037 | 1.03 | 1679040 | 3.9917 |
| 3.9045 | 0.03 | 1755360 | 3.9893 |
| 3.8999 | 1.03 | 1831680 | 3.9873 |
| 3.8897 | 0.03 | 1908000 | 3.9854 |
| 3.8842 | 1.03 | 1984320 | 3.9832 |
| 3.8789 | 0.03 | 2060640 | 3.9805 |
| 3.8724 | 1.03 | 2136960 | 3.9793 |
| 3.8717 | 0.03 | 2213280 | 3.9778 |
| 3.8658 | 1.03 | 2289600 | 3.9768 |
| 3.8594 | 0.03 | 2365920 | 3.9757 |
| 3.8523 | 1.03 | 2442240 | 3.9751 |
| 3.8455 | 0.03 | 2518560 | 3.9739 |
| 3.8431 | 0.03 | 2594880 | 3.9734 |
| 3.8368 | 0.03 | 2671200 | 3.9728 |
| 3.8431 | 1.03 | 2747520 | 3.9721 |
| 3.8423 | 0.03 | 2823840 | 3.9716 |
| 3.8432 | 0.03 | 2900160 | 3.9712 |
| 3.8477 | 1.03 | 2976480 | 3.9706 |
| 3.8461 | 0.02 | 3052726 | 3.9703 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Sharathhebbar24/Med_GPT2
|
Sharathhebbar24
| 2024-02-02T16:21:16Z | 244 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"medical",
"en",
"dataset:gamino/wiki_medical_terms",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-27T16:44:48Z |
---
license: apache-2.0
datasets:
- gamino/wiki_medical_terms
language:
- en
pipeline_tag: text-generation
tags:
- medical
---
This is a GPT-2 model fine-tuned on the [gamino/wiki_medical_terms](https://huggingface.co/datasets/gamino/wiki_medical_terms) dataset.
## Model description
GPT-2 is a transformers model pre-trained on a very large corpus of English data in a self-supervised fashion. This
means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifting one token (word or piece of word) to the right. The model uses a masking mechanism to make sure the
predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was trained for, however, which is generating texts from a
prompt.
### To use this model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Sharathhebbar24/Med_GPT2"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def generate_text(prompt):
    inputs = tokenizer.encode(prompt, return_tensors='pt')
    outputs = model.generate(inputs, max_length=64, pad_token_id=tokenizer.eos_token_id)
    generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return generated[:generated.rfind(".") + 1]

prompt = "What is Paracetamol"
res = generate_text(prompt)
print(res)
```
|
gwongz/bear-classifier
|
gwongz
| 2024-02-02T16:16:56Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-02-02T16:12:11Z |
---
license: apache-2.0
---
A classifier for identifying polar, black, giant panda and red panda bears.
|
raman07/SD-finetuned-MIMIC-bias
|
raman07
| 2024-02-02T16:16:01Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"en",
"arxiv:2305.08252",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-31T16:06:26Z |
---
library_name: diffusers
pipeline_tag: text-to-image
language:
- en
---
## Model Details
### Model Description
This model is fine-tuned from [stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) on 110,000 image-text pairs from the MIMIC dataset using the Bias tuning PEFT method. Under this fine-tuning strategy, only the bias weights in the U-Net are fine-tuned while everything else is kept frozen.
- **Developed by:** [Raman Dutt](https://twitter.com/RamanDutt4)
- **Shared by:** [Raman Dutt](https://twitter.com/RamanDutt4)
- **Model type:** [Stable Diffusion fine-tuned using Parameter-Efficient Fine-Tuning]
- **Finetuned from model:** [stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
### Model Sources
- **Paper:** [Parameter-Efficient Fine-Tuning for Medical Image Analysis: The Missed Opportunity](https://arxiv.org/abs/2305.08252)
- **Demo:** [MIMIC-SD-PEFT-Demo](https://huggingface.co/spaces/raman07/MIMIC-SD-Demo-Memory-Optimized?logs=container)
## Direct Use
This model can be directly used to generate realistic medical images from text prompts.
## How to Get Started with the Model
```python
import os
from safetensors.torch import load_file
from diffusers.pipelines import StableDiffusionPipeline
# sd_folder_path should point to the base Stable Diffusion v1-5 weights (a local folder or "runwayml/stable-diffusion-v1-5")
pipe = StableDiffusionPipeline.from_pretrained(sd_folder_path, revision="fp16")
exp_path = os.path.join('unet', 'diffusion_pytorch_model.safetensors')
state_dict = load_file(exp_path)
# Load the adapted U-Net
pipe.unet.load_state_dict(state_dict, strict=False)
pipe.to('cuda:0')
# Generate images with text prompts
TEXT_PROMPT = "No acute cardiopulmonary abnormality."
GUIDANCE_SCALE = 4
INFERENCE_STEPS = 75
result_image = pipe(
    prompt=TEXT_PROMPT,
    height=224,
    width=224,
    guidance_scale=GUIDANCE_SCALE,
    num_inference_steps=INFERENCE_STEPS,
)
result_pil_image = result_image["images"][0]
```
## Training Details
### Training Data
This model has been fine-tuned on 110K image-text pairs from the MIMIC dataset.
### Training Procedure
The training procedure has been described in detail in Section 4.3 of this [paper](https://arxiv.org/abs/2305.08252).
#### Metrics
This model has been evaluated using the Fréchet inception distance (FID) score on the MIMIC dataset.
### Results
| Fine-Tuning Strategy | FID Score |
|------------------------|-----------|
| Full FT | 58.74 |
| Attention | 52.41 |
| Bias | 20.81 |
| Norm | 29.84 |
| Bias+Norm+Attention | 35.93 |
| LoRA | 439.65 |
| SV-Diff | 23.59 |
| DiffFit | 42.50 |
## Environmental Impact
Using Parameter-Efficient Fine-Tuning potentially causes **less** harm to the environment, since we fine-tune a significantly smaller number of parameters in the model. This results in much lower compute and hardware requirements.
## Citation
**BibTeX:**
@article{dutt2023parameter,
title={Parameter-Efficient Fine-Tuning for Medical Image Analysis: The Missed Opportunity},
author={Dutt, Raman and Ericsson, Linus and Sanchez, Pedro and Tsaftaris, Sotirios A and Hospedales, Timothy},
journal={arXiv preprint arXiv:2305.08252},
year={2023}
}
**APA:**
Dutt, R., Ericsson, L., Sanchez, P., Tsaftaris, S. A., & Hospedales, T. (2023). Parameter-Efficient Fine-Tuning for Medical Image Analysis: The Missed Opportunity. arXiv preprint arXiv:2305.08252.
## Model Card Authors
Raman Dutt
[Twitter](https://twitter.com/RamanDutt4)
[LinkedIn](https://www.linkedin.com/in/raman-dutt/)
[Email](mailto:[email protected])
|
raman07/SD-finetuned-MIMIC-norm-bias-attention
|
raman07
| 2024-02-02T16:14:50Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"arxiv:2305.08252",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-31T16:18:10Z |
---
library_name: diffusers
pipeline_tag: text-to-image
---
## Model Details
### Model Description
This model is fine-tuned from [stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) on 110,000 image-text pairs from the MIMIC dataset using the Norm-Bias-Attention tuning PEFT method. Under this fine-tuning strategy, only the normalization, bias, and attention parameters in the U-Net are fine-tuned while everything else is kept frozen.
- **Developed by:** [Raman Dutt](https://twitter.com/RamanDutt4)
- **Shared by:** [Raman Dutt](https://twitter.com/RamanDutt4)
- **Model type:** [Stable Diffusion fine-tuned using Parameter-Efficient Fine-Tuning]
- **Finetuned from model:** [stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
### Model Sources
- **Paper:** [Parameter-Efficient Fine-Tuning for Medical Image Analysis: The Missed Opportunity](https://arxiv.org/abs/2305.08252)
- **Demo:** [MIMIC-SD-PEFT-Demo](https://huggingface.co/spaces/raman07/MIMIC-SD-Demo-Memory-Optimized?logs=container)
## Direct Use
This model can be directly used to generate realistic medical images from text prompts.
## How to Get Started with the Model
```python
import os
from safetensors.torch import load_file
from diffusers.pipelines import StableDiffusionPipeline
# sd_folder_path should point to the base Stable Diffusion v1-5 weights (a local folder or "runwayml/stable-diffusion-v1-5")
pipe = StableDiffusionPipeline.from_pretrained(sd_folder_path, revision="fp16")
exp_path = os.path.join('unet', 'diffusion_pytorch_model.safetensors')
state_dict = load_file(exp_path)
# Load the adapted U-Net
pipe.unet.load_state_dict(state_dict, strict=False)
pipe.to('cuda:0')
# Generate images with text prompts
TEXT_PROMPT = "No acute cardiopulmonary abnormality."
GUIDANCE_SCALE = 4
INFERENCE_STEPS = 75
result_image = pipe(
    prompt=TEXT_PROMPT,
    height=224,
    width=224,
    guidance_scale=GUIDANCE_SCALE,
    num_inference_steps=INFERENCE_STEPS,
)
result_pil_image = result_image["images"][0]
```
## Training Details
### Training Data
This model has been fine-tuned on 110K image-text pairs from the MIMIC dataset.
### Training Procedure
The training procedure has been described in detail in Section 4.3 of this [paper](https://arxiv.org/abs/2305.08252).
#### Metrics
This model has been evaluated using the Fréchet inception distance (FID) score on the MIMIC dataset.
### Results
| Fine-Tuning Strategy | FID Score |
|------------------------|-----------|
| Full FT | 58.74 |
| Attention | 52.41 |
| Bias | 20.81 |
| Norm | 29.84 |
| Bias+Norm+Attention | 35.93 |
| LoRA | 439.65 |
| SV-Diff | 23.59 |
| DiffFit | 42.50 |
## Environmental Impact
Using Parameter-Efficient Fine-Tuning potentially causes **less** harm to the environment, since we fine-tune a significantly smaller number of parameters in the model. This results in much lower compute and hardware requirements.
## Citation
**BibTeX:**
@article{dutt2023parameter,
title={Parameter-Efficient Fine-Tuning for Medical Image Analysis: The Missed Opportunity},
author={Dutt, Raman and Ericsson, Linus and Sanchez, Pedro and Tsaftaris, Sotirios A and Hospedales, Timothy},
journal={arXiv preprint arXiv:2305.08252},
year={2023}
}
**APA:**
Dutt, R., Ericsson, L., Sanchez, P., Tsaftaris, S. A., & Hospedales, T. (2023). Parameter-Efficient Fine-Tuning for Medical Image Analysis: The Missed Opportunity. arXiv preprint arXiv:2305.08252.
## Model Card Authors
Raman Dutt
[Twitter](https://twitter.com/RamanDutt4)
[LinkedIn](https://www.linkedin.com/in/raman-dutt/)
[Email](mailto:[email protected])
|
loony-huggingface/QA_model_with_squad
|
loony-huggingface
| 2024-02-02T16:08:23Z | 47 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-02-02T15:52:03Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: loony-huggingface/QA_model_with_squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# loony-huggingface/QA_model_with_squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3114
- Validation Loss: 1.5795
- Epoch: 2
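A minimal extractive question-answering sketch (the question/context pair is illustrative; the checkpoint was trained with Keras/TensorFlow, so pass `framework="tf"` if automatic framework detection fails):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="loony-huggingface/QA_model_with_squad")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```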
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8411 | 1.8063 | 0 |
| 1.4893 | 1.5795 | 1 |
| 1.3114 | 1.5795 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
CLMBR/re-irr-sv-agr-transformer-4
|
CLMBR
| 2024-02-02T16:06:38Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T09:21:21Z |
---
tags:
- generated_from_trainer
model-index:
- name: re-irr-sv-agr-transformer-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# re-irr-sv-agr-transformer-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2133 | 0.03 | 76320 | 4.2076 |
| 4.0078 | 1.03 | 152640 | 4.0406 |
| 3.9019 | 0.03 | 228960 | 3.9673 |
| 3.8342 | 1.03 | 305280 | 3.9273 |
| 3.7842 | 0.03 | 381600 | 3.9023 |
| 3.7453 | 1.03 | 457920 | 3.8877 |
| 3.7122 | 0.03 | 534240 | 3.8773 |
| 3.6801 | 0.03 | 610560 | 3.8725 |
| 3.6523 | 1.03 | 686880 | 3.8682 |
| 3.6281 | 0.03 | 763200 | 3.8650 |
| 3.6022 | 1.03 | 839520 | 3.8651 |
| 3.5871 | 0.03 | 915840 | 3.8637 |
| 3.5709 | 1.03 | 992160 | 3.8647 |
| 3.5497 | 0.03 | 1068480 | 3.8658 |
| 3.5332 | 0.03 | 1144800 | 3.8667 |
| 3.5166 | 1.03 | 1221120 | 3.8687 |
| 3.4988 | 0.03 | 1297440 | 3.8686 |
| 3.484 | 1.03 | 1373760 | 3.8692 |
| 3.4711 | 0.03 | 1450080 | 3.8724 |
| 3.463 | 1.03 | 1526400 | 3.8742 |
| 3.4548 | 0.03 | 1602720 | 3.8753 |
| 3.4478 | 1.03 | 1679040 | 3.8769 |
| 3.4397 | 0.03 | 1755360 | 3.8786 |
| 3.4264 | 1.03 | 1831680 | 3.8802 |
| 3.4143 | 0.03 | 1908000 | 3.8824 |
| 3.4034 | 1.03 | 1984320 | 3.8831 |
| 3.3907 | 0.03 | 2060640 | 3.8834 |
| 3.3843 | 1.03 | 2136960 | 3.8860 |
| 3.3732 | 0.03 | 2213280 | 3.8868 |
| 3.3616 | 0.03 | 2289600 | 3.8872 |
| 3.3499 | 1.03 | 2365920 | 3.8889 |
| 3.3388 | 0.03 | 2442240 | 3.8892 |
| 3.3263 | 1.03 | 2518560 | 3.8893 |
| 3.316 | 0.03 | 2594880 | 3.8906 |
| 3.3078 | 1.03 | 2671200 | 3.8908 |
| 3.3029 | 0.03 | 2747520 | 3.8910 |
| 3.2953 | 1.03 | 2823840 | 3.8906 |
| 3.2926 | 0.03 | 2900160 | 3.8910 |
| 3.2871 | 1.03 | 2976480 | 3.8899 |
| 3.2795 | 0.02 | 3052726 | 3.8889 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Kooten/MiquMaid-v1-70B-6bpw-exl2
|
Kooten
| 2024-02-02T15:56:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T08:28:38Z |
# MiquMaid-v1-70B 6bpw
## Description
Exllama quant of [NeverSleep/MiquMaid-v1-70B](https://huggingface.co/NeverSleep/MiquMaid-v1-70B)
## Other quants:
EXL2: [6bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-6bpw-exl2), [5bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-4bpw-exl2), [3.5bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-3.5bpw-exl2), [3bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-3bpw-exl2), [2.4bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-2.4bpw-exl2)
2.4bpw is probably the most you can fit in a 24gb card
GGUF:
[2bit Imatrix GGUF](https://huggingface.co/Kooten/MiquMaid-v1-70B-IQ2-GGUF)
### Custom format:
```
### Instruction:
{system prompt}
### Input:
{input}
### Response:
{reply}
```
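A minimal Python sketch of filling this template before handing the string to your inference backend (the system prompt and input below are placeholders):
```python
def build_prompt(system_prompt: str, user_input: str) -> str:
    # Alpaca-style template from the card; the model completes after "### Response:".
    return (
        "### Instruction:\n"
        f"{system_prompt}\n"
        "### Input:\n"
        f"{user_input}\n"
        "### Response:\n"
    )

prompt = build_prompt(
    "You are a helpful roleplay assistant.",
    "Describe the tavern the party has just entered.",
)
print(prompt)
```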
## Contact
Kooten on discord
[ko-fi.com/kooten](https://ko-fi.com/kooten)
|
Kooten/MiquMaid-v1-70B-5bpw-exl2
|
Kooten
| 2024-02-02T15:56:41Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T12:11:15Z |
# MiquMaid-v1-70B 5bpw
## Description
Exllama quant of [NeverSleep/MiquMaid-v1-70B](https://huggingface.co/NeverSleep/MiquMaid-v1-70B)
## Other quants:
EXL2: [6bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-6bpw-exl2), [5bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-4bpw-exl2), [3.5bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-3.5bpw-exl2), [3bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-3bpw-exl2), [2.4bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-2.4bpw-exl2)
2.4bpw is probably the most you can fit in a 24gb card
GGUF:
[2bit Imatrix GGUF](https://huggingface.co/Kooten/MiquMaid-v1-70B-IQ2-GGUF)
### Custom format:
```
### Instruction:
{system prompt}
### Input:
{input}
### Response:
{reply}
```
## Contact
Kooten on discord
[ko-fi.com/kooten](https://ko-fi.com/kooten)
|
Kooten/MiquMaid-v1-70B-3.5bpw-exl2
|
Kooten
| 2024-02-02T15:55:52Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T13:10:57Z |
# MiquMaid-v1-70B 3.5bpw
## Description
Exllama quant of [NeverSleep/MiquMaid-v1-70B](https://huggingface.co/NeverSleep/MiquMaid-v1-70B)
## Other quants:
EXL2: [6bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-6bpw-exl2), [5bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-4bpw-exl2), [3.5bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-3.5bpw-exl2), [3bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-3bpw-exl2), [2.4bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-2.4bpw-exl2)
2.4bpw is probably the most you can fit in a 24gb card
GGUF:
[2bit Imatrix GGUF](https://huggingface.co/Kooten/MiquMaid-v1-70B-IQ2-GGUF)
### Custom format:
```
### Instruction:
{system prompt}
### Input:
{input}
### Response:
{reply}
```
## Contact
Kooten on discord
[ko-fi.com/kooten](https://ko-fi.com/kooten)
|
Kooten/MiquMaid-v1-70B-3bpw-exl2
|
Kooten
| 2024-02-02T15:55:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T13:10:23Z |
# MiquMaid-v1-70B 3bpw
## Description
Exllama quant of [NeverSleep/MiquMaid-v1-70B](https://huggingface.co/NeverSleep/MiquMaid-v1-70B)
## Other quants:
EXL2: [6bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-6bpw-exl2), [5bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-4bpw-exl2), [3.5bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-3.5bpw-exl2), [3bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-3bpw-exl2), [2.4bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-2.4bpw-exl2)
2.4bpw is probably the most you can fit in a 24gb card
GGUF:
[2bit Imatrix GGUF](https://huggingface.co/Kooten/MiquMaid-v1-70B-IQ2-GGUF)
### Custom format:
```
### Instruction:
{system prompt}
### Input:
{input}
### Response:
{reply}
```
## Contact
Kooten on discord
[ko-fi.com/kooten](https://ko-fi.com/kooten)
|
Kooten/MiquMaid-v1-70B-2.4bpw-exl2
|
Kooten
| 2024-02-02T15:55:18Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T18:09:50Z |
# MiquMaid-v1-70B 2.4bpw
## Description
Exllama quant of [NeverSleep/MiquMaid-v1-70B](https://huggingface.co/NeverSleep/MiquMaid-v1-70B)
## Other quants:
EXL2: [6bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-6bpw-exl2), [5bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-4bpw-exl2), [3.5bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-3.5bpw-exl2), [3bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-3bpw-exl2), [2.4bpw](https://huggingface.co/Kooten/MiquMaid-v1-70B-2.4bpw-exl2)
2.4bpw is probably the most you can fit in a 24gb card
GGUF:
[2bit Imatrix GGUF](https://huggingface.co/Kooten/MiquMaid-v1-70B-IQ2-GGUF)
### Custom format:
```
### Instruction:
{system prompt}
### Input:
{input}
### Response:
{reply}
```
## Contact
Kooten on discord
[ko-fi.com/kooten](https://ko-fi.com/kooten)
|
bartowski/LongAlign-13B-64k-exl2
|
bartowski
| 2024-02-02T15:53:47Z | 1 | 1 |
transformers
|
[
"transformers",
"Long Context",
"llama",
"text-generation",
"en",
"zh",
"dataset:THUDM/LongAlign-10k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T15:26:47Z |
---
language:
- en
- zh
library_name: transformers
tags:
- Long Context
- llama
datasets:
- THUDM/LongAlign-10k
pipeline_tag: text-generation
license: apache-2.0
quantized_by: bartowski
---
## Exllama v2 Quantizations of LongAlign-13B-64k
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/THUDM/LongAlign-13B-64k
No GQA - VRAM requirements will be higher
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------------ |
| [6_5](https://huggingface.co/Bartowski/LongAlign-13B-64k-exl2/tree/6_5) | 6.5 | 8.0 | 14.4 GB | 24.0 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/LongAlign-13B-64k-exl2/tree/5_0) | 5.0 | 6.0 | 12.1 GB | 21.7 GB | Slightly lower perplexity vs 6.5, can fit in 12 GB card with even lower context. |
| [4_25](https://huggingface.co/Bartowski/LongAlign-13B-64k-exl2/tree/4_25) | 4.25 | 6.0 | 10.9 GB | 20.5 GB | GPTQ equivalent bits per weight. |
| [3_75](https://huggingface.co/Bartowski/LongAlign-13B-64k-exl2/tree/3_75) | 3.75 | 6.0 | 10.1 GB | 19.7 GB | Lower quality but still generally usable. |
| [3_0](https://huggingface.co/Bartowski/LongAlign-13B-64k-exl2/tree/3_0) | 3.0 | 6.0 | 9.1 GB | 18.7 GB | Very low quality, not recommended unless you have to. |
VRAM requirements listed for both 4k context and 16k context since without GQA the differences are massive (9.6 GB)
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/LongAlign-13B-64k-exl2 LongAlign-13B-64k-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `LongAlign-13B-64k-exl2`:
```shell
mkdir LongAlign-13B-64k-exl2
huggingface-cli download bartowski/LongAlign-13B-64k-exl2 --local-dir LongAlign-13B-64k-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir LongAlign-13B-64k-exl2-6_5
huggingface-cli download bartowski/LongAlign-13B-64k-exl2 --revision 6_5 --local-dir LongAlign-13B-64k-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir LongAlign-13B-64k-exl2-6.5
huggingface-cli download bartowski/LongAlign-13B-64k-exl2 --revision 6_5 --local-dir LongAlign-13B-64k-exl2-6.5 --local-dir-use-symlinks False
```
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
AntoineGourru/Mistral_telecom
|
AntoineGourru
| 2024-02-02T15:48:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-31T14:22:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
felixbrock/labeled_mistral_instruct_vllm
|
felixbrock
| 2024-02-02T15:45:22Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T15:40:49Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** felixbrock
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
YKM11/Mistral-7B-adaptv1
|
YKM11
| 2024-02-02T15:44:56Z | 0 | 0 | null |
[
"safetensors",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-02-02T15:42:45Z |
---
license: cc-by-nc-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mlx-community/MBX-7B-v3-q
|
mlx-community
| 2024-02-02T15:40:35Z | 5 | 0 |
mlx
|
[
"mlx",
"safetensors",
"mistral",
"merge",
"mergekit",
"lazymergekit",
"flemmingmiguel/MBX-7B",
"flemmingmiguel/MBX-7B-v3",
"license:apache-2.0",
"region:us"
] | null | 2024-02-02T15:36:16Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- flemmingmiguel/MBX-7B
- flemmingmiguel/MBX-7B-v3
- mlx
---
# mlx-community/MBX-7B-v3-q
This model was converted to MLX format from [`flemmingmiguel/MBX-7B-v3`]().
Refer to the [original model card](https://huggingface.co/flemmingmiguel/MBX-7B-v3) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/MBX-7B-v3-q")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
merge-crew/Scandi3
|
merge-crew
| 2024-02-02T15:40:34Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"bineric/NorskGPT-Mistral-7b",
"base_model:bineric/NorskGPT-Mistral-7b",
"base_model:finetune:bineric/NorskGPT-Mistral-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T15:36:56Z |
---
tags:
- merge
- mergekit
- lazymergekit
- bineric/NorskGPT-Mistral-7b
base_model:
- bineric/NorskGPT-Mistral-7b
---
# Scandi3
Scandi3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [bineric/NorskGPT-Mistral-7b](https://huggingface.co/bineric/NorskGPT-Mistral-7b)
## 🧩 Configuration
```yaml
models:
- model: timpal0l/BeagleCatMunin
# No parameters necessary for base model
- model: bineric/NorskGPT-Mistral-7b
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: timpal0l/BeagleCatMunin
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "merge-crew/Scandi3"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
macarious/torgo_xlsr_finetune_M02_keep_all
|
macarious
| 2024-02-02T15:35:56Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-02T05:04:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: torgo_xlsr_finetune_M02_keep_all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# torgo_xlsr_finetune_M02_keep_all
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6539
- Wer: 0.2436
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5043 | 0.56 | 1000 | 3.3139 | 1.0 |
| 2.1248 | 1.12 | 2000 | 1.9926 | 0.8898 |
| 1.0178 | 1.67 | 3000 | 1.5324 | 0.6683 |
| 0.7315 | 2.23 | 4000 | 1.7989 | 0.5959 |
| 0.6289 | 2.79 | 5000 | 1.3984 | 0.4987 |
| 0.5123 | 3.35 | 6000 | 1.2977 | 0.4228 |
| 0.4751 | 3.91 | 7000 | 1.3967 | 0.3988 |
| 0.4354 | 4.47 | 8000 | 1.5080 | 0.4274 |
| 0.3817 | 5.03 | 9000 | 1.7897 | 0.4014 |
| 0.3758 | 5.58 | 10000 | 1.3421 | 0.3385 |
| 0.358 | 6.14 | 11000 | 1.6429 | 0.3427 |
| 0.3083 | 6.7 | 12000 | 1.2683 | 0.3084 |
| 0.2805 | 7.26 | 13000 | 1.7095 | 0.3122 |
| 0.2856 | 7.82 | 14000 | 1.7918 | 0.3317 |
| 0.2574 | 8.38 | 15000 | 1.5411 | 0.2947 |
| 0.2495 | 8.93 | 16000 | 1.4551 | 0.2997 |
| 0.2651 | 9.49 | 17000 | 1.5073 | 0.2825 |
| 0.2517 | 10.05 | 18000 | 1.6405 | 0.2920 |
| 0.2274 | 10.61 | 19000 | 1.4440 | 0.2604 |
| 0.2278 | 11.17 | 20000 | 1.4020 | 0.2875 |
| 0.2472 | 11.73 | 21000 | 1.6264 | 0.2897 |
| 0.1875 | 12.28 | 22000 | 1.5901 | 0.2783 |
| 0.175 | 12.84 | 23000 | 1.4056 | 0.2501 |
| 0.1751 | 13.4 | 24000 | 1.4809 | 0.2631 |
| 0.1607 | 13.96 | 25000 | 1.4363 | 0.2551 |
| 0.1712 | 14.52 | 26000 | 1.6480 | 0.2524 |
| 0.1581 | 15.08 | 27000 | 1.5084 | 0.2615 |
| 0.1623 | 15.63 | 28000 | 1.4066 | 0.2482 |
| 0.1397 | 16.19 | 29000 | 1.7111 | 0.2619 |
| 0.1536 | 16.75 | 30000 | 1.4691 | 0.2402 |
| 0.1343 | 17.31 | 31000 | 1.5406 | 0.2329 |
| 0.1428 | 17.87 | 32000 | 1.5261 | 0.2413 |
| 0.1125 | 18.43 | 33000 | 1.6416 | 0.2337 |
| 0.1214 | 18.98 | 34000 | 1.6803 | 0.2425 |
| 0.124 | 19.54 | 35000 | 1.6539 | 0.2436 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.13.3
|
dog-god/texture-synthesis-sdxl-lora
|
dog-god
| 2024-02-02T15:35:07Z | 403 | 20 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-02-01T10:00:14Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
widget:
- text: >-
a front right right view of a normal map of a high quality 3D model of a
croissant, flaky, buttery, freshly baked, delicious, breakfast pastry.
normal map
output:
url: images/00016-generated-56421668267.png
- text: >-
a front right view of a albedo map of a high quality 3D model of a
croissant, flaky, buttery, freshly baked, delicious. VAR2, colormap, muted
realistic colors
output:
url: images/00017-generated-56421668268.png
- text: >-
a front right view of a ambient occlusion map of a high quality 3D model of
a croissant, flaky pastry, buttery, delicious, freshly baked, traditional
french pastry. black and white, ambmap
output:
url: images/00018-generated-56421668269.png
- text: >-
a front right view of a roughness map of a high quality 3D model of a
croissant, flaky, buttery, freshly baked, delicious, French pastry,
breakfast food,. black and white, small roughness variations, roughmap
output:
url: images/00019-generated-56421668270.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: heighmap, colormap, roughmap, normalmap, specmap, ambmap
license: apache-2.0
language:
- en
---
# Material Synthesis With SDXL
NOTE: The Inference API uses the 3D model by default, so if you want to generate 2D textures you may want to clone to a space or run the model yourself.
## Model description
This is an SDXL LoRA specifically designed for generating material textures, as well as images of 3D objects with the textures already applied to them.
It's recommended to use ControlNet for maximum consistency between images.
The model is already in a standard safetensors format, and can be readily used in A1111, ComfyUI, or any other model inference API of your choosing.
This model is part of the "Material synthesis with diffusion models" project by Artemiy Zhukov.
## Trigger words
To represent different texture types, the model uses a two-token trigger system (a short usage sketch follows the list below):
`heighmap` - height texture
`colormap` - albedo texture
`roughmap` - roughness texture
`normalmap` - normal texture
`specmap` - specular texture
`ambmap` - ambient occlusion texture
`metalmap` - metallic texture
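As a rough sketch of how the base model, the LoRA and a trigger token fit together in diffusers (the prompt wording and step count here are only illustrative assumptions):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("dog-god/texture-synthesis-sdxl-lora")

# "colormap" requests an albedo texture; swap it for any of the trigger tokens above.
prompt = ("an albedo map of weathered red bricks, seamless tiling texture. "
          "colormap, muted realistic colors")
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("bricks_colormap.png")
```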
Here are some examples of the 3D model in action (although you may want to use the base if you just want top-down textures):
<Gallery />
|
Patcas/plbartAssert-docnew-v1
|
Patcas
| 2024-02-02T15:34:51Z | 91 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"plbart",
"text2text-generation",
"generated_from_trainer",
"base_model:Patcas/my_awesome-assert-new",
"base_model:finetune:Patcas/my_awesome-assert-new",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T14:37:57Z |
---
base_model: Patcas/my_awesome-assert-new
tags:
- generated_from_trainer
model-index:
- name: plbartAssert-docnew-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plbartAssert-docnew-v1
This model is a fine-tuned version of [Patcas/my_awesome-assert-new](https://huggingface.co/Patcas/my_awesome-assert-new) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 230 | 1.1573 |
| No log | 2.0 | 460 | 0.9996 |
| 1.3727 | 3.0 | 690 | 0.9757 |
| 1.3727 | 4.0 | 920 | 0.9605 |
| 0.4516 | 5.0 | 1150 | 0.9732 |
| 0.4516 | 6.0 | 1380 | 0.9845 |
| 0.2234 | 7.0 | 1610 | 0.9685 |
| 0.2234 | 8.0 | 1840 | 0.9745 |
| 0.136 | 9.0 | 2070 | 0.9773 |
| 0.136 | 10.0 | 2300 | 0.9764 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
zaenalium/indonesia-distilgpt2
|
zaenalium
| 2024-02-02T15:34:41Z | 14 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:wikimedia/wikipedia",
"base_model:zaenalium/indonesia-distilgpt2",
"base_model:finetune:zaenalium/indonesia-distilgpt2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-29T07:45:28Z |
---
base_model: zaenalium/indonesia-distilgpt2
tags:
- generated_from_trainer
datasets:
- wikimedia/wikipedia
metrics:
- accuracy
model-index:
- name: indonesia-distilgpt2
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: wikimedia/wikipedia 20231101.id
type: wikimedia/wikipedia
args: 20231101.id
metrics:
- name: Accuracy
type: accuracy
value: 0.47740403110973484
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indonesia-distilgpt2
This model is a fine-tuned version of [zaenalium/indonesia-distilgpt2](https://huggingface.co/zaenalium/indonesia-distilgpt2) on the wikimedia/wikipedia 20231101.id dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5680
- Accuracy: 0.4774
## Model description
More information needed
## Intended uses & limitations
More information needed
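A minimal generation sketch with the `text-generation` pipeline (the Indonesian prompt and sampling settings below are placeholders):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="zaenalium/indonesia-distilgpt2")
out = generator(
    "Indonesia adalah negara kepulauan yang",  # "Indonesia is an archipelagic country that..."
    max_new_tokens=40,
    do_sample=True,
    top_p=0.95,
)
print(out[0]["generated_text"])
```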
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
fblin/mistral-inmob-2-1
|
fblin
| 2024-02-02T15:32:48Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T15:32:40Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
vanshtyagi203/cars-xzg
|
vanshtyagi203
| 2024-02-02T15:32:14Z | 1 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-02T15:28:10Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### CARS-xzg Dreambooth model trained by vanshtyagi203 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 1222189
Sample pictures of this concept:
|
fiza12/my_awesome_mind_model
|
fiza12
| 2024-02-02T15:30:36Z | 146 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-31T10:50:41Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4483
- Accuracy: 0.8462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.86 | 3 | 0.4483 | 0.8462 |
| No log | 2.0 | 7 | 0.4827 | 0.8462 |
| 0.4433 | 2.86 | 10 | 0.4312 | 0.8462 |
| 0.4433 | 4.0 | 14 | 0.4371 | 0.8462 |
| 0.4433 | 4.86 | 17 | 0.4490 | 0.8462 |
| 0.3769 | 6.0 | 21 | 0.4524 | 0.8462 |
| 0.3769 | 6.86 | 24 | 0.4367 | 0.8462 |
| 0.3769 | 8.0 | 28 | 0.4324 | 0.8462 |
| 0.3639 | 8.57 | 30 | 0.4336 | 0.8462 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
AISE-TUDelft/BinT5-NoFunName
|
AISE-TUDelft
| 2024-02-02T15:28:56Z | 180 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"code",
"dataset:AISE-TUDelft/Capybara",
"arxiv:2301.01701",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T14:41:54Z |
---
license: apache-2.0
datasets:
- AISE-TUDelft/Capybara
tags:
- code
---
# BinT5
- **Repository: https://github.com/AISE-TUDelft/Capybara-BinT5**
- **Paper: https://huggingface.co/papers/2301.01701**
- **Point of Contact: https://huggingface.co/aalkaswan**
- **Raw Data: https://zenodo.org/records/7229913**
BinT5 is a Binary Code Summarization model; the base model is [CodeT5](), fine-tuned on the [Capybara]() dataset.
We offer 5 variations of the model:
| Name | Training Data |
|-----------------------------------------------------|------------------------------------------------------|
| [BinT5-C](https://huggingface.co/AISE-TUDelft/BinT5-C) | C Source |
| [BinT5-Decom](https://huggingface.co/AISE-TUDelft/BinT5-Decom) | Decompiled C Binaries |
| [BinT5-Stripped](https://huggingface.co/AISE-TUDelft/BinT5-Stripped) | Stripped Decompiled C Binaries |
| [BinT5-Demi](https://huggingface.co/AISE-TUDelft/BinT5-Demi) | Demi-stripped Decompiled C Binaries |
| [BinT5-NoFunName](https://huggingface.co/AISE-TUDelft/BinT5-NoFunName) | Decompiled C Binaries with the Function Name removed |
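A minimal usage sketch, assuming the checkpoints load with the standard seq2seq classes (the decompiled snippet below is illustrative only; pick the variant that matches your input):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "AISE-TUDelft/BinT5-NoFunName"  # or any of the variants above
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Summarise a (pseudo-)decompiled C function.
decompiled = "undefined4 FUN_00101234(int param_1) { return param_1 * param_1; }"
inputs = tokenizer(decompiled, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```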
### Citation Information
```
@inproceedings{alkaswan2023extending,
title={Extending Source Code Pre-Trained Language Models to Summarise Decompiled Binaries},
author={Al-Kaswan, Ali and Ahmed, Toufique and Izadi, Maliheh and Sawant, Anand Ashok and Devanbu, Premkumar and van Deursen, Arie},
booktitle={2023 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)},
pages={260--271},
year={2023},
organization={IEEE}
}
```
|
AISE-TUDelft/BinT5-C
|
AISE-TUDelft
| 2024-02-02T15:28:44Z | 179 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"code",
"dataset:AISE-TUDelft/Capybara",
"arxiv:2301.01701",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T14:41:20Z |
---
license: apache-2.0
datasets:
- AISE-TUDelft/Capybara
tags:
- code
---
# BinT5
- **Repository: https://github.com/AISE-TUDelft/Capybara-BinT5**
- **Paper: https://huggingface.co/papers/2301.01701**
- **Point of Contact: https://huggingface.co/aalkaswan**
- **Raw Data: https://zenodo.org/records/7229913**
BinT5 is a Binary Code Summarization model; the base model is [CodeT5](), fine-tuned on the [Capybara]() dataset.
We offer 5 variations of the model:
| Name | Training Data |
|-----------------------------------------------------|------------------------------------------------------|
| [BinT5-C](https://huggingface.co/AISE-TUDelft/BinT5-C) | C Source |
| [BinT5-Decom](https://huggingface.co/AISE-TUDelft/BinT5-Decom) | Decompiled C Binaries |
| [BinT5-Stripped](https://huggingface.co/AISE-TUDelft/BinT5-Stripped) | Stripped Decompiled C Binaries |
| [BinT5-Demi](https://huggingface.co/AISE-TUDelft/BinT5-Demi) | Demi-stripped Decompiled C Binaries |
| [BinT5-NoFunName](https://huggingface.co/AISE-TUDelft/BinT5-NoFunName) | Decompiled C Binaries with the Function Name removed |
### Citation Information
```
@inproceedings{alkaswan2023extending,
title={Extending Source Code Pre-Trained Language Models to Summarise Decompiled Binaries},
author={Al-Kaswan, Ali and Ahmed, Toufique and Izadi, Maliheh and Sawant, Anand Ashok and Devanbu, Premkumar and van Deursen, Arie},
booktitle={2023 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)},
pages={260--271},
year={2023},
organization={IEEE}
}
```
|
AISE-TUDelft/BinT5-Stripped
|
AISE-TUDelft
| 2024-02-02T15:28:21Z | 183 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"code",
"dataset:AISE-TUDelft/Capybara",
"arxiv:2301.01701",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T14:39:31Z |
---
license: apache-2.0
datasets:
- AISE-TUDelft/Capybara
tags:
- code
---
# BinT5
- **Repository: https://github.com/AISE-TUDelft/Capybara-BinT5**
- **Paper: https://huggingface.co/papers/2301.01701**
- **Point of Contact: https://huggingface.co/aalkaswan**
- **Raw Data: https://zenodo.org/records/7229913**
BinT5 is a Binary Code Summarization model; the base model is [CodeT5](), fine-tuned on the [Capybara]() dataset.
We offer 5 variations of the model:
| Name | Training Data |
|-----------------------------------------------------|------------------------------------------------------|
| [BinT5-C](https://huggingface.co/AISE-TUDelft/BinT5-C) | C Source |
| [BinT5-Decom](https://huggingface.co/AISE-TUDelft/BinT5-Decom) | Decompiled C Binaries |
| [BinT5-Stripped](https://huggingface.co/AISE-TUDelft/BinT5-Stripped) | Stripped Decompiled C Binaries |
| [BinT5-Demi](https://huggingface.co/AISE-TUDelft/BinT5-Demi) | Demi-stripped Decompiled C Binaries |
| [BinT5-NoFunName](https://huggingface.co/AISE-TUDelft/BinT5-NoFunName) | Decompiled C Binaries with the Function Name removed |
### Citation Information
```
@inproceedings{alkaswan2023extending,
title={Extending Source Code Pre-Trained Language Models to Summarise Decompiled Binaries},
author={Al-Kaswan, Ali and Ahmed, Toufique and Izadi, Maliheh and Sawant, Anand Ashok and Devanbu, Premkumar and van Deursen, Arie},
booktitle={2023 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)},
pages={260--271},
year={2023},
organization={IEEE}
}
```
|
AISE-TUDelft/BinT5-Decom
|
AISE-TUDelft
| 2024-02-02T15:27:54Z | 181 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"code",
"dataset:AISE-TUDelft/Capybara",
"arxiv:2301.01701",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T14:40:39Z |
---
license: apache-2.0
datasets:
- AISE-TUDelft/Capybara
tags:
- code
---
# BinT5
- **Repository: https://github.com/AISE-TUDelft/Capybara-BinT5**
- **Paper: https://huggingface.co/papers/2301.01701**
- **Point of Contact: https://huggingface.co/aalkaswan**
- **Raw Data: https://zenodo.org/records/7229913**
BinT5 is a Binary Code Summarization model; the base model is [CodeT5](), fine-tuned on the [Capybara]() dataset.
We offer 5 variations of the model:
| Name | Training Data |
|-----------------------------------------------------|------------------------------------------------------|
| [BinT5-C](https://huggingface.co/AISE-TUDelft/BinT5-C) | C Source |
| [BinT5-Decom](https://huggingface.co/AISE-TUDelft/BinT5-Decom) | Decompiled C Binaries |
| [BinT5-Stripped](https://huggingface.co/AISE-TUDelft/BinT5-Stripped) | Stripped Decompiled C Binaries |
| [BinT5-Demi](https://huggingface.co/AISE-TUDelft/BinT5-Demi) | Demi-stripped Decompiled C Binaries |
| [BinT5-NoFunName](https://huggingface.co/AISE-TUDelft/BinT5-NoFunName) | Decompiled C Binaries with the Function Name removed |
### Citation Information
```
@inproceedings{alkaswan2023extending,
title={Extending Source Code Pre-Trained Language Models to Summarise Decompiled Binaries},
author={Al-Kaswan, Ali and Ahmed, Toufique and Izadi, Maliheh and Sawant, Anand Ashok and Devanbu, Premkumar and van Deursen, Arie},
booktitle={2023 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)},
pages={260--271},
year={2023},
organization={IEEE}
}
```
|
AISE-TUDelft/BinT5-Demi
|
AISE-TUDelft
| 2024-02-02T15:27:12Z | 175 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"code",
"dataset:AISE-TUDelft/Capybara",
"arxiv:2301.01701",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T14:40:03Z |
---
license: apache-2.0
datasets:
- AISE-TUDelft/Capybara
tags:
- code
---
# BinT5
- **Repository: https://github.com/AISE-TUDelft/Capybara-BinT5**
- **Paper: https://huggingface.co/papers/2301.01701**
- **Point of Contact: https://huggingface.co/aalkaswan**
- **Raw Data: https://zenodo.org/records/7229913**
BinT5 is a Binary Code Summarization model; the base model is [CodeT5](), fine-tuned on the [Capybara]() dataset.
We offer 5 variations of the model:
| Name | Training Data |
|-----------------------------------------------------|------------------------------------------------------|
| [BinT5-C](https://huggingface.co/AISE-TUDelft/BinT5-C) | C Source |
| [BinT5-Decom](https://huggingface.co/AISE-TUDelft/BinT5-Decom) | Decompiled C Binaries |
| [BinT5-Stripped](https://huggingface.co/AISE-TUDelft/BinT5-Stripped) | Stripped Decompiled C Binaries |
| [BinT5-Demi](https://huggingface.co/AISE-TUDelft/BinT5-Demi) | Demi-stripped Decompiled C Binaries |
| [BinT5-NoFunName](https://huggingface.co/AISE-TUDelft/BinT5-NoFunName) | Decompiled C Binaries with the Function Name removed |
### Citation Information
```
@inproceedings{alkaswan2023extending,
title={Extending Source Code Pre-Trained Language Models to Summarise Decompiled Binaries},
author={Al-Kaswan, Ali and Ahmed, Toufique and Izadi, Maliheh and Sawant, Anand Ashok and Devanbu, Premkumar and van Deursen, Arie},
booktitle={2023 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)},
pages={260--271},
year={2023},
organization={IEEE}
}
```
|
Rupesh2/mistral_lora_model_2
|
Rupesh2
| 2024-02-02T15:12:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-02T15:11:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CLMBR/rel-cl-transformer-4
|
CLMBR
| 2024-02-02T15:02:08Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-26T10:46:19Z |
---
tags:
- generated_from_trainer
model-index:
- name: rel-cl2-transformer-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rel-cl2-transformer-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8723
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2396 | 0.03 | 76320 | 4.2054 |
| 4.0324 | 1.03 | 152640 | 4.0344 |
| 3.9268 | 0.03 | 228960 | 3.9612 |
| 3.858 | 1.03 | 305280 | 3.9191 |
| 3.8054 | 0.03 | 381600 | 3.8933 |
| 3.7659 | 1.03 | 457920 | 3.8774 |
| 3.7299 | 0.03 | 534240 | 3.8676 |
| 3.6983 | 1.03 | 610560 | 3.8601 |
| 3.6681 | 0.03 | 686880 | 3.8566 |
| 3.6432 | 1.03 | 763200 | 3.8536 |
| 3.6169 | 0.03 | 839520 | 3.8521 |
| 3.596 | 1.03 | 915840 | 3.8518 |
| 3.5764 | 0.03 | 992160 | 3.8511 |
| 3.5594 | 1.03 | 1068480 | 3.8525 |
| 3.5453 | 0.03 | 1144800 | 3.8519 |
| 3.5375 | 1.03 | 1221120 | 3.8525 |
| 3.5204 | 0.03 | 1297440 | 3.8539 |
| 3.5061 | 1.03 | 1373760 | 3.8566 |
| 3.494 | 0.03 | 1450080 | 3.8576 |
| 3.483 | 1.03 | 1526400 | 3.8582 |
| 3.4723 | 0.03 | 1602720 | 3.8598 |
| 3.4649 | 1.03 | 1679040 | 3.8618 |
| 3.4552 | 0.03 | 1755360 | 3.8638 |
| 3.4422 | 0.03 | 1831680 | 3.8636 |
| 3.4279 | 1.03 | 1908000 | 3.8664 |
| 3.4158 | 0.03 | 1984320 | 3.8670 |
| 3.4032 | 1.03 | 2060640 | 3.8694 |
| 3.392 | 0.03 | 2136960 | 3.8695 |
| 3.3802 | 1.03 | 2213280 | 3.8707 |
| 3.3686 | 0.03 | 2289600 | 3.8726 |
| 3.3611 | 1.03 | 2365920 | 3.8729 |
| 3.359 | 0.03 | 2442240 | 3.8727 |
| 3.3459 | 1.03 | 2518560 | 3.8743 |
| 3.3369 | 0.03 | 2594880 | 3.8742 |
| 3.328 | 0.03 | 2671200 | 3.8751 |
| 3.3197 | 1.03 | 2747520 | 3.8751 |
| 3.3156 | 0.03 | 2823840 | 3.8752 |
| 3.3092 | 1.03 | 2900160 | 3.8751 |
| 3.3014 | 0.03 | 2976480 | 3.8737 |
| 3.2948 | 1.02 | 3052726 | 3.8723 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Locutusque/gpt2-large-medical
|
Locutusque
| 2024-02-02T15:01:13Z | 1,105 | 11 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"medical",
"dataset:BI55/MedText",
"dataset:pubmed_qa",
"doi:10.57967/hf/1367",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-01T22:04:10Z |
---
datasets:
- BI55/MedText
- pubmed_qa
metrics:
- bleu
- perplexity
pipeline_tag: text-generation
widget:
- text: <|USER|> What is Schizophrenia? <|ASSISTANT|>
inference:
parameters:
temperature: 0.8
top_p: 0.14
top_k: 41
max_new_tokens: 15
repetition_penalty: 1.176
tags:
- medical
---
This is a further fine-tuned version of Locutusque/gpt2-large-conversational, trained on the MedText and pubmed_qa datasets.
# Evaluation
This model was evaluated using GPT-3.5: it was asked medical questions and achieved an average accuracy of 80%.
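# Usage
A minimal usage sketch with 🤗 Transformers, reusing the prompt format and sampling parameters from the widget configuration above; the chat template is an assumption based on that widget.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/gpt2-large-medical"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt format taken from the widget example above.
prompt = "<|USER|> What is Schizophrenia? <|ASSISTANT|>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,  # the widget default is 15; raised here for a fuller answer
    do_sample=True,
    temperature=0.8,
    top_p=0.14,
    top_k=41,
    repetition_penalty=1.176,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```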
|
ksyint/mountain_landscape
|
ksyint
| 2024-02-02T14:48:28Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"diffusers:StableDiffusionInpaintPipeline",
"region:us"
] |
image-to-image
| 2024-02-02T14:37:12Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
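No usage code is provided yet; below is a minimal sketch based on the repository's pipeline tag (`StableDiffusionInpaintPipeline`). The prompt and the input/mask images are placeholders you must supply yourself.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Sketch only: the pipeline class is inferred from the repository tags.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "ksyint/mountain_landscape", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("input.png").convert("RGB")
mask_image = Image.open("mask.png").convert("RGB")  # white = region to repaint

result = pipe(
    prompt="a mountain landscape at sunset",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("output.png")
```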
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/miquella-120b-6.0bpw-h6-exl2
|
LoneStriker
| 2024-02-02T14:33:04Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T13:53:45Z |
---
base_model: []
tags:
- mergekit
- merge
---
# Miquella 120B
## Model has been remade with the [fixed dequantization](https://huggingface.co/152334H/miqu-1-70b-sf) of miqu.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
An attempt at re-creating [goliath-120b](https://huggingface.co/alpindale/goliath-120b) using the new miqu-1-70b model instead of Xwin.
The merge ratios are the same as goliath's, except that Xwin is swapped out for miqu.
### Models Merged
The following models were included in the merge:
* [miqu-1-70b](https://huggingface.co/alpindale/miqu-1-70b-fp16)
* [Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B)

Miquella the Unalloyed, by @eldrtchmoon
|
helenai/EleutherAI-gpt-neox-20b-ov-int8
|
helenai
| 2024-02-02T14:31:46Z | 4 | 0 |
transformers
|
[
"transformers",
"openvino",
"gpt_neox",
"text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T14:23:10Z |
---
language:
- en
tags:
- openvino
---
# EleutherAI-gpt-neox-20b-ov-int8
This is the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) model converted to [OpenVINO](https://openvino.ai), for accelerated inference.
Model weights are compressed to INT8 using [nncf](https://github.com/openvinotoolkit/nncf) weight compression.
Use [optimum-intel](https://github.com/huggingface/optimum-intel) for inference ([documentation](https://huggingface.co/docs/optimum/intel/inference#inference)).
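A minimal inference sketch with optimum-intel; since the weights in this repository are already in OpenVINO format, no export step is needed. The prompt is a placeholder.

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = "helenai/EleutherAI-gpt-neox-20b-ov-int8"
model = OVModelForCausalLM.from_pretrained(model_id)  # loads the OpenVINO IR directly
tokenizer = AutoTokenizer.from_pretrained(model_id)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("GPT-NeoX-20B is", max_new_tokens=50)[0]["generated_text"])
```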
|
Krisbiantoro/mixtral-id-chatml
|
Krisbiantoro
| 2024-02-02T14:29:24Z | 3 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-02T07:40:02Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mixtral-8x7B-v0.1
model-index:
- name: mixtral-id-chatml
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mixtral-id-chatml
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
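### Usage
A minimal sketch of loading this adapter on top of the base model with PEFT; the dtype/device settings and the ChatML-style prompt are assumptions (the card does not document how the model should be loaded or prompted for inference).

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mixtral-8x7B-v0.1"
adapter_id = "Krisbiantoro/mixtral-id-chatml"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter

# The repo name suggests ChatML formatting; this prompt format is an assumption.
prompt = "<|im_start|>user\nSiapa presiden pertama Indonesia?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```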
|