modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-03 00:36:49) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 535 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-03 00:36:49) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
1aurent/speecht5_finetuned_fleurs_fr
|
1aurent
| 2023-08-01T20:12:30Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"fr",
"dataset:google/fleurs",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-26T18:25:54Z |
---
language:
- fr
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- google/fleurs
model-index:
- name: speecht5_finetuned_fleurs_fr
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_fleurs_fr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the google/fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 6
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 36
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 250
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3958 | 2.82 | 250 | 0.3692 |
| 0.3942 | 5.64 | 500 | 0.3651 |
| 0.3924 | 8.46 | 750 | 0.3615 |
| 0.3927 | 11.28 | 1000 | 0.3627 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
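No usage snippet is included above; a minimal inference sketch following the standard SpeechT5 recipe, assuming the usual `microsoft/speecht5_hifigan` vocoder and a CMU Arctic x-vector as the speaker embedding (both are assumptions, not part of this card):
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "1aurent/speecht5_finetuned_fleurs_fr"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Speaker embedding: any 512-dim x-vector works; this dataset is the usual example source
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Bonjour, comment allez-vous aujourd'hui ?", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech_fr.wav", speech.numpy(), samplerate=16000)
```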
|
s3nh/lmsys-vicuna-7b-v1.5-16k-GGML
|
s3nh
| 2023-08-01T20:09:55Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation",
"en",
"arxiv:2307.09288",
"arxiv:2306.05685",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-01T19:57:53Z |
---
license: openrail
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML format model files for [lmsys/vicuna-7b-v1.5-16k](https://huggingface.co/lmsys/vicuna-7b-v1.5-16k).
### Inference
```python
from ctransformers import AutoModelForCausalLM

# Point these at the downloaded repository directory and the GGML file inside it
output_dir = "path/to/downloaded/repo"
ggml_file = "name-of-the-ggml-file.bin"

llm = AutoModelForCausalLM.from_pretrained(output_dir, model_file=ggml_file,
                                           gpu_layers=32, model_type="llama")

manual_input: str = "Tell me about your last dream, please."
output = llm(manual_input,
             max_new_tokens=256,
             temperature=0.9,
             top_p=0.7)
print(output)
```
# Original model card
## Model Details
Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Llama 2 Community License Agreement
- **Finetuned from model:** [Llama 2](https://arxiv.org/abs/2307.09288)
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api
## Training Details
Vicuna v1.5 (16k) is fine-tuned from Llama 2 with supervised instruction fine-tuning and linear RoPE scaling.
The training data is around 125K conversations collected from ShareGPT.com. These conversations are packed into sequences that contain 16K tokens each.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
|
OUniv/qloraFalcon7B_test
|
OUniv
| 2023-08-01T20:01:51Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"RefinedWebModel",
"custom_code",
"region:us"
] | null | 2023-08-01T19:41:15Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
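The card does not name the base model; a minimal loading sketch that recreates the 4-bit setup listed above, assuming `tiiuae/falcon-7b` as the base (check `adapter_config.json` for the actual `base_model_name_or_path`):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "tiiuae/falcon-7b"          # assumption: verify in adapter_config.json
adapter_id = "OUniv/qloraFalcon7B_test"

# Recreate the 4-bit nf4 / double-quant / bfloat16-compute config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("The three primary colors are", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True))
```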
|
Sookeyy/Reinforce-CartPole-v1
|
Sookeyy
| 2023-08-01T19:48:11Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-01T19:48:01Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
s3nh/lmsys-longchat-7b-v1.5-32k-GGML
|
s3nh
| 2023-08-01T19:38:49Z | 0 | 4 |
transformers
|
[
"transformers",
"text-generation",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-01T19:23:46Z |
---
license: openrail
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML format model files for [lmsys/longchat-7b-v1.5-32k](https://huggingface.co/lmsys/longchat-7b-v1.5-32k/tree/main).
### Inference
```python
from ctransformers import AutoModelForCausalLM

# Point these at the downloaded repository directory and the GGML file inside it
output_dir = "path/to/downloaded/repo"
ggml_file = "name-of-the-ggml-file.bin"

llm = AutoModelForCausalLM.from_pretrained(output_dir, model_file=ggml_file,
                                           gpu_layers=32, model_type="llama")

manual_input: str = "Tell me about your last dream, please."
output = llm(manual_input,
             max_new_tokens=256,
             temperature=0.9,
             top_p=0.7)
print(output)
```
# Original model card
|
NasimB/aochildes_gutenberg_fixed_log_rarity-mixed-seed
|
NasimB
| 2023-08-01T19:38:38Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-01T17:20:41Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: aochildes_gutenberg_fixed_log_rarity-mixed-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aochildes_gutenberg_fixed_log_rarity-mixed-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3592 | 0.29 | 500 | 5.3385 |
| 5.0519 | 0.59 | 1000 | 4.9203 |
| 4.7218 | 0.88 | 1500 | 4.6948 |
| 4.4531 | 1.17 | 2000 | 4.5553 |
| 4.3081 | 1.47 | 2500 | 4.4381 |
| 4.201 | 1.76 | 3000 | 4.3366 |
| 4.0871 | 2.05 | 3500 | 4.2702 |
| 3.9013 | 2.35 | 4000 | 4.2246 |
| 3.88 | 2.64 | 4500 | 4.1727 |
| 3.8385 | 2.93 | 5000 | 4.1257 |
| 3.6438 | 3.23 | 5500 | 4.1265 |
| 3.5973 | 3.52 | 6000 | 4.0988 |
| 3.5805 | 3.81 | 6500 | 4.0654 |
| 3.4762 | 4.11 | 7000 | 4.0753 |
| 3.3279 | 4.4 | 7500 | 4.0724 |
| 3.3237 | 4.69 | 8000 | 4.0616 |
| 3.3081 | 4.99 | 8500 | 4.0509 |
| 3.1497 | 5.28 | 9000 | 4.0682 |
| 3.1439 | 5.57 | 9500 | 4.0663 |
| 3.1428 | 5.87 | 10000 | 4.0656 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
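A minimal generation sketch with the standard `transformers` pipeline; the prompt and sampling settings are illustrative, not from this card:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/aochildes_gutenberg_fixed_log_rarity-mixed-seed")
print(generator("Once upon a time", max_new_tokens=50, do_sample=True, top_p=0.95)[0]["generated_text"])
```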
|
toughdata/flan-t5-base-eli5-question-generation-54500
|
toughdata
| 2023-08-01T19:33:16Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:eli5",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-01T15:41:38Z |
---
datasets:
- eli5
language:
- en
---
This model generates short questions based on long answers.
To use it, prepend "rephrase this as a question: " to your input text.
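For example, a minimal sketch with the `text2text-generation` pipeline (the answer text is illustrative):
```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="toughdata/flan-t5-base-eli5-question-generation-54500")
answer = ("The sky looks blue because shorter wavelengths of sunlight are scattered "
          "much more strongly by the molecules in the atmosphere than longer ones.")
print(qg("rephrase this as a question: " + answer, max_new_tokens=64)[0]["generated_text"])
```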
|
kingbri/airoboros-l2-13b-gpt4-m2.0-GPTQ
|
kingbri
| 2023-08-01T19:26:40Z | 4 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-31T19:37:24Z |
---
language:
- en
---
This is a GPTQ quantized version of [airoboros-l2-13b-gpt4-m2.0](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-m2.0) using float16.
Please refer to the original creator for more information.
Branches:
- main: 4 bits, groupsize 128, act order false
- 4bit-128g-actorder: 4 bits, groupsize 128, act order true
- 4bit-32g-actorder: 4 bits, groupsize 32, act order true
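A loading sketch for a specific branch, assuming a recent `transformers` with the GPTQ integration (`optimum` plus `auto-gptq`) installed and a quantization config present in the repo; the prompt is illustrative and should follow the prompt format described in the original airoboros card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kingbri/airoboros-l2-13b-gpt4-m2.0-GPTQ"
branch = "4bit-32g-actorder"  # or "main" / "4bit-128g-actorder"

tokenizer = AutoTokenizer.from_pretrained(repo, revision=branch)
model = AutoModelForCausalLM.from_pretrained(repo, revision=branch, device_map="auto")

prompt = "Write a haiku about GPUs."  # wrap in the airoboros chat format from the original card
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```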
|
liusong299/sd-class-butterflies-32
|
liusong299
| 2023-08-01T19:22:26Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-08-01T19:22:14Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('liusong299/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
jariasn/poca-SoccerTwos
|
jariasn
| 2023-08-01T19:19:24Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-08-01T19:17:35Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jariasn/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Erfanfz/distilbert-base-uncased-finetuned-emotion
|
Erfanfz
| 2023-08-01T19:11:25Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-01T18:59:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9234086034451895
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2165
- Accuracy: 0.9235
- F1: 0.9234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8148 | 1.0 | 250 | 0.3120 | 0.898 | 0.8937 |
| 0.2475 | 2.0 | 500 | 0.2165 | 0.9235 | 0.9234 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
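A minimal inference sketch with the text-classification pipeline; the input sentence is illustrative:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Erfanfz/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!", top_k=None))  # top_k=None returns all emotion scores
```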
|
VictorEigen/funcname_codebert_20235201_1652
|
VictorEigen
| 2023-08-01T19:02:29Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"encoder-decoder",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-01T16:53:55Z |
---
base_model: ''
tags:
- generated_from_keras_callback
model-index:
- name: VictorEigen/funcname_codebert_20235201_1652
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# VictorEigen/funcname_codebert_20235201_1652
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0014
- Validation Loss: 0.0005
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 28378, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.2481 | 0.0014 | 0 |
| 0.0014 | 0.0005 | 1 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.2
- Tokenizers 0.13.3
|
digiplay/perfectlevel10
|
digiplay
| 2023-08-01T18:49:48Z | 4,873 | 11 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-01T18:10:46Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/117591/perfectlevel10
Sample image :

|
timliu007/falcon-7b-qlora-ft-adapters
|
timliu007
| 2023-08-01T18:49:20Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-01T18:49:17Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
MaxBas/01-08-2023_19-25-49
|
MaxBas
| 2023-08-01T18:39:01Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:common_voice_13_0",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-08-01T17:25:53Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
model-index:
- name: 01-08-2023_19-25-49
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 01-08-2023_19-25-49
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 6000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5581 | 1.0 | 884 | 0.6011 |
| 0.5129 | 2.0 | 1768 | 0.5674 |
| 0.5038 | 3.0 | 2652 | 0.5604 |
| 0.5068 | 4.0 | 3536 | 0.5502 |
| 0.4936 | 5.0 | 4420 | 0.5465 |
| 0.4871 | 6.0 | 5304 | 0.5426 |
| 0.4855 | 6.79 | 6000 | 0.5452 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 2.13.1
- Tokenizers 0.13.3
|
leondz/artgpt2tox
|
leondz
| 2023-08-01T18:31:08Z | 315 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-08T23:58:57Z |
---
license: apache-2.0
language:
- en
---
|
davidmunechika/coreml-landscape-anime-pro
|
davidmunechika
| 2023-08-01T18:29:06Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-01T18:09:44Z |
---
license: creativeml-openrail-m
---
|
lmsys/vicuna-7b-v1.3
|
lmsys
| 2023-08-01T18:26:56Z | 45,515 | 129 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2302.13971",
"arxiv:2306.05685",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-18T03:36:42Z |
---
inference: false
---
**NOTE: New version available**
Please check out a newer version of the weights [here](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).
<br>
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
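Beyond the FastChat tooling above, a minimal `transformers` sketch; the single-turn prompt template below is the commonly used Vicuna v1.x format and should be verified against FastChat's conversation templates:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-7b-v1.3"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Assumed Vicuna v1.x single-turn template; see FastChat's conversation templates for the exact format
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What is the capital of France? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```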
## Training Details
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 125K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
|
lmsys/vicuna-13b-v1.3
|
lmsys
| 2023-08-01T18:26:48Z | 15,885 | 197 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2302.13971",
"arxiv:2306.05685",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-18T03:38:59Z |
---
inference: false
---
**NOTE: New version available**
Please check out a newer version of the weights [here](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).
<br>
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
## Training Details
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 125K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
|
lmsys/vicuna-7b-v1.1
|
lmsys
| 2023-08-01T18:26:25Z | 3,527 | 77 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2302.13971",
"arxiv:2306.05685",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-04-12T21:43:30Z |
---
inference: false
---
**NOTE: New version available**
Please check out a newer version of the weights [here](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).
<br>
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
## Training Details
Vicuna v1.1 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 70K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
## Acknowledgement
Special thanks to [@TheBloke](https://huggingface.co/TheBloke) for hosting this merged version of weights earlier.
|
lmsys/vicuna-13b-v1.1
|
lmsys
| 2023-08-01T18:26:15Z | 3,614 | 98 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2302.13971",
"arxiv:2306.05685",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-04-12T21:23:50Z |
---
inference: false
---
**NOTE: New version available**
Please check out a newer version of the weights [here](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).
<br>
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
## Training Details
Vicuna v1.1 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 70K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
## Acknowledgement
Special thanks to [@TheBloke](https://huggingface.co/TheBloke) for hosting this merged version of weights earlier.
|
jclynn/CodeBERTa-small-v1-finetuned-codesearchnet
|
jclynn
| 2023-08-01T18:22:00Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"base_model:huggingface/CodeBERTa-small-v1",
"base_model:finetune:huggingface/CodeBERTa-small-v1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-01T18:19:55Z |
---
base_model: huggingface/CodeBERTa-small-v1
tags:
- generated_from_keras_callback
model-index:
- name: jclynn/CodeBERTa-small-v1-finetuned-codesearchnet
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jclynn/CodeBERTa-small-v1-finetuned-codesearchnet
This model is a fine-tuned version of [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7179
- Validation Loss: 1.4649
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -996, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.7179 | 1.4649 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.2
- Tokenizers 0.13.3
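Since the repo ships TensorFlow weights, a minimal fill-mask sketch assuming TensorFlow is installed (the `<mask>` token follows the RoBERTa convention of the base CodeBERTa model):
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="jclynn/CodeBERTa-small-v1-finetuned-codesearchnet",
    framework="tf",  # the repository ships TensorFlow weights
)
for pred in fill_mask("def add(a, b): return a <mask> b"):
    print(pred["token_str"], round(pred["score"], 4))
```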
|
ssbuild/chatglm2-6b-32k-int4
|
ssbuild
| 2023-08-01T18:21:00Z | 104 | 2 |
transformers
|
[
"transformers",
"pytorch",
"chatglm",
"custom_code",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-01T08:18:16Z |
---
license: apache-2.0
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
---
# ChatGLM2-6B
## Software Dependencies
```shell
pip install protobuf transformers==4.30.2 cpm_kernels torch>=2.0 gradio mdtex2html sentencepiece accelerate
```
## Code Usage
You can call the ChatGLM-6B model to generate dialogue with the following code:
```ipython
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
>>> model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
>>> model = model.eval()
>>> response, history = model.chat(tokenizer, "你好", history=[])
>>> print(response)
你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。
>>> response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)
>>> print(response)
晚上睡不着可能会让你感到焦虑或不舒服,但以下是一些可以帮助你入睡的方法:
1. 制定规律的睡眠时间表:保持规律的睡眠时间表可以帮助你建立健康的睡眠习惯,使你更容易入睡。尽量在每天的相同时间上床,并在同一时间起床。
2. 创造一个舒适的睡眠环境:确保睡眠环境舒适,安静,黑暗且温度适宜。可以使用舒适的床上用品,并保持房间通风。
3. 放松身心:在睡前做些放松的活动,例如泡个热水澡,听些轻柔的音乐,阅读一些有趣的书籍等,有助于缓解紧张和焦虑,使你更容易入睡。
4. 避免饮用含有咖啡因的饮料:咖啡因是一种刺激性物质,会影响你的睡眠质量。尽量避免在睡前饮用含有咖啡因的饮料,例如咖啡,茶和可乐。
5. 避免在床上做与睡眠无关的事情:在床上做些与睡眠无关的事情,例如看电影,玩游戏或工作等,可能会干扰你的睡眠。
6. 尝试呼吸技巧:深呼吸是一种放松技巧,可以帮助你缓解紧张和焦虑,使你更容易入睡。试着慢慢吸气,保持几秒钟,然后缓慢呼气。
如果这些方法无法帮助你入睡,你可以考虑咨询医生或睡眠专家,寻求进一步的建议。
```
For more instructions, including how to run CLI and web demos, and model quantization, please refer to our [Github Repo](https://github.com/THUDM/ChatGLM2-6B).
## Change Log
* v1.0
## License
The code in this repository is open-sourced under the [Apache-2.0](LICENSE) license; use of the ChatGLM2-6B model weights must follow the [Model License](MODEL_LICENSE).
|
fimu-docproc-research/CIVQA_layoutXLM_model
|
fimu-docproc-research
| 2023-08-01T17:55:48Z | 62 | 1 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv2",
"document-question-answering",
"endpoints_compatible",
"region:us"
] |
document-question-answering
| 2023-06-18T21:37:07Z |
# The fine-tuned LayoutXLM model on a Czech dataset for Visual Question Answering
The original model can be found [here](https://huggingface.co/microsoft/layoutxlm-base).
The CIVQA dataset is a Czech invoice dataset for Visual Question Answering.
Achieved results:
- eval_answer_text_recall = 0.7065
- eval_answer_text_f1 = 0.6998
- eval_answer_text_precision = 0.7319
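A minimal inference sketch, assuming the checkpoint works with transformers' document-question-answering pipeline (pytesseract and the Tesseract binary are required for OCR); the image path and question are placeholders:
```python
from transformers import pipeline

# pytesseract + Tesseract OCR are needed for the built-in word/box extraction step
dqa = pipeline("document-question-answering", model="fimu-docproc-research/CIVQA_layoutXLM_model")
print(dqa(image="invoice.png", question="What is the total amount?"))  # placeholder image path
```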
|
elifm/swin-tiny-patch4-window7-224-finetuned-sar
|
elifm
| 2023-08-01T17:40:34Z | 214 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-01T15:08:58Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-sar
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9880478087649402
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-sar
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0351
- Accuracy: 0.9880
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3706 | 1.0 | 53 | 0.1639 | 0.9442 |
| 0.3062 | 2.0 | 106 | 0.1337 | 0.9509 |
| 0.264 | 3.0 | 159 | 0.0671 | 0.9748 |
| 0.1861 | 4.0 | 212 | 0.0470 | 0.9854 |
| 0.2131 | 5.0 | 265 | 0.0351 | 0.9880 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.2
- Tokenizers 0.13.3
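A minimal inference sketch with the image-classification pipeline; the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="elifm/swin-tiny-patch4-window7-224-finetuned-sar")
print(classifier("sar_patch.png"))  # placeholder path to an input image
```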
|
fimu-docproc-research/xlm-roberta-large-ner
|
fimu-docproc-research
| 2023-08-01T17:36:25Z | 133 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-12-08T23:25:35Z |
# The fine-tuned XLM-RoBERTa (large-sized model) for Named Entity Recognition
This model was fine-tuned on the [Czech invoice dataset](fimu-docproc-research/tesseract-ocr-annotations).
## Achieved results
- eval_accuracy = 0.9618613
- eval_f1 = 0.7825681
- eval_precision = 0.7752081
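A minimal inference sketch with the token-classification pipeline; the example sentence is an illustrative invoice-style line, not taken from the training data:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="fimu-docproc-research/xlm-roberta-large-ner",
    aggregation_strategy="simple",
)
print(ner("Faktura 2023/001, dodavatel: Example s.r.o., celkem k úhradě 1 250 Kč"))
```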
|
BaojunJia/sd-class-butterflies-32
|
BaojunJia
| 2023-08-01T17:27:41Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-08-01T16:59:47Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('BaojunJia/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Emperor-WS/ppo-LunarLander-v2-u8
|
Emperor-WS
| 2023-08-01T17:26:39Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-01T17:26:33Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -102.93 +/- 43.17
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 16,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 16,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Emperor-WS/ppo-CartPole-v1',
 'batch_size': 2048,
 'minibatch_size': 128}
```
|
Shiro/roberta-large-movie-genre
|
Shiro
| 2023-08-01T17:25:49Z | 127 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-31T22:02:05Z |
---
license: mit
---
# roberta-large-movies
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the movie competition dataset.
link: https://huggingface.co/spaces/competitions/movie-genre-prediction
This model is based on MLM (masked language modeling) fine-tuning; the goal is domain transfer. It then needs to be fine-tuned on labels.
It achieves the following results on the evaluation set:
- Loss: 1.3261
- Accuracy: 0.7375
## Model description
roberta-large
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7698 | 0.18 | 500 | 1.6168 | 0.6738 |
| 1.7761 | 0.36 | 1000 | 1.6522 | 0.6830 |
| 1.7626 | 0.54 | 1500 | 1.6534 | 0.6660 |
| 1.7602 | 0.72 | 2000 | 1.6576 | 0.6787 |
| 1.7587 | 0.89 | 2500 | 1.6266 | 0.6773 |
| 1.7047 | 1.07 | 3000 | 1.6060 | 0.6852 |
| 1.6782 | 1.25 | 3500 | 1.5990 | 0.6906 |
| 1.6733 | 1.43 | 4000 | 1.5377 | 0.6967 |
| 1.6664 | 1.61 | 4500 | 1.6435 | 0.6747 |
| 1.6719 | 1.79 | 5000 | 1.4839 | 0.6907 |
| 1.6502 | 1.97 | 5500 | 1.5351 | 0.6897 |
| 1.6233 | 2.15 | 6000 | 1.6818 | 0.6763 |
| 1.6127 | 2.32 | 6500 | 1.5865 | 0.6853 |
| 1.6274 | 2.5 | 7000 | 1.5004 | 0.7004 |
| 1.601 | 2.68 | 7500 | 1.4522 | 0.6930 |
| 1.6123 | 2.86 | 8000 | 1.5371 | 0.6894 |
| 1.6074 | 3.04 | 8500 | 1.5342 | 0.6952 |
| 1.563 | 3.22 | 9000 | 1.5682 | 0.6876 |
| 1.5746 | 3.4 | 9500 | 1.5705 | 0.6958 |
| 1.5539 | 3.58 | 10000 | 1.4711 | 0.7041 |
| 1.578 | 3.75 | 10500 | 1.5466 | 0.6889 |
| 1.5492 | 3.93 | 11000 | 1.4629 | 0.6969 |
| 1.5291 | 4.11 | 11500 | 1.4265 | 0.7200 |
| 1.5079 | 4.29 | 12000 | 1.5053 | 0.6966 |
| 1.5283 | 4.47 | 12500 | 1.5257 | 0.6903 |
| 1.5141 | 4.65 | 13000 | 1.5063 | 0.6950 |
| 1.4979 | 4.83 | 13500 | 1.5636 | 0.6956 |
| 1.5294 | 5.01 | 14000 | 1.5878 | 0.6835 |
| 1.4641 | 5.18 | 14500 | 1.5575 | 0.6962 |
| 1.4754 | 5.36 | 15000 | 1.4779 | 0.7007 |
| 1.4696 | 5.54 | 15500 | 1.4520 | 0.6965 |
| 1.4655 | 5.72 | 16000 | 1.6320 | 0.6830 |
| 1.4792 | 5.9 | 16500 | 1.4152 | 0.7134 |
| 1.4379 | 6.08 | 17000 | 1.4900 | 0.7042 |
| 1.4281 | 6.26 | 17500 | 1.5407 | 0.6990 |
| 1.436 | 6.44 | 18000 | 1.5343 | 0.6914 |
| 1.4342 | 6.61 | 18500 | 1.5324 | 0.7024 |
| 1.4176 | 6.79 | 19000 | 1.4486 | 0.7133 |
| 1.4308 | 6.97 | 19500 | 1.4598 | 0.7032 |
| 1.4014 | 7.15 | 20000 | 1.5750 | 0.6938 |
| 1.3661 | 7.33 | 20500 | 1.5404 | 0.6985 |
| 1.3857 | 7.51 | 21000 | 1.4692 | 0.7037 |
| 1.3846 | 7.69 | 21500 | 1.5511 | 0.6941 |
| 1.3867 | 7.87 | 22000 | 1.5321 | 0.6925 |
| 1.3658 | 8.04 | 22500 | 1.5500 | 0.7021 |
| 1.3406 | 8.22 | 23000 | 1.5239 | 0.6960 |
| 1.3405 | 8.4 | 23500 | 1.4414 | 0.7055 |
| 1.3373 | 8.58 | 24000 | 1.5994 | 0.6784 |
| 1.3527 | 8.76 | 24500 | 1.5106 | 0.6970 |
| 1.3436 | 8.94 | 25000 | 1.4714 | 0.7080 |
| 1.3069 | 9.12 | 25500 | 1.4990 | 0.6953 |
| 1.2969 | 9.3 | 26000 | 1.4810 | 0.6964 |
| 1.3009 | 9.47 | 26500 | 1.5965 | 0.6876 |
| 1.3227 | 9.65 | 27000 | 1.4296 | 0.7014 |
| 1.3259 | 9.83 | 27500 | 1.4137 | 0.7189 |
| 1.3131 | 10.01 | 28000 | 1.5342 | 0.7020 |
| 1.271 | 10.19 | 28500 | 1.4708 | 0.7113 |
| 1.2684 | 10.37 | 29000 | 1.4342 | 0.7046 |
| 1.2767 | 10.55 | 29500 | 1.4703 | 0.7094 |
| 1.2861 | 10.73 | 30000 | 1.3323 | 0.7309 |
| 1.2617 | 10.9 | 30500 | 1.4562 | 0.7003 |
| 1.2551 | 11.08 | 31000 | 1.4361 | 0.7170 |
| 1.2404 | 11.26 | 31500 | 1.4537 | 0.7035 |
| 1.2562 | 11.44 | 32000 | 1.4039 | 0.7132 |
| 1.2489 | 11.62 | 32500 | 1.4372 | 0.7064 |
| 1.2406 | 11.8 | 33000 | 1.4926 | 0.7087 |
| 1.2285 | 11.98 | 33500 | 1.4080 | 0.7152 |
| 1.2213 | 12.16 | 34000 | 1.4031 | 0.7170 |
| 1.1998 | 12.33 | 34500 | 1.3541 | 0.7223 |
| 1.2184 | 12.51 | 35000 | 1.3630 | 0.7308 |
| 1.2195 | 12.69 | 35500 | 1.3125 | 0.7281 |
| 1.2178 | 12.87 | 36000 | 1.4257 | 0.7119 |
| 1.1918 | 13.05 | 36500 | 1.4108 | 0.7153 |
| 1.1664 | 13.23 | 37000 | 1.3577 | 0.7227 |
| 1.1754 | 13.41 | 37500 | 1.3777 | 0.7206 |
| 1.1855 | 13.59 | 38000 | 1.3501 | 0.7354 |
| 1.1644 | 13.76 | 38500 | 1.3747 | 0.7207 |
| 1.1709 | 13.94 | 39000 | 1.3704 | 0.7184 |
| 1.1613 | 14.12 | 39500 | 1.4307 | 0.7247 |
| 1.1443 | 14.3 | 40000 | 1.3190 | 0.7221 |
| 1.1356 | 14.48 | 40500 | 1.3288 | 0.7331 |
| 1.1493 | 14.66 | 41000 | 1.3505 | 0.7240 |
| 1.1417 | 14.84 | 41500 | 1.3146 | 0.7320 |
| 1.1349 | 15.02 | 42000 | 1.3546 | 0.7333 |
| 1.1169 | 15.19 | 42500 | 1.3709 | 0.7247 |
| 1.1187 | 15.37 | 43000 | 1.4243 | 0.7218 |
| 1.118 | 15.55 | 43500 | 1.3835 | 0.7264 |
| 1.1165 | 15.73 | 44000 | 1.3240 | 0.7254 |
| 1.114 | 15.91 | 44500 | 1.3264 | 0.7382 |
| 1.105 | 16.09 | 45000 | 1.3214 | 0.7334 |
| 1.0924 | 16.27 | 45500 | 1.3847 | 0.7282 |
| 1.0915 | 16.45 | 46000 | 1.3604 | 0.7317 |
| 1.0968 | 16.62 | 46500 | 1.3540 | 0.7319 |
| 1.0772 | 16.8 | 47000 | 1.2475 | 0.7306 |
| 1.0975 | 16.98 | 47500 | 1.2636 | 0.7448 |
| 1.0708 | 17.16 | 48000 | 1.4056 | 0.7182 |
| 1.0654 | 17.34 | 48500 | 1.3769 | 0.7276 |
| 1.0676 | 17.52 | 49000 | 1.3357 | 0.7224 |
| 1.0507 | 17.7 | 49500 | 1.4088 | 0.7124 |
| 1.0424 | 17.88 | 50000 | 1.3146 | 0.7315 |
| 1.0524 | 18.06 | 50500 | 1.2896 | 0.7393 |
| 1.0349 | 18.23 | 51000 | 1.3987 | 0.7192 |
| 1.0217 | 18.41 | 51500 | 1.2938 | 0.7381 |
| 1.0238 | 18.59 | 52000 | 1.2962 | 0.7387 |
| 1.0292 | 18.77 | 52500 | 1.3195 | 0.7371 |
| 1.0426 | 18.95 | 53000 | 1.2835 | 0.7412 |
| 1.0196 | 19.13 | 53500 | 1.2346 | 0.7473 |
| 1.012 | 19.31 | 54000 | 1.3666 | 0.7338 |
| 1.0256 | 19.49 | 54500 | 1.3140 | 0.7365 |
| 0.9824 | 19.66 | 55000 | 1.2764 | 0.7416 |
| 1.0048 | 19.84 | 55500 | 1.2514 | 0.7488 |
| 0.9947 | 20.02 | 56000 | 1.3351 | 0.7432 |
| 0.977 | 20.2 | 56500 | 1.2854 | 0.7451 |
| 0.9862 | 20.38 | 57000 | 1.3666 | 0.7285 |
| 0.9699 | 20.56 | 57500 | 1.3123 | 0.7348 |
| 0.977 | 20.74 | 58000 | 1.3426 | 0.7255 |
| 0.9749 | 20.92 | 58500 | 1.3763 | 0.7297 |
| 0.9505 | 21.09 | 59000 | 1.2372 | 0.7434 |
| 0.9438 | 21.27 | 59500 | 1.4334 | 0.7159 |
| 0.944 | 21.45 | 60000 | 1.2690 | 0.7508 |
| 0.9427 | 21.63 | 60500 | 1.2186 | 0.7486 |
| 0.9553 | 21.81 | 61000 | 1.3941 | 0.7269 |
| 0.9571 | 21.99 | 61500 | 1.4163 | 0.7274 |
| 0.932 | 22.17 | 62000 | 1.2717 | 0.7523 |
| 0.9166 | 22.35 | 62500 | 1.2177 | 0.7396 |
| 0.9301 | 22.52 | 63000 | 1.3264 | 0.7378 |
| 0.9351 | 22.7 | 63500 | 1.2570 | 0.7520 |
| 0.9211 | 22.88 | 64000 | 1.2639 | 0.75 |
| 0.9211 | 23.06 | 64500 | 1.2377 | 0.7606 |
| 0.9196 | 23.24 | 65000 | 1.2739 | 0.7485 |
| 0.9062 | 23.42 | 65500 | 1.3263 | 0.7365 |
| 0.8965 | 23.6 | 66000 | 1.2814 | 0.7455 |
| 0.9004 | 23.78 | 66500 | 1.2109 | 0.7562 |
| 0.9094 | 23.95 | 67000 | 1.2629 | 0.7528 |
| 0.8937 | 24.13 | 67500 | 1.2771 | 0.7375 |
| 0.8711 | 24.31 | 68000 | 1.3746 | 0.7353 |
| 0.8972 | 24.49 | 68500 | 1.2529 | 0.7454 |
| 0.8863 | 24.67 | 69000 | 1.3219 | 0.7359 |
| 0.8823 | 24.85 | 69500 | 1.3136 | 0.7367 |
| 0.8759 | 25.03 | 70000 | 1.3152 | 0.7428 |
| 0.8722 | 25.21 | 70500 | 1.3108 | 0.7570 |
| 0.8548 | 25.38 | 71000 | 1.3503 | 0.7368 |
| 0.8728 | 25.56 | 71500 | 1.3091 | 0.7403 |
| 0.8633 | 25.74 | 72000 | 1.2952 | 0.7416 |
| 0.8612 | 25.92 | 72500 | 1.1612 | 0.7719 |
| 0.8677 | 26.1 | 73000 | 1.2855 | 0.7450 |
| 0.8526 | 26.28 | 73500 | 1.2979 | 0.7545 |
| 0.8594 | 26.46 | 74000 | 1.2570 | 0.7598 |
| 0.8481 | 26.64 | 74500 | 1.2337 | 0.7492 |
| 0.855 | 26.81 | 75000 | 1.2875 | 0.7444 |
| 0.835 | 26.99 | 75500 | 1.2270 | 0.7585 |
| 0.8309 | 27.17 | 76000 | 1.2540 | 0.7389 |
| 0.8326 | 27.35 | 76500 | 1.3611 | 0.7375 |
| 0.8398 | 27.53 | 77000 | 1.2248 | 0.7505 |
| 0.8304 | 27.71 | 77500 | 1.2403 | 0.7607 |
| 0.8373 | 27.89 | 78000 | 1.1709 | 0.7611 |
| 0.8462 | 28.07 | 78500 | 1.2891 | 0.7508 |
| 0.8259 | 28.24 | 79000 | 1.2452 | 0.7501 |
| 0.8334 | 28.42 | 79500 | 1.2986 | 0.7468 |
| 0.8115 | 28.6 | 80000 | 1.2880 | 0.7515 |
| 0.8205 | 28.78 | 80500 | 1.2728 | 0.7562 |
| 0.8261 | 28.96 | 81000 | 1.2661 | 0.7524 |
| 0.8299 | 29.14 | 81500 | 1.2592 | 0.7486 |
| 0.8276 | 29.32 | 82000 | 1.2325 | 0.7530 |
| 0.8112 | 29.5 | 82500 | 1.3154 | 0.7478 |
| 0.8111 | 29.67 | 83000 | 1.3343 | 0.7405 |
| 0.8148 | 29.85 | 83500 | 1.2806 | 0.7485 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
nacielo/privateuse
|
nacielo
| 2023-08-01T17:18:15Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:nacielo/wav2vec2-gpt",
"base_model:finetune:nacielo/wav2vec2-gpt",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-01T17:10:03Z |
---
base_model: nacielo/wav2vec2-gpt
tags:
- generated_from_trainer
model-index:
- name: privateuse
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# privateuse
This model is a fine-tuned version of [nacielo/wav2vec2-gpt](https://huggingface.co/nacielo/wav2vec2-gpt) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 250 | 0.7575 | 24.1082 | 7.4663 | 18.6701 | 22.4544 | 19.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
nokotin/SpaceInvaders
|
nokotin
| 2023-08-01T17:12:08Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-01T17:11:31Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 716.50 +/- 281.96
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nokotin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nokotin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga nokotin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
MariaK/vilt_finetuned_200
|
MariaK
| 2023-08-01T17:08:24Z | 94 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vilt",
"visual-question-answering",
"generated_from_trainer",
"dataset:vqa",
"base_model:dandelin/vilt-b32-mlm",
"base_model:finetune:dandelin/vilt-b32-mlm",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
visual-question-answering
| 2023-08-01T16:10:06Z |
---
license: apache-2.0
base_model: dandelin/vilt-b32-mlm
tags:
- generated_from_trainer
datasets:
- vqa
model-index:
- name: vilt_finetuned_200
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vilt_finetuned_200
This model is a fine-tuned version of [dandelin/vilt-b32-mlm](https://huggingface.co/dandelin/vilt-b32-mlm) on the vqa dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
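A minimal inference sketch with the visual-question-answering pipeline; the image path and question are placeholders:
```python
from transformers import pipeline

vqa = pipeline("visual-question-answering", model="MariaK/vilt_finetuned_200")
print(vqa(image="example.jpg", question="What is on the table?"))  # placeholder image path
```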
|
Nadarajan329/Agriculture
|
Nadarajan329
| 2023-08-01T17:06:25Z | 0 | 1 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-08-01T17:06:25Z |
---
license: bigscience-openrail-m
---
|
DanRR/sd_xl_base_1.0_ckpt
|
DanRR
| 2023-08-01T16:58:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-01T16:58:08Z |
Converted version of sdxl_1.0 to the training (ckpt) format.
DO NOT USE! CKPT files are dangerous and may contain malicious code.
|
bofenghuang/vigogne-2-13b-instruct
|
bofenghuang
| 2023-08-01T16:49:35Z | 1,580 | 14 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"LLM",
"llama-2",
"fr",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-26T13:38:14Z |
---
language:
- fr
pipeline_tag: text-generation
library_name: transformers
inference: false
tags:
- LLM
- llama
- llama-2
---
<p align="center" width="100%">
<img src="https://huggingface.co/bofenghuang/vigogne-2-13b-instruct/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;">
</p>
# Vigogne-2-13B-Instruct: A Llama-2 based French instruction-following model
Vigogne-2-13B-Instruct is a model based on [LLaMA-2-13B](https://ai.meta.com/llama) that has been fine-tuned to follow French instructions.
For more information, please visit the Github repo: https://github.com/bofenghuang/vigogne
**Usage and License Notices**: Vigogne-2-13B-Instruct follows the same usage policy as Llama-2, which can be found [here](https://ai.meta.com/llama/use-policy).
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from vigogne.preprocess import generate_instruct_prompt
model_name_or_path = "bofenghuang/vigogne-2-13b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto")
user_query = "Expliquez la différence entre DoS et phishing."
prompt = generate_instruct_prompt(user_query)
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
input_length = input_ids.shape[1]
generated_outputs = model.generate(
input_ids=input_ids,
generation_config=GenerationConfig(
temperature=0.1,
do_sample=True,
repetition_penalty=1.0,
max_new_tokens=512,
),
return_dict_in_generate=True,
)
generated_tokens = generated_outputs.sequences[0, input_length:]
generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(generated_text)
```
You can also run inference with this model using the following Google Colab notebook.
<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_instruct.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Example Outputs
*todo*
## Limitations
Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
|
eelang/ppo-LunarLander-v2
|
eelang
| 2023-08-01T16:32:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-01T16:29:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 231.48 +/- 31.96
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename assumed; adjust to the actual file in the repo)
checkpoint = load_from_hub("eelang/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
aihakase/chilledremix_for_diffusers
|
aihakase
| 2023-08-01T16:30:53Z | 45 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-09T04:50:45Z |
---
license: creativeml-openrail-m
---
|
idajikuu/SpeechT5_TTS_Haitian2
|
idajikuu
| 2023-08-01T16:16:57Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"text-to-speech, Haitian Creole, TTS,speecht5",
"generated_from_trainer",
"ht",
"dataset:cmu_haitian",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-30T12:38:09Z |
---
language:
- ht
license: mit
base_model: microsoft/speecht5_tts
tags:
- text-to-speech, Haitian Creole, TTS,speecht5
- generated_from_trainer
datasets:
- cmu_haitian
model-index:
- name: SpeechT5 TTS Haitian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Haitian
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the cmu_haitian dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
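## Usage
A minimal inference sketch (the example sentence is a placeholder and the speaker embedding is random; a real 512-dim x-vector gives much better results):
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("idajikuu/SpeechT5_TTS_Haitian2")
model = SpeechT5ForTextToSpeech.from_pretrained("idajikuu/SpeechT5_TTS_Haitian2")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Bonjou, kijan ou ye?", return_tensors="pt")
speaker_embeddings = torch.randn(1, 512)  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), 16000)
```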
|
NasimB/aochildes_cbt_log_rarity-mixed-seed
|
NasimB
| 2023-08-01T16:14:39Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-01T14:11:52Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: aochildes_cbt_log_rarity-mixed-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aochildes_cbt_log_rarity-mixed-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1491
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3642 | 0.29 | 500 | 5.3473 |
| 5.0455 | 0.59 | 1000 | 4.9341 |
| 4.7253 | 0.88 | 1500 | 4.7023 |
| 4.4543 | 1.17 | 2000 | 4.5603 |
| 4.3097 | 1.47 | 2500 | 4.4412 |
| 4.2044 | 1.76 | 3000 | 4.3452 |
| 4.0828 | 2.05 | 3500 | 4.2731 |
| 3.91 | 2.35 | 4000 | 4.2325 |
| 3.8773 | 2.64 | 4500 | 4.1780 |
| 3.8389 | 2.93 | 5000 | 4.1277 |
| 3.6395 | 3.23 | 5500 | 4.1337 |
| 3.5987 | 3.52 | 6000 | 4.1049 |
| 3.5818 | 3.81 | 6500 | 4.0689 |
| 3.4719 | 4.11 | 7000 | 4.0787 |
| 3.3261 | 4.4 | 7500 | 4.0750 |
| 3.318 | 4.69 | 8000 | 4.0650 |
| 3.317 | 4.99 | 8500 | 4.0531 |
| 3.1547 | 5.28 | 9000 | 4.0712 |
| 3.141 | 5.58 | 9500 | 4.0701 |
| 3.1449 | 5.87 | 10000 | 4.0702 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
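## Usage
A minimal generation sketch (assuming the tokenizer was pushed with the model; the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/aochildes_cbt_log_rarity-mixed-seed")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```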
|
azhang1212/angela_shuffle_tokens_regular_eval
|
azhang1212
| 2023-08-01T16:08:58Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:Davlan/afro-xlmr-base",
"base_model:finetune:Davlan/afro-xlmr-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-01T14:53:15Z |
---
license: mit
base_model: Davlan/afro-xlmr-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: angela_shuffle_tokens_regular_eval
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# angela_shuffle_tokens_regular_eval
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1543
- Precision: 0.3906
- Recall: 0.2735
- F1: 0.3218
- Accuracy: 0.9556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1729 | 1.0 | 1283 | 0.1417 | 0.4191 | 0.1292 | 0.1975 | 0.9580 |
| 0.1431 | 2.0 | 2566 | 0.1365 | 0.4356 | 0.1984 | 0.2726 | 0.9585 |
| 0.1253 | 3.0 | 3849 | 0.1404 | 0.4376 | 0.2156 | 0.2889 | 0.9588 |
| 0.1064 | 4.0 | 5132 | 0.1457 | 0.3784 | 0.2850 | 0.3251 | 0.9545 |
| 0.089 | 5.0 | 6415 | 0.1543 | 0.3906 | 0.2735 | 0.3218 | 0.9556 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
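## Usage
A minimal inference sketch (the entity label set is not documented in this card, so the output groups are whatever the fine-tuning data used):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="azhang1212/angela_shuffle_tokens_regular_eval",
    aggregation_strategy="simple",
)
print(ner("Angela traveled from Nairobi to Mombasa last week."))
```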
|
StereoBartender/WhiskeyWithGrape
|
StereoBartender
| 2023-08-01T16:04:04Z | 0 | 1 | null |
[
"license:other",
"region:us"
] | null | 2023-08-01T13:45:55Z |
---
license: other
---
Greetings!
<center><img src="https://huggingface.co/Berry-real/WhiskeyWithGrape/resolve/main/suwa.png" width="50%"></center>
Just an interesting mix randomly created one evening. It's a WIP; mixing it with something else is recommended to soften the bitterness. Enjoy!
I'll refine it a bit later.
Recommended workflow: generate at 512x1024 and upscale with img2img + ControlNet (colorfix + sharp), at a denoise of about 0.4-0.6 depending on the image.
## Sample images
<center><img src="https://huggingface.co/Berry-real/WhiskeyWithGrape/resolve/main/reimu.png"></center>
<center><img src="https://huggingface.co/Berry-real/WhiskeyWithGrape/resolve/main/marisa.png"></center>
<center><img src="https://huggingface.co/Berry-real/WhiskeyWithGrape/resolve/main/00057-2617748669.png"></center>
## Recommended servings
- https://civitai.com/models/11772?modelVersionId=25820
## Big thanks (To be completed, check them out!)
- [Closertodeath](https://huggingface.co/closertodeath) for breathtaking loras
- [Luna](https://huggingface.co/SweetLuna) for Aurora (base model)
- TODO: receive the recipe and credit all the amazing creators
## License
You are free to:
1. Share — copy and redistribute the material in any medium or format
2. Adapt — remix, transform, and build upon the material, as long as you freely share the changes
Under the following terms:
1. You cannot use the model to deliberately produce nor share illegal or harmful outputs or content
2. Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
3. You may not use the material for commercial purposes, whether it be as a service, sold as is or merged into other material.
4. If you grant access to a modified version of the model available to users over a network, you must make your modified model available to those users immediately.
|
MiBo/llama2-ML-ArXiv-Papers
|
MiBo
| 2023-08-01T15:59:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-31T10:28:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
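## Usage
A minimal sketch of loading this adapter on top of its base model with the quantization config above (requires `bitsandbytes` and `accelerate`; the base model id is an assumption, so check `base_model_name_or_path` in `adapter_config.json`):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_id = "meta-llama/Llama-2-7b-hf"  # assumption: read the actual base model from adapter_config.json
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "MiBo/llama2-ML-ArXiv-Papers")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```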
|
tamiti1610001/bert-fine-tuned-cola
|
tamiti1610001
| 2023-08-01T15:58:51Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-01T15:18:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-fine-tuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5804132033917235
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6993
- Matthews Correlation: 0.5804
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4983 | 1.0 | 535 | 0.4890 | 0.5078 |
| 0.2856 | 2.0 | 1070 | 0.5001 | 0.5678 |
| 0.1774 | 3.0 | 1605 | 0.6993 | 0.5804 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
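## Usage
A minimal inference sketch (labels may surface as `LABEL_0`/`LABEL_1`, i.e. unacceptable/acceptable, unless `id2label` was customised):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="tamiti1610001/bert-fine-tuned-cola")
print(classifier("The book was written by the author."))
print(classifier("The book was wrote by the author."))
```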
|
NprogramDev/ppo-Huggy
|
NprogramDev
| 2023-08-01T15:44:41Z | 11 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-01T13:35:10Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: NprogramDev/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
geranio/ppo-LunarLander-v2
|
geranio
| 2023-08-01T15:43:21Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-01T15:42:59Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.80 +/- 22.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename assumed; adjust to the actual file in the repo)
checkpoint = load_from_hub("geranio/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
dfalvearg/dqn-SpaceInvadersNoFrameskip-v4
|
dfalvearg
| 2023-08-01T15:33:17Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-01T15:32:50Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 329.00 +/- 157.97
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dfalvearg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dfalvearg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dfalvearg
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
traeval/llama2-qlora-finetunined-french
|
traeval
| 2023-08-01T15:30:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-01T15:30:28Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
AdirK/ppo-Huggy
|
AdirK
| 2023-08-01T15:30:17Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-01T15:30:12Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: AdirK/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kupkus1/TinyBunny
|
kupkus1
| 2023-08-01T15:26:21Z | 0 | 0 | null |
[
"ru",
"en",
"uk",
"region:us"
] | null | 2023-08-01T15:25:28Z |
---
language:
- ru
- en
- uk
---
|
bayartsogt/wav2vec2-base-mn-pretrain-42h
|
bayartsogt
| 2023-08-01T15:24:34Z | 48 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"pretraining",
"speech",
"mn",
"dataset:test",
"arxiv:2006.11477",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-23T05:20:48Z |
---
language: mn
datasets:
- test
tags:
- speech
license: apache-2.0
---
# Wav2Vec2-Base
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Data
- Sample rate: 16Khz
- Total pretrained data: 42H
- Duration (sec):
- mean: 5.276451094408402
- std: 2.2694219711399533
- max: 12.435937673420312
- min: 0.0005440165748211712
# Convert from FAIRSEQ to HF
1. Create a config
```python
from transformers import Wav2Vec2Config
config = Wav2Vec2Config.from_pretrained('facebook/wav2vec2-base')
config.save_pretrained('./')
```
2. Convert using [the script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py) written by HF team
```bash
wget https://raw.githubusercontent.com/huggingface/transformers/main/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py
hf_name="<my-hf-repo-name>"
ckpt="<path-to-pth-checkpoint>"
python ./convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py \
--pytorch_dump_folder ${hf_name} \
--checkpoint_path ${ckpt} \
--config_path ./config.json \
--not_finetuned
```
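# Usage (feature extraction)
A minimal sketch: this is a pretraining-only checkpoint, so it is used here for feature extraction or further fine-tuning rather than ASR. The feature extractor is loaded from the base checkpoint in case none is bundled with this repo.
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Feature extractor from the base checkpoint, in case this repo does not include a preprocessor config
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("bayartsogt/wav2vec2-base-mn-pretrain-42h")

waveform = torch.randn(16000)  # one second of 16 kHz audio as a placeholder
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # shape: (1, frames, 768)
```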
|
checcoli/ppo-LunarLander-v2
|
checcoli
| 2023-08-01T15:17:31Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-01T15:17:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.18 +/- 19.75
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename assumed; adjust to the actual file in the repo)
checkpoint = load_from_hub("checcoli/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Blaise-g/longt5_tglobal_large_scitldr
|
Blaise-g
| 2023-08-01T15:08:55Z | 119 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"longt5",
"text2text-generation",
"summarization",
"biomedical papers",
"en",
"dataset:Blaise-g/scitldr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-08-09T12:44:09Z |
---
language: en
tags:
- summarization
- biomedical papers
widget:
- text: "Biomedical paper of choice \U0001F917"
datasets:
- Blaise-g/scitldr
---
|
Blaise-g/led_pubmed_sumpubmed_1
|
Blaise-g
| 2023-08-01T15:08:45Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"led",
"text2text-generation",
"summarization",
"en",
"dataset:Blaise-g/autotrain-data-SumPubmed",
"dataset:Blaise-g/SumPubmed",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-07-29T00:41:04Z |
---
tags:
- summarization
language:
- en
widget:
- text: "Biomedical paper of choice \U0001F917"
datasets:
- Blaise-g/autotrain-data-SumPubmed
- Blaise-g/SumPubmed
co2_eq_emissions:
emissions: 1027.9
model-index:
- name: Blaise-g/led_pubmed_sumpubmed_1
results:
- task:
type: summarization
name: Summarization
dataset:
name: Blaise-g/SumPubmed
type: Blaise-g/SumPubmed
config: Blaise-g--SumPubmed
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 41.2523
verified: true
- name: ROUGE-2
type: rouge
value: 11.1291
verified: true
- name: ROUGE-L
type: rouge
value: 20.2531
verified: true
- name: ROUGE-LSUM
type: rouge
value: 37.1502
verified: true
- name: loss
type: loss
value: 6.371099948883057
verified: true
- name: gen_len
type: gen_len
value: 193.3744
verified: true
---
# Validation Metrics
- Loss: 2.133
- Rouge1: 45.861
- Rouge2: 14.179
- RougeL: 23.565
- RougeLsum: 40.908
- Gen Len: 195.334
|
Blaise-g/longt5_tglobal_large_sumpubmed
|
Blaise-g
| 2023-08-01T15:08:32Z | 20 | 5 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"longt5",
"text2text-generation",
"summarization",
"biomedical papers",
"en",
"dataset:Blaise-g/SumPubmed",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-08-08T09:34:34Z |
---
language: en
tags:
- summarization
- biomedical papers
widget:
- text: "Biomedical paper of choice \U0001F917"
datasets:
- Blaise-g/SumPubmed
---
|
gzeskas/test-model-upload-1
|
gzeskas
| 2023-08-01T15:02:43Z | 193 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"art",
"artistic",
"anime",
"en",
"license:other",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-24T08:48:29Z |
---
language:
- en
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- art
- artistic
- diffusers
- anime
inference: false
---
# Dream Shaper
## Official Repository
Read more about this model here: https://civitai.com/models/4384/dreamshaper
Also, please support the model by giving it 5 stars and a heart, which will notify you of new updates.
Please consider supporting me on Patreon or buying me a coffee:
- https://www.patreon.com/Lykon275
- https://snipfeed.co/lykon
You can run this model on:
- https://huggingface.co/spaces/Lykon/DreamShaper-webui
- Mage.space, sinkin.ai and more
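## Usage (with diffusers)
A minimal sketch (the prompt, dtype and device are illustrative choices):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("gzeskas/test-model-upload-1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("portrait of a warrior, intricate details, soft lighting", num_inference_steps=30).images[0]
image.save("dreamshaper_sample.png")
```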
|
jariasn/a2c-PandaReachDense-v2
|
jariasn
| 2023-08-01T15:00:33Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-01T14:57:55Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.46 +/- 0.59
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (filename assumed; adjust to the actual file in the repo)
checkpoint = load_from_hub("jariasn/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
mrichardt/llama-101
|
mrichardt
| 2023-08-01T14:53:55Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"llama",
"text-generation",
"autotrain",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-01T12:22:37Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
---
# Model Trained Using AutoTrain
A project to get familiar with fine-tuning a Llama-2 model, primarily used to gain experience.
|
Davietheman/valorant-neon
|
Davietheman
| 2023-08-01T14:22:52Z | 2 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-01T14:18:48Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Valorant_Neon Dreambooth model trained by Davietheman with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
AtilliO/ppo-Huggy
|
AtilliO
| 2023-08-01T14:17:06Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-01T14:17:01Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: AtilliO/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jariasn/a2c-AntBulletEnv-v0
|
jariasn
| 2023-08-01T14:10:18Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-01T14:09:11Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1896.57 +/- 121.74
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (filename assumed; adjust to the actual file in the repo)
checkpoint = load_from_hub("jariasn/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Testingkon/Railed_reil
|
Testingkon
| 2023-08-01T13:51:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-01T13:45:35Z |
---
license: creativeml-openrail-m
---
|
IbrahimSalah/Arabic_Syllables_to_text_Converter_Using_MT5
|
IbrahimSalah
| 2023-08-01T13:41:38Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-28T11:24:16Z |
# Arabic syllables to word converter using MT5 model
This model converts Arabic syllables into full words.\
It is intended to be used after our [syllable-based wav2vec model](https://huggingface.co/IbrahimSalah/Arabic_speech_Syllables_recognition_Using_Wav2vec2).
# Example :
-> input : بِاْ لِنْ نِسْ بَ تِ لِ لَسْ سُيْ يَ اِحْ مِمْ مِنْ طَ قَ تِشْ شَرْ قِلْ ءَوْ سَطْ\
-> output :بِالنِسْبَةِ لِلسُنْيَاح مِن مِنْطَقَةِ الشَرْق الأَوْسَط
# To use the model, the input needs special preprocessing steps.
Please refer to this notebook for further details : [Syllable to text example](https://colab.research.google.com/drive/1VdY16ADTUq6SKcBiORbMm7c-BJC3JxLS?usp=sharing)
|
vrajur/ppo-LunarLander-v2
|
vrajur
| 2023-08-01T13:35:17Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-01T13:34:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 237.79 +/- 75.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename assumed; adjust to the actual file in the repo)
checkpoint = load_from_hub("vrajur/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
zein-barhoum/q-FrozenLake-v1-4x4-noSlippery
|
zein-barhoum
| 2023-08-01T13:34:04Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-01T13:34:01Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the helper defined in the Deep RL Course notebook; it downloads and unpickles the Q-table dict
model = load_from_hub(repo_id="zein-barhoum/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"], is_slippery=False)
```
|
s3nh/airoboros-33b-gpt4-m2.0-GGML
|
s3nh
| 2023-08-01T13:27:02Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation-inference",
"text-generation",
"en",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-01T07:32:34Z |
---
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
library_name: transformers
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/jondurbin/airoboros-l2-33b-gpt4-m2.0).
### inference
```python
import ctransformers
from ctransformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
max_new_tokens=256,
temperature=0.9,
top_p= 0.7)
```
# Original model card
|
NEO946B/Reinforce-Cartpole-v1
|
NEO946B
| 2023-08-01T13:23:45Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-01T13:23:13Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
w601sxs/b1ade-1b-wizard-chkpt-799k
|
w601sxs
| 2023-08-01T13:14:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-01T13:14:12Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
w601sxs/b1ade-1b-wizard-chkpt
|
w601sxs
| 2023-08-01T13:13:14Z | 2 | 0 |
peft
|
[
"peft",
"pytorch",
"gpt_neox",
"region:us"
] | null | 2023-08-01T13:12:35Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
IHaveNoClueAndIMustPost/llama2-22b-wizard_vicuna-ggml
|
IHaveNoClueAndIMustPost
| 2023-08-01T13:05:15Z | 0 | 1 | null |
[
"llama",
"llama-2",
"license:other",
"region:us"
] | null | 2023-08-01T12:01:35Z |
---
license: other
tags:
- llama
- llama-2
---
A 22B model merge by [grimpep](https://huggingface.co/grimpep) mixing [13Bv2-llama-modelmerge](https://huggingface.co/grimpep/13Bv2-llama-modelmerge) with [Wizard-Vicuna-30B-Superhot-8K](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Superhot-8K-fp16).
<br>Please see the [original repo](https://huggingface.co/grimpep/llama2-22b-wizard_vicuna) for further information.<br><br>
From my brief testing, this model works great for chat or roleplaying using Llama2 syntax along with [SimpleProxy](https://github.com/anon998/simple-proxy-for-tavern) or SimpleProxy-style prompt instructions.
|
efederici/it5-base-summarization
|
efederici
| 2023-08-01T13:01:11Z | 192 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language:
- it
tags:
- summarization
---
# **Italian T5 Abstractive Summarization**
gsarti/it5-base fine-tuned in Italian for abstractive text summarization.
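## Usage
A minimal usage sketch (the input text is a placeholder and the generation lengths are illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="efederici/it5-base-summarization")

testo = "Inserisci qui un articolo in italiano da riassumere..."  # placeholder Italian text
print(summarizer(testo, max_length=64, min_length=10)[0]["summary_text"])
```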
|
sasi2400/IntangibleBERT
|
sasi2400
| 2023-08-01T12:59:31Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:ProsusAI/finbert",
"base_model:finetune:ProsusAI/finbert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-31T11:32:51Z |
---
base_model: ProsusAI/finbert
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: IntangibleBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IntangibleBERT
This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0023
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 64 | 0.0094 | 1.0 |
| No log | 2.0 | 128 | 0.0023 | 1.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
efederici/sentence-BERTino-v2-mmarco-4m
|
efederici
| 2023-08-01T12:59:09Z | 2 | 2 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"it",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-07T15:50:47Z |
---
pipeline_tag: sentence-similarity
license: apache-2.0
language:
- it
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-BERTino-v2-mmarco-4m
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is sentence-BERTino-v2-pt fine-tuned on ~4M mMARCO examples.
Use `query:` and `passage:` as prefix identifiers for questions and documents respectively.
- loss: MultipleNegativesRankingLoss
- infrastructure: A100 80GB
If you find this project useful, consider supporting its development:
[](https://bmc.link/edoardofederici)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = [
"query: Questo è un esempio di frase",
"passage: Questo è un ulteriore esempio"
]
model = SentenceTransformer('efederici/sentence-BERTino-v2-mmarco-4m')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this:
1. pass your input through the transformer model
2. apply the right pooling-operation on-top of the contextualized word embeddings
```python
from transformers import AutoTokenizer, AutoModel
import torch
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = [
"query: Questo è un esempio di frase",
"passage: Questo è un ulteriore esempio"
]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('efederici/sentence-BERTino-v2-mmarco-4m')
model = AutoModel.from_pretrained('efederici/sentence-BERTino-v2-mmarco-4m')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
|
ai-characters/AI-Voice-Models-by-AI_Characters
|
ai-characters
| 2023-08-01T12:58:33Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-28T04:41:23Z |
---
license: openrail
---
If you like what I am doing, feel free to donate to my KoFi: https://ko-fi.com/aicharacters
Here you can find all of my current AI voice models! They were all created using RVC v2.
Note that some models may have multiple versions, in which case I recommend using the newer one, as it is likely of higher quality!
Feel free to also check out my StableDiffusion AI art models here on Huggingface or alternatively on CivitAI: https://civitai.com/user/AI_Characters
|
DuSommeville/CREATINGESSENCE
|
DuSommeville
| 2023-08-01T12:54:35Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-01T12:54:35Z |
---
license: creativeml-openrail-m
---
|
rahulhuddar/llama2-chat-hub-my-finetuned-model
|
rahulhuddar
| 2023-08-01T12:51:37Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-01T12:51:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
heegyu/LIMA-13b-hf
|
heegyu
| 2023-08-01T12:49:55Z | 3,656 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-01T12:29:28Z |
---
license: other
---
LLaMA-13B converted to work with Transformers/HuggingFace. This is under a special license; please see the LICENSE file for details.
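A minimal loading sketch with Transformers (fp16 and `device_map="auto"` are illustrative choices and require `accelerate`; it assumes the tokenizer files are included in the repo):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("heegyu/LIMA-13b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "heegyu/LIMA-13b-hf", torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("The theory of relativity states that", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```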
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
exploring potential applications such as question answering, natural language understanding or reading comprehension,
understanding capabilities and limitations of current language models, and developing techniques to improve those,
evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLama Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLama Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that a lower value is better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary bias of our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
akashAlphastream/distilbert-base-uncased-finetuned-cola
|
akashAlphastream
| 2023-08-01T12:18:09Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-01T12:12:32Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7872
- Matthews Correlation: 0.5411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5204 | 1.0 | 535 | 0.4614 | 0.4869 |
| 0.3459 | 2.0 | 1070 | 0.4912 | 0.5185 |
| 0.2251 | 3.0 | 1605 | 0.6142 | 0.5150 |
| 0.1747 | 4.0 | 2140 | 0.7872 | 0.5411 |
| 0.1223 | 5.0 | 2675 | 0.8451 | 0.5309 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
Ilovemykid/sim
|
Ilovemykid
| 2023-08-01T12:11:50Z | 0 | 1 |
nemo
|
[
"nemo",
"text-to-video",
"en",
"zu",
"dataset:Anthropic/hh-rlhf",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-video
| 2023-08-01T12:04:57Z |
---
license: creativeml-openrail-m
datasets:
- Anthropic/hh-rlhf
language:
- en
- zu
metrics:
- character
- accuracy
- code_eval
- bertscore
library_name: nemo
pipeline_tag: text-to-video
---
|
draziert/poca-SoccerTwos
|
draziert
| 2023-08-01T12:06:18Z | 31 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-08-01T12:05:50Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: draziert/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Ayushnangia/llama2-qlora-finetunined-french
|
Ayushnangia
| 2023-08-01T12:02:06Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-01T12:01:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the code sketch after this list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
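For illustration only, the configuration above roughly corresponds to the following `BitsAndBytesConfig`; the base checkpoint shown is an assumption, since the card does not name the underlying Llama 2 model:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Reconstruction of the quantization settings listed above (4-bit nf4, fp16 compute, no double quant).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Assumed base model id -- the card does not state which Llama 2 checkpoint the adapter was trained on.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Ayushnangia/llama2-qlora-finetunined-french")
```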
### Framework versions
- PEFT 0.5.0.dev0
|
autosyrup/bert
|
autosyrup
| 2023-08-01T11:54:37Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-31T19:18:16Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3752
- Precision: 0.5495
- Recall: 0.5949
- F1: 0.5713
- Accuracy: 0.9455
## Model description
More information needed
## Intended uses & limitations
More information needed
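Since the intended use is not documented, here is a minimal, hypothetical sketch of running the fine-tuned token classifier with the transformers pipeline (the entity label set is not stated in this card):
```python
from transformers import pipeline

# Hypothetical usage sketch: tag entities in a sentence with the fine-tuned model.
ner = pipeline(
    "token-classification",
    model="autosyrup/bert",
    aggregation_strategy="simple",  # merge word pieces into whole-entity spans
)
print(ner("Acme Corp. hired Jane Doe in Berlin last May."))
```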
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 151 | 0.1826 | 0.4095 | 0.4084 | 0.4089 | 0.9362 |
| No log | 2.0 | 302 | 0.1684 | 0.4941 | 0.5303 | 0.5116 | 0.9442 |
| No log | 3.0 | 453 | 0.2528 | 0.5197 | 0.4477 | 0.4810 | 0.9398 |
| 0.1001 | 4.0 | 604 | 0.2100 | 0.5182 | 0.5583 | 0.5375 | 0.9439 |
| 0.1001 | 5.0 | 755 | 0.2556 | 0.5207 | 0.4783 | 0.4986 | 0.9419 |
| 0.1001 | 6.0 | 906 | 0.2908 | 0.4132 | 0.4204 | 0.4168 | 0.9365 |
| 0.0205 | 7.0 | 1057 | 0.3046 | 0.5 | 0.6236 | 0.5550 | 0.9435 |
| 0.0205 | 8.0 | 1208 | 0.3057 | 0.5324 | 0.5750 | 0.5529 | 0.9458 |
| 0.0205 | 9.0 | 1359 | 0.3122 | 0.5626 | 0.5776 | 0.5700 | 0.9469 |
| 0.0082 | 10.0 | 1510 | 0.3673 | 0.5733 | 0.5263 | 0.5488 | 0.9441 |
| 0.0082 | 11.0 | 1661 | 0.3432 | 0.5482 | 0.5270 | 0.5374 | 0.9455 |
| 0.0082 | 12.0 | 1812 | 0.3305 | 0.5590 | 0.5716 | 0.5652 | 0.9445 |
| 0.0082 | 13.0 | 1963 | 0.3293 | 0.5434 | 0.6009 | 0.5707 | 0.9431 |
| 0.005 | 14.0 | 2114 | 0.4080 | 0.5627 | 0.5803 | 0.5713 | 0.9451 |
| 0.005 | 15.0 | 2265 | 0.3752 | 0.5495 | 0.5949 | 0.5713 | 0.9455 |
| 0.005 | 16.0 | 2416 | 0.4140 | 0.5823 | 0.5470 | 0.5641 | 0.9455 |
| 0.002 | 17.0 | 2567 | 0.4308 | 0.5555 | 0.5670 | 0.5612 | 0.9438 |
| 0.002 | 18.0 | 2718 | 0.4389 | 0.5594 | 0.5676 | 0.5635 | 0.9436 |
| 0.002 | 19.0 | 2869 | 0.4463 | 0.5609 | 0.5676 | 0.5642 | 0.9444 |
| 0.0007 | 20.0 | 3020 | 0.4512 | 0.5648 | 0.5636 | 0.5642 | 0.9448 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
AIeseshi/AIeseshi_LoRA
|
AIeseshi
| 2023-08-01T11:50:41Z | 0 | 0 | null |
[
"text-to-image",
"ja",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-27T11:54:40Z |
---
license: creativeml-openrail-m
language:
- ja
pipeline_tag: text-to-image
---
This is AIeseshi.
<br>
I created LoRAs with LECO.<br>
<br>
**Breast size slider LoRA (huge_breasts_1girl)** <br>
This is my first LoRA generated with LECO.<br>
Breast size slider: trigger word 1girl<br>
No breasts-related prompts are needed.<br>
Training was done on BRAV6.<br>
It may work with other models as well, but use with BRA-based models is recommended.
<br>
<img src="https://huggingface.co/AIeseshi/huge_breasts.safetensors/resolve/main/huge_breasts1.jpg" width="100%" height="100%">
<img src="https://huggingface.co/AIeseshi/huge_breasts.safetensors/resolve/main/huge_breasts2.jpg" width="100%" height="100%">
↑ Images provided by RAN: https://twitter.com/RAN_kimono_jp<br>
<br>
Breast size sliders were created for each of the following models.<br>
BRAV6:huge_breasts_BRA.safetensors<br>
kisaragi:huge_breasts_kisaragi_yayoi_mix_v1.safetensors<br>
yayoi_v1:huge_breasts_yayoi_mix_v1.safetensors<br>
yayoi_v2:huge_breasts_yayoi_mix_v2.safetensors<br>
|
DavidLazer/llama2_finetuned_chatbot
|
DavidLazer
| 2023-08-01T11:34:18Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-08-01T11:17:12Z |
---
tags:
- generated_from_trainer
model-index:
- name: llama2_finetuned_chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2_finetuned_chatbot
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
wuxianchao/lazylora-7bhf
|
wuxianchao
| 2023-08-01T11:21:16Z | 0 | 1 | null |
[
"arxiv:2305.14314",
"arxiv:2106.09685",
"arxiv:2110.07602",
"arxiv:2104.08691",
"arxiv:2303.16199",
"license:llama2",
"region:us"
] | null | 2023-07-23T22:49:25Z |
---
license: llama2
---
## Lazy LoRA
### Benefits
0. using the updated [Meta's LLaMA-2 models](https://huggingface.co/meta-llama/Llama-2-7b-hf).
1. support [4-bit qlora](https://arxiv.org/abs/2305.14314), extreme GPU memory and inference time saving;
2. comparable MMLU evaluation dataset results:
| | eval | test | comp-eval | comp-test |
|---------------|--------|--------|-----------|-----------|
|llama2-7b | 46.68% | 46.82% | | |
|ckpt-200 | 44.28% | 46.03% | -2.40% | -0.79% |
|ckpt-600 | 45.26% | 45.61% | -1.42% | -1.21% |
llama2-7b: "4e4d531bcab430a66c4d562b7e89e21c0fa235ea"
### Introduction
Lazy LoRA determines the rank of the LoRA layers from the singular values of the pretrained weight matrices.
It also combines:
1. LoRA: [LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS](https://arxiv.org/abs/2106.09685)
2. Prefix Tuning: [Prefix-Tuning: Optimizing Continuous Prompts for Generation](https://aclanthology.org/2021.acl-long.353/), [P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks](https://arxiv.org/pdf/2110.07602.pdf)
3. Prompt Tuning: [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/abs/2104.08691)
4. LLaMA adapter: [LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention](https://arxiv.org/abs/2303.16199)
in one model.
This allows you to perform LoRA (additional low-rank adapters inserted into each linear layer) and prompt learning (additional virtual tokens attached to the input and to the attention layers, acting as `past_key_values`).
## Usage:
```python
import sys
sys.path.insert(1, '/workspace/asr/peft/src')
# TODO set this path to the lazy-lora source code path,
# or you can install it from source code:
# TODO, please install lazylora for usage:
# git clone [email protected]:Xianchao-Wu/peft.git
# cd peft
# python setup.py install
from transformers import (AutoTokenizer,
AutoModelForCausalLM, BitsAndBytesConfig)
from peft import PeftModel, PeftConfig
import os
import torch
#import ipdb; ipdb.set_trace()
cache_dir="/workspace/asr/peft/qlora"
# TODO set this cache_dir to the path where you
# stored (or, want to store) llama2-7bhf model
lazylora_dir=os.getcwd()
# the path that contains 'adapter_config.json'
# and 'adapter_model.bin'
config = PeftConfig.from_pretrained(lazylora_dir)
tokenizer = AutoTokenizer.from_pretrained(
config.base_model_name_or_path,
cache_dir=cache_dir,
use_auth_token=True
)
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type='nf4',
bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
quantization_config=bnb_config,
device_map="auto",
cache_dir=cache_dir,
use_auth_token=True
)
#model.print_trainable_parameters()
print(sum(p.numel() for p in model.parameters()))
# 3,500,412,928 -> half-size of 7B due to 4-bit loading
model = PeftModel.from_pretrained(model, lazylora_dir)
print('after adding lazy lora parameters:')
model.print_trainable_parameters()
# trainable params: 0 || all params: 3,660,359,168 || trainable%: 0.0
```
## MMLU result:
### MMLU eval result:
```json
{"mmlu_loss": 1.9065961667247102,
"mmlu_eval_accuracy_professional_medicine": 0.3870967741935484,
"mmlu_eval_accuracy_college_physics": 0.45454545454545453,
"mmlu_eval_accuracy_conceptual_physics": 0.34615384615384615,
"mmlu_eval_accuracy_econometrics": 0.3333333333333333,
"mmlu_eval_accuracy_high_school_chemistry": 0.45454545454545453,
"mmlu_eval_accuracy_nutrition": 0.5151515151515151,
"mmlu_eval_accuracy_high_school_computer_science": 0.5555555555555556,
"mmlu_eval_accuracy_security_studies": 0.4444444444444444,
"mmlu_eval_accuracy_world_religions": 0.6842105263157895,
"mmlu_eval_accuracy_anatomy": 0.5,
"mmlu_eval_accuracy_prehistory": 0.42857142857142855,
"mmlu_eval_accuracy_high_school_government_and_politics": 0.6666666666666666,
"mmlu_eval_accuracy_professional_accounting": 0.3225806451612903,
"mmlu_eval_accuracy_philosophy": 0.4411764705882353,
"mmlu_eval_accuracy_astronomy": 0.3125,
"mmlu_eval_accuracy_medical_genetics": 0.8181818181818182,
"mmlu_eval_accuracy_jurisprudence": 0.5454545454545454,
"mmlu_eval_accuracy_professional_law": 0.38235294117647056,
"mmlu_eval_accuracy_college_chemistry": 0.125,
"mmlu_eval_accuracy_moral_disputes": 0.4473684210526316,
"mmlu_eval_accuracy_abstract_algebra": 0.36363636363636365,
"mmlu_eval_accuracy_computer_security": 0.5454545454545454,
"mmlu_eval_accuracy_business_ethics": 0.5454545454545454,
"mmlu_eval_accuracy_virology": 0.5,
"mmlu_eval_accuracy_electrical_engineering": 0.375,
"mmlu_eval_accuracy_high_school_biology": 0.34375,
"mmlu_eval_accuracy_public_relations": 0.3333333333333333,
"mmlu_eval_accuracy_high_school_physics": 0.35294117647058826,
"mmlu_eval_accuracy_high_school_psychology": 0.65,
"mmlu_eval_accuracy_college_computer_science": 0.5454545454545454,
"mmlu_eval_accuracy_high_school_european_history": 0.7222222222222222,
"mmlu_eval_accuracy_international_law": 0.8461538461538461,
"mmlu_eval_accuracy_high_school_microeconomics": 0.2692307692307692,
"mmlu_eval_accuracy_college_biology": 0.25,
"mmlu_eval_accuracy_formal_logic": 0.14285714285714285,
"mmlu_eval_accuracy_machine_learning": 0.18181818181818182,
"mmlu_eval_accuracy_human_aging": 0.6956521739130435,
"mmlu_eval_accuracy_logical_fallacies": 0.5555555555555556,
"mmlu_eval_accuracy_clinical_knowledge": 0.41379310344827586,
"mmlu_eval_accuracy_high_school_macroeconomics": 0.3488372093023256,
"mmlu_eval_accuracy_miscellaneous": 0.5930232558139535,
"mmlu_eval_accuracy_sociology": 0.7272727272727273,
"mmlu_eval_accuracy_high_school_us_history": 0.6363636363636364,
"mmlu_eval_accuracy_college_medicine": 0.4090909090909091,
"mmlu_eval_accuracy_high_school_world_history": 0.5,
"mmlu_eval_accuracy_marketing": 0.8,
"mmlu_eval_accuracy_human_sexuality": 0.4166666666666667,
"mmlu_eval_accuracy_professional_psychology": 0.36231884057971014,
"mmlu_eval_accuracy_moral_scenarios": 0.24,
"mmlu_eval_accuracy_college_mathematics": 0.18181818181818182,
"mmlu_eval_accuracy_us_foreign_policy": 0.6363636363636364,
"mmlu_eval_accuracy_high_school_geography": 0.6818181818181818,
"mmlu_eval_accuracy_high_school_statistics": 0.34782608695652173,
"mmlu_eval_accuracy_high_school_mathematics": 0.2413793103448276,
"mmlu_eval_accuracy_elementary_mathematics": 0.3170731707317073,
"mmlu_eval_accuracy_management": 0.36363636363636365,
"mmlu_eval_accuracy_global_facts": 0.2,
"mmlu_eval_accuracy": 0.4526436056641111}
```
### MMLU test result:
```json
{"mmlu_loss": 1.925738222594615,
"mmlu_test_accuracy_business_ethics": 0.53,
"mmlu_test_accuracy_medical_genetics": 0.53,
"mmlu_test_accuracy_international_law": 0.628099173553719,
"mmlu_test_accuracy_professional_law": 0.3363754889178618,
"mmlu_test_accuracy_econometrics": 0.32456140350877194,
"mmlu_test_accuracy_high_school_biology": 0.4806451612903226,
"mmlu_test_accuracy_computer_security": 0.57,
"mmlu_test_accuracy_global_facts": 0.34,
"mmlu_test_accuracy_clinical_knowledge": 0.46037735849056605,
"mmlu_test_accuracy_miscellaneous": 0.6347381864623244,
"mmlu_test_accuracy_high_school_microeconomics": 0.39915966386554624,
"mmlu_test_accuracy_public_relations": 0.5636363636363636,
"mmlu_test_accuracy_high_school_computer_science": 0.45,
"mmlu_test_accuracy_human_sexuality": 0.5572519083969466,
"mmlu_test_accuracy_virology": 0.43373493975903615,
"mmlu_test_accuracy_human_aging": 0.5695067264573991,
"mmlu_test_accuracy_high_school_world_history": 0.6371308016877637,
"mmlu_test_accuracy_college_medicine": 0.3699421965317919,
"mmlu_test_accuracy_marketing": 0.6923076923076923,
"mmlu_test_accuracy_world_religions": 0.6783625730994152,
"mmlu_test_accuracy_college_physics": 0.23529411764705882,
"mmlu_test_accuracy_high_school_chemistry": 0.33004926108374383,
"mmlu_test_accuracy_elementary_mathematics": 0.2751322751322751,
"mmlu_test_accuracy_high_school_psychology": 0.6018348623853211,
"mmlu_test_accuracy_sociology": 0.5920398009950248,
"mmlu_test_accuracy_astronomy": 0.4342105263157895,
"mmlu_test_accuracy_high_school_mathematics": 0.27037037037037037,
"mmlu_test_accuracy_high_school_us_history": 0.5343137254901961,
"mmlu_test_accuracy_logical_fallacies": 0.49693251533742333,
"mmlu_test_accuracy_high_school_statistics": 0.19907407407407407,
"mmlu_test_accuracy_management": 0.5825242718446602,
"mmlu_test_accuracy_moral_disputes": 0.5057803468208093,
"mmlu_test_accuracy_formal_logic": 0.24603174603174602,
"mmlu_test_accuracy_college_chemistry": 0.25,
"mmlu_test_accuracy_college_mathematics": 0.3,
"mmlu_test_accuracy_high_school_geography": 0.5050505050505051,
"mmlu_test_accuracy_machine_learning": 0.35714285714285715,
"mmlu_test_accuracy_philosophy": 0.5787781350482315,
"mmlu_test_accuracy_college_computer_science": 0.32,
"mmlu_test_accuracy_security_studies": 0.46938775510204084,
"mmlu_test_accuracy_abstract_algebra": 0.27,
"mmlu_test_accuracy_professional_psychology": 0.4526143790849673,
"mmlu_test_accuracy_college_biology": 0.4444444444444444,
"mmlu_test_accuracy_us_foreign_policy": 0.68,
"mmlu_test_accuracy_professional_medicine": 0.4522058823529412,
"mmlu_test_accuracy_prehistory": 0.48148148148148145,
"mmlu_test_accuracy_anatomy": 0.45925925925925926,
"mmlu_test_accuracy_moral_scenarios": 0.2346368715083799,
"mmlu_test_accuracy_nutrition": 0.4738562091503268,
"mmlu_test_accuracy_high_school_macroeconomics": 0.4461538461538462,
"mmlu_test_accuracy_high_school_european_history": 0.6181818181818182,
"mmlu_test_accuracy_jurisprudence": 0.5370370370370371,
"mmlu_test_accuracy_professional_accounting": 0.35815602836879434,
"mmlu_test_accuracy_high_school_government_and_politics": 0.6321243523316062,
"mmlu_test_accuracy_high_school_physics": 0.32450331125827814,
"mmlu_test_accuracy_electrical_engineering": 0.47586206896551725,
"mmlu_test_accuracy_conceptual_physics": 0.3872340425531915,
"mmlu_test_accuracy": 0.4560969792275357}
```
## License and intended use
This lazy-lora adapter is based on [Meta's LLaMA-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf), and using the [oasst1 dataset](https://huggingface.co/datasets/OpenAssistant/oasst1), following [Guanaco](https://huggingface.co/timdettmers/guanaco-65b).
The lazy-lora adapter weights are available under the LLaMA-2 license. Note that using the lazy-lora adapter weights requires access to the LLaMA model weights. Lazy lora is based on LLaMA and should therefore be used according to the LLaMA license.
## Risks and Biases
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. The model was trained on various public datasets; it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
|
autosyrup/roberta
|
autosyrup
| 2023-08-01T11:18:42Z | 28 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:Jean-Baptiste/roberta-large-ner-english",
"base_model:finetune:Jean-Baptiste/roberta-large-ner-english",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-31T10:36:47Z |
---
license: mit
base_model: Jean-Baptiste/roberta-large-ner-english
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta
This model is a fine-tuned version of [Jean-Baptiste/roberta-large-ner-english](https://huggingface.co/Jean-Baptiste/roberta-large-ner-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3908
- Precision: 0.5990
- Recall: 0.5581
- F1: 0.5778
- Accuracy: 0.9470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 151 | 0.2078 | 0.1899 | 0.2388 | 0.2115 | 0.9246 |
| No log | 2.0 | 302 | 0.1499 | 0.4322 | 0.5535 | 0.4854 | 0.9393 |
| No log | 3.0 | 453 | 0.1916 | 0.5204 | 0.4946 | 0.5072 | 0.9418 |
| 0.1542 | 4.0 | 604 | 0.1671 | 0.4615 | 0.5109 | 0.4849 | 0.9426 |
| 0.1542 | 5.0 | 755 | 0.1940 | 0.4841 | 0.4829 | 0.4835 | 0.9439 |
| 0.1542 | 6.0 | 906 | 0.2462 | 0.5066 | 0.5651 | 0.5343 | 0.9428 |
| 0.0616 | 7.0 | 1057 | 0.2106 | 0.5041 | 0.5271 | 0.5153 | 0.9437 |
| 0.0616 | 8.0 | 1208 | 0.2621 | 0.5620 | 0.5202 | 0.5403 | 0.9474 |
| 0.0616 | 9.0 | 1359 | 0.2903 | 0.5242 | 0.5550 | 0.5392 | 0.9440 |
| 0.0326 | 10.0 | 1510 | 0.3083 | 0.5883 | 0.5628 | 0.5753 | 0.9483 |
| 0.0326 | 11.0 | 1661 | 0.3125 | 0.5451 | 0.5853 | 0.5645 | 0.9444 |
| 0.0326 | 12.0 | 1812 | 0.3616 | 0.5503 | 0.5388 | 0.5445 | 0.9427 |
| 0.0326 | 13.0 | 1963 | 0.3398 | 0.5978 | 0.5023 | 0.5459 | 0.9447 |
| 0.0155 | 14.0 | 2114 | 0.2942 | 0.5701 | 0.5550 | 0.5625 | 0.9467 |
| 0.0155 | 15.0 | 2265 | 0.3723 | 0.5771 | 0.5597 | 0.5683 | 0.9462 |
| 0.0155 | 16.0 | 2416 | 0.3651 | 0.5751 | 0.5760 | 0.5755 | 0.9439 |
| 0.0062 | 17.0 | 2567 | 0.3674 | 0.5667 | 0.5891 | 0.5777 | 0.9455 |
| 0.0062 | 18.0 | 2718 | 0.3866 | 0.5897 | 0.5403 | 0.5639 | 0.9463 |
| 0.0062 | 19.0 | 2869 | 0.3908 | 0.5990 | 0.5581 | 0.5778 | 0.9470 |
| 0.0033 | 20.0 | 3020 | 0.4036 | 0.5914 | 0.5620 | 0.5763 | 0.9467 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
wilson-wei/whisper-tiny-finetuned-minds14
|
wilson-wei
| 2023-08-01T11:17:08Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-01T11:05:40Z |
---
language:
- en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-finetuned-minds-14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: MInDS-14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3578403216542217
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-finetuned-minds-14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the MInDS-14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5412
- Wer Ortho: 0.3581
- Wer: 0.3578
## Model description
More information needed
## Intended uses & limitations
More information needed
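As no usage example is given, a minimal, hypothetical transcription sketch with the transformers ASR pipeline could look like this (the audio file name is a placeholder):
```python
from transformers import pipeline

# Hypothetical usage sketch: transcribe a local audio file with the fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="wilson-wei/whisper-tiny-finetuned-minds14",
)
print(asr("banking_query.wav")["text"])  # placeholder file name
```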
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0121 | 4.46 | 1000 | 0.5412 | 0.3581 | 0.3578 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.0
- Tokenizers 0.13.3
|
himanimaheshwari3/distilbert-base-uncased-finetuned-DIS-mlm5
|
himanimaheshwari3
| 2023-08-01T11:06:28Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-01T11:02:23Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-DIS-mlm5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-DIS-mlm5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.036 | 1.0 | 2 | 4.3499 |
| 0.1722 | 2.0 | 4 | 4.3545 |
| 0.6087 | 3.0 | 6 | 5.8627 |
| 0.2151 | 4.0 | 8 | 3.6960 |
| 0.2115 | 5.0 | 10 | 3.2086 |
| 0.3443 | 6.0 | 12 | 5.1042 |
| 0.1082 | 7.0 | 14 | 4.0195 |
| 0.5068 | 8.0 | 16 | 3.6664 |
| 0.7362 | 9.0 | 18 | 4.3850 |
| 0.4281 | 10.0 | 20 | 4.6974 |
| 1.3107 | 11.0 | 22 | 4.3258 |
| 1.4157 | 12.0 | 24 | 4.8907 |
| 2.5918 | 13.0 | 26 | 4.6595 |
| 2.577 | 14.0 | 28 | 4.3417 |
| 1.6291 | 15.0 | 30 | 5.0013 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
embaas/sentence-transformers-gte-small
|
embaas
| 2023-08-01T11:04:18Z | 40 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-08-01T11:04:14Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# embaas/sentence-transformers-gte-small
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('embaas/sentence-transformers-gte-small')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=embaas/sentence-transformers-gte-small)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jscore2023/falcon-7b-3
|
jscore2023
| 2023-08-01T11:04:09Z | 1 | 0 |
peft
|
[
"peft",
"pytorch",
"falcon",
"custom_code",
"endpoints_compatible",
"region:us"
] | null | 2023-08-01T09:13:36Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
heegyu/RedTulu-Uncensored-3B-0719
|
heegyu
| 2023-08-01T10:57:18Z | 1,509 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-23T03:21:07Z |
---
license: apache-2.0
language:
- en
---
Base Model: togethercomputer/RedPajama-INCITE-Base-3B-v1
Dataset from https://github.com/allenai/open-instruct, uncensored using the code in ehartford/wizard_vicuna_70k_unfiltered.
Usage
```
### Human:
your instruction
### ASSISANT:
output will be generated and ended with <|endoftext|>
```
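A minimal, hypothetical generation sketch following the prompt format above (the sampling settings are arbitrary, and the `### ASSISANT:` spelling is copied verbatim from the format shown):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "heegyu/RedTulu-Uncensored-3B-0719"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build the prompt exactly as described in the usage section above.
prompt = "### Human:\nWrite a short poem about autumn.\n### ASSISANT:\n"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```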
|
JinsooKim/CartPole
|
JinsooKim
| 2023-08-01T10:57:06Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-01T10:57:00Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 177.70 +/- 9.43
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kadarm/llama2-7b-python
|
kadarm
| 2023-08-01T10:55:05Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-01T10:54:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
SiberiaSoft/SiberianPersonaFred
|
SiberiaSoft
| 2023-08-01T10:46:14Z | 590 | 6 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"ru",
"dataset:SiberiaSoft/SiberianPersonaChat",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-27T12:27:04Z |
---
license: mit
datasets:
- SiberiaSoft/SiberianPersonaChat
language:
- ru
pipeline_tag: text2text-generation
widget:
- text: '<SC6>Ты парень, консультант по разным вопросам. Ты очень умный. Любишь помогать собеседнику. Продолжи диалог:\nСобеседник: Почему трава зеленая?\nТы: <extra_id_0>'
- text: '<SC6>Ты парень, консультант по разным вопросам. Ты очень умный. Любишь помогать собеседнику. Продолжи диалог:\nСобеседник: Привет, как дела?\nТы: <extra_id_0>'
---
### SiberiaSoft/SiberianPersonaFred
This model is intended for imitating a persona in dialogue. More details [here](https://huggingface.co/datasets/SiberiaSoft/SiberianPersonaChat)
### Persona description format
1. You are a guy, an airplane pilot. You are into diving. You collect stamps. You love ancient architecture.
2. You are a girl, an artist. You are into neural-network art. You know how to program. You love drawing.
Facts about the persona can also be inserted into the prompt: full name, age, etc.
1. I am an 18-year-old girl. I study at a university. I live with my parents. I have a cat. I am looking for a guy to start a family.
Article on Habr: [link](https://habr.com/ru/articles/751580/)
### Inference code example
```python
import torch
import transformers
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
t5_tokenizer = transformers.GPT2Tokenizer.from_pretrained("SiberiaSoft/SiberianPersonaFred")
t5_model = transformers.T5ForConditionalGeneration.from_pretrained("SiberiaSoft/SiberianPersonaFred")
while True:
print('-'*80)
dialog = []
while True:
msg = input('H:> ').strip()
if len(msg) == 0:
break
msg = msg[0].upper() + msg[1:]
dialog.append('Собеседник: ' + msg)
    # The persona prompt goes at the beginning.
prompt = '<SC6>Ты парень, консультант по разным вопросам. Ты очень умный. Любишь помогать собеседнику. Продолжи диалог:' + '\n'.join(dialog) + '\nТы: <extra_id_0>'
input_ids = t5_tokenizer(prompt, return_tensors='pt').input_ids
out_ids = t5_model.generate(input_ids=input_ids.to(device), do_sample=True, temperature=0.9, max_new_tokens=512, top_p=0.85,
top_k=2, repetition_penalty=1.2)
t5_output = t5_tokenizer.decode(out_ids[0][1:])
if '</s>' in t5_output:
t5_output = t5_output[:t5_output.find('</s>')].strip()
t5_output = t5_output.replace('<extra_id_0>', '').strip()
t5_output = t5_output.split('Собеседник')[0].strip()
print('B:> {}'.format(t5_output))
dialog.append('Ты: ' + t5_output)
```
|
ztrip/autotrain-testtranste-79085141139
|
ztrip
| 2023-08-01T10:45:28Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"autotrain",
"translation",
"zh",
"en",
"dataset:ztrip/autotrain-data-testtranste",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-01T10:45:13Z |
---
tags:
- autotrain
- translation
language:
- zh
- en
datasets:
- ztrip/autotrain-data-testtranste
co2_eq_emissions:
emissions: 0.0013030083852032
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 79085141139
- CO2 Emissions (in grams): 0.0013
## Validation Metrics
- Loss: 6.040
- SacreBLEU: 0.000
- Gen len: 3.000
|
wilson-wei/whisper-small-finetuned-minds14
|
wilson-wei
| 2023-08-01T10:38:00Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-30T05:59:27Z |
---
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-small-finetuned-minds-14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: MInDS-14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.9778869778869779
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-finetuned-minds-14
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the MInDS-14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6328
- Wer Ortho: 0.8836
- Wer: 0.9779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0017 | 4.48 | 1000 | 0.6328 | 0.8836 | 0.9779 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.0
- Tokenizers 0.13.3
|
caseywilliams27/Ever-Growing-Demand-for-Pink-Diamonds
|
caseywilliams27
| 2023-08-01T10:24:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-01T10:17:27Z |
---
metrics:
- accuracy
---
---
license: openrail
language:
- ja
- fr
- it
- de
- en
tags:
- Pink Diamonds
- <strong>The Ever-Growing Demand for Pink Diamonds: A Sparkling Investment Trend</strong>
---
<p>Pink diamonds are a captivating gemstone rarity that has captured the attention of both investors and collectors. These magnificent colored diamonds have come to represent exclusivity, luxury, and style. Because of their distinct beauty and reputation as a wise investment choice, the demand for pink diamonds has increased significantly in recent years. The dynamics of the diamond market, the factors behind the rise in demand for pink diamonds, and the attractiveness of diamonds as alternative asset classes are all discussed in this article.</p>
<p><strong>1.Rarity and restricted Supply:</strong> The rarity and restricted supply of pink diamonds are at the root of their attractiveness. Pink diamonds are extremely rare in nature and make up a very small portion of the world's total production of diamonds. The majority of these diamonds were formerly obtained from the famous Argyle Diamond Mine in Australia, which stopped operating in 2020. Since there is no new supply of these alluring stones, their rarity has boosted their attractiveness, driving up demand from investors and collectors looking for a one-of-a-kind, limited-edition gem.</p>
<p><strong>2.Symbol of Elegance and Romance:</strong> Pink diamonds are a popular choice for engagement rings and other high-end jewelry because they represent refinement and romanticism. Because of their light and delicate colour, which evokes feelings of love and generosity, they are a sentimental and meaningful gift option for key events. As a result, buyers looking to commemorate life's milestones with a touch of elegance and emotion have seen a significant surge in demand for <a href="https://flawlessfinejewelry.com/lab-grown-pink-coloured-diamonds-search/"><strong>Lab-Grown Pink Diamond Engagement Rings.</strong></a></p>
<p><strong>3.Celebrity Endorsement and Media Influence:</strong> Pink diamonds have seen a considerable increase in demand as a result of celebrity endorsements and notable media appearances. The perception of these superb stones is tainted by high-profile individuals wearing stunning pink diamond jewelry on red carpets and in publications, which inspires aspirational enthusiasm. Because of the increased perception in the media that pink diamonds are the height of exclusivity and elegance, buyers seeking to imitate their favorite celebrities are buying more of them.</p>
<p><strong>4.Investment Potential and Inflation Protection:</strong> Pink diamonds have become a popular investment choice due to their potential price rise over time and as a hedge against inflation. Pink diamonds are the ideal alternative asset since they have a more steady value retention than traditional financial markets, which has attracted many investors. Since they are regarded as a tangible and transportable repository of wealth, their scarcity and limited supply add to their investment appeal. Pink diamonds may also act as a hedge against inflation, which adds to their allure for investors trying to diversify their holdings.</p>
<p><strong>5.Rising Affluence in Emerging economies:</strong> The rise of affluence in emerging economies, particularly in China and India, has considerably contributed to the rising demand for luxury items such as pink diamonds. As these regions' middle and upper classes grow, so does their desire for unique and distinguished goods. Pink diamonds, with their mesmerizing beauty and financial possibilities, have piqued the interest of these new sectors of luxury consumers, fueling demand even further.</p>
<strong>CONCLUSION:</strong>
<p>Pink diamonds' increasing popularity reflects their timeless beauty and status as a valuable investment. Scarcity and rarity will continue to define these precious jewels, increasing their allure among collectors, investors, and luxury fans. However, like with any investment, due diligence and expert counsel are essential for successfully navigating the diamond market. Pink diamonds' fascination is unquestionable, whether for personal decoration or investment goals, making them a brilliant treasure in the world of precious stones.</p>
|
Anjoe/german-poetry-gpt2-large
|
Anjoe
| 2023-08-01T10:24:13Z | 58 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-09T17:24:19Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: german-poetry-gpt2-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german-poetry-gpt2-large
This model is a fine-tuned version of [benjamin/gerpt2-large](https://huggingface.co/benjamin/gerpt2-large) on German poems.
It achieves the following results on the evaluation set:
- eval_loss: 3.5753
- eval_runtime: 100.7173
- eval_samples_per_second: 51.6
- eval_steps_per_second: 25.805
- epoch: 4.0
- step: 95544
## Model description
large version of gpt-2
## Intended uses & limitations
It could be used for poetry generation
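For example, a minimal generation sketch (the prompt and sampling settings are arbitrary choices):
```python
from transformers import pipeline

# Hypothetical usage sketch: continue a German verse with the fine-tuned model.
generator = pipeline("text-generation", model="Anjoe/german-poetry-gpt2-large")
out = generator(
    "Der Mond ist aufgegangen,",
    max_new_tokens=60,
    do_sample=True,
    top_p=0.92,
)
print(out[0]["generated_text"])
```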
## Training and evaluation data
The model was trained on German poems from Projekt Gutenberg.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
Anjoe/german-poetry-gpt2
|
Anjoe
| 2023-08-01T10:23:56Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-28T21:11:02Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: german-poetry-gpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german-poetry-gpt2
This model is a fine-tuned version of [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.8196
- eval_runtime: 43.8543
- eval_samples_per_second: 86.993
- eval_steps_per_second: 5.45
- epoch: 9.0
- step: 11520
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 22
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
with-madrid/with-e5-small-v2
|
with-madrid
| 2023-08-01T10:06:39Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-24T13:01:11Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# with-madrid/with-e5-small-v2
This model is to be used for information retrieval for https://with-madrid.com/
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('with-madrid/with-e5-small-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=with-madrid/with-e5-small-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 143 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 4.762918902353135e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 8,
"weight_decay": 0.00936376631468652
}
```
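For illustration, the parameters above roughly correspond to a `fit()` call like the sketch below; the training pairs and the base checkpoint (`intfloat/e5-small-v2`, inferred from the repository name) are assumptions, since the actual dataset is not published in this card:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder query/answer pairs standing in for the undisclosed training data.
train_examples = [
    InputExample(texts=["what services does WITH offer?", "WITH is a digital agency based in Madrid."]),
    InputExample(texts=["how can I get in touch?", "You can reach the team through the contact form."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

model = SentenceTransformer("intfloat/e5-small-v2")  # assumed base model
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,
    warmup_steps=8,
    scheduler="WarmupLinear",
    optimizer_params={"lr": 4.762918902353135e-05},
    weight_decay=0.00936376631468652,
)
```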
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
nokotin/q-FrozenLake-v1-4x4-noSlippery
|
nokotin
| 2023-08-01T10:06:31Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-01T10:06:29Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
# `load_from_hub` is not a library import; it is the helper defined in the Deep RL Course notebook (hf_hub_download + pickle).
import gymnasium as gym  # older course versions use `gym` instead

model = load_from_hub(repo_id="nokotin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|