| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-03 12:31:03) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 537 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-03 12:30:52) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
Officialletai/ppo-LunarLander-v2
|
Officialletai
| 2023-07-15T12:18:48Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-01T17:41:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.87 +/- 17.16
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
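A minimal loading and evaluation sketch (not part of the original card), assuming the checkpoint was saved with SB3's `PPO` and uploaded as `ppo-LunarLander-v2.zip` (the filename is an assumption):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is an assumption).
checkpoint = load_from_hub(
    repo_id="Officialletai/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the agent (newer gymnasium releases register this env as LunarLander-v3).
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```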
|
mrizalf7/xlm-r-qa-squad1.1-squad2.0-tf-1
|
mrizalf7
| 2023-07-15T11:56:00Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-15T11:52:32Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-r-qa-squad1.1-squad2.0-tf-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-r-qa-squad1.1-squad2.0-tf-1
This model is a fine-tuned version of [mrizalf7/xlm-r-qa-squad-2.0](https://huggingface.co/mrizalf7/xlm-r-qa-squad-2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 7 | 3.1936 |
| No log | 2.0 | 14 | 3.2455 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mrizalf7/xlm-r-qa-squad1.1-squad2.0-tf
|
mrizalf7
| 2023-07-15T11:45:11Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-15T11:30:11Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-r-qa-squad1.1-squad2.0-tf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-r-qa-squad1.1-squad2.0-tf
This model is a fine-tuned version of [mrizalf7/xlm-r-qa-squad-2.0](https://huggingface.co/mrizalf7/xlm-r-qa-squad-2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2419 | 1.0 | 636 | 3.1678 |
| 2.8486 | 2.0 | 1272 | 3.2826 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
casque/badbrounderwear
|
casque
| 2023-07-15T11:29:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T11:28:50Z |
---
license: creativeml-openrail-m
---
|
NasimB/guten-rarity-all-end-19k-ctx-64
|
NasimB
| 2023-07-15T11:25:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T06:57:00Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-rarity-all-end-19k-ctx-64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-rarity-all-end-19k-ctx-64
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.8243 | 0.15 | 500 | 5.7888 |
| 5.5606 | 0.29 | 1000 | 5.4446 |
| 5.2508 | 0.44 | 1500 | 5.2225 |
| 5.0772 | 0.59 | 2000 | 5.0928 |
| 4.9577 | 0.73 | 2500 | 5.0064 |
| 4.8676 | 0.88 | 3000 | 4.9375 |
| 4.7689 | 1.02 | 3500 | 4.8928 |
| 4.6483 | 1.17 | 4000 | 4.8522 |
| 4.6236 | 1.32 | 4500 | 4.8016 |
| 4.5769 | 1.46 | 5000 | 4.7621 |
| 4.5395 | 1.61 | 5500 | 4.7233 |
| 4.5035 | 1.76 | 6000 | 4.6906 |
| 4.4614 | 1.9 | 6500 | 4.6515 |
| 4.3778 | 2.05 | 7000 | 4.6380 |
| 4.2446 | 2.19 | 7500 | 4.6121 |
| 4.2402 | 2.34 | 8000 | 4.5856 |
| 4.221 | 2.49 | 8500 | 4.5575 |
| 4.2021 | 2.63 | 9000 | 4.5268 |
| 4.1908 | 2.78 | 9500 | 4.4977 |
| 4.1691 | 2.93 | 10000 | 4.4673 |
| 4.0317 | 3.07 | 10500 | 4.4820 |
| 3.931 | 3.22 | 11000 | 4.4766 |
| 3.9202 | 3.36 | 11500 | 4.4607 |
| 3.9241 | 3.51 | 12000 | 4.4389 |
| 3.9147 | 3.66 | 12500 | 4.4202 |
| 3.9027 | 3.8 | 13000 | 4.4001 |
| 3.8931 | 3.95 | 13500 | 4.3843 |
| 3.7317 | 4.1 | 14000 | 4.4054 |
| 3.653 | 4.24 | 14500 | 4.4036 |
| 3.6488 | 4.39 | 15000 | 4.3999 |
| 3.6513 | 4.53 | 15500 | 4.3908 |
| 3.6392 | 4.68 | 16000 | 4.3837 |
| 3.6341 | 4.83 | 16500 | 4.3767 |
| 3.632 | 4.97 | 17000 | 4.3707 |
| 3.4875 | 5.12 | 17500 | 4.3838 |
| 3.4673 | 5.27 | 18000 | 4.3848 |
| 3.4661 | 5.41 | 18500 | 4.3837 |
| 3.4643 | 5.56 | 19000 | 4.3829 |
| 3.463 | 5.71 | 19500 | 4.3827 |
| 3.4588 | 5.85 | 20000 | 4.3824 |
| 3.4591 | 6.0 | 20500 | 4.3825 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
casque/badbroirezumi3
|
casque
| 2023-07-15T11:24:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T11:23:43Z |
---
license: creativeml-openrail-m
---
|
NasimB/guten-rarity-all-2p5k-log-rarity-all-sort
|
NasimB
| 2023-07-15T11:10:36Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T09:18:12Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-rarity-all-2p5k-log-rarity-all-sort
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-rarity-all-2p5k-log-rarity-all-sort
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.69 | 0.29 | 500 | 5.6272 |
| 5.3349 | 0.59 | 1000 | 5.1982 |
| 4.9818 | 0.88 | 1500 | 4.9441 |
| 4.7024 | 1.17 | 2000 | 4.7940 |
| 4.5531 | 1.47 | 2500 | 4.6766 |
| 4.4445 | 1.76 | 3000 | 4.5629 |
| 4.3064 | 2.05 | 3500 | 4.4888 |
| 4.12 | 2.35 | 4000 | 4.4409 |
| 4.0994 | 2.64 | 4500 | 4.3854 |
| 4.0596 | 2.93 | 5000 | 4.3289 |
| 3.8415 | 3.23 | 5500 | 4.3258 |
| 3.7949 | 3.52 | 6000 | 4.2992 |
| 3.7753 | 3.81 | 6500 | 4.2626 |
| 3.6705 | 4.11 | 7000 | 4.2631 |
| 3.5128 | 4.4 | 7500 | 4.2550 |
| 3.5022 | 4.69 | 8000 | 4.2439 |
| 3.4902 | 4.99 | 8500 | 4.2293 |
| 3.3248 | 5.28 | 9000 | 4.2426 |
| 3.3111 | 5.57 | 9500 | 4.2419 |
| 3.3138 | 5.87 | 10000 | 4.2408 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
CarlosMN/CartPole
|
CarlosMN
| 2023-07-15T11:10:02Z | 0 | 1 | null |
[
"reinforcement-learning",
"en",
"arxiv:2112.04213",
"region:us"
] |
reinforcement-learning
| 2023-07-15T10:23:37Z |
---
language:
- en
pipeline_tag: reinforcement-learning
---
# Cartpole Reinforcement Learning
This repository is a project focused on exploring reinforcement learning techniques using the OpenAI Gym environment. The objective is to compare different algorithms and approaches to improve the performance of an agent in the Cartpole task.
## Installation
Installation of packages
```
pip install -r requirements.txt
```
If you want to execute the training phase and obtain your own model, run the main program; the hyperparameters and other options can be changed via the config.ini file.
If you just want to watch the trained model play the game, execute the following:
```
python3 watchModel.py
```
## Objectives
The main objectives of this project are as follows:
1. Develop a working model that demonstrates an increase in survival time through training.
2. Experiment with different reinforcement learning algorithms and compare their training time, complexity, and achieved scores.
3. Fine-tune the algorithm parameters and the number of bins used to achieve optimal training results.
4. Improve the consistency of the trained agent's strategy.
5. Implement experience replay to enhance learning.
## Results
The initial approach used in this project was Q-Learning, and it produced the following results:

The convergence plot shows an increase in the score over time, with three distinct phases. The first phase corresponds to random inputs, followed by a phase where the model explores a lot. The third phase occurs when the epsilon value starts to decay.

Comparing the results of the trained agent (after 20,000 episodes) with a random agent clearly demonstrates the improvement achieved:

Despite the improvements, the trained agent still lacks consistency. This inconsistency is believed to be due to the inherent randomness in the Cartpole environment.
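As a rough illustration of the approach described above (a sketch, not the repository's actual code), tabular Q-learning on CartPole with discretized observations might look like the following; the bin counts, clipping bounds, and hyperparameters are assumptions:
```python
import numpy as np
import gymnasium as gym

env = gym.make("CartPole-v1")
n_bins = (6, 6, 12, 12)                      # assumption: bins per observation dimension
low = np.array([-2.4, -3.0, -0.21, -3.0])    # assumption: clipping bounds for the velocity terms
high = np.array([2.4, 3.0, 0.21, 3.0])
q_table = np.zeros(n_bins + (env.action_space.n,))

def discretize(obs):
    # Map the continuous observation to per-dimension bin indices.
    ratios = (np.clip(obs, low, high) - low) / (high - low)
    return tuple((ratios * (np.array(n_bins) - 1)).astype(int))

alpha, gamma, epsilon = 0.1, 0.99, 0.1       # assumption: learning rate, discount, exploration rate
for episode in range(20_000):
    obs, _ = env.reset()
    s, done = discretize(obs), False
    while not done:
        a = env.action_space.sample() if np.random.rand() < epsilon else int(q_table[s].argmax())
        obs, reward, terminated, truncated, _ = env.step(a)
        s_next = discretize(obs)
        # Standard Q-learning update rule.
        q_table[s + (a,)] += alpha * (reward + gamma * q_table[s_next].max() - q_table[s + (a,)])
        s, done = s_next, terminated or truncated
```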
## Experience Replay
Experience replay has been implemented in this project, leading to significant improvements in the agent's performance.
The results of the trained agent with experience replay are as follows:
It should be mentioned that, to speed up the training phase, the experience replay agent had a score limit of 2000.
| Metric | Old Agent | Trained Agent with Experience Replay |
|------------------------|--------------|--------------------------------------|
| Convergence Plot |  |  |
| Score Histogram |  |  |
| Boxplot | |  |
As observed, by adding experience replay the agent has been able to objectively increase its score.
## References
- https://arxiv.org/pdf/2112.04213.pdf
- https://aleksandarhaber.com/q-learning-in-python-with-tests-in-cart-pole-openai-gym-environment-reinforcement-learning-tutorial/
|
Anjyee/asep
|
Anjyee
| 2023-07-15T10:09:23Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T10:04:28Z |
---
license: creativeml-openrail-m
---
|
TootToot/q-FrozenLake-v1-4x4-noSlippery
|
TootToot
| 2023-07-15T09:53:40Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-18T14:06:52Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="TootToot/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
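`load_from_hub` here is a helper from the course notebook rather than a library import; a minimal sketch of an equivalent helper (assuming the pickle stores a dict with a `qtable` entry alongside `env_id`) could be:
```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled dict (Q-table, env_id, hyperparameters) from the Hub.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="TootToot/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)

# Greedy rollout with the learned Q-table (the "qtable" key is an assumption).
state, _ = env.reset()
done = False
while not done:
    action = int(model["qtable"][state].argmax())
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```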
|
Xxmlala/ppo-LunarLander-v2
|
Xxmlala
| 2023-07-15T09:45:20Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T09:44:16Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO_MlpPolicy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 229.98 +/- 14.55
name: mean_reward
verified: false
---
# **PPO_MlpPolicy** Agent playing **LunarLander-v2**
This is a trained model of a **PPO_MlpPolicy** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dt-real
|
hafidikhsan
| 2023-07-15T09:42:41Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-15T09:39:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dt-real
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dt-real
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2577
- Accuracy: 0.6578
- F1: 0.6488
- Precision: 0.6432
- Recall: 0.6578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.9224 | 1.0 | 310 | 0.8380 | 0.6142 | 0.5589 | 0.6070 | 0.6142 |
| 0.6168 | 2.0 | 620 | 0.7955 | 0.6651 | 0.6313 | 0.6369 | 0.6651 |
| 0.4687 | 3.0 | 930 | 1.0592 | 0.6150 | 0.6041 | 0.6434 | 0.6150 |
| 0.4495 | 4.0 | 1240 | 1.1980 | 0.6707 | 0.6592 | 0.6547 | 0.6707 |
| 0.182 | 5.0 | 1550 | 1.4150 | 0.6683 | 0.6596 | 0.6566 | 0.6683 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
rakaaa/pokemon-lora2
|
rakaaa
| 2023-07-15T09:41:57Z | 2 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-15T08:54:47Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - rakaaa/pokemon-lora2
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following.




|
naimul011/fine_tuned_llama-7b-hf_20
|
naimul011
| 2023-07-15T09:37:01Z | 6 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T09:35:44Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
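A minimal inference sketch (not part of the original card) that mirrors the quantization config above and loads the adapter with `peft`; the base model name is an assumption inferred from the repository name:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the bitsandbytes config listed above (8-bit loading; the 4-bit fields are unused here).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)

base_name = "huggyllama/llama-7b"  # assumption: a LLaMA-7B checkpoint in HF format
base = AutoModelForCausalLM.from_pretrained(base_name, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "naimul011/fine_tuned_llama-7b-hf_20")
tokenizer = AutoTokenizer.from_pretrained(base_name)
```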
|
koruni/charslora
|
koruni
| 2023-07-15T09:31:06Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T09:22:03Z |
---
license: creativeml-openrail-m
---
|
manmyung/Reinforce-Pixelcopter-PLE-v0
|
manmyung
| 2023-07-15T09:29:21Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T09:28:47Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 57.90 +/- 51.72
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
NasimB/guten-log-rarity-all-no-cut
|
NasimB
| 2023-07-15T08:55:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T07:03:18Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-log-rarity-all-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-log-rarity-all-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7036 | 0.29 | 500 | 5.6327 |
| 5.3408 | 0.58 | 1000 | 5.2075 |
| 4.9933 | 0.87 | 1500 | 4.9530 |
| 4.7107 | 1.16 | 2000 | 4.7988 |
| 4.5567 | 1.46 | 2500 | 4.6874 |
| 4.452 | 1.75 | 3000 | 4.5707 |
| 4.3309 | 2.04 | 3500 | 4.4934 |
| 4.1223 | 2.33 | 4000 | 4.4512 |
| 4.0982 | 2.62 | 4500 | 4.3907 |
| 4.0684 | 2.91 | 5000 | 4.3428 |
| 3.8697 | 3.2 | 5500 | 4.3302 |
| 3.8014 | 3.49 | 6000 | 4.3025 |
| 3.7776 | 3.79 | 6500 | 4.2679 |
| 3.6962 | 4.08 | 7000 | 4.2638 |
| 3.5138 | 4.37 | 7500 | 4.2596 |
| 3.5066 | 4.66 | 8000 | 4.2463 |
| 3.4966 | 4.95 | 8500 | 4.2334 |
| 3.3506 | 5.24 | 9000 | 4.2465 |
| 3.3204 | 5.53 | 9500 | 4.2435 |
| 3.3138 | 5.82 | 10000 | 4.2428 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nolanaatama/mlnmrtnzrvc1000pchsvrs
|
nolanaatama
| 2023-07-15T08:46:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T08:32:56Z |
---
license: creativeml-openrail-m
---
|
meepmeepow/Lora
|
meepmeepow
| 2023-07-15T08:09:52Z | 0 | 1 | null |
[
"id",
"en",
"region:us"
] | null | 2023-05-01T13:13:01Z |
---
language:
- id
- en
---
<p style="font-size:30px"><b><u>My Lora Collection</u></b></p>
<p style="font-size:28px"> <p style="margin-bottom: -26px;">Kebaya Bali</p></p>
<img src="https://i.ibb.co/PhyMv28/00000-2136414393.png" alt="00000-2136414393" border="0" />
<p style="margin-top: -24px;">~<a style="text-decoration: none" href="https://huggingface.co/meepmeepow/Lora/blob/main/kebayabali.safetensors">Link</a></p>
|
nolanaatama/phtn
|
nolanaatama
| 2023-07-15T08:04:27Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T07:58:19Z |
---
license: creativeml-openrail-m
---
|
Serjssv/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
|
Serjssv
| 2023-07-15T07:48:17Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-14T13:11:04Z |
---
license: bsd-3-clause
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.91
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3273
- Accuracy: 0.91
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5056 | 1.0 | 112 | 0.5669 | 0.85 |
| 0.2324 | 2.0 | 225 | 0.5131 | 0.85 |
| 0.2623 | 3.0 | 337 | 0.6539 | 0.79 |
| 0.4419 | 4.0 | 450 | 0.7401 | 0.83 |
| 0.0177 | 5.0 | 562 | 0.5134 | 0.85 |
| 0.0026 | 6.0 | 675 | 0.3351 | 0.9 |
| 0.0046 | 7.0 | 787 | 0.5120 | 0.88 |
| 0.0005 | 8.0 | 900 | 0.5165 | 0.91 |
| 0.2003 | 9.0 | 1012 | 0.3453 | 0.91 |
| 0.0001 | 10.0 | 1125 | 0.3438 | 0.91 |
| 0.0003 | 11.0 | 1237 | 0.3324 | 0.92 |
| 0.0 | 12.0 | 1350 | 0.3999 | 0.89 |
| 0.0 | 13.0 | 1462 | 0.3152 | 0.91 |
| 0.0001 | 14.0 | 1575 | 0.3212 | 0.92 |
| 0.0 | 15.0 | 1687 | 0.3220 | 0.92 |
| 0.0 | 16.0 | 1800 | 0.3343 | 0.9 |
| 0.0 | 17.0 | 1912 | 0.3324 | 0.91 |
| 0.0 | 18.0 | 2025 | 0.3311 | 0.91 |
| 0.0 | 19.0 | 2137 | 0.3292 | 0.91 |
| 0.0 | 19.91 | 2240 | 0.3273 | 0.91 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Raelina/Maya_Ikusaba
|
Raelina
| 2023-07-15T07:35:53Z | 2 | 1 |
diffusers
|
[
"diffusers",
"en",
"region:us"
] | null | 2023-07-15T07:12:04Z |
---
language:
- en
metrics:
- character
library_name: diffusers
---
This LoRA was trained with 40+ images taken from the anime.
The model used for training is AnimeFullFinalPruned (aka NAI), so it works with any anime-style model.
Recommended weight: 0.7-0.8.
For positive and negative prompts, refer to CivitAI: https://civitai.com/models/109201/maya-ikusaba-or-my-one-hit-kill-sister
I also recommend using Adetailer to fix faces and eyes; some of my example images use Adetailer.
|
Ahmet2250/ppo-LunarLander-v2
|
Ahmet2250
| 2023-07-15T07:16:38Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T07:15:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.32 +/- 20.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
blackmount8/mpt-7b-instruct-ct2-int8_float16
|
blackmount8
| 2023-07-15T06:52:02Z | 2 | 0 |
transformers
|
[
"transformers",
"Composer",
"MosaicML",
"llm-foundry",
"dataset:mosaicml/dolly_hhrlhf",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-sa-3.0",
"region:us"
] | null | 2023-07-15T05:40:47Z |
---
inference: false
license: cc-by-sa-3.0
datasets:
- mosaicml/dolly_hhrlhf
tags:
- Composer
- MosaicML
- llm-foundry
---
# blackmount8/mpt-7b-instruct-ct2-int8_float16
Int8_float16 version of [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct), quantized using CTranslate2.
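A minimal generation sketch with CTranslate2 (not part of the original card); the sampling parameters and prompt are assumptions:
```python
import ctranslate2
import transformers
from huggingface_hub import snapshot_download

# Download the converted CTranslate2 model files from this repository.
model_dir = snapshot_download("blackmount8/mpt-7b-instruct-ct2-int8_float16")

generator = ctranslate2.Generator(model_dir, device="cuda", compute_type="int8_float16")
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

prompt = "### Instruction:\nWhat is CTranslate2?\n### Response:\n"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = generator.generate_batch([tokens], max_length=128, sampling_topk=10)
print(tokenizer.decode(results[0].sequences_ids[0]))
```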
## MPT-7B-Instruct
MPT-7B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Longboi24**:
> What is a quoll?
**MPT-7B-Instruct**:
>A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted in the dolly-15k format:
```python
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"
INTRO_BLURB = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
example = "James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? Explain before answering."
fmt_ex = PROMPT_FOR_GENERATION_FORMAT.format(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
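For instance, continuing with the `pipe` object created earlier (a sketch, not part of the original card):
```python
# Run the formatted instruction prompt through the text-generation pipeline defined above.
with torch.autocast('cuda', dtype=torch.bfloat16):
    print(pipe(fmt_ex, max_new_tokens=100, do_sample=True, use_cache=True))
```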
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 8 A100-40GBs for about 2.3 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
zen-E/q-Taxi-v3-v1
|
zen-E
| 2023-07-15T06:36:09Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T06:35:41Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.64
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
model = load_from_hub(repo_id="zen-E/q-Taxi-v3-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
|
NasimB/guten-rarity-all-end-19k-ctx-512
|
NasimB
| 2023-07-15T06:32:42Z | 143 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T05:38:01Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-rarity-all-end-19k-ctx-512
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-rarity-all-end-19k-ctx-512
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5135 | 1.19 | 500 | 5.4526 |
| 4.9916 | 2.38 | 1000 | 4.8062 |
| 4.3998 | 3.56 | 1500 | 4.4088 |
| 3.9739 | 4.75 | 2000 | 4.2180 |
| 3.6922 | 5.94 | 2500 | 4.1726 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
NasimB/gpt2-concat-cbt-rarity-end-p5k
|
NasimB
| 2023-07-15T06:25:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T04:30:51Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-cbt-rarity-end-p5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-cbt-rarity-end-p5k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6981 | 0.29 | 500 | 5.6337 |
| 5.3423 | 0.58 | 1000 | 5.2046 |
| 4.9886 | 0.87 | 1500 | 4.9471 |
| 4.7073 | 1.17 | 2000 | 4.8060 |
| 4.5535 | 1.46 | 2500 | 4.6759 |
| 4.4474 | 1.75 | 3000 | 4.5672 |
| 4.336 | 2.04 | 3500 | 4.4881 |
| 4.1197 | 2.33 | 4000 | 4.4473 |
| 4.1025 | 2.62 | 4500 | 4.3897 |
| 4.0623 | 2.91 | 5000 | 4.3338 |
| 3.8634 | 3.21 | 5500 | 4.3240 |
| 3.7979 | 3.5 | 6000 | 4.2995 |
| 3.7821 | 3.79 | 6500 | 4.2652 |
| 3.6959 | 4.08 | 7000 | 4.2614 |
| 3.5107 | 4.37 | 7500 | 4.2535 |
| 3.5065 | 4.66 | 8000 | 4.2392 |
| 3.5013 | 4.95 | 8500 | 4.2262 |
| 3.3462 | 5.24 | 9000 | 4.2390 |
| 3.3225 | 5.54 | 9500 | 4.2385 |
| 3.3144 | 5.83 | 10000 | 4.2372 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
AnaBach/roberta-base-bne-finetuned-amazon_reviews_multi
|
AnaBach
| 2023-07-15T06:11:29Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-14T02:15:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: validation
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.9355
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2188
- Accuracy: 0.9355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1953 | 1.0 | 1250 | 0.1686 | 0.9343 |
| 0.1034 | 2.0 | 2500 | 0.2188 | 0.9355 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sgarg/falcon-7b-qlora-fiqa-finbot-v1
|
sgarg
| 2023-07-15T05:30:56Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-15T04:43:18Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
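A minimal inference sketch (not part of the original card) that mirrors the 4-bit NF4 config above; the base model is an assumption inferred from the repository name:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the QLoRA-style bitsandbytes config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_name = "tiiuae/falcon-7b"  # assumption: base model inferred from the repo name
base = AutoModelForCausalLM.from_pretrained(base_name, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "sgarg/falcon-7b-qlora-fiqa-finbot-v1")
tokenizer = AutoTokenizer.from_pretrained(base_name)
```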
|
kelvinih/taser-bert-base-uncased
|
kelvinih
| 2023-07-15T05:29:51Z | 0 | 0 | null |
[
"pytorch",
"license:mit",
"region:us"
] | null | 2023-07-15T05:27:05Z |
---
license: mit
---
# Task-Aware Specialization for Efficient and Robust Dense Retrieval for Open-Domain Question Answering
This repository includes the model for
[Task-Aware Specialization for Efficient and Robust Dense Retrieval for Open-Domain Question Answering](https://aclanthology.org/2023.acl-short.159/).
If you find this useful, please cite the following paper:
```
@inproceedings{cheng-etal-2023-task,
title = "Task-Aware Specialization for Efficient and Robust Dense Retrieval for Open-Domain Question Answering",
author = "Cheng, Hao and
Fang, Hao and
Liu, Xiaodong and
Gao, Jianfeng",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-short.159",
pages = "1864--1875",
}
```
|
amirabdullah19852020/pythia_70m_ppo_imdb_sentiment
|
amirabdullah19852020
| 2023-07-15T05:11:34Z | 57 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-07-14T13:48:04Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="amirabdullah19852020//tmp/tmp3ply1fjk/amirabdullah19852020/pythia_70m_ppo_imdb_sentiment")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("amirabdullah19852020//tmp/tmp3ply1fjk/amirabdullah19852020/pythia_70m_ppo_imdb_sentiment")
model = AutoModelForCausalLMWithValueHead.from_pretrained("amirabdullah19852020//tmp/tmp3ply1fjk/amirabdullah19852020/pythia_70m_ppo_imdb_sentiment")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
NasimB/guten-rarity-end-cut-19k
|
NasimB
| 2023-07-15T04:56:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T03:03:02Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-rarity-end-cut-19k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-rarity-end-cut-19k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.69 | 0.29 | 500 | 5.6412 |
| 5.3327 | 0.59 | 1000 | 5.2058 |
| 4.9884 | 0.88 | 1500 | 4.9570 |
| 4.7105 | 1.18 | 2000 | 4.8008 |
| 4.5563 | 1.47 | 2500 | 4.6777 |
| 4.4438 | 1.77 | 3000 | 4.5652 |
| 4.3057 | 2.06 | 3500 | 4.4916 |
| 4.1258 | 2.36 | 4000 | 4.4456 |
| 4.1001 | 2.65 | 4500 | 4.3854 |
| 4.0586 | 2.94 | 5000 | 4.3319 |
| 3.8297 | 3.24 | 5500 | 4.3249 |
| 3.8029 | 3.53 | 6000 | 4.2962 |
| 3.7812 | 3.83 | 6500 | 4.2655 |
| 3.6544 | 4.12 | 7000 | 4.2687 |
| 3.5166 | 4.42 | 7500 | 4.2598 |
| 3.4969 | 4.71 | 8000 | 4.2438 |
| 3.4978 | 5.01 | 8500 | 4.2328 |
| 3.3159 | 5.3 | 9000 | 4.2445 |
| 3.3203 | 5.59 | 9500 | 4.2434 |
| 3.3104 | 5.89 | 10000 | 4.2422 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
goethe0101/GWP_Model
|
goethe0101
| 2023-07-15T04:46:28Z | 1 | 0 |
peft
|
[
"peft",
"pytorch",
"gpt_neox",
"region:us"
] | null | 2023-07-08T01:59:57Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
digiplay/Opiate_v1
|
digiplay
| 2023-07-15T04:39:12Z | 272 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-15T04:15:32Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/69587?modelVersionId=81796
Original author's DEMO images:


|
yhhjynbhu/Akashi3
|
yhhjynbhu
| 2023-07-15T04:38:25Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T04:37:20Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_keras_callback
model-index:
- name: Akashi3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Akashi3
This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
mittalashish/chique7
|
mittalashish
| 2023-07-15T04:11:30Z | 29 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-15T04:08:44Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: <Chique>
---
### chique7 Dreambooth model trained by mittalashish with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v2-1-512 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
<Chique> (use that in your prompt)

|
renatostrianese/q-Taxi-v3
|
renatostrianese
| 2023-07-15T03:48:38Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T03:48:15Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="renatostrianese/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
cbredallas/labelclassification
|
cbredallas
| 2023-07-15T03:44:59Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"en",
"license:openrail",
"region:us"
] | null | 2023-07-15T03:43:24Z |
---
license: openrail
language:
- en
library_name: adapter-transformers
---
|
renatostrianese/q-FrozenLake-v1-4x4-noSlippery
|
renatostrianese
| 2023-07-15T03:43:44Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T03:43:33Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="renatostrianese/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
photonmz/distilbert-base-uncased-finetuned-emotion
|
photonmz
| 2023-07-15T03:33:06Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-15T03:10:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9275012469136824
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2201
- Accuracy: 0.9275
- F1: 0.9275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8326 | 1.0 | 250 | 0.3185 | 0.902 | 0.8983 |
| 0.2499 | 2.0 | 500 | 0.2201 | 0.9275 | 0.9275 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
crumb/opentinystories-68m-complex
|
crumb
| 2023-07-15T03:25:24Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"dataset:crumb/flan-ul2-tinystories-complex",
"dataset:crumb/flan-ul2-tinystories",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-08T09:16:03Z |
---
datasets:
- crumb/flan-ul2-tinystories-complex
- crumb/flan-ul2-tinystories
---
Test loss: 2.669290 on crumb/flan-ul2-tinystories-complex. Initialized from crumb/opentinystories-30m-base and trained for 2 epochs with a linearly decreasing learning rate of 1e-4 and double the batch size (256).
|
xielenite/zethielzero
|
xielenite
| 2023-07-15T03:18:29Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-13T23:40:11Z |
---
license: openrail
---
Voice models for RVC inferencing. See https://docs.google.com/document/d/13_l1bd1Osgz7qlAZn-zhklCbHpVRk6bYOuAuB78qmsE/edit for instructions on how to use them.
|
matgu23/abtrl
|
matgu23
| 2023-07-15T03:09:52Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-15T03:02:33Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### abtrl Dreambooth model trained by matgu23 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
AdanLee/ppo-Huggy
|
AdanLee
| 2023-07-15T03:01:35Z | 11 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-15T03:01:15Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: AdanLee/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
timjwhite/a2c-PandaReachDense-v2
|
timjwhite
| 2023-07-15T02:41:47Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T02:39:03Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.64 +/- 0.41
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
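A minimal loading sketch is given below. The checkpoint filename follows the usual `<algo>-<env>.zip` naming convention and is an assumption, as is the need for `panda-gym` to register the environment:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption based on the usual "<algo>-<env>.zip" convention.
checkpoint = load_from_hub(
    repo_id="timjwhite/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)

# To roll the policy out you also need panda-gym installed so that
# PandaReachDense-v2 is registered, plus the saved VecNormalize statistics
# if the repository provides them.
```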
|
NasimB/gpt2-concat-switch-rarity-no-cut
|
NasimB
| 2023-07-15T02:38:57Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-15T00:47:27Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-switch-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-switch-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7037 | 0.29 | 500 | 5.6319 |
| 5.3373 | 0.58 | 1000 | 5.2001 |
| 4.9919 | 0.87 | 1500 | 4.9536 |
| 4.7185 | 1.17 | 2000 | 4.8020 |
| 4.5556 | 1.46 | 2500 | 4.6811 |
| 4.4476 | 1.75 | 3000 | 4.5737 |
| 4.3298 | 2.04 | 3500 | 4.4863 |
| 4.1272 | 2.33 | 4000 | 4.4421 |
| 4.0996 | 2.62 | 4500 | 4.3853 |
| 4.0564 | 2.91 | 5000 | 4.3350 |
| 3.8676 | 3.21 | 5500 | 4.3248 |
| 3.8015 | 3.5 | 6000 | 4.2945 |
| 3.7787 | 3.79 | 6500 | 4.2610 |
| 3.6894 | 4.08 | 7000 | 4.2563 |
| 3.5111 | 4.37 | 7500 | 4.2530 |
| 3.5076 | 4.66 | 8000 | 4.2365 |
| 3.4984 | 4.95 | 8500 | 4.2243 |
| 3.341 | 5.24 | 9000 | 4.2363 |
| 3.3189 | 5.54 | 9500 | 4.2358 |
| 3.3196 | 5.83 | 10000 | 4.2346 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Scherbi/test-finetune-distilgpt2
|
Scherbi
| 2023-07-15T02:38:04Z | 138 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T17:14:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: test-finetune-distilgpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-finetune-distilgpt2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 0.0912 |
| No log | 2.0 | 6 | 0.0901 |
| No log | 3.0 | 9 | 0.0897 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
timjwhite/a2c-AntBulletEnv-v0
|
timjwhite
| 2023-07-15T01:39:05Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-15T01:37:29Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 792.36 +/- 37.50
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Panchovix/guanaco-33b-PI-8192-LoRA-4bit-32g
|
Panchovix
| 2023-07-15T01:38:52Z | 5 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-04T06:00:12Z |
---
license: other
---
[guanaco-33b](https://huggingface.co/timdettmers/guanaco-33b-merged) merged with bhenrym14's [airoboros-33b-gpt4-1.4.1-PI-8192-LoRA](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-LoRA), quantized to 4 bits.
More info about the LoRA [Here](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16). This is an alternative to the SuperHOT 8k LoRA, trained with LoRA rank 64 on the airoboros 1.4.1 dataset.
It was created with GPTQ-for-LLaMA using group size 32 and act-order true, to keep perplexity as close as possible to the FP16 model.
I HIGHLY suggest using exllama to avoid VRAM issues.
Use compress_pos_emb = 4 for any context length up to 8192 tokens.
If you have two 24 GB VRAM GPUs, use the following split to avoid out-of-memory errors at 8192 context:
gpu_split: 9,21
|
borkur/gpt2-finetuned-wikitext2
|
borkur
| 2023-07-15T00:56:29Z | 85 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-14T21:30:03Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: borkur/gpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# borkur/gpt2-finetuned-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.4948
- Validation Loss: 6.3466
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.3152 | 6.7681 | 0 |
| 6.4948 | 6.3466 | 1 |
### Framework versions
- Transformers 4.31.0.dev0
- TensorFlow 2.13.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ALM-AHME/beit-large-patch16-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20
|
ALM-AHME
| 2023-07-14T23:55:06Z | 5 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-14T20:43:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-large-patch16-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: Splitted-Resized
split: train
args: Splitted-Resized
metrics:
- name: Accuracy
type: accuracy
value: 0.9938708156529938
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-large-patch16-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0275
- Accuracy: 0.9939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.9
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.46 | 1.0 | 199 | 0.3950 | 0.8482 |
| 0.2048 | 2.0 | 398 | 0.1886 | 0.9189 |
| 0.182 | 3.0 | 597 | 0.1382 | 0.9481 |
| 0.0826 | 4.0 | 796 | 0.0760 | 0.9694 |
| 0.0886 | 5.0 | 995 | 0.0600 | 0.9788 |
| 0.0896 | 6.0 | 1194 | 0.0523 | 0.9802 |
| 0.0774 | 7.0 | 1393 | 0.0482 | 0.9826 |
| 0.0876 | 8.0 | 1592 | 0.0289 | 0.9877 |
| 0.1105 | 9.0 | 1791 | 0.0580 | 0.9821 |
| 0.0289 | 10.0 | 1990 | 0.0294 | 0.9925 |
| 0.0594 | 11.0 | 2189 | 0.0331 | 0.9906 |
| 0.0011 | 12.0 | 2388 | 0.0275 | 0.9939 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
silvacarl/distilbert-base-uncased-finetuned-cola
|
silvacarl
| 2023-07-14T23:45:58Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-14T22:37:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.527141964318474
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8042
- Matthews Correlation: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5199 | 1.0 | 535 | 0.5170 | 0.4218 |
| 0.3502 | 2.0 | 1070 | 0.5057 | 0.4959 |
| 0.2419 | 3.0 | 1605 | 0.6179 | 0.5164 |
| 0.1818 | 4.0 | 2140 | 0.7569 | 0.5209 |
| 0.1328 | 5.0 | 2675 | 0.8042 | 0.5271 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/gpt2-concat-rarity-all-guten-2p5k-cbt-p5k
|
NasimB
| 2023-07-14T23:37:07Z | 139 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-14T21:39:25Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-rarity-all-guten-2p5k-cbt-p5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-rarity-all-guten-2p5k-cbt-p5k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6858 | 0.29 | 500 | 5.6433 |
| 5.3511 | 0.59 | 1000 | 5.2111 |
| 4.9925 | 0.88 | 1500 | 4.9524 |
| 4.7238 | 1.17 | 2000 | 4.8079 |
| 4.5666 | 1.47 | 2500 | 4.6856 |
| 4.453 | 1.76 | 3000 | 4.5716 |
| 4.3289 | 2.06 | 3500 | 4.5002 |
| 4.137 | 2.35 | 4000 | 4.4482 |
| 4.1124 | 2.64 | 4500 | 4.3913 |
| 4.0636 | 2.94 | 5000 | 4.3336 |
| 3.852 | 3.23 | 5500 | 4.3341 |
| 3.8135 | 3.52 | 6000 | 4.3033 |
| 3.7914 | 3.82 | 6500 | 4.2691 |
| 3.6733 | 4.11 | 7000 | 4.2704 |
| 3.5243 | 4.4 | 7500 | 4.2640 |
| 3.5183 | 4.7 | 8000 | 4.2479 |
| 3.5042 | 4.99 | 8500 | 4.2351 |
| 3.3345 | 5.28 | 9000 | 4.2497 |
| 3.3242 | 5.58 | 9500 | 4.2486 |
| 3.3254 | 5.87 | 10000 | 4.2476 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
CheeriosMomentors/LORA
|
CheeriosMomentors
| 2023-07-14T23:32:58Z | 0 | 0 | null |
[
"en",
"license:wtfpl",
"region:us"
] | null | 2023-04-08T06:21:46Z |
---
license: wtfpl
language:
- en
---
Okay, listen up. These are mostly LoRAs that I made myself.
Some of these may be released on Civitai and some may not.
If you found these, good job: you now have cool LoRAs.
You can post these on Civitai or anywhere, idc.
You can say these are yours and make money, I do not care.
But please, for god's sake, leave my name out of it.
I am not responsible for anything you do with these.
These were just for fun, that is all. Now enjoy.
LoRA Count: 2
We currently have Nisho Ishin (Medaka Box) style and ryukishi07 (Umineko) style.
I may make more and post them here.
|
chunwoolee0/seqcls_mrpc_bert_base_uncased_model
|
chunwoolee0
| 2023-07-14T23:32:36Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-14T23:27:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: seqcls_mrpc_bert_base_uncased_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8014705882352942
- name: F1
type: f1
value: 0.8669950738916257
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# seqcls_mrpc_bert_base_uncased_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4621
- Accuracy: 0.8015
- F1: 0.8670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 58 | 0.5442 | 0.7108 | 0.8228 |
| No log | 2.0 | 116 | 0.5079 | 0.7745 | 0.8558 |
| No log | 3.0 | 174 | 0.4621 | 0.8015 | 0.8670 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
foreverip/dqn-SpaceInvadersNoFrameskip-v4
|
foreverip
| 2023-07-14T23:31:22Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-14T23:30:45Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 603.00 +/- 169.77
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga foreverip -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga foreverip -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga foreverip
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Yntec/Photosphere
|
Yntec
| 2023-07-14T23:22:58Z | 1,547 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Noosphere",
"Dreamlike",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-14T22:54:19Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Noosphere
- Dreamlike
---
# Photosphere
A mix of Noosphere v3 by skumerz and photorealistic models.
Original page:
https://civitai.com/models/36538?modelVersionId=107675
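A minimal diffusers sketch, assuming the checkpoint loads with the standard `StableDiffusionPipeline` (the prompt is just an example):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Photosphere checkpoint and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained("Yntec/Photosphere", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Example prompt; any photorealistic subject should work.
image = pipe("photo of a girl reading under a tree, golden hour").images[0]
image.save("photosphere_sample.png")
```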
|
MnLgt/slope-bed
|
MnLgt
| 2023-07-14T23:19:56Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-07-14T23:19:55Z |
---
license: mit
---
### slope-bed on Stable Diffusion
This is the `<slope-bed>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
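Alternatively, the embedding can be loaded directly with diffusers. This is a minimal sketch; the base checkpoint and the presence of a standard `learned_embeds.bin` file in this repository are assumptions:

```python
from diffusers import StableDiffusionPipeline

# Base model is an assumption; any SD 1.x checkpoint with a matching text encoder should work.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Expects a learned_embeds.bin file in the repo that registers the <slope-bed> token.
pipe.load_textual_inversion("MnLgt/slope-bed")

image = pipe("a photo of a <slope-bed> in a sunlit bedroom").images[0]
image.save("slope_bed.png")
```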
Here is the new concept you will be able to use as an `object`:













|
0sunfire0/Pixelcopter_train_00
|
0sunfire0
| 2023-07-14T23:10:07Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-14T23:10:05Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter_train_00
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 7.20 +/- 7.10
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
cgr28/q-FrozenLake-v1-4x4-noSlippery
|
cgr28
| 2023-07-14T23:06:06Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-14T23:06:04Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="cgr28/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ashnrk/textual_inversion_annual_crop_te
|
ashnrk
| 2023-07-14T23:05:57Z | 31 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-14T22:58:31Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a centered satellite photo of <annual-crop> annual crop land.
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - ashnrk/textual_inversion_annual_crop_te
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "a centered satellite photo of <annual-crop> annual crop land." using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: True.
|
GISDGDIGDI9ED/leslie
|
GISDGDIGDI9ED
| 2023-07-14T22:53:08Z | 0 | 0 |
flair
|
[
"flair",
"art",
"es",
"dataset:openchat/openchat_sharegpt4_dataset",
"license:bsd",
"region:us"
] | null | 2023-07-14T22:50:29Z |
---
license: bsd
datasets:
- openchat/openchat_sharegpt4_dataset
language:
- es
metrics:
- character
library_name: flair
tags:
- art
---
|
ddanshin/clip-roberta-finetuned
|
ddanshin
| 2023-07-14T22:45:45Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-text-dual-encoder",
"feature-extraction",
"generated_from_trainer",
"dataset:ydshieh/coco_dataset_script",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-07-14T00:04:05Z |
---
base_model: ./clip-roberta
tags:
- generated_from_trainer
datasets:
- ydshieh/coco_dataset_script
model-index:
- name: clip-roberta-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-roberta-finetuned
This model is a fine-tuned version of [./clip-roberta](https://huggingface.co/./clip-roberta) on the ydshieh/coco_dataset_script 2017 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YanJiangJerry/sentiment-roberta-e3-b16-v2-w0.01
|
YanJiangJerry
| 2023-07-14T22:45:22Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-14T13:20:25Z |
---
tags:
- generated_from_trainer
metrics:
- f1
- recall
- precision
model-index:
- name: sentiment-roberta-e3-b16-v2-w0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-roberta-e3-b16-v2-w0.01
This model is a fine-tuned version of [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6014
- F1: 0.7844
- Recall: 0.7844
- Precision: 0.7844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:---------:|
| No log | 1.0 | 187 | 0.6687 | 0.7574 | 0.7574 | 0.7574 |
| No log | 2.0 | 374 | 0.5700 | 0.7898 | 0.7898 | 0.7898 |
| 0.6052 | 3.0 | 561 | 0.6014 | 0.7844 | 0.7844 | 0.7844 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
S1X3L4/Reinforce-copter
|
S1X3L4
| 2023-07-14T22:44:56Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-14T22:44:51Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-copter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 15.90 +/- 13.10
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
underactuated/opt-350m_ft
|
underactuated
| 2023-07-14T22:41:50Z | 136 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-14T22:39:39Z |
---
tags:
- generated_from_trainer
model-index:
- name: opt-350m_ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m_ft
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
fgaim/tiroberta-pos
|
fgaim
| 2023-07-14T22:36:14Z | 126 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"token-classification",
"ti",
"dataset:TLMD",
"dataset:NTC",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language: ti
widget:
- text: "ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር"
datasets:
- TLMD
- NTC
metrics:
- f1
- precision
- recall
- accuracy
model-index:
- name: tiroberta-base-pos
results:
- task:
name: Token Classification
type: token-classification
metrics:
- name: F1
type: f1
value: 0.9562
- name: Precision
type: precision
value: 0.9562
- name: Recall
type: recall
value: 0.9562
- name: Accuracy
type: accuracy
value: 0.9562
---
# Tigrinya POS tagging with TiRoBERTa
This model is a fine-tuned version of [TiRoBERTa](https://huggingface.co/fgaim/tiroberta) on the NTC-v1 dataset (Tedla et al. 2016).
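A minimal sketch of running the tagger with the standard token-classification pipeline; the example sentence is the widget text above, and the exact label names come from the model config:

```python
from transformers import pipeline

# Load the fine-tuned Tigrinya POS tagger.
pos_tagger = pipeline("token-classification", model="fgaim/tiroberta-pos")

# Tag the widget example sentence.
pos_tagger("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር")
```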
## Training
### Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Results
The model achieves the following results on the test set:
- Loss: 0.3194
- Adj Precision: 0.9219
- Adj Recall: 0.9335
- Adj F1: 0.9277
- Adj Number: 1670
- Adv Precision: 0.8297
- Adv Recall: 0.8554
- Adv F1: 0.8423
- Adv Number: 484
- Con Precision: 0.9844
- Con Recall: 0.9763
- Con F1: 0.9804
- Con Number: 972
- Fw Precision: 0.7895
- Fw Recall: 0.5357
- Fw F1: 0.6383
- Fw Number: 28
- Int Precision: 0.6552
- Int Recall: 0.7308
- Int F1: 0.6909
- Int Number: 26
- N Precision: 0.9650
- N Recall: 0.9662
- N F1: 0.9656
- N Number: 3992
- Num Precision: 0.9747
- Num Recall: 0.9665
- Num F1: 0.9706
- Num Number: 239
- N Prp Precision: 0.9308
- N Prp Recall: 0.9447
- N Prp F1: 0.9377
- N Prp Number: 470
- N V Precision: 0.9854
- N V Recall: 0.9736
- N V F1: 0.9794
- N V Number: 416
- Pre Precision: 0.9722
- Pre Recall: 0.9625
- Pre F1: 0.9673
- Pre Number: 907
- Pro Precision: 0.9448
- Pro Recall: 0.9236
- Pro F1: 0.9341
- Pro Number: 445
- Pun Precision: 1.0
- Pun Recall: 0.9994
- Pun F1: 0.9997
- Pun Number: 1607
- Unc Precision: 1.0
- Unc Recall: 0.875
- Unc F1: 0.9333
- Unc Number: 16
- V Precision: 0.8780
- V Recall: 0.9231
- V F1: 0.9
- V Number: 78
- V Aux Precision: 0.9685
- V Aux Recall: 0.9878
- V Aux F1: 0.9780
- V Aux Number: 654
- V Ger Precision: 0.9388
- V Ger Recall: 0.9571
- V Ger F1: 0.9479
- V Ger Number: 513
- V Imf Precision: 0.9634
- V Imf Recall: 0.9497
- V Imf F1: 0.9565
- V Imf Number: 914
- V Imv Precision: 0.8793
- V Imv Recall: 0.7286
- V Imv F1: 0.7969
- V Imv Number: 70
- V Prf Precision: 0.8960
- V Prf Recall: 0.9082
- V Prf F1: 0.9020
- V Prf Number: 294
- V Rel Precision: 0.9678
- V Rel Recall: 0.9538
- V Rel F1: 0.9607
- V Rel Number: 757
- Overall Precision: 0.9562
- Overall Recall: 0.9562
- Overall F1: 0.9562
- Overall Accuracy: 0.9562
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
## Citation
If you use this model in your product or research, please cite as follows:
```
@article{Fitsum2021TiPLMs,
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
title={Monolingual Pre-trained Language Models for Tigrinya},
year=2021,
publisher={WiNLP 2021/EMNLP 2021}
}
```
## References
```
Tedla, Y., Yamamoto, K. & Marasinghe, A. 2016.
Tigrinya Part-of-Speech Tagging with Morphological Patterns and the New Nagaoka Tigrinya Corpus.
International Journal Of Computer Applications 146 pp. 33-41 (2016).
```
|
YanJiangJerry/sentiment-roberta-e2-b16-v2-w0.01
|
YanJiangJerry
| 2023-07-14T22:29:12Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-14T22:22:40Z |
---
tags:
- generated_from_trainer
metrics:
- f1
- recall
- precision
model-index:
- name: sentiment-roberta-e2-b16-v2-w0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-roberta-e2-b16-v2-w0.01
This model is a fine-tuned version of [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8630
- F1: 0.7520
- Recall: 0.7520
- Precision: 0.7520
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:---------:|
| No log | 1.0 | 375 | 0.8651 | 0.6739 | 0.6739 | 0.6739 |
| 0.6564 | 2.0 | 750 | 0.8630 | 0.7520 | 0.7520 | 0.7520 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Recognai/zeroshot_selectra_small
|
Recognai
| 2023-07-14T22:23:19Z | 129 | 5 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"electra",
"text-classification",
"zero-shot-classification",
"nli",
"es",
"dataset:xnli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:04Z |
---
language: es
tags:
- zero-shot-classification
- nli
- pytorch
datasets:
- xnli
pipeline_tag: zero-shot-classification
license: apache-2.0
widget:
- text: "El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo"
candidate_labels: "cultura, sociedad, economia, salud, deportes"
---
# Zero-shot SELECTRA: A zero-shot classifier based on SELECTRA
*Zero-shot SELECTRA* is a [SELECTRA model](https://huggingface.co/Recognai/selectra_small) fine-tuned on the Spanish portion of the [XNLI dataset](https://huggingface.co/datasets/xnli). You can use it with Hugging Face's [Zero-shot pipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline) to make [zero-shot classifications](https://joeddav.github.io/blog/2020/05/29/ZSL.html).
In comparison to our previous zero-shot classifier [based on BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli), zero-shot SELECTRA is **much more lightweight**. As shown in the *Metrics* section, the *small* version (5 times fewer parameters) performs slightly worse, while the *medium* version (3 times fewer parameters) **outperforms** the BETO based zero-shot classifier.
## Usage
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="Recognai/zeroshot_selectra_medium")
classifier(
"El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
hypothesis_template="Este ejemplo es {}."
)
"""Output
{'sequence': 'El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo',
'labels': ['sociedad', 'cultura', 'salud', 'economia', 'deportes'],
'scores': [0.3711881935596466,
0.25650349259376526,
0.17355826497077942,
0.1641489565372467,
0.03460107371211052]}
"""
```
The `hypothesis_template` parameter is important and should be in Spanish. **In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.**
## Metrics
| Model | Params | XNLI (acc) | \*MLSUM (acc) |
| --- | --- | --- | --- |
| [zs BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli) | 110M | 0.799 | 0.530 |
| [zs SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium) | 41M | **0.807** | **0.589** |
| zs SELECTRA small | **22M** | 0.795 | 0.446 |
\*evaluated with zero-shot learning (ZSL)
- **XNLI**: The stated accuracy refers to the test portion of the [XNLI dataset](https://huggingface.co/datasets/xnli), after finetuning the model on the training portion.
- **MLSUM**: For this accuracy we take the test set of the [MLSUM dataset](https://huggingface.co/datasets/mlsum) and classify the summaries of 5 selected labels. For details, check out our [evaluation notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/evaluation.ipynb)
## Training
Check out our [training notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/training.ipynb) for all the details.
## Authors
- David Fidalgo ([GitHub](https://github.com/dcfidalgo))
- Daniel Vila ([GitHub](https://github.com/dvsrepo))
- Francisco Aranda ([GitHub](https://github.com/frascuchon))
- Javier Lopez ([GitHub](https://github.com/javispp))
|
Recognai/bert-base-spanish-wwm-cased-xnli
|
Recognai
| 2023-07-14T22:22:51Z | 2,134 | 16 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"text-classification",
"zero-shot-classification",
"nli",
"es",
"dataset:xnli",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:04Z |
---
language: es
tags:
- zero-shot-classification
- nli
- pytorch
datasets:
- xnli
license: mit
pipeline_tag: zero-shot-classification
widget:
- text: "El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo"
candidate_labels: "cultura, sociedad, economia, salud, deportes"
---
# bert-base-spanish-wwm-cased-xnli
**UPDATE, 15.10.2021: Check out our new zero-shot classifiers, much more lightweight and even outperforming this one: [zero-shot SELECTRA small](https://huggingface.co/Recognai/zeroshot_selectra_small) and [zero-shot SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium).**
## Model description
This model is a fine-tuned version of the [spanish BERT model](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) with the Spanish portion of the XNLI dataset. You can have a look at the [training script](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli/blob/main/zeroshot_training_script.py) for details of the training.
### How to use
You can use this model with Hugging Face's [zero-shot-classification pipeline](https://discuss.huggingface.co/t/new-pipeline-for-zero-shot-text-classification/681):
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="Recognai/bert-base-spanish-wwm-cased-xnli")
classifier(
"El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
hypothesis_template="Este ejemplo es {}."
)
"""output
{'sequence': 'El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo',
'labels': ['cultura', 'sociedad', 'economia', 'salud', 'deportes'],
'scores': [0.38897448778152466,
0.22997373342514038,
0.1658431738615036,
0.1205764189362526,
0.09463217109441757]}
"""
```
## Eval results
Accuracy for the test set:
| | XNLI-es |
|-----------------------------|---------|
|bert-base-spanish-wwm-cased-xnli | 79.9% |
|
Recognai/distilbert-base-es-multilingual-cased
|
Recognai
| 2023-07-14T22:20:32Z | 352 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"es",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: es
license: apache-2.0
datasets:
- wikipedia
widget:
- text: "Mi nombre es Juan y vivo en [MASK]."
---
# DistilBERT base multilingual model Spanish subset (cased)
This model is the Spanish extract of `distilbert-base-multilingual-cased` (https://huggingface.co/distilbert-base-multilingual-cased), a distilled version of the [BERT base multilingual model](bert-base-multilingual-cased). This model is cased: it does make a difference between english and English.
It uses the extraction method proposed by Geotrend described in https://github.com/Geotrend-research/smaller-transformers.
The resulting model has the same architecture as DistilmBERT: 6 layers, a hidden size of 768 and 12 heads, with a total of **63M parameters** (compared to 134M parameters for DistilmBERT).
The goal of this model is to reduce even further the size of the `distilbert-base-multilingual` multilingual model by selecting only the most frequent Spanish tokens, which shrinks the embedding layer. For more details, see the paper from the Geotrend team: Load What You Need: Smaller Versions of Multilingual BERT.
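A minimal sketch of querying the model with the fill-mask pipeline (the example sentence is the widget text above):

```python
from transformers import pipeline

# Spanish-only extract of DistilmBERT, used as a masked language model.
fill_mask = pipeline("fill-mask", model="Recognai/distilbert-base-es-multilingual-cased")
fill_mask("Mi nombre es Juan y vivo en [MASK].")
```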
|
NasimB/gpt2-concat-simple-wiki-mod-rarity-no-cut
|
NasimB
| 2023-07-14T22:08:59Z | 140 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-14T20:28:50Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-simple-wiki-mod-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-simple-wiki-mod-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3543
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6838 | 0.29 | 500 | 5.6277 |
| 5.3231 | 0.59 | 1000 | 5.1994 |
| 4.987 | 0.88 | 1500 | 4.9572 |
| 4.7151 | 1.17 | 2000 | 4.8128 |
| 4.5647 | 1.47 | 2500 | 4.7004 |
| 4.4618 | 1.76 | 3000 | 4.6135 |
| 4.3426 | 2.06 | 3500 | 4.5400 |
| 4.1605 | 2.35 | 4000 | 4.4888 |
| 4.1305 | 2.64 | 4500 | 4.4288 |
| 4.0903 | 2.94 | 5000 | 4.3762 |
| 3.8797 | 3.23 | 5500 | 4.3722 |
| 3.83 | 3.52 | 6000 | 4.3423 |
| 3.8158 | 3.82 | 6500 | 4.3083 |
| 3.6986 | 4.11 | 7000 | 4.3079 |
| 3.5427 | 4.4 | 7500 | 4.3022 |
| 3.5399 | 4.7 | 8000 | 4.2835 |
| 3.5248 | 4.99 | 8500 | 4.2710 |
| 3.352 | 5.28 | 9000 | 4.2862 |
| 3.3468 | 5.58 | 9500 | 4.2856 |
| 3.3441 | 5.87 | 10000 | 4.2850 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Jowie/ppo-LunarLander
|
Jowie
| 2023-07-14T22:08:23Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-14T22:07:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 227.31 +/- 46.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
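A minimal sketch, assuming the checkpoint is stored under the usual `<algo>-<env>.zip` name and that a recent SB3/Gymnasium stack (with the Box2D extra installed) can load it:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption based on the usual "<algo>-<env>.zip" convention.
checkpoint = load_from_hub(repo_id="Jowie/ppo-LunarLander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded policy on a fresh environment.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```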
|
AACEE/pokemon-lora
|
AACEE
| 2023-07-14T21:57:11Z | 2 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-14T20:24:26Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - AACEE/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images in the following.




|
wolffenbuetell/PFKODRCHORMA
|
wolffenbuetell
| 2023-07-14T21:53:52Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-14T21:48:13Z |
---
license: creativeml-openrail-m
---
|
YanJiangJerry/covid-tweet-bert-large-e2-noweight
|
YanJiangJerry
| 2023-07-14T21:45:24Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-14T21:30:30Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: covid-tweet-bert-large-e2-noweight
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-tweet-bert-large-e2-noweight
This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2582
- Accuracy: 0.9568
- F1: 0.8878
- Precision: 0.8604
- Recall: 0.9170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0593 | 1.0 | 1023 | 0.2053 | 0.9581 | 0.8885 | 0.8810 | 0.8962 |
| 0.0146 | 2.0 | 2046 | 0.2582 | 0.9568 | 0.8878 | 0.8604 | 0.9170 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
0sunfire0/Cartpole-v1_train_01
|
0sunfire0
| 2023-07-14T21:31:24Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-14T21:31:15Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole-v1_train_01
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 497.20 +/- 8.40
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ronde1e/lll123
|
ronde1e
| 2023-07-14T21:22:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-14T21:17:57Z |
---
license: creativeml-openrail-m
---
|
NasimB/gpt2-concat-qed-rarity-no-cut
|
NasimB
| 2023-07-14T21:16:05Z | 140 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-14T19:12:29Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-qed-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-qed-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7002 | 0.29 | 500 | 5.6309 |
| 5.3451 | 0.58 | 1000 | 5.2082 |
| 5.0021 | 0.88 | 1500 | 4.9592 |
| 4.7266 | 1.17 | 2000 | 4.8110 |
| 4.5737 | 1.46 | 2500 | 4.6859 |
| 4.4727 | 1.75 | 3000 | 4.5796 |
| 4.3511 | 2.04 | 3500 | 4.5066 |
| 4.1544 | 2.34 | 4000 | 4.4568 |
| 4.1252 | 2.63 | 4500 | 4.3988 |
| 4.083 | 2.92 | 5000 | 4.3471 |
| 3.8825 | 3.21 | 5500 | 4.3454 |
| 3.8226 | 3.5 | 6000 | 4.3139 |
| 3.8118 | 3.8 | 6500 | 4.2766 |
| 3.7159 | 4.09 | 7000 | 4.2763 |
| 3.5383 | 4.38 | 7500 | 4.2702 |
| 3.5395 | 4.67 | 8000 | 4.2556 |
| 3.5257 | 4.96 | 8500 | 4.2454 |
| 3.3727 | 5.26 | 9000 | 4.2570 |
| 3.3469 | 5.55 | 9500 | 4.2567 |
| 3.3465 | 5.84 | 10000 | 4.2550 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Vladislav-HuggingFace/dqn-SpaceInvadersNoFrameskip-v4
|
Vladislav-HuggingFace
| 2023-07-14T20:52:43Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-14T20:52:04Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 654.50 +/- 195.29
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Vladislav-HuggingFace -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Vladislav-HuggingFace -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Vladislav-HuggingFace
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
YanJiangJerry/covid-augment-tweet-bert-large-e8-noweight
|
YanJiangJerry
| 2023-07-14T20:48:41Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-14T20:18:03Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: covid-augment-tweet-bert-large-e8-noweight
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-augment-tweet-bert-large-e8-noweight
This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2396
- Accuracy: 0.9714
- F1: 0.9249
- Precision: 0.9095
- Recall: 0.9409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 408 | 0.1663 | 0.9419 | 0.8609 | 0.78 | 0.9606 |
| 0.2202 | 2.0 | 816 | 0.1532 | 0.9594 | 0.8957 | 0.8630 | 0.9310 |
| 0.0794 | 3.0 | 1224 | 0.1745 | 0.9687 | 0.9167 | 0.9122 | 0.9212 |
| 0.0318 | 4.0 | 1632 | 0.1815 | 0.9696 | 0.9197 | 0.9087 | 0.9310 |
| 0.0098 | 5.0 | 2040 | 0.2013 | 0.9705 | 0.9227 | 0.9052 | 0.9409 |
| 0.0098 | 6.0 | 2448 | 0.2173 | 0.9733 | 0.9294 | 0.9183 | 0.9409 |
| 0.0031 | 7.0 | 2856 | 0.2324 | 0.9696 | 0.9189 | 0.9167 | 0.9212 |
| 0.0024 | 8.0 | 3264 | 0.2396 | 0.9714 | 0.9249 | 0.9095 | 0.9409 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
davidfisher/test_model
|
davidfisher
| 2023-07-14T20:40:19Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"en",
"dataset:imdb",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-14T15:39:56Z |
---
language: en
tags:
- text-classification
datasets:
- imdb
license: mit
---
# My Model
This is the description of my model.
## Usage
```python
from transformers import pipeline
model_path = "davidfisher/test_model" # update with the actual repository name
classifier = pipeline("text-classification", model=model_path)
classifier("This is an example of input text.")
```
## Limitations
This model could be improved.
## Ethical Considerations
Don't use this model for evil.
|
hseokool/vicuna-13b-v1.3-230623-10
|
hseokool
| 2023-07-14T20:35:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-14T20:35:31Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Rui31415/Taxi
|
Rui31415
| 2023-07-14T20:32:17Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-14T20:32:15Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Rui31415/Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
avishek-018/bert-semantic-similarity
|
avishek-018
| 2023-07-14T20:22:22Z | 6 | 1 |
tf-keras
|
[
"tf-keras",
"sentence-similarity",
"en",
"license:mit",
"region:us"
] |
sentence-similarity
| 2023-07-14T19:41:59Z |
---
license: mit
language:
- en
pipeline_tag: sentence-similarity
widget:
  - source_sentence: Two women are observing something together.
    sentences:
      - Two women are standing with their eyes closed.
    example_title: Example 1
  - source_sentence: A smiling costumed woman is holding an umbrella
    sentences:
      - A happy woman in a fairy costume holds an umbrella
    example_title: Example 2
  - source_sentence: A soccer game with multiple males playing
    sentences:
      - Some men are playing a sport
    example_title: Example 3
---
|
davej23/distilhubert-finetuned-gtzan
|
davej23
| 2023-07-14T20:20:33Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-14T18:19:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4577
- Accuracy: 0.86
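A minimal inference sketch, assuming the standard Transformers audio-classification pipeline (the clip path is a placeholder):
```python
from transformers import pipeline

# Placeholder audio path; any GTZAN-style music clip should work.
classifier = pipeline("audio-classification", model="davej23/distilhubert-finetuned-gtzan")
print(classifier("example_clip.wav"))
```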
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8254 | 1.0 | 113 | 1.8353 | 0.48 |
| 1.2492 | 2.0 | 226 | 1.4297 | 0.57 |
| 1.0203 | 3.0 | 339 | 0.9814 | 0.69 |
| 0.633 | 4.0 | 452 | 0.7345 | 0.83 |
| 0.5642 | 5.0 | 565 | 0.6213 | 0.8 |
| 0.3219 | 6.0 | 678 | 0.5763 | 0.84 |
| 0.1772 | 7.0 | 791 | 0.4850 | 0.86 |
| 0.2427 | 8.0 | 904 | 0.4841 | 0.86 |
| 0.1397 | 9.0 | 1017 | 0.4760 | 0.86 |
| 0.4494 | 10.0 | 1130 | 0.4577 | 0.86 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
rachidsaid/videomae-base-finetuned-ucf101-subset
|
rachidsaid
| 2023-07-14T20:17:04Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-07-01T18:29:21Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4355
- Accuracy: 0.8516
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 148
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1351 | 0.26 | 38 | 1.6582 | 0.6286 |
| 0.7409 | 1.26 | 76 | 0.8407 | 0.7143 |
| 0.4333 | 2.26 | 114 | 0.5107 | 0.8143 |
| 0.2766 | 3.23 | 148 | 0.3579 | 0.9143 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MicroPanda123/PythonBasic
|
MicroPanda123
| 2023-07-14T20:15:25Z | 4 | 0 | null |
[
"text-generation",
"license:gpl-2.0",
"region:us"
] |
text-generation
| 2023-07-14T13:25:41Z |
---
license: gpl-2.0
pipeline_tag: text-generation
---
Got bored, so I used [nanoGPT](https://github.com/karpathy/nanoGPT) to train a model on all Python snippets from https://www.kaggle.com/datasets/simiotic/github-code-snippets.
The model was trained with the default train.py settings, except:
```
eval_intervals=20
eval_iters=40
batch_size=2
gradient_accumulation_steps = 64
```
This was because I was training it locally on an RTX 2060 and did not have enough compute to train it with higher settings.
The model is stored in the "model" folder, which contains the model itself and an "info.txt" file containing:
- iter_num - number of iterations
- train_loss - training loss at time of checkpoint
- val_loss - validation loss at time of checkpoint
- config - nanoGPT config
At first I made it save the model only when validation loss improved, to avoid overfitting, but after some time I decided to risk it, turned that off, and allowed it to save every time; luckily it worked out fine.
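A rough loading sketch, assuming the checkpoint follows nanoGPT's default `ckpt.pt` layout (the exact filename inside the "model" folder and the presence of nanoGPT's `model.py` on the path are assumptions):
```python
import torch
from model import GPT, GPTConfig  # nanoGPT's model.py must be importable

ckpt = torch.load("model/ckpt.pt", map_location="cpu")  # filename is an assumption
gptconf = GPTConfig(**ckpt["model_args"])
gpt = GPT(gptconf)

# Checkpoints saved after torch.compile carry an "_orig_mod." prefix on every key.
state_dict = {k.removeprefix("_orig_mod."): v for k, v in ckpt["model"].items()}
gpt.load_state_dict(state_dict)
gpt.eval()
```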
|
roilhi/ppo-LunarLander-v2
|
roilhi
| 2023-07-14T20:08:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-14T20:07:54Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 286.00 +/- 24.34
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
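Until the official snippet is added, a minimal loading sketch (the checkpoint filename is an assumption; check the repo's file listing for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; adjust it to the .zip actually stored in the repo.
checkpoint = load_from_hub(repo_id="roilhi/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```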
|
akifhasan/sabbur-protogenx3-4
|
akifhasan
| 2023-07-14T20:00:14Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-14T19:55:21Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### sabbur_protogenx3.4 Dreambooth model trained by akifhasan with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
felflare/EasyOCR-weights
|
felflare
| 2023-07-14T19:57:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-03-29T17:40:39Z |
## Port of EasyOCR weights from Jaided AI model Hub
These weights are from Gen 2 of the EasyOCR weights.
**Original weights can be found here - [Jaided AI Model Hub](https://www.jaided.ai/easyocr/modelhub/)**
Licensed under [Jaided AI license terms](https://github.com/JaidedAI/EasyOCR/blob/master/LICENSE), this is only a port of the weights to the Hugging Face model repository for ease of access.
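A minimal usage sketch with the standard `easyocr` API (the weights directory and image path are assumptions; by default EasyOCR downloads the Gen 2 weights itself on first use):
```python
import easyocr

# Point model_storage_directory at a folder containing these weights, or omit it
# to let EasyOCR download them automatically.
reader = easyocr.Reader(["en"], model_storage_directory="weights/")
for bbox, text, confidence in reader.readtext("example.jpg"):
    print(text, confidence)
```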
|
Rui31415/q-FrozenLake-v1-4x4-noSlippery
|
Rui31415
| 2023-07-14T19:50:52Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-14T19:50:49Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Rui31415/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
chaojiang06/arXivEdits-intention-classifier-T5-large-fine-grained
|
chaojiang06
| 2023-07-14T19:41:54Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"arxiv:2210.15067",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-14T18:59:20Z |
---
tags:
- generated_from_trainer
model-index:
- name: arXivEdits-intention-classifier-T5-large-fine-grained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Checkpoints for [arXivEdits paper](https://arxiv.org/pdf/2210.15067.pdf). Please see more details at the [github repo](https://github.com/chaojiang06/arXivEdits/tree/main).
# arXivEdits-intention-classifier-T5-large-fine-grained
This model is a fine-tuned version of [tmp/tst-translation355](https://huggingface.co/tmp/tst-translation355) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.11.6
|
chaojiang06/arXivEdits-intention-classifier-T5-base-fine-grained
|
chaojiang06
| 2023-07-14T19:40:52Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"arxiv:2210.15067",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-14T19:12:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: arXivEdits-intention-classifier-T5-base-fine-grained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Checkpoints for [arXivEdits paper](https://arxiv.org/pdf/2210.15067.pdf). Please see more details at the [github repo](https://github.com/chaojiang06/arXivEdits/tree/main).
# arXivEdits-intention-classifier-T5-base-fine-grained
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1457
- Accuracy: 0.6826
## Model description
More information needed
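For quick experimentation, a minimal inference sketch assuming the standard Transformers seq2seq API; the input text below is hypothetical, since this card does not document the expected input format (see the github repo for the real usage):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "chaojiang06/arXivEdits-intention-classifier-T5-base-fine-grained"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical edit pair; the intention label is generated as text by the seq2seq head.
text = "old: We propose a new method. new: We propose a novel method."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```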
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 105 | 0.3043 | 0.2991 |
| No log | 2.0 | 210 | 0.2653 | 0.3311 |
| No log | 3.0 | 315 | 0.2475 | 0.4726 |
| No log | 4.0 | 420 | 0.1737 | 0.6096 |
| 0.5112 | 5.0 | 525 | 0.1660 | 0.6256 |
| 0.5112 | 6.0 | 630 | 0.1499 | 0.6575 |
| 0.5112 | 7.0 | 735 | 0.1497 | 0.6438 |
| 0.5112 | 8.0 | 840 | 0.1457 | 0.6826 |
| 0.5112 | 9.0 | 945 | 0.1470 | 0.6781 |
| 0.151 | 10.0 | 1050 | 0.1428 | 0.6781 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.11.6
|
YanJiangJerry/SA-berttweet-large-e6-w2-1-b16-w0.01
|
YanJiangJerry
| 2023-07-14T19:35:33Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-14T18:56:29Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA-berttweet-large-e6-w2-1-b16-w0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SA-berttweet-large-e6-w2-1-b16-w0.01
This model is a fine-tuned version of [vinai/bertweet-large](https://huggingface.co/vinai/bertweet-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4510
- Accuracy: 0.935
- F1: 0.9423
- Precision: 0.9432
- Recall: 0.9415
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 285 | 0.2599 | 0.871 | 0.8714 | 0.9954 | 0.7748 |
| 0.3039 | 2.0 | 570 | 0.2502 | 0.929 | 0.9371 | 0.9363 | 0.9379 |
| 0.3039 | 3.0 | 855 | 0.4228 | 0.923 | 0.9331 | 0.9148 | 0.9521 |
| 0.1246 | 4.0 | 1140 | 0.4102 | 0.934 | 0.9414 | 0.9431 | 0.9397 |
| 0.1246 | 5.0 | 1425 | 0.4532 | 0.933 | 0.9407 | 0.9398 | 0.9415 |
| 0.0379 | 6.0 | 1710 | 0.4510 | 0.935 | 0.9423 | 0.9432 | 0.9415 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
janimo/taxiv3
|
janimo
| 2023-07-14T19:24:28Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-14T19:24:25Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxiv3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="janimo/taxiv3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
w601sxs/pythia-70m-instruct-orca-chkpt-64000
|
w601sxs
| 2023-07-14T19:16:16Z | 171 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"dataset:Open-Orca/OpenOrca",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-14T18:39:56Z |
---
datasets:
- Open-Orca/OpenOrca
---
To use, do:
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the frozen base model and wrap it with the LoRA adapter.
ref_model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m-deduped-v0", torch_dtype=torch.bfloat16)
peft_model_id = "w601sxs/pythia-70m-instruct-orca-chkpt-64000"
config = PeftConfig.from_pretrained(peft_model_id)
model = PeftModel.from_pretrained(ref_model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

model = model.to('cuda:0')
model.eval()

# Build the prompt using the format shown below; this one is just an example.
prompt = "context: <You are a helpful assistant.>\nquestion: <What is 2 + 2?>\nanswer: <"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"), max_new_tokens=10)

print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0])
```
### Prompt format
```
context: < ... >
question: < ... >
answer: < ... >
```
For e.g.
```
context: <You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.>
question: <Here is some data: The Rice Boat eatType restaurant; The Rice Boat food Fast food; The Rice Boat familyFriendly yes; The Rice Boat near Express by Holiday Inn.
Write a sentence that describes this data:>
answer: <
```
|
tanmoy-in/base_model
|
tanmoy-in
| 2023-07-14T19:15:39Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:finetune:facebook/opt-350m",
"license:other",
"region:us"
] | null | 2023-07-14T19:02:32Z |
---
license: other
base_model: facebook/opt-350m
tags:
- generated_from_trainer
model-index:
- name: base_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base_model
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 5
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Leon68/opt-6.7b-lora
|
Leon68
| 2023-07-14T19:03:58Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-14T19:03:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a code sketch reconstructing it follows the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
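A hedged sketch reconstructing the same quantization setup with `transformers.BitsAndBytesConfig`; the `facebook/opt-6.7b` base model is an assumption inferred from the adapter name:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the 8-bit settings listed above; the 4-bit fields are left at their
# defaults because load_in_4bit was False during training.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
```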
### Framework versions
- PEFT 0.4.0.dev0
|
YanJiangJerry/SA-roberta-e3-w2-1-b16-w0.01-data2
|
YanJiangJerry
| 2023-07-14T18:53:37Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-14T18:22:38Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA-roberta-e3-w2-1-b16-w0.01-data2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SA-roberta-e3-w2-1-b16-w0.01-data2
This model is a fine-tuned version of [Amalq/autotrain-smm4h_large_roberta_clean-874027878](https://huggingface.co/Amalq/autotrain-smm4h_large_roberta_clean-874027878) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5272
- Accuracy: 0.9032
- F1: 0.8664
- Precision: 0.8924
- Recall: 0.8418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2717 | 1.0 | 581 | 0.3400 | 0.9132 | 0.8811 | 0.9003 | 0.8627 |
| 0.1102 | 2.0 | 1162 | 0.5082 | 0.9021 | 0.8706 | 0.8580 | 0.8836 |
| 0.0525 | 3.0 | 1743 | 0.5272 | 0.9032 | 0.8664 | 0.8924 | 0.8418 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|