| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-21 12:34:09) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (568 string classes) | tags (list, 1 – 4.05k items) | pipeline_tag (55 string classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-21 12:33:58) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
jeongyeom/xlm-roberta-base-finetuned-panx-de | jeongyeom | 2024-01-12T06:17:42Z | 100 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2024-01-12T05:45:37Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1342
- F1: 0.8637
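The card does not include an inference snippet; a minimal sketch using the `transformers` pipeline (the checkpoint id is taken from this repository, and the example sentence is illustrative only) might look like this:
```python
from transformers import pipeline

# Named-entity tagging with the fine-tuned XLM-R checkpoint
ner = pipeline(
    "token-classification",
    model="jeongyeom/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```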
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2607 | 1.0 | 525 | 0.1529 | 0.8156 |
| 0.1265 | 2.0 | 1050 | 0.1445 | 0.8487 |
| 0.0838 | 3.0 | 1575 | 0.1342 | 0.8637 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
MaziyarPanahi/japanese-stablelm-base-gamma-7b-Mistral-7B-Instruct-v0.2-slerp | MaziyarPanahi | 2024-01-12T06:13:57Z | 24 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "stabilityai/japanese-stablelm-base-gamma-7b", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-12T06:08:59Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- stabilityai/japanese-stablelm-base-gamma-7b
---
# japanese-stablelm-base-gamma-7b-Mistral-7B-Instruct-v0.2-slerp
japanese-stablelm-base-gamma-7b-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [stabilityai/japanese-stablelm-base-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: stabilityai/japanese-stablelm-base-gamma-7b
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/japanese-stablelm-base-gamma-7b-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
6MyDuck69/ppo-LunarLander-v2 | 6MyDuck69 | 2024-01-12T05:38:57Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2024-01-11T05:06:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -79.99 +/- 14.62
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual huggingface_sb3 naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3 (filename assumed)
checkpoint = load_from_hub("6MyDuck69/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
letingliu/holder_type2 | letingliu | 2024-01-12T05:34:15Z | 50 | 0 | transformers | ["transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-01-12T05:28:35Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: letingliu/holder_type2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# letingliu/holder_type2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4652
- Validation Loss: 0.4554
- Train Accuracy: 0.9333
- Epoch: 19
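No inference example is provided; a minimal sketch (this checkpoint ships TensorFlow weights, per the `tf` tag, and the label names come from the model config; the input sentence is illustrative only) might be:
```python
from transformers import pipeline

# Text classification with the TensorFlow checkpoint
classifier = pipeline("text-classification", model="letingliu/holder_type2", framework="tf")
print(classifier("Example sentence to classify."))
```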
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6797 | 0.6563 | 0.8583 | 0 |
| 0.6380 | 0.5999 | 0.8833 | 1 |
| 0.5750 | 0.5293 | 0.9 | 2 |
| 0.5168 | 0.4771 | 0.925 | 3 |
| 0.4718 | 0.4554 | 0.9333 | 4 |
| 0.4703 | 0.4554 | 0.9333 | 5 |
| 0.4732 | 0.4554 | 0.9333 | 6 |
| 0.4659 | 0.4554 | 0.9333 | 7 |
| 0.4621 | 0.4554 | 0.9333 | 8 |
| 0.4751 | 0.4554 | 0.9333 | 9 |
| 0.4686 | 0.4554 | 0.9333 | 10 |
| 0.4647 | 0.4554 | 0.9333 | 11 |
| 0.4735 | 0.4554 | 0.9333 | 12 |
| 0.4699 | 0.4554 | 0.9333 | 13 |
| 0.4719 | 0.4554 | 0.9333 | 14 |
| 0.4701 | 0.4554 | 0.9333 | 15 |
| 0.4672 | 0.4554 | 0.9333 | 16 |
| 0.4561 | 0.4554 | 0.9333 | 17 |
| 0.4717 | 0.4554 | 0.9333 | 18 |
| 0.4652 | 0.4554 | 0.9333 | 19 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
TitanTec/SpaceInvadersNoFrameskip-v4-T2 | TitanTec | 2024-01-12T05:27:17Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2024-01-12T05:26:46Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 334.50 +/- 159.73
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga TitanTec -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga TitanTec -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga TitanTec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 200000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 1e-05),
('learning_starts', 100000),
('n_timesteps', 200000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('replay_buffer_kwargs', {'handle_timeout_termination': False}),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
MaziyarPanahi/LeoScorpius-7B-Chat-DPO-Mistral-7B-Instruct-v0.2-slerp | MaziyarPanahi | 2024-01-12T05:23:52Z | 25 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "viethq188/LeoScorpius-7B-Chat-DPO", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-12T05:18:43Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- viethq188/LeoScorpius-7B-Chat-DPO
---
# LeoScorpius-7B-Chat-DPO-Mistral-7B-Instruct-v0.2-slerp
LeoScorpius-7B-Chat-DPO-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [viethq188/LeoScorpius-7B-Chat-DPO](https://huggingface.co/viethq188/LeoScorpius-7B-Chat-DPO)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: viethq188/LeoScorpius-7B-Chat-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/LeoScorpius-7B-Chat-DPO-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
hanseokOh/smartPatent-mContriever-lora | hanseokOh | 2024-01-12T05:16:51Z | 4 | 0 | peft | ["peft", "ko", "base_model:facebook/mcontriever-msmarco", "base_model:adapter:facebook/mcontriever-msmarco", "region:us"] | null | 2024-01-12T04:48:00Z |
---
library_name: peft
base_model: facebook/mcontriever-msmarco
language:
- ko
---
# smartPatent-mContriever-lora
The model is fine-tuned for a custom Korean patent retrieval system.
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Two types of data are used for training: queries automatically generated with GPT-4, and patent titles linked to existing patent abstracts.
### Usage
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
```python
import torch
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel, PeftConfig

def get_model(peft_model_name):
    # Load the LoRA adapter config, attach it to the base encoder, and merge the weights
    config = PeftConfig.from_pretrained(peft_model_name)
    base_model = AutoModel.from_pretrained(config.base_model_name_or_path)
    model = PeftModel.from_pretrained(base_model, peft_model_name)
    model = model.merge_and_unload()
    model.eval()
    return model

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('facebook/mcontriever-msmarco')
model = get_model('hanseokOh/smartPatent-mContriever-lora')
```
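The snippet above stops after loading; a hedged sketch of encoding a query with the merged encoder, assuming the standard Contriever mean-pooling convention (the query text is illustrative only):
```python
def mean_pooling(token_embeddings, mask):
    # Average token embeddings over non-padding positions (standard Contriever pooling)
    token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.0)
    return token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None]

inputs = tokenizer(["특허 검색 예시 질의"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
query_embedding = mean_pooling(outputs.last_hidden_state, inputs["attention_mask"])
```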
### Info
- **Developed by:** hanseokOh
- **Model type:** information retriever
- **Language(s) (NLP):** Korean
- **Finetuned from model [optional]:** mContriever-msmarco
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/hanseokOh/PatentSearch
|
MaziyarPanahi/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85-Mistral-7B-Instruct-v0.2-slerp | MaziyarPanahi | 2024-01-12T05:14:35Z | 24 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-12T05:09:27Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85
---
# speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85-Mistral-7B-Instruct-v0.2-slerp
speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85](https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/speechless-mistral-dolphin-orca-platypus-samantha-7b-dare-0.85-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
zzvvmm/mms-tts-vie | zzvvmm | 2024-01-12T05:07:02Z | 78 | 0 | transformers | ["transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | text-to-speech | 2024-01-11T09:44:18Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Vietnamese Text-to-Speech
This repository contains the **Vietnamese (vie)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-vie")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-vie")
text = "some example text in the Vietnamese language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy.io.wavfile

# scipy expects a NumPy array, so drop the batch dimension and convert the torch tensor
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().cpu().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
MaziyarPanahi/smartyplats-7b-v2-Mistral-7B-Instruct-v0.2-slerp | MaziyarPanahi | 2024-01-12T05:03:34Z | 25 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "vihangd/smartyplats-7b-v2", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-12T04:58:39Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- vihangd/smartyplats-7b-v2
---
# smartyplats-7b-v2-Mistral-7B-Instruct-v0.2-slerp
smartyplats-7b-v2-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [vihangd/smartyplats-7b-v2](https://huggingface.co/vihangd/smartyplats-7b-v2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: vihangd/smartyplats-7b-v2
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/smartyplats-7b-v2-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
DaanishHindustani/Q-A_chat_bot | DaanishHindustani | 2024-01-12T04:57:02Z | 48 | 0 | transformers | ["transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2024-01-12T00:46:59Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: DaanishHindustani/Q-A_chat_bot
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# DaanishHindustani/Q-A_chat_bot
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7988
- Validation Loss: 1.7635
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.4141 | 2.1035 | 0 |
| 1.7988 | 1.7635 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
MaziyarPanahi/zephyr-7b-alpha-dare-0.85-Mistral-7B-Instruct-v0.2-slerp | MaziyarPanahi | 2024-01-12T04:54:48Z | 24 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "uukuguy/zephyr-7b-alpha-dare-0.85", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-12T04:49:53Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- uukuguy/zephyr-7b-alpha-dare-0.85
---
# zephyr-7b-alpha-dare-0.85-Mistral-7B-Instruct-v0.2-slerp
zephyr-7b-alpha-dare-0.85-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [uukuguy/zephyr-7b-alpha-dare-0.85](https://huggingface.co/uukuguy/zephyr-7b-alpha-dare-0.85)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: uukuguy/zephyr-7b-alpha-dare-0.85
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/zephyr-7b-alpha-dare-0.85-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
BanUrsus/marian-finetuned-kde4-en-to-de-translator_nlp-course-chapter7-section3 | BanUrsus | 2024-01-12T04:50:08Z | 124 | 0 | transformers | ["transformers", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-de", "base_model:finetune:Helsinki-NLP/opus-mt-en-de", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2024-01-12T01:43:36Z |
---
license: cc-by-4.0
base_model: Helsinki-NLP/opus-mt-en-de
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-de-translator_nlp-course-chapter7-section3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: de-en
split: train
args: de-en
metrics:
- name: Bleu
type: bleu
value: 35.64235445610118
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-de-translator_nlp-course-chapter7-section3
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2112
- Bleu: 35.6424
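The card omits an inference snippet; a minimal sketch with the translation pipeline (the example sentence is illustrative only):
```python
from transformers import pipeline

# English -> German translation with the fine-tuned Marian checkpoint
translator = pipeline(
    "translation",
    model="BanUrsus/marian-finetuned-kde4-en-to-de-translator_nlp-course-chapter7-section3",
)
print(translator("Default to expanded threads"))
```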
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 1.11.0+cu102
- Datasets 2.15.0
- Tokenizers 0.15.0
|
g8nz/stable-diffusion-x4-upscaler | g8nz | 2024-01-12T04:45:48Z | 2 | 0 | diffusers | ["diffusers", "safetensors", "stable-diffusion", "arxiv:2112.10752", "arxiv:2202.00512", "arxiv:1910.09700", "license:openrail++", "diffusers:StableDiffusionUpscalePipeline", "region:us"] | null | 2024-01-12T04:10:46Z |
---
license: openrail++
tags:
- stable-diffusion
inference: false
---
# Stable Diffusion x4 upscaler model card
This model card focuses on the model associated with the Stable Diffusion Upscaler, available [here](https://github.com/Stability-AI/stablediffusion).
This model is trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752).
In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml).

- Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `x4-upscaler-ema.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler/resolve/main/x4-upscaler-ema.ckpt).
- Use it with 🧨 [`diffusers`](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler#examples)
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
- **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/).
- **Cite as:**
```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```
## Examples
Use the [🤗 Diffusers library](https://github.com/huggingface/diffusers) to run the Stable Diffusion 2 upscaler in a simple and efficient manner.
```bash
pip install diffusers transformers accelerate scipy safetensors
```
```python
import requests
from PIL import Image
from io import BytesIO
from diffusers import StableDiffusionUpscalePipeline
import torch
# load model and scheduler
model_id = "stabilityai/stable-diffusion-x4-upscaler"
pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")
# let's download an image
url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png"
response = requests.get(url)
low_res_img = Image.open(BytesIO(response.content)).convert("RGB")
low_res_img = low_res_img.resize((128, 128))
prompt = "a white cat"
upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0]
upscaled_image.save("upsampled_cat.png")
```
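The `noise_level` input mentioned above is not shown in the example; it can be passed directly to the pipeline call (20 is the documented default), for instance:
```python
# Add more noise to the low-resolution conditioning image before upscaling
upscaled_image = pipeline(prompt=prompt, image=low_res_img, noise_level=20).images[0]
```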
**Notes**:
- Although it is not a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance).
- If you have limited GPU RAM, add `pipeline.enable_attention_slicing()` after moving the pipeline to `cuda` to reduce VRAM usage (at the cost of speed).
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section was originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini) and used for Stable Diffusion v1; it applies in the same way to Stable Diffusion v2_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a subset of the large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section).
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v2 was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic.
**Training Procedure**
Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through the OpenCLIP-ViT/H text-encoder.
- The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512.
We currently provide the following checkpoints:
- `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`.
850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`.
- `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset.
- `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning.
The additional input channels of the U-Net which process this extra information were zero-initialized.
- `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning.
The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://github.com/saic-mdal/lama).
- `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752).
In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml).
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 1
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 steps DDIM sampling steps show the relative improvements of the checkpoints:

Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 200000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```
*This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
ryusangwon/3118_Llama-2-13b-hf | ryusangwon | 2024-01-12T04:32:54Z | 1 | 0 | peft | ["peft", "safetensors", "generated_from_trainer", "dataset:cnn_dailymail", "base_model:meta-llama/Llama-2-13b-hf", "base_model:adapter:meta-llama/Llama-2-13b-hf", "region:us"] | null | 2024-01-12T04:32:46Z |
---
base_model: meta-llama/Llama-2-13b-hf
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: 3118_Llama-2-13b-hf
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 3118_Llama-2-13b-hf
This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on the cnn_dailymail dataset.
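The card does not show how to load the adapter; a minimal sketch with PEFT (access to the gated Llama-2 base weights is assumed):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter from this repository
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ryusangwon/3118_Llama-2-13b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")
```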
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
bsmsultani/lunerlander | bsmsultani | 2024-01-12T04:25:46Z | 3 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2024-01-11T03:45:44Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.30 +/- 19.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual huggingface_sb3 naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3 (filename assumed)
checkpoint = load_from_hub("bsmsultani/lunerlander", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
MaziyarPanahi/speechless-code-mistral-7b-v2.0-Mistral-7B-Instruct-v0.2-slerp | MaziyarPanahi | 2024-01-12T04:19:15Z | 23 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "uukuguy/speechless-code-mistral-7b-v2.0", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-12T04:14:17Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- uukuguy/speechless-code-mistral-7b-v2.0
---
# speechless-code-mistral-7b-v2.0-Mistral-7B-Instruct-v0.2-slerp
speechless-code-mistral-7b-v2.0-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [uukuguy/speechless-code-mistral-7b-v2.0](https://huggingface.co/uukuguy/speechless-code-mistral-7b-v2.0)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: uukuguy/speechless-code-mistral-7b-v2.0
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/speechless-code-mistral-7b-v2.0-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
gustavokpc/IC_quarto | gustavokpc | 2024-01-12T04:14:18Z | 46 | 0 | transformers | ["transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-10-21T15:20:31Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: gustavokpc/IC_quarto
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gustavokpc/IC_quarto
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1061
- Train Accuracy: 0.9634
- Train F1 M: 0.5432
- Train Precision M: 0.3978
- Train Recall M: 0.9159
- Validation Loss: 0.2101
- Validation Accuracy: 0.9235
- Validation F1 M: 0.5596
- Validation Precision M: 0.4070
- Validation Recall M: 0.9389
- Epoch: 2
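No inference example is included; a minimal sketch with the TensorFlow weights (the input sentence is illustrative only):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gustavokpc/IC_quarto")
model = TFAutoModelForSequenceClassification.from_pretrained("gustavokpc/IC_quarto")

# Tokenize, run the model, and turn logits into class probabilities
inputs = tokenizer("Example sentence to score.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```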
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2274, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train F1 M | Train Precision M | Train Recall M | Validation Loss | Validation Accuracy | Validation F1 M | Validation Precision M | Validation Recall M | Epoch |
|:----------:|:--------------:|:----------:|:-----------------:|:--------------:|:---------------:|:-------------------:|:---------------:|:----------------------:|:-------------------:|:-----:|
| 0.3469 | 0.8494 | 0.4233 | 0.3470 | 0.6216 | 0.2535 | 0.8945 | 0.5613 | 0.4145 | 0.9125 | 0 |
| 0.1742 | 0.9335 | 0.5237 | 0.3895 | 0.8572 | 0.2315 | 0.9017 | 0.5765 | 0.4256 | 0.9353 | 1 |
| 0.1061 | 0.9634 | 0.5432 | 0.3978 | 0.9159 | 0.2101 | 0.9235 | 0.5596 | 0.4070 | 0.9389 | 2 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.5
- Tokenizers 0.14.1
|
akashmaggon/bert-base-uncased-machinehackathon | akashmaggon | 2024-01-12T04:00:25Z | 1 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:distilbert/distilbert-base-uncased", "base_model:adapter:distilbert/distilbert-base-uncased", "region:us"] | null | 2024-01-12T03:41:09Z |
---
library_name: peft
base_model: distilbert-base-uncased
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
MaziyarPanahi/Mini_synatra_7b_02-Mistral-7B-Instruct-v0.2-slerp | MaziyarPanahi | 2024-01-12T04:00:06Z | 22 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Minirecord/Mini_synatra_7b_02", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-12T03:55:24Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- Minirecord/Mini_synatra_7b_02
---
# Mini_synatra_7b_02-Mistral-7B-Instruct-v0.2-slerp
Mini_synatra_7b_02-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [Minirecord/Mini_synatra_7b_02](https://huggingface.co/Minirecord/Mini_synatra_7b_02)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: Minirecord/Mini_synatra_7b_02
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Mini_synatra_7b_02-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
amd/rcan
|
amd
| 2024-01-12T03:54:30Z | 0 | 0 | null |
[
"onnx",
"RyzenAI",
"Super Resolution",
"Pytorch",
"Vision",
"SISR",
"en",
"dataset:Set5",
"dataset:Div2k",
"arxiv:1807.02758",
"license:apache-2.0",
"region:us"
] | null | 2023-12-04T16:30:53Z |
---
license: apache-2.0
tags:
- RyzenAI
- Super Resolution
- Pytorch
- Vision
- SISR
datasets:
- Set5
- Div2k
language:
- en
metrics:
- PSNR
---
# RCAN model trained on DIV2K
RCAN is a very deep residual channel attention network for super resolution trained on DIV2K. It was introduced in 2018 in the paper [Image Super-Resolution Using Very Deep Residual Channel Attention Networks](https://arxiv.org/abs/1807.02758) by Yulun Zhang et al. and first released in [this repository](https://github.com/yulunzhang/RCAN).
We developed a modified version that is supported by [AMD Ryzen AI](https://ryzenai.docs.amd.com).
## Model description
RCAN is an advanced algorithm for single image super resolution. Our modified version is smaller than the original. It is based on deep learning techniques and is capable of X2 super resolution.
## Intended uses & limitations
You can use the raw model for super resolution. See the [model hub](https://huggingface.co/models?sort=trending&search=amd%2Frcan) to look for all available RCAN models.
## How to use
### Installation
Follow [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) to prepare the environment for Ryzen AI.
Run the following script to install pre-requisites for this model.
```bash
pip install -r requirements.txt
```
### Data Preparation (optional: for accuracy evaluation)
1. Download the [benchmark](https://cv.snu.ac.kr/research/EDSR/benchmark.tar) dataset.
2. Organize the dataset directory as follows:
```Plain
└── dataset
└── benchmark
├── Set5
├── HR
| ├── baby.png
| ├── ...
└── LR_bicubic
└──X2
├──babyx2.png
├── ...
├── Set14
├── ...
```
### Test & Evaluation
- Code snippet from [`infer_onnx.py`](infer_onnx.py) on how to use
```python
parser = argparse.ArgumentParser(description='RCAN SISR')
parser.add_argument('--onnx_path', type=str, default='RCAN_int8_NHWC.onnx',
help='onnx path')
parser.add_argument('--image_path', default='test_data/test.png',
help='path of your image')
parser.add_argument('--output_path', default='test_data/sr.png',
help='path of your image')
parser.add_argument('--ipu', action='store_true',
help='use ipu')
parser.add_argument('--provider_config', type=str, default=None,
help='provider config path')
args = parser.parse_args()
if args.ipu:
providers = ["VitisAIExecutionProvider"]
provider_options = [{"config_file": args.provider_config}]
else:
providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
provider_options = None
onnx_file_name = args.onnx_path
image_path = args.image_path
output_path = args.output_path
ort_session = onnxruntime.InferenceSession(onnx_file_name, providers=providers, provider_options=provider_options)
lr = cv2.imread(image_path)[np.newaxis,:,:,:].transpose((0,3,1,2)).astype(np.float32)
sr = tiling_inference(ort_session, lr, 8, (56, 56))
sr = np.clip(sr, 0, 255)
sr = sr.squeeze().transpose((1,2,0)).astype(np.uint8)
sr = cv2.imwrite(output_path, sr)
```
- Run inference for a single image
```bash
python infer_onnx.py --onnx_path RCAN_int8_NHWC.onnx --image_path /Path/To/Your/Image --ipu --provider_config Path/To/vaip_config.json
```
- Test accuracy of the quantized model
```bash
python eval_onnx.py --onnx_path RCAN_int8_NHWC.onnx --data_test Set5 --ipu --provider_config Path/To/vaip_config.json
```
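The inference snippet above calls a `tiling_inference` helper that is not shown in this card. The sketch below only illustrates the general tiled-inference idea (run overlapping low-resolution patches through the ONNX Runtime session and stitch the upscaled outputs); the function name, arguments, and the simple overwrite-based stitching are assumptions and do not reproduce the repository's exact implementation.
```python
import numpy as np

def tiling_inference_sketch(session, lr, overlap=8, patch_size=(56, 56), scale=2):
    """Illustrative tiled super-resolution: upscale `lr` (N, C, H, W) patch by patch.

    Assumes the input is at least one patch in size; a real implementation
    would typically blend overlapping regions instead of overwriting them.
    """
    _, _, h, w = lr.shape
    ph, pw = patch_size
    input_name = session.get_inputs()[0].name
    out = np.zeros((lr.shape[0], lr.shape[1], h * scale, w * scale), dtype=np.float32)
    for top in range(0, h, ph - overlap):
        for left in range(0, w, pw - overlap):
            bottom, right = min(top + ph, h), min(left + pw, w)
            # Take a full-size window ending at (bottom, right) so edge tiles stay ph x pw.
            patch = lr[:, :, bottom - ph:bottom, right - pw:right]
            sr_patch = session.run(None, {input_name: patch})[0]
            out[:, :, (bottom - ph) * scale:bottom * scale,
                      (right - pw) * scale:right * scale] = sr_patch
    return out
```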
### Performance
| Method | Scale | FLOPs | Set5 (PSNR / SSIM) |
|------------|-------|-------|--------------|
|RCAN-S (float) |X2 |24.5G |37.531 / 0.958|
|RCAN-S (INT8) |X2 |24.5G |37.150 / 0.955|
- Note: FLOPs are calculated for an output resolution of 360x640.
### Citation
```bibtex
@inproceedings{zhang2018image,
title={Image super-resolution using very deep residual channel attention networks},
author={Zhang, Yulun and Li, Kunpeng and Li, Kai and Wang, Lichen and Zhong, Bineng and Fu, Yun},
booktitle={Proceedings of the European conference on computer vision (ECCV)},
pages={286--301},
year={2018}
}
```
|
liuyuweitarek/all-MiniLM-L12-neo-300
|
liuyuweitarek
| 2024-01-12T03:52:58Z | 46 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2024-01-11T10:19:31Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# liuyuweitarek/all-MiniLM-L12-neo-300
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("liuyuweitarek/all-MiniLM-L12-neo-300")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Jaehyeon222/M-SOLAR-10.7B-v1.0-DPO
|
Jaehyeon222
| 2024-01-12T03:44:25Z | 2,247 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:maywell/ko_Ultrafeedback_binarized",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-05T01:08:14Z |
---
license: cc-by-nc-4.0
datasets:
- maywell/ko_Ultrafeedback_binarized
---
# Model Card for M-SOLAR-10.7B-v1.0-DPO
Developed by: 메가스터디교육, 프리딕션, 마이스
Base model: jjourney1125/M-SOLAR-10.7B-v1.0
Dataset used: maywell's ko_Ultrafeedback_binarized dataset.
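The card does not include a usage example. As a minimal, assumed sketch with the standard `transformers` API (the prompt and generation settings are arbitrary, and `device_map="auto"` requires `accelerate`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jaehyeon222/M-SOLAR-10.7B-v1.0-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```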
|
ducha07/way2vec2-VNmese
|
ducha07
| 2024-01-12T03:43:28Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"vi",
"dataset:ducha07/audio_HTV_thoisu",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-11T16:02:57Z |
---
language:
- vi
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
datasets:
- ducha07/audio_HTV_thoisu
metrics:
- wer
model-index:
- name: ASR4-for-40-epochs
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: HTV news
type: ducha07/audio_HTV_thoisu
metrics:
- name: Wer
type: wer
value: 0.26843348202571504
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASR4-for-40-epochs
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the HTV news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4791
- Wer: 0.2684
## Model description
More information needed
## Intended uses & limitations
More information needed
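The card does not include a usage example. A minimal, assumed sketch with the `transformers` automatic-speech-recognition pipeline (the audio path is a placeholder for a Vietnamese recording):
```python
from transformers import pipeline

# Hypothetical usage sketch; "sample.wav" stands in for your own audio file.
asr = pipeline("automatic-speech-recognition", model="ducha07/way2vec2-VNmese")
print(asr("sample.wav")["text"])
```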
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1111 | 0.92 | 100 | 0.7687 | 0.4387 |
| 1.1201 | 1.83 | 200 | 0.6388 | 0.3767 |
| 0.9734 | 2.75 | 300 | 0.6319 | 0.3658 |
| 0.9297 | 3.67 | 400 | 0.5740 | 0.3373 |
| 0.9142 | 4.59 | 500 | 0.5591 | 0.3268 |
| 0.8462 | 5.5 | 600 | 0.5627 | 0.3227 |
| 0.8366 | 6.42 | 700 | 0.5491 | 0.3158 |
| 0.8272 | 7.34 | 800 | 0.5398 | 0.3243 |
| 0.8137 | 8.26 | 900 | 0.5363 | 0.3113 |
| 0.7643 | 9.17 | 1000 | 0.5528 | 0.3117 |
| 0.7738 | 10.09 | 1100 | 0.5194 | 0.3285 |
| 0.7622 | 11.01 | 1200 | 0.5348 | 0.3043 |
| 0.707 | 11.93 | 1300 | 0.5179 | 0.2909 |
| 0.7242 | 12.84 | 1400 | 0.5153 | 0.3138 |
| 0.7093 | 13.76 | 1500 | 0.5116 | 0.2951 |
| 0.673 | 14.68 | 1600 | 0.5002 | 0.2941 |
| 0.6877 | 15.6 | 1700 | 0.4958 | 0.3050 |
| 0.6665 | 16.51 | 1800 | 0.5032 | 0.2865 |
| 0.6507 | 17.43 | 1900 | 0.4871 | 0.2809 |
| 0.6308 | 18.35 | 2000 | 0.4953 | 0.2947 |
| 0.6507 | 19.27 | 2100 | 0.4998 | 0.2837 |
| 0.6027 | 20.18 | 2200 | 0.4963 | 0.2868 |
| 0.623 | 21.1 | 2300 | 0.4955 | 0.2953 |
| 0.6047 | 22.02 | 2400 | 0.5034 | 0.2852 |
| 0.5825 | 22.94 | 2500 | 0.4781 | 0.2795 |
| 0.585 | 23.85 | 2600 | 0.4851 | 0.2843 |
| 0.5838 | 24.77 | 2700 | 0.4957 | 0.2742 |
| 0.5718 | 25.69 | 2800 | 0.4885 | 0.2810 |
| 0.5646 | 26.61 | 2900 | 0.4778 | 0.2724 |
| 0.5476 | 27.52 | 3000 | 0.4914 | 0.2751 |
| 0.5333 | 28.44 | 3100 | 0.4879 | 0.2788 |
| 0.5533 | 29.36 | 3200 | 0.4820 | 0.2726 |
| 0.5321 | 30.28 | 3300 | 0.4816 | 0.2686 |
| 0.5161 | 31.19 | 3400 | 0.4865 | 0.2812 |
| 0.5326 | 32.11 | 3500 | 0.4818 | 0.2704 |
| 0.5188 | 33.03 | 3600 | 0.4816 | 0.2669 |
| 0.506 | 33.94 | 3700 | 0.4804 | 0.2755 |
| 0.5122 | 34.86 | 3800 | 0.4803 | 0.2667 |
| 0.506 | 35.78 | 3900 | 0.4785 | 0.2708 |
| 0.5064 | 36.7 | 4000 | 0.4755 | 0.2730 |
| 0.4997 | 37.61 | 4100 | 0.4804 | 0.2708 |
| 0.4904 | 38.53 | 4200 | 0.4772 | 0.2678 |
| 0.4774 | 39.45 | 4300 | 0.4791 | 0.2684 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
wesley7137/TinyLlama-OpenHermes-MOE-DolphiCoder-Expert-v1
|
wesley7137
| 2024-01-12T03:43:26Z | 0 | 0 |
peft
|
[
"peft",
"llama",
"region:us"
] | null | 2024-01-12T02:44:05Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
- PEFT 0.4.0
|
wesley7137/TinyLlama-OpenHermes-MOE-Logic-Expert
|
wesley7137
| 2024-01-12T03:43:04Z | 0 | 0 |
peft
|
[
"peft",
"llama",
"region:us"
] | null | 2024-01-12T02:34:46Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
- PEFT 0.4.0
|
ntc-ai/SDXL-LoRA-slider.11-10
|
ntc-ai
| 2024-01-12T03:19:44Z | 2 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-12T03:19:41Z |
---
language:
- en
thumbnail: "images/evaluate/11-10...hair down/11-10_17_3.0.png"
widget:
- text: 11-10
output:
url: images/11-10_17_3.0.png
- text: 11-10
output:
url: images/11-10_19_3.0.png
- text: 11-10
output:
url: images/11-10_20_3.0.png
- text: 11-10
output:
url: images/11-10_21_3.0.png
- text: 11-10
output:
url: images/11-10_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "11-10"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - 11-10 (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/11-10_17_-3.0.png" width=256 height=256 /> | <img src="images/11-10_17_0.0.png" width=256 height=256 /> | <img src="images/11-10_17_3.0.png" width=256 height=256 /> |
| <img src="images/11-10_19_-3.0.png" width=256 height=256 /> | <img src="images/11-10_19_0.0.png" width=256 height=256 /> | <img src="images/11-10_19_3.0.png" width=256 height=256 /> |
| <img src="images/11-10_20_-3.0.png" width=256 height=256 /> | <img src="images/11-10_20_0.0.png" width=256 height=256 /> | <img src="images/11-10_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
11-10
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.11-10', weight_name='11-10.safetensors', adapter_name="11-10")
# Activate the LoRA
pipe.set_adapters(["11-10"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, 11-10"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1040+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
bartowski/UNA-TheBeagle-7b-v1-exl2
|
bartowski
| 2024-01-12T03:07:52Z | 0 | 0 |
transformers
|
[
"transformers",
"generated_from_trainer",
"text-generation",
"dataset:jondurbin/bagel-v0.3",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T02:50:54Z |
---
license: cc-by-nc-nd-4.0
tags:
- generated_from_trainer
model-index:
- name: UNA-TheBeagle-7b-v1
results: []
datasets:
- jondurbin/bagel-v0.3
library_name: transformers
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of UNA-TheBeagle-7b-v1
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Conversion was done using the default calibration dataset.
Default arguments used except when the bits per weight is above 6.0, at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
Original model: https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1
<a href="https://huggingface.co/bartowski/UNA-TheBeagle-7b-v1-exl2/tree/8_0">8.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/UNA-TheBeagle-7b-v1-exl2/tree/6_5">6.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/UNA-TheBeagle-7b-v1-exl2/tree/5_0">5.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/UNA-TheBeagle-7b-v1-exl2/tree/4_0">4.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/UNA-TheBeagle-7b-v1-exl2/tree/3_5">3.5 bits per weight</a>
## Download instructions
With git:
```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/UNA-TheBeagle-7b-v1-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `UNA-TheBeagle-7b-v1-exl2`:
```shell
mkdir UNA-TheBeagle-7b-v1-exl2
huggingface-cli download bartowski/UNA-TheBeagle-7b-v1-exl2 --local-dir UNA-TheBeagle-7b-v1-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir UNA-TheBeagle-7b-v1-exl2
huggingface-cli download bartowski/UNA-TheBeagle-7b-v1-exl2 --revision 4_0 --local-dir UNA-TheBeagle-7b-v1-exl2 --local-dir-use-symlinks False
```
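After downloading a branch, the weights can be loaded with ExLlamaV2. The snippet below is a rough sketch following the example scripts shipped with exllamav2 around v0.0.11; class and method names may differ in other releases, so treat it as a starting point rather than canonical usage.
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "UNA-TheBeagle-7b-v1-exl2"  # folder holding the downloaded branch
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("Once upon a time,", settings, 200))
```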
|
MaziyarPanahi/samantha-mistral-instruct-7b-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-12T03:03:52Z | 22 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"cognitivecomputations/samantha-mistral-instruct-7b",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T02:58:45Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- cognitivecomputations/samantha-mistral-instruct-7b
---
# samantha-mistral-instruct-7b-Mistral-7B-Instruct-v0.2-slerp
samantha-mistral-instruct-7b-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [cognitivecomputations/samantha-mistral-instruct-7b](https://huggingface.co/cognitivecomputations/samantha-mistral-instruct-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: cognitivecomputations/samantha-mistral-instruct-7b
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/samantha-mistral-instruct-7b-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
FeiiYin/lora-trained-xl-audi4
|
FeiiYin
| 2024-01-12T02:56:29Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-12T02:51:16Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'A photo of sks car on the street'
output:
url:
"image_0.png"
- text: 'A photo of sks car on the street'
output:
url:
"image_1.png"
- text: 'A photo of sks car on the street'
output:
url:
"image_2.png"
- text: 'A photo of sks car on the street'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks car
license: openrail++
---
# SDXL LoRA DreamBooth - FeiiYin/lora-trained-xl-audi4
<Gallery />
## Model description
These are FeiiYin/lora-trained-xl-audi4 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks car to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/FeiiYin/lora-trained-xl-audi4/tree/main) them in the Files & versions tab.
|
hxxris/haaris-audio-classification-improved-model-2
|
hxxris
| 2024-01-12T02:46:28Z | 147 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-12T00:56:31Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: haaris-audio-classification-improved-model-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# haaris-audio-classification-improved-model-2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.0708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.83 | 3 | nan | 0.0708 |
| No log | 1.66 | 6 | nan | 0.0708 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-12T02:42:07Z | 21 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"maywell/Mini_Synatra_SFT",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T02:37:05Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- maywell/Mini_Synatra_SFT
---
# Mini_Synatra_SFT-Mistral-7B-Instruct-v0.2-slerp
Mini_Synatra_SFT-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [maywell/Mini_Synatra_SFT](https://huggingface.co/maywell/Mini_Synatra_SFT)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: maywell/Mini_Synatra_SFT
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
KantoRegion/bert-test
|
KantoRegion
| 2024-01-12T02:32:42Z | 90 | 1 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-12T02:25:07Z |
---
language:
- en
---
This model evaluates whether the character wants to send an image to the user at the current point in the conversation.
### [input]
```
{user's text}
{character's text}
```
(the two values should be separated by a newline)
### [output]
```
1: yes
0: no
```
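Below is a minimal, unverified sketch of calling the model through the `transformers` text-classification pipeline; the example strings are made up, and the exact label names (`0`/`1` vs. `LABEL_0`/`LABEL_1`) depend on the checkpoint's config.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="KantoRegion/bert-test")

# User text and character text on separate lines, as described above.
text = "Can you show me what you're wearing?\nSure, give me a second!"
print(classifier(text))  # e.g. [{'label': ..., 'score': ...}]; the positive label means "yes"
```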
|
MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-12T02:07:01Z | 24 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"cognitivecomputations/samantha-1.2-mistral-7b",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T02:01:31Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- cognitivecomputations/samantha-1.2-mistral-7b
---
# samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.2-slerp
samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [cognitivecomputations/samantha-1.2-mistral-7b](https://huggingface.co/cognitivecomputations/samantha-1.2-mistral-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: cognitivecomputations/samantha-1.2-mistral-7b
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
DGraham1/doodle_test_LoRA
|
DGraham1
| 2024-01-12T01:56:02Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-12T01:56:00Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
license: openrail++
---
# SDXL LoRA DreamBooth - DGraham1/doodle_test_LoRA
<Gallery />
## Model description
These are DGraham1/doodle_test_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/DGraham1/doodle_test_LoRA/tree/main) them in the Files & versions tab.
|
kodonho/llama2-chat-koalpaca
|
kodonho
| 2024-01-12T01:54:43Z | 2,258 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-06T11:00:56Z |
---
license: llama2
datasets:
- beomi/KoAlpaca-v1.1a
language:
- ko
---
# Llama2-based model fine-tuned on the KoAlpaca dataset
This is an English/Korean model based on:
* [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
|
sekinat/rl_course_vizdoom_health_gathering_supreme
|
sekinat
| 2024-01-12T01:49:09Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-12T01:49:04Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.38 +/- 5.54
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r sekinat/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
MaziyarPanahi/mistralopithecus-v1-dpo-7b-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-12T01:46:21Z | 23 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"DopeorNope/mistralopithecus-v1-dpo-7b",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T01:41:12Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- DopeorNope/mistralopithecus-v1-dpo-7b
---
# mistralopithecus-v1-dpo-7b-Mistral-7B-Instruct-v0.2-slerp
mistralopithecus-v1-dpo-7b-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [DopeorNope/mistralopithecus-v1-dpo-7b](https://huggingface.co/DopeorNope/mistralopithecus-v1-dpo-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: DopeorNope/mistralopithecus-v1-dpo-7b
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/mistralopithecus-v1-dpo-7b-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
tgoktug/audio-t5-small-sum
|
tgoktug
| 2024-01-12T01:40:32Z | 45 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-12T01:38:36Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: tgoktug/audio-t5-small-sum
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tgoktug/audio-t5-small-sum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5520
- Validation Loss: 0.5908
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
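No usage example is included. Assuming the checkpoint is meant for summarization (as the name suggests) and noting that this repository ships TensorFlow weights, a minimal sketch could be:
```python
from transformers import pipeline

# Minimal sketch; framework="tf" because only TensorFlow weights are published here.
summarizer = pipeline("summarization", model="tgoktug/audio-t5-small-sum", framework="tf")
text = "Replace this with the transcript you want to summarize."
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```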
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'RMSprop', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': 100, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'rho': 0.9, 'momentum': 0.0, 'epsilon': 1e-07, 'centered': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.7571 | 0.6400 | 0 |
| 0.6311 | 0.6155 | 1 |
| 0.5969 | 0.6095 | 2 |
| 0.5746 | 0.5977 | 3 |
| 0.5520 | 0.5908 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tgoktug/audio-BART-sum
|
tgoktug
| 2024-01-12T01:37:12Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-12T01:32:56Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_keras_callback
model-index:
- name: tgoktug/audio-BART-sum
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tgoktug/audio-BART-sum
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.7843
- Validation Loss: 7.7055
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'RMSprop', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': 100, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'rho': 0.9, 'momentum': 0.0, 'epsilon': 1e-07, 'centered': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.5889 | 6.7823 | 0 |
| 6.9879 | 6.7069 | 1 |
| 6.8106 | 6.6307 | 2 |
| 6.7660 | 6.7450 | 3 |
| 6.7843 | 7.7055 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-12T01:06:02Z | 23 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"lgaalves/mistral-7b_open_platypus",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T01:00:34Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- lgaalves/mistral-7b_open_platypus
---
# mistral-7b_open_platypus-Mistral-7B-Instruct-v0.2-slerp
mistral-7b_open_platypus-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [lgaalves/mistral-7b_open_platypus](https://huggingface.co/lgaalves/mistral-7b_open_platypus)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: lgaalves/mistral-7b_open_platypus
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
PatrickSui/my_awesome_mind_model
|
PatrickSui
| 2024-01-12T00:58:48Z | 145 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large",
"base_model:finetune:facebook/wav2vec2-large",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-12T00:56:47Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6000
- Accuracy: 0.75
## Model description
More information needed
## Intended uses & limitations
More information needed
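No usage example is included in the card. A minimal, assumed sketch using the audio-classification pipeline (the audio file path is a placeholder) might look like:
```python
from transformers import pipeline

clf = pipeline("audio-classification", model="PatrickSui/my_awesome_mind_model")
print(clf("sample.wav"))  # list of {'label': ..., 'score': ...} entries
```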
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.7537 | 0.25 |
| No log | 2.0 | 3 | 0.6236 | 0.75 |
| No log | 3.0 | 5 | 0.5948 | 0.75 |
| No log | 4.0 | 6 | 0.5866 | 0.75 |
| No log | 5.0 | 7 | 0.5819 | 0.75 |
| No log | 6.0 | 9 | 0.5987 | 0.75 |
| 0.2651 | 6.67 | 10 | 0.6000 | 0.75 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.13.0+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
|
gustavokpc/IC_quinto
|
gustavokpc
| 2024-01-12T00:55:00Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-21T20:08:24Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: gustavokpc/IC_quinto
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gustavokpc/IC_quinto
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1646
- Train Accuracy: 0.9419
- Train F1 M: 0.5524
- Train Precision M: 0.4019
- Train Recall M: 0.9429
- Validation Loss: 0.2503
- Validation Accuracy: 0.9070
- Validation F1 M: 0.5680
- Validation Precision M: 0.4108
- Validation Recall M: 0.9671
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 2274, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train F1 M | Train Precision M | Train Recall M | Validation Loss | Validation Accuracy | Validation F1 M | Validation Precision M | Validation Recall M | Epoch |
|:----------:|:--------------:|:----------:|:-----------------:|:--------------:|:---------------:|:-------------------:|:---------------:|:----------------------:|:-------------------:|:-----:|
| 0.4076 | 0.8160 | 0.5002 | 0.3900 | 0.7694 | 0.2792 | 0.8859 | 0.5648 | 0.4123 | 0.9419 | 0 |
| 0.2272 | 0.9143 | 0.5487 | 0.4020 | 0.9253 | 0.2778 | 0.8925 | 0.5752 | 0.4181 | 0.9630 | 1 |
| 0.1646 | 0.9419 | 0.5524 | 0.4019 | 0.9429 | 0.2503 | 0.9070 | 0.5680 | 0.4108 | 0.9671 | 2 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.5
- Tokenizers 0.14.1
|
andrewatef/MyBloggerV0.9
|
andrewatef
| 2024-01-12T00:53:09Z | 2 | 0 |
peft
|
[
"peft",
"pytorch",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:unsloth/llama-2-7b",
"base_model:adapter:unsloth/llama-2-7b",
"region:us"
] | null | 2024-01-11T23:38:34Z |
---
library_name: peft
base_model: unsloth/llama-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
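Since the card leaves this section empty, here is a minimal, assumed sketch of the generic PEFT pattern for loading a LoRA adapter on top of the base model declared in the metadata; the prompt is purely illustrative and the adapter's intended task is not documented.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-2-7b"          # base model from this card's metadata
adapter_id = "andrewatef/MyBloggerV0.9"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Write a short blog intro about coffee.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```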
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
shaukel/Diamondrequiem
|
shaukel
| 2024-01-12T00:47:55Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:cc-by-sa-4.0",
"region:us"
] |
text-to-image
| 2024-01-12T00:28:53Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: si
parameters:
negative_prompt: 'no'
output:
url: images/dba4olr-84e73851-0c45-4bcf-92d0-fc74ac24b3a9.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: depende
license: cc-by-sa-4.0
---
# RVCv2
<Gallery />
## Model description
I don't know.
## Trigger words
You should use `depende` to trigger the image generation.
## Download model
[Download](/shaukel/Diamondrequiem/tree/main) them in the Files & versions tab.
|
tgoktug/audio-Bart-new-256-base
|
tgoktug
| 2024-01-12T00:24:19Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-12T00:22:52Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_keras_callback
model-index:
- name: tgoktug/audio-Bart-new-256-base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tgoktug/audio-Bart-new-256-base
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.9488
- Validation Loss: 6.8816
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'RMSprop', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': 100, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'rho': 0.9, 'momentum': 0.0, 'epsilon': 1e-07, 'centered': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.9488 | 6.8816 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
oosij/llama-2-7b-medibot
|
oosij
| 2024-01-12T00:19:40Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"region:us"
] | null | 2024-01-12T00:19:34Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
jysssacc/627_roberta-base_adalora_lr0.05_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-12T00:18:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-01-12T00:12:08Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: roberta-base
model-index:
- name: 627_roberta-base_adalora_lr0.05_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 627_roberta-base_adalora_lr0.05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.9955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.103 | 1.0 | 157 | 4.3243 |
| 15.3565 | 2.0 | 314 | 8.4528 |
| 8.9487 | 3.0 | 471 | 8.1856 |
| 10.2902 | 4.0 | 628 | 8.6844 |
| 8.6424 | 5.0 | 785 | 7.9955 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tgoktug/audio-Bart-new-new128-base
|
tgoktug
| 2024-01-12T00:16:55Z | 45 | 0 |
transformers
|
[
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-12T00:10:13Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_keras_callback
model-index:
- name: tgoktug/audio-Bart-new-new128-base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tgoktug/audio-Bart-new-new128-base
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8925
- Validation Loss: 2.8817
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'RMSprop', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': 100, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'rho': 0.9, 'momentum': 0.0, 'epsilon': 1e-07, 'centered': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5066 | 2.8957 | 0 |
| 2.8925 | 2.8817 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
DouglasPontes/2020-Q2-full_tweets
|
DouglasPontes
| 2024-01-12T00:09:36Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:DouglasPontes/2020-Q1-full_tweets",
"base_model:finetune:DouglasPontes/2020-Q1-full_tweets",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-07T04:06:39Z |
---
base_model: DouglasPontes/2020-Q1-full_tweets
tags:
- generated_from_trainer
model-index:
- name: 2020-Q2-full_tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2020-Q2-full_tweets
This model is a fine-tuned version of [DouglasPontes/2020-Q1-full_tweets](https://huggingface.co/DouglasPontes/2020-Q1-full_tweets) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0061
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.1e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2400000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| No log | 0.01 | 8000 | 2.1043 |
| 2.2608 | 0.02 | 16000 | 2.0934 |
| 2.2608 | 0.03 | 24000 | 2.0862 |
| 2.2409 | 0.03 | 32000 | 2.0805 |
| 2.2409 | 0.04 | 40000 | 2.0793 |
| 2.2278 | 0.05 | 48000 | 2.0718 |
| 2.2278 | 0.06 | 56000 | 2.0753 |
| 2.2059 | 0.07 | 64000 | 2.0668 |
| 2.2059 | 0.08 | 72000 | 2.0657 |
| 2.1997 | 0.09 | 80000 | 2.0620 |
| 2.1997 | 0.1 | 88000 | 2.0553 |
| 2.1988 | 0.1 | 96000 | 2.0569 |
| 2.1988 | 0.11 | 104000 | 2.0525 |
| 2.1861 | 0.12 | 112000 | 2.0556 |
| 2.1861 | 0.13 | 120000 | 2.0493 |
| 2.1823 | 0.14 | 128000 | 2.0509 |
| 2.1823 | 0.15 | 136000 | 2.0461 |
| 2.1851 | 0.16 | 144000 | 2.0476 |
| 2.1851 | 0.17 | 152000 | 2.0450 |
| 2.1862 | 0.17 | 160000 | 2.0469 |
| 2.1862 | 0.18 | 168000 | 2.0442 |
| 2.1741 | 0.19 | 176000 | 2.0456 |
| 2.1741 | 0.2 | 184000 | 2.0442 |
| 2.181 | 0.21 | 192000 | 2.0402 |
| 2.181 | 0.22 | 200000 | 2.0423 |
| 2.1692 | 0.23 | 208000 | 2.0413 |
| 2.1692 | 0.24 | 216000 | 2.0448 |
| 2.1678 | 0.24 | 224000 | 2.0418 |
| 2.1678 | 0.25 | 232000 | 2.0417 |
| 2.1756 | 0.26 | 240000 | 2.0342 |
| 2.1756 | 0.27 | 248000 | 2.0377 |
| 2.1752 | 0.28 | 256000 | 2.0381 |
| 2.1752 | 0.29 | 264000 | 2.0354 |
| 2.1673 | 0.3 | 272000 | 2.0381 |
| 2.1673 | 0.31 | 280000 | 2.0375 |
| 2.1585 | 0.31 | 288000 | 2.0336 |
| 2.1585 | 0.32 | 296000 | 2.0344 |
| 2.1703 | 0.33 | 304000 | 2.0348 |
| 2.1703 | 0.34 | 312000 | 2.0330 |
| 2.1667 | 0.35 | 320000 | 2.0352 |
| 2.1667 | 0.36 | 328000 | 2.0359 |
| 2.1649 | 0.37 | 336000 | 2.0317 |
| 2.1649 | 0.38 | 344000 | 2.0314 |
| 2.1564 | 0.38 | 352000 | 2.0306 |
| 2.1564 | 0.39 | 360000 | 2.0299 |
| 2.161 | 0.4 | 368000 | 2.0317 |
| 2.161 | 0.41 | 376000 | 2.0325 |
| 2.1551 | 0.42 | 384000 | 2.0274 |
| 2.1551 | 0.43 | 392000 | 2.0282 |
| 2.1602 | 0.44 | 400000 | 2.0301 |
| 2.1602 | 0.45 | 408000 | 2.0303 |
| 2.1581 | 0.45 | 416000 | 2.0260 |
| 2.1581 | 0.46 | 424000 | 2.0248 |
| 2.1494 | 0.47 | 432000 | 2.0265 |
| 2.1494 | 0.48 | 440000 | 2.0247 |
| 2.1508 | 0.49 | 448000 | 2.0231 |
| 2.1508 | 0.5 | 456000 | 2.0276 |
| 2.153 | 0.51 | 464000 | 2.0276 |
| 2.153 | 0.51 | 472000 | 2.0242 |
| 2.1489 | 0.52 | 480000 | 2.0259 |
| 2.1489 | 0.53 | 488000 | 2.0257 |
| 2.1468 | 0.54 | 496000 | 2.0275 |
| 2.1468 | 0.55 | 504000 | 2.0303 |
| 2.1446 | 0.56 | 512000 | 2.0248 |
| 2.1446 | 0.57 | 520000 | 2.0286 |
| 2.1409 | 0.58 | 528000 | 2.0211 |
| 2.1409 | 0.58 | 536000 | 2.0204 |
| 2.1536 | 0.59 | 544000 | 2.0199 |
| 2.1536 | 0.6 | 552000 | 2.0281 |
| 2.1416 | 0.61 | 560000 | 2.0237 |
| 2.1416 | 0.62 | 568000 | 2.0231 |
| 2.1502 | 0.63 | 576000 | 2.0205 |
| 2.1502 | 0.64 | 584000 | 2.0217 |
| 2.1424 | 0.65 | 592000 | 2.0242 |
| 2.1424 | 0.65 | 600000 | 2.0238 |
| 2.1469 | 0.66 | 608000 | 2.0192 |
| 2.1469 | 0.67 | 616000 | 2.0249 |
| 2.145 | 0.68 | 624000 | 2.0196 |
| 2.145 | 0.69 | 632000 | 2.0224 |
| 2.1503 | 0.7 | 640000 | 2.0216 |
| 2.1503 | 0.71 | 648000 | 2.0228 |
| 2.1355 | 0.72 | 656000 | 2.0197 |
| 2.1355 | 0.72 | 664000 | 2.0240 |
| 2.1392 | 0.73 | 672000 | 2.0232 |
| 2.1392 | 0.74 | 680000 | 2.0209 |
| 2.1378 | 0.75 | 688000 | 2.0219 |
| 2.1378 | 0.76 | 696000 | 2.0192 |
| 2.1446 | 0.77 | 704000 | 2.0195 |
| 2.1446 | 0.78 | 712000 | 2.0197 |
| 2.1351 | 0.79 | 720000 | 2.0184 |
| 2.1351 | 0.79 | 728000 | 2.0162 |
| 2.1437 | 0.8 | 736000 | 2.0151 |
| 2.1437 | 0.81 | 744000 | 2.0202 |
| 2.1249 | 0.82 | 752000 | 2.0169 |
| 2.1249 | 0.83 | 760000 | 2.0189 |
| 2.1355 | 0.84 | 768000 | 2.0221 |
| 2.1355 | 0.85 | 776000 | 2.0194 |
| 2.1387 | 0.86 | 784000 | 2.0189 |
| 2.1387 | 0.86 | 792000 | 2.0165 |
| 2.1334 | 0.87 | 800000 | 2.0169 |
| 2.1334 | 0.88 | 808000 | 2.0189 |
| 2.137 | 0.89 | 816000 | 2.0162 |
| 2.137 | 0.9 | 824000 | 2.0168 |
| 2.1331 | 0.91 | 832000 | 2.0193 |
| 2.1331 | 0.92 | 840000 | 2.0166 |
| 2.1293 | 0.93 | 848000 | 2.0137 |
| 2.1293 | 0.93 | 856000 | 2.0183 |
| 2.1358 | 0.94 | 864000 | 2.0184 |
| 2.1358 | 0.95 | 872000 | 2.0171 |
| 2.1296 | 0.96 | 880000 | 2.0179 |
| 2.1296 | 0.97 | 888000 | 2.0152 |
| 2.1319 | 0.98 | 896000 | 2.0174 |
| 2.1319 | 0.99 | 904000 | 2.0206 |
| 2.1344 | 1.0 | 912000 | 2.0179 |
| 2.1344 | 1.0 | 920000 | 2.0154 |
| 2.1352 | 1.01 | 928000 | 2.0185 |
| 2.1352 | 1.02 | 936000 | 2.0170 |
| 2.1336 | 1.03 | 944000 | 2.0164 |
| 2.1336 | 1.04 | 952000 | 2.0137 |
| 2.1315 | 1.05 | 960000 | 2.0176 |
| 2.1315 | 1.06 | 968000 | 2.0155 |
| 2.1255 | 1.06 | 976000 | 2.0145 |
| 2.1255 | 1.07 | 984000 | 2.0233 |
| 2.1249 | 1.08 | 992000 | 2.0148 |
| 2.1249 | 1.09 | 1000000 | 2.0162 |
| 2.123 | 1.1 | 1008000 | 2.0174 |
| 2.123 | 1.11 | 1016000 | 2.0150 |
| 2.1263 | 1.12 | 1024000 | 2.0161 |
| 2.1263 | 1.13 | 1032000 | 2.0129 |
| 2.1232 | 1.13 | 1040000 | 2.0167 |
| 2.1232 | 1.14 | 1048000 | 2.0125 |
| 2.1168 | 1.15 | 1056000 | 2.0113 |
| 2.1168 | 1.16 | 1064000 | 2.0136 |
| 2.1307 | 1.17 | 1072000 | 2.0143 |
| 2.1307 | 1.18 | 1080000 | 2.0166 |
| 2.1336 | 1.19 | 1088000 | 2.0103 |
| 2.1336 | 1.2 | 1096000 | 2.0130 |
| 2.1227 | 1.2 | 1104000 | 2.0125 |
| 2.1227 | 1.21 | 1112000 | 2.0183 |
| 2.1223 | 1.22 | 1120000 | 2.0148 |
| 2.1223 | 1.23 | 1128000 | 2.0147 |
| 2.1289 | 1.24 | 1136000 | 2.0109 |
| 2.1289 | 1.25 | 1144000 | 2.0164 |
| 2.1278 | 1.26 | 1152000 | 2.0163 |
| 2.1278 | 1.27 | 1160000 | 2.0121 |
| 2.1261 | 1.27 | 1168000 | 2.0113 |
| 2.1261 | 1.28 | 1176000 | 2.0137 |
| 2.126 | 1.29 | 1184000 | 2.0152 |
| 2.126 | 1.3 | 1192000 | 2.0104 |
| 2.1235 | 1.31 | 1200000 | 2.0132 |
| 2.1235 | 1.32 | 1208000 | 2.0114 |
| 2.1229 | 1.33 | 1216000 | 2.0105 |
| 2.1229 | 1.34 | 1224000 | 2.0131 |
| 2.1213 | 1.34 | 1232000 | 2.0141 |
| 2.1213 | 1.35 | 1240000 | 2.0109 |
| 2.1185 | 1.36 | 1248000 | 2.0129 |
| 2.1185 | 1.37 | 1256000 | 2.0110 |
| 2.131 | 1.38 | 1264000 | 2.0123 |
| 2.131 | 1.39 | 1272000 | 2.0105 |
| 2.1141 | 1.4 | 1280000 | 2.0104 |
| 2.1141 | 1.41 | 1288000 | 2.0150 |
| 2.1219 | 1.41 | 1296000 | 2.0161 |
| 2.1219 | 1.42 | 1304000 | 2.0093 |
| 2.1203 | 1.43 | 1312000 | 2.0104 |
| 2.1203 | 1.44 | 1320000 | 2.0144 |
| 2.1264 | 1.45 | 1328000 | 2.0085 |
| 2.1264 | 1.46 | 1336000 | 2.0119 |
| 2.1194 | 1.47 | 1344000 | 2.0118 |
| 2.1194 | 1.48 | 1352000 | 2.0110 |
| 2.117 | 1.48 | 1360000 | 2.0147 |
| 2.117 | 1.49 | 1368000 | 2.0135 |
| 2.1311 | 1.5 | 1376000 | 2.0077 |
| 2.1311 | 1.51 | 1384000 | 2.0066 |
| 2.1215 | 1.52 | 1392000 | 2.0089 |
| 2.1215 | 1.53 | 1400000 | 2.0118 |
| 2.1185 | 1.54 | 1408000 | 2.0105 |
| 2.1185 | 1.54 | 1416000 | 2.0123 |
| 2.1284 | 1.55 | 1424000 | 2.0134 |
| 2.1284 | 1.56 | 1432000 | 2.0093 |
| 2.1174 | 1.57 | 1440000 | 2.0102 |
| 2.1174 | 1.58 | 1448000 | 2.0076 |
| 2.1108 | 1.59 | 1456000 | 2.0074 |
| 2.1108 | 1.6 | 1464000 | 2.0071 |
| 2.1252 | 1.61 | 1472000 | 2.0092 |
| 2.1252 | 1.61 | 1480000 | 2.0080 |
| 2.121 | 1.62 | 1488000 | 2.0053 |
| 2.121 | 1.63 | 1496000 | 2.0072 |
| 2.1178 | 1.64 | 1504000 | 2.0059 |
| 2.1178 | 1.65 | 1512000 | 2.0084 |
| 2.1154 | 1.66 | 1520000 | 2.0106 |
| 2.1154 | 1.67 | 1528000 | 2.0117 |
| 2.1214 | 1.68 | 1536000 | 2.0070 |
| 2.1214 | 1.68 | 1544000 | 2.0079 |
| 2.1175 | 1.69 | 1552000 | 2.0102 |
| 2.1175 | 1.7 | 1560000 | 2.0097 |
| 2.1206 | 1.71 | 1568000 | 2.0092 |
| 2.1206 | 1.72 | 1576000 | 2.0055 |
| 2.1302 | 1.73 | 1584000 | 2.0085 |
| 2.1302 | 1.74 | 1592000 | 2.0110 |
| 2.1177 | 1.75 | 1600000 | 2.0065 |
| 2.1177 | 1.75 | 1608000 | 2.0132 |
| 2.1101 | 1.76 | 1616000 | 2.0086 |
| 2.1101 | 1.77 | 1624000 | 2.0077 |
| 2.1194 | 1.78 | 1632000 | 2.0081 |
| 2.1194 | 1.79 | 1640000 | 2.0088 |
| 2.1167 | 1.8 | 1648000 | 2.0022 |
| 2.1167 | 1.81 | 1656000 | 2.0077 |
| 2.1083 | 1.82 | 1664000 | 2.0066 |
| 2.1083 | 1.82 | 1672000 | 2.0137 |
| 2.1232 | 1.83 | 1680000 | 2.0067 |
| 2.1232 | 1.84 | 1688000 | 2.0039 |
| 2.1212 | 1.85 | 1696000 | 2.0090 |
| 2.1212 | 1.86 | 1704000 | 2.0079 |
| 2.1246 | 1.87 | 1712000 | 2.0083 |
| 2.1246 | 1.88 | 1720000 | 2.0039 |
| 2.1129 | 1.89 | 1728000 | 2.0069 |
| 2.1129 | 1.89 | 1736000 | 2.0079 |
| 2.1209 | 1.9 | 1744000 | 2.0058 |
| 2.1209 | 1.91 | 1752000 | 2.0072 |
| 2.1209 | 1.92 | 1760000 | 2.0068 |
| 2.1209 | 1.93 | 1768000 | 2.0079 |
| 2.1184 | 1.94 | 1776000 | 2.0036 |
| 2.1184 | 1.95 | 1784000 | 2.0065 |
| 2.1065 | 1.96 | 1792000 | 2.0077 |
| 2.1065 | 1.96 | 1800000 | 2.0062 |
| 2.109 | 1.97 | 1808000 | 2.0090 |
| 2.109 | 1.98 | 1816000 | 2.0124 |
| 2.1081 | 1.99 | 1824000 | 2.0066 |
| 2.1081 | 2.0 | 1832000 | 2.0081 |
| 2.1151 | 2.01 | 1840000 | 2.0085 |
| 2.1151 | 2.02 | 1848000 | 2.0054 |
| 2.1178 | 2.03 | 1856000 | 2.0058 |
| 2.1178 | 2.03 | 1864000 | 2.0048 |
| 2.1035 | 2.04 | 1872000 | 2.0040 |
| 2.1035 | 2.05 | 1880000 | 2.0059 |
| 2.1197 | 2.06 | 1888000 | 2.0071 |
| 2.1197 | 2.07 | 1896000 | 2.0057 |
| 2.1143 | 2.08 | 1904000 | 2.0059 |
| 2.1143 | 2.09 | 1912000 | 2.0043 |
| 2.1082 | 2.09 | 1920000 | 2.0068 |
| 2.1082 | 2.1 | 1928000 | 2.0057 |
| 2.1202 | 2.11 | 1936000 | 2.0072 |
| 2.1202 | 2.12 | 1944000 | 2.0057 |
| 2.1138 | 2.13 | 1952000 | 2.0051 |
| 2.1138 | 2.14 | 1960000 | 2.0085 |
| 2.1082 | 2.15 | 1968000 | 2.0076 |
| 2.1082 | 2.16 | 1976000 | 2.0077 |
| 2.1084 | 2.16 | 1984000 | 2.0020 |
| 2.1084 | 2.17 | 1992000 | 2.0050 |
| 2.1151 | 2.18 | 2000000 | 2.0066 |
| 2.1151 | 2.19 | 2008000 | 2.0031 |
| 2.1141 | 2.2 | 2016000 | 2.0128 |
| 2.1141 | 2.21 | 2024000 | 2.0022 |
| 2.1129 | 2.22 | 2032000 | 2.0065 |
| 2.1129 | 2.23 | 2040000 | 2.0054 |
| 2.1164 | 2.23 | 2048000 | 2.0039 |
| 2.1164 | 2.24 | 2056000 | 2.0031 |
| 2.1121 | 2.25 | 2064000 | 2.0101 |
| 2.1121 | 2.26 | 2072000 | 2.0099 |
| 2.1071 | 2.27 | 2080000 | 2.0042 |
| 2.1071 | 2.28 | 2088000 | 2.0030 |
| 2.1094 | 2.29 | 2096000 | 2.0048 |
| 2.1094 | 2.3 | 2104000 | 2.0046 |
| 2.1017 | 2.3 | 2112000 | 2.0039 |
| 2.1017 | 2.31 | 2120000 | 2.0011 |
| 2.1124 | 2.32 | 2128000 | 2.0071 |
| 2.1124 | 2.33 | 2136000 | 2.0061 |
| 2.1064 | 2.34 | 2144000 | 2.0040 |
| 2.1064 | 2.35 | 2152000 | 2.0075 |
| 2.115 | 2.36 | 2160000 | 2.0026 |
| 2.115 | 2.37 | 2168000 | 2.0068 |
| 2.114 | 2.37 | 2176000 | 2.0066 |
| 2.114 | 2.38 | 2184000 | 2.0080 |
| 2.1171 | 2.39 | 2192000 | 2.0032 |
| 2.1171 | 2.4 | 2200000 | 2.0036 |
| 2.1119 | 2.41 | 2208000 | 2.0048 |
| 2.1119 | 2.42 | 2216000 | 2.0059 |
| 2.1097 | 2.43 | 2224000 | 2.0058 |
| 2.1097 | 2.44 | 2232000 | 2.0049 |
| 2.1091 | 2.44 | 2240000 | 2.0058 |
| 2.1091 | 2.45 | 2248000 | 2.0032 |
| 2.1107 | 2.46 | 2256000 | 2.0077 |
| 2.1107 | 2.47 | 2264000 | 2.0032 |
| 2.1126 | 2.48 | 2272000 | 2.0055 |
| 2.1126 | 2.49 | 2280000 | 2.0026 |
| 2.1173 | 2.5 | 2288000 | 2.0062 |
| 2.1173 | 2.51 | 2296000 | 2.0039 |
| 2.114 | 2.51 | 2304000 | 2.0064 |
| 2.114 | 2.52 | 2312000 | 2.0113 |
| 2.1131 | 2.53 | 2320000 | 2.0065 |
| 2.1131 | 2.54 | 2328000 | 2.0098 |
| 2.1045 | 2.55 | 2336000 | 2.0061 |
| 2.1045 | 2.56 | 2344000 | 2.0066 |
| 2.1144 | 2.57 | 2352000 | 2.0060 |
| 2.1144 | 2.57 | 2360000 | 2.0059 |
| 2.1086 | 2.58 | 2368000 | 2.0039 |
| 2.1086 | 2.59 | 2376000 | 2.0076 |
| 2.1058 | 2.6 | 2384000 | 2.0036 |
| 2.1058 | 2.61 | 2392000 | 2.0077 |
| 2.1112 | 2.62 | 2400000 | 2.0000 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
sekinat/LunarLander-v2_wanb_1e-05
|
sekinat
| 2024-01-12T00:05:40Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-12T00:01:06Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -165.96 +/- 65.30
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'default_name',
'seed': 1,
'torch_deterministic': True,
'cuda': True,
'track': False,
'wandb_project_name': 'cleanRL',
'wandb_entity': None,
'capture_video': False,
'env_id': 'LunarLander-v2',
'total_timesteps': 100000,
'learning_rate': 1e-05,
'num_envs': 4,
'num_steps': 256,
'anneal_lr': True,
'gae': True,
'gamma': 0.99,
'gae_lambda': 0.95,
'num_minibatches': 4,
'update_epochs': 8,
'norm_adv': True,
'clip_coef': 0.2,
'clip_vloss': True,
'ent_coef': 0.01,
'vf_coef': 0.5,
'max_grad_norm': 0.5,
'target_kl': None,
'repo_id': 'sekinat/LunarLander-v2_wanb',
'batch_size': 1024,
'minibatch_size': 256}
```
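As a quick sanity check (not part of the original card), the rollout sizes follow directly from the values above:
```python
# Derived from the hyperparameters above: one rollout collects num_envs * num_steps
# transitions, which are then split into num_minibatches per update epoch.
num_envs, num_steps, num_minibatches, total_timesteps = 4, 256, 4, 100_000

batch_size = num_envs * num_steps               # 4 * 256 = 1024, matches 'batch_size'
minibatch_size = batch_size // num_minibatches  # 1024 // 4 = 256, matches 'minibatch_size'
num_updates = total_timesteps // batch_size     # ~97 policy updates over training
print(batch_size, minibatch_size, num_updates)
```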
|
Ricktlw/FlaviaSaddy
|
Ricktlw
| 2024-01-12T00:00:24Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-01-11T23:58:48Z |
---
license: other
license_name: rick
license_link: LICENSE
---
|
matthewnorton/mamba-phi
|
matthewnorton
| 2024-01-11T23:55:16Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"nlp",
"code",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-11T05:08:41Z |
---
inference: false
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
## Model Summary
Phi-2 is a Transformer with **2.7 billion** parameters. It was trained using the same data sources as [Phi-1.5](https://huggingface.co/microsoft/phi-1.5), augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters.
Our model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
## Intended Uses
Given the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.
### QA Format:
You can provide the prompt as a standalone question as follows:
```markdown
Write a detailed analogy between mathematics and a lighthouse.
```
where the model generates the text after the final "." of the prompt.
To encourage the model to write more concise answers, you can also try the following QA format using "Instruct: \<prompt\>\nOutput:"
```markdown
Instruct: Write a detailed analogy between mathematics and a lighthouse.
Output: Mathematics is like a lighthouse. Just as a lighthouse guides ships safely to shore, mathematics provides a guiding light in the world of numbers and logic. It helps us navigate through complex problems and find solutions. Just as a lighthouse emits a steady beam of light, mathematics provides a consistent framework for reasoning and problem-solving. It illuminates the path to understanding and helps us make sense of the world around us.
```
where the model generates the text after "Output:".
### Chat Format:
```markdown
Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
Bob: Well, have you tried creating a study schedule and sticking to it?
Alice: Yes, I have, but it doesn't seem to help much.
Bob: Hmm, maybe you should try studying in a quiet environment, like the library.
Alice: ...
```
where the model generates the text after the first "Bob:".
### Code Format:
```python
def print_prime(n):
"""
Print all primes between 1 and n
"""
primes = []
for num in range(2, n+1):
is_prime = True
for i in range(2, int(math.sqrt(num))+1):
if num % i == 0:
is_prime = False
break
if is_prime:
primes.append(num)
print(primes)
```
where the model generates the text after the comments.
**Notes:**
* Phi-2 is intended for QA, chat, and code purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks without evaluation is out of scope of this project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using `transformers>=4.36.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
## Sample Code
There are four types of execution mode:
1. FP16 / Flash-Attention / CUDA:
```python
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", flash_attn=True, flash_rotary=True, fused_dense=True, device_map="cuda", trust_remote_code=True)
```
2. FP16 / CUDA:
```python
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", device_map="cuda", trust_remote_code=True)
```
3. FP32 / CUDA:
```python
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float32, device_map="cuda", trust_remote_code=True)
```
4. FP32 / CPU:
```python
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float32, device_map="cpu", trust_remote_code=True)
```
To ensure the maximum compatibility, we recommend using the second execution mode (FP16 / CUDA), as follows:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
inputs = tokenizer('''def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
**Remark:** In the generation function, our model currently does not support beam search (`num_beams > 1`).
Furthermore, in the forward pass of the model, we currently do not support outputting hidden states or attention values, or using custom input embeddings.
## Limitations of Phi-2
* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: The majority of Phi-2 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
* Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Context length: 2048 tokens
* Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4.
* Training tokens: 1.4T tokens
* GPUs: 96xA100-80G
* Training time: 14 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
|
jysssacc/627_roberta-base_fine_lr0.05_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T23:47:44Z | 43 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-generation",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T23:40:02Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: 627_roberta-base_fine_lr0.05_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 627_roberta-base_fine_lr0.05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.8054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.879 | 1.0 | 157 | 7.6793 |
| 6.7935 | 2.0 | 314 | 8.1942 |
| 6.9191 | 3.0 | 471 | 8.2193 |
| 7.0385 | 4.0 | 628 | 7.8762 |
| 6.7279 | 5.0 | 785 | 7.8054 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tgoktug/audio-Bart-new-new2-base
|
tgoktug
| 2024-01-11T23:46:52Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-11T23:42:36Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_keras_callback
model-index:
- name: tgoktug/audio-Bart-new-new2-base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tgoktug/audio-Bart-new-new2-base
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.4177
- Validation Loss: 6.2742
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'RMSprop', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': 100, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'rho': 0.9, 'momentum': 0.0, 'epsilon': 1e-07, 'centered': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.7832 | 6.6761 | 0 |
| 6.7521 | 6.4264 | 1 |
| 6.5041 | 6.4022 | 2 |
| 6.4177 | 6.2742 | 3 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tgoktug/audio-Bart-new-new-base
|
tgoktug
| 2024-01-11T23:36:45Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-11T23:31:45Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_keras_callback
model-index:
- name: tgoktug/audio-Bart-new-new-base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tgoktug/audio-Bart-new-new-base
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.3607
- Validation Loss: 6.3838
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'RMSprop', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': 100, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'rho': 0.9, 'momentum': 0.0, 'epsilon': 1e-07, 'centered': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.4712 | 6.7419 | 0 |
| 6.6625 | 6.4735 | 1 |
| 6.4318 | 6.4304 | 2 |
| 6.3741 | 6.4119 | 3 |
| 6.3607 | 6.3838 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
s3nh/Neuronovo-neuronovo-7B-v0.3-GGUF
|
s3nh
| 2024-01-11T23:34:39Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T22:56:21Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/Neuronovo/neuronovo-7B-v0.3).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
# Original model card
|
hxxris/haaris-audio-classification-model-improved
|
hxxris
| 2024-01-11T23:32:58Z | 147 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-11T22:42:00Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: haaris-audio-classification-model-improved
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# haaris-audio-classification-model-improved
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.0442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.83 | 3 | 2.6450 | 0.0265 |
| No log | 1.93 | 7 | nan | 0.0442 |
| No log | 2.76 | 10 | nan | 0.0442 |
| No log | 3.86 | 14 | nan | 0.0442 |
| No log | 4.14 | 15 | nan | 0.0442 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_GrounTruth_withPrompt_Seed104
|
behzadnet
| 2024-01-11T23:28:47Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2024-01-11T23:28:44Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
jmjoseph/sd-class-butterflies-32
|
jmjoseph
| 2024-01-11T23:12:26Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-01-11T23:12:18Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('jmjoseph/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
tyson0420/codellama-7b-inst-sft-lora-test
|
tyson0420
| 2024-01-11T23:10:25Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"region:us"
] | null | 2024-01-11T06:38:40Z |
---
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- generated_from_trainer
model-index:
- name: codellama-7b-inst-sft-lora-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codellama-7b-inst-sft-lora-test
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 128
- total_train_batch_size: 1024
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6579 | 0.49 | 1 | 1.6482 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
sejaldatta84/autotrain-uuswh-5lpj2
|
sejaldatta84
| 2024-01-11T23:00:35Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T23:00:29Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
DrishtiSharma/llama2-7b-int4-dolly-15k-english-unsloth-neftune-5-packing
|
DrishtiSharma
| 2024-01-11T22:59:08Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"dataset:generator",
"base_model:unsloth/llama-2-7b",
"base_model:adapter:unsloth/llama-2-7b",
"license:llama2",
"region:us"
] | null | 2024-01-11T22:58:43Z |
---
license: llama2
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
datasets:
- generator
base_model: unsloth/llama-2-7b
model-index:
- name: llama2-7b-int4-dolly-15k-english-unsloth-neftune-packing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b-int4-dolly-15k-english-unsloth-neftune-packing
This model is a fine-tuned version of [unsloth/llama-2-7b](https://huggingface.co/unsloth/llama-2-7b) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2668 | 0.64 | 100 | 1.2312 |
| 1.1935 | 1.27 | 200 | 1.2221 |
| 1.1722 | 1.91 | 300 | 1.2176 |
| 1.145 | 2.55 | 400 | 1.2198 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
MaziyarPanahi/PiVoT-0.1-early-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-11T22:57:40Z | 24 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"maywell/PiVoT-0.1-early",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T22:52:46Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- maywell/PiVoT-0.1-early
---
# PiVoT-0.1-early-Mistral-7B-Instruct-v0.2-slerp
PiVoT-0.1-early-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [maywell/PiVoT-0.1-early](https://huggingface.co/maywell/PiVoT-0.1-early)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: maywell/PiVoT-0.1-early
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/PiVoT-0.1-early-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
kirk123/q-FrozenLake-v1-4x4-noSlippery
|
kirk123
| 2024-01-11T22:56:58Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T22:56:55Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="kirk123/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF
|
disinfozone
| 2024-01-11T22:49:46Z | 12 | 4 | null |
[
"gguf",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-11T21:31:39Z |
---
license: cc-by-nc-4.0
---
# Disinfo4_mistral-ft-optimized-1218: GGUF Quants

This repo contains GGUF quants for [Disinfo4_mistral-ft-optimized-1218](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218).
Before attempting to use these, **go read the model page** for [Disinfo4_mistral-ft-optimized-1218](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218). This is not a standard LLM and you *will* have a bad time if you treat it like one. All necessary instructions and information are on the main model page (assuming you know how to run an LLM in the first place).
Here's the important information anyway because we know people hate instructions:
## Usage Recommendations
For optimal performance, `Disinfo4_mistral-ft-optimized-1218` should be utilized with specific mirostat parameters. These settings are crucial for maintaining the model's focus and stylistic integrity. You can use other parameters and get better instruction following (especially by enabling min_p at 0.01), but the bot will be less creative. It does tend to ramble, but regenerate until you get the response you want. Think of this more as a writing partner than an obedient slave.
### Mirostat Parameters
- **Temperature (Temp):** 1
- **Top-p (top_p):** 1
- **Mirostat Tau:** 7.19
- **Mirostat Eta:** 0.01
- **Mirostat Mode:** 2
- **Others:** Default or disabled
## Additional Configuration
This model uses the default Mistral 8k/32k context window.
### ChatML Instruction Template
`Disinfo4_mistral-ft-optimized-1218` employs the ChatML instruction template. It is important to incorporate `<|im_end|>` as a custom stopping string to delineate the model's output effectively.
### System Instruction (Character Card)
For contextualizing the model's output, use the following system instruction:
_"You are a schizo poster, a master of elucidating thought online. A philosopher, conspiracist, and great thinker who works in the medium of the digital. Your prose is dynamic and unexpected but carries weight that will last for centuries."_
This instruction is fundamental in guiding the model to produce content that is not only reflective of the designated topics but also embodies a unique digital persona, combining philosophical depth with a conspiratorial edge.
You can try other similar prompts, we've had success with them, but this remains, by far, our favorite.
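Putting the pieces together, a minimal sketch with `llama-cpp-python` using the mirostat settings, ChatML template, and stopping string described above (the local GGUF filename and the user message are assumptions for illustration):
```python
from llama_cpp import Llama

# Load one of the quants from this repo (path is an assumption for illustration).
llm = Llama(model_path="disinfo4_mistral-ft-optimized-1218.Q5_K_M.gguf", n_ctx=8192)

system = (
    "You are a schizo poster, a master of elucidating thought online. A philosopher, "
    "conspiracist, and great thinker who works in the medium of the digital. Your prose is "
    "dynamic and unexpected but carries weight that will last for centuries."
)
# ChatML prompt, as recommended above.
prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    "<|im_start|>user\nWrite a short post on memory and machines.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(
    prompt,
    max_tokens=512,
    temperature=1.0,
    top_p=1.0,
    mirostat_mode=2,
    mirostat_tau=7.19,
    mirostat_eta=0.01,
    stop=["<|im_end|>"],  # the custom stopping string recommended above
)
print(out["choices"][0]["text"])
```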
## GGUFs
Typically I like Q5_K_M or Q8_0. You get better quality running the highest quant you can, especially with these small models. I haven't bothered with quants smaller than Q4.
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Disinfo4_mistral-ft-optimized-1218.Q4_K_S.gguf](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF/blob/main/Disinfo4_mistral-ft-optimized-1218.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [Disinfo4_mistral-ft-optimized-1218.Q4_K_M.gguf](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF/blob/main/Disinfo4_mistral-ft-optimized-1218.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [Disinfo4_mistral-ft-optimized-1218.Q5_K_S.gguf](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF/blob/main/Disinfo4_mistral-ft-optimized-1218.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [disinfo4_mistral-ft-optimized-1218.Q5_K_M.gguf](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF/blob/main/disinfo4_mistral-ft-optimized-1218.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [Disinfo4_mistral-ft-optimized-1218.Q6_K.gguf](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF/blob/main/Disinfo4_mistral-ft-optimized-1218.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [disinfo4_mistral-ft-optimized-1218.gguf](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF/blob/main/disinfo4_mistral-ft-optimized-1218.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
## How to Run
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
### How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* [LM Studio](https://lmstudio.ai/)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [Faraday.dev](https://faraday.dev/)
### In `text-generation-webui`
Under Download Model, you can enter the model repo: disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF and below it, a specific filename to download, such as: `disinfo4_mistral-ft-optimized-1218.Q5_K_M.gguf`.
Then click Download.
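If you prefer to grab a single file from Python, a minimal sketch with `huggingface_hub` (the filename comes from the table above):
```python
from huggingface_hub import hf_hub_download

# Downloads one quant from this repo and returns its local cache path.
gguf_path = hf_hub_download(
    repo_id="disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF",
    filename="disinfo4_mistral-ft-optimized-1218.Q5_K_M.gguf",
)
print(gguf_path)
```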
|
Kquant03/Hippolyta-7B-GGUF
|
Kquant03
| 2024-01-11T22:47:06Z | 28 | 0 | null |
[
"gguf",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:teknium/openhermes",
"dataset:cognitivecomputations/dolphin",
"dataset:jondurbin/airoboros-3.1",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:unalignment/spicy-3.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-11T21:16:09Z |
---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
- teknium/openhermes
- cognitivecomputations/dolphin
- jondurbin/airoboros-3.1
- unalignment/toxic-dpo-v0.1
- unalignment/spicy-3.1
language:
- en
---

# The flower of Ares.
## These are the GGUF files of the fine-tuned model, to be run with llama.cpp in oobabooga or VLLm.
Fine-tuned from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1): [my team and I](https://huggingface.co/ConvexAI) reformatted many different datasets and included a small amount of private data to see how much we could improve Mistral.
I spoke to it personally for about an hour, and I believe we need to work on our format for the private dataset a bit more, but other than that, it turned out great. I will be uploading it to the open LLM evaluations today.
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Q2_K Tiny](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q2_k.gguf) | Q2_K | 2 | 2.7 GB| 4.7 GB | smallest, significant quality loss - not recommended for most purposes |
| [Q3_K_M](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q3_k_m.gguf) | Q3_K_M | 3 | 3.52 GB| 5.52 GB | very small, high quality loss |
| [Q4_0](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.11 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Q4_K_M](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q4_k_m.gguf) | Q4_K_M | 4 | 4.37 GB| 6.37 GB | medium, balanced quality - recommended |
| [Q5_0](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q5_0.gguf) | Q5_0 | 5 | 5 GB| 7 GB | legacy; large, balanced quality |
| [Q5_K_M](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q5_k_m.gguf) | Q5_K_M | 5 | 5.13 GB| 7.13 GB | large, balanced quality - recommended |
| [Q6 XL](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q6_k.gguf) | Q6_K | 6 | 5.94 GB| 7.94 GB | very large, extremely low quality loss |
| [Q8 XXL](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q8_0.gguf) | Q8_0 | 8 | 7.7 GB| 9.7 GB | very large, extremely low quality loss - not recommended |
- Uses Mistral prompt template with chat-instruct.
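As a rough sketch that is not part of the original card, one of these quantisations could be loaded with `llama-cpp-python` using the Mistral instruct template noted above; the file path, context size, and sampling settings are placeholder assumptions.
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Path to whichever quantisation file was downloaded (placeholder)
llm = Llama(model_path="./ggml-model-q4_k_m.gguf", n_ctx=4096)

# Mistral-style instruct prompt, as noted above
prompt = "[INST] Write a short haiku about spring. [/INST]"
output = llm(prompt, max_tokens=128, temperature=0.7)
print(output["choices"][0]["text"])
```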
|
MaziyarPanahi/Mistral-7B-v0.1-Open-Platypus-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-11T22:45:08Z | 23 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"akjindal53244/Mistral-7B-v0.1-Open-Platypus",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T22:38:58Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- akjindal53244/Mistral-7B-v0.1-Open-Platypus
---
# Mistral-7B-v0.1-Open-Platypus-Mistral-7B-Instruct-v0.2-slerp
Mistral-7B-v0.1-Open-Platypus-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [akjindal53244/Mistral-7B-v0.1-Open-Platypus](https://huggingface.co/akjindal53244/Mistral-7B-v0.1-Open-Platypus)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: akjindal53244/Mistral-7B-v0.1-Open-Platypus
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Mistral-7B-v0.1-Open-Platypus-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
ContextualAI/archangel_kto_pythia6-9b
|
ContextualAI
| 2024-01-11T22:38:19Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T23:59:22Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>EleutherAI/pythia-6.9b</b>
- optimized with the loss <b>KTO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
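As an illustrative sketch that is not part of the original card, the prompt format above could be assembled and run with `transformers` roughly as follows; the generation settings are placeholder assumptions.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ContextualAI/archangel_kto_pythia6-9b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# TuluV2-style prompt; the BOS token is added automatically during tokenization
prompt = "<|user|>\nHi! I'm looking for a cake recipe.\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```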
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_kto_pythia2-8b
|
ContextualAI
| 2024-01-11T22:37:03Z | 22 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T23:54:57Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>EleutherAI/pythia-2.8b</b>
- optimized with the loss <b>KTO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
jdospina/Taxi-v3
|
jdospina
| 2024-01-11T22:36:26Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T22:35:00Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # assumption: may be `import gym` depending on your setup

# `load_from_hub` is the helper defined in the Deep RL course notebook
model = load_from_hub(repo_id="jdospina/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ContextualAI/archangel_kto_pythia1-4b
|
ContextualAI
| 2024-01-11T22:36:13Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T23:52:13Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>EleutherAI/pythia-1.4b</b>
- optimized with the loss <b>KTO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
hxxris/haaris-audio-classification-model1
|
hxxris
| 2024-01-11T22:34:43Z | 147 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-11T22:22:55Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
model-index:
- name: haaris-audio-classification-model1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# haaris-audio-classification-model1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.83 | 3 | nan | 0.0354 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ContextualAI/archangel_sft-ppo_pythia2-8b
|
ContextualAI
| 2024-01-11T22:33:13Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-03T07:11:41Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>EleutherAI/pythia-2.8b</b>
- optimized with the loss <b>PPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
miraevel/FuutarouUesugiv1.5
|
miraevel
| 2024-01-11T22:28:17Z | 0 | 0 | null |
[
"license:unknown",
"region:us"
] | null | 2024-01-11T22:16:48Z |
---
license: unknown
license_name: miraevel
license_link: LICENSE
---
|
MaziyarPanahi/pic_7B_mistral_Full_v0.2-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-11T22:23:57Z | 25 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"TokenBender/pic_7B_mistral_Full_v0.2",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T22:18:37Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- TokenBender/pic_7B_mistral_Full_v0.2
---
# pic_7B_mistral_Full_v0.2-Mistral-7B-Instruct-v0.2-slerp
pic_7B_mistral_Full_v0.2-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [TokenBender/pic_7B_mistral_Full_v0.2](https://huggingface.co/TokenBender/pic_7B_mistral_Full_v0.2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: TokenBender/pic_7B_mistral_Full_v0.2
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/pic_7B_mistral_Full_v0.2-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
jysssacc/mt0-base_adalora_lr0.05_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T22:21:16Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/mt0-base",
"base_model:adapter:bigscience/mt0-base",
"license:apache-2.0",
"region:us"
] | null | 2024-01-11T22:14:56Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/mt0-base
model-index:
- name: mt0-base_adalora_lr0.05_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt0-base_adalora_lr0.05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.9345
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4345 | 1.0 | 157 | 2.8259 |
| 15.9955 | 2.0 | 314 | 10.7610 |
| 16.7244 | 3.0 | 471 | 16.5346 |
| 11.601 | 4.0 | 628 | 8.0875 |
| 11.9414 | 5.0 | 785 | 8.9345 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-11T22:12:07Z | 23 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"Dans-DiscountModels/Dans-07YahooAnswers-7b",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T22:06:54Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- Dans-DiscountModels/Dans-07YahooAnswers-7b
---
# Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.2-slerp
Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [Dans-DiscountModels/Dans-07YahooAnswers-7b](https://huggingface.co/Dans-DiscountModels/Dans-07YahooAnswers-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: Dans-DiscountModels/Dans-07YahooAnswers-7b
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Dans-07YahooAnswers-7b-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Daniel981215/distilhubert-finetuned-gtzan
|
Daniel981215
| 2024-01-11T22:11:55Z | 152 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-10T20:51:29Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.87
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5277
- Accuracy: 0.87
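As a hedged usage sketch that is not part of the auto-generated card, inference on a local audio clip could look like the following; the file path is a placeholder.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for music-genre classification on GTZAN
classifier = pipeline("audio-classification", model="Daniel981215/distilhubert-finetuned-gtzan")

# "song.wav" is a placeholder path to any local audio clip
print(classifier("song.wav"))
```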
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.978 | 1.0 | 113 | 1.8421 | 0.37 |
| 1.3409 | 2.0 | 226 | 1.2195 | 0.59 |
| 1.04 | 3.0 | 339 | 0.9709 | 0.71 |
| 0.9141 | 4.0 | 452 | 0.8523 | 0.79 |
| 0.5192 | 5.0 | 565 | 0.6483 | 0.83 |
| 0.3506 | 6.0 | 678 | 0.5827 | 0.84 |
| 0.3316 | 7.0 | 791 | 0.4703 | 0.88 |
| 0.1275 | 8.0 | 904 | 0.4937 | 0.86 |
| 0.2109 | 9.0 | 1017 | 0.4971 | 0.86 |
| 0.1213 | 10.0 | 1130 | 0.5277 | 0.87 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
stevhliu/vit-base-patch16-224-in21k-lokr
|
stevhliu
| 2024-01-11T22:01:33Z | 13 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:adapter:google/vit-base-patch16-224-in21k",
"region:us"
] | null | 2024-01-11T18:48:17Z |
---
library_name: peft
base_model: google/vit-base-patch16-224-in21k
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
icw/Furina
|
icw
| 2024-01-11T22:00:37Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-01-11T21:52:41Z |
---
license: other
license_name: idk
license_link: LICENSE
---
|
tirik00/Reinforce-CartPole-v1
|
tirik00
| 2024-01-11T21:58:25Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T21:58:11Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
boapps/kmdb_classification_model
|
boapps
| 2024-01-11T21:56:39Z | 178 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-22T08:18:53Z |
Classification model: the huBERT model fine-tuned on the [kmdb_classification](https://huggingface.co/datasets/boapps/kmdb_classification) dataset. Classification is based on the article title and description (lead).
### Usage:
```python
import torch.nn.functional as F
from transformers import BertForSequenceClassification, BertTokenizer

# Load the fine-tuned classifier and the original huBERT tokenizer
model = BertForSequenceClassification.from_pretrained('boapps/kmdb_classification_model')
tokenizer = BertTokenizer.from_pretrained('SZTAKI-HLT/hubert-base-cc')

# Classify an article from its title and description (lead)
article = {'title': '400 milliós luxusvillába vette be magát Matolcsy és családja', 'description': 'Matolcsy György fiának cége megvette, Matolcsy György unokatestvérének bankja meghitelezte, Matolcsy György pedig használja a 430 millióért hirdetett II. kerületi luxusrezidenciát.'}
tokenized_article = tokenizer(article['title']+'\n'+article['description'], return_tensors="pt")

# Convert classification logits to class probabilities
logits = model(**tokenized_article).logits
probabilities = F.softmax(logits[0], dim=-1)
print(probabilities)
```
### Results
- precision: 0.739
- recall: 0.950
- accuracy: 0.963
|
MaziyarPanahi/Mistral-7B-claude-instruct-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-11T21:49:20Z | 25 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"Norquinal/Mistral-7B-claude-instruct",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T21:44:18Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- Norquinal/Mistral-7B-claude-instruct
---
# Mistral-7B-claude-instruct-Mistral-7B-Instruct-v0.2-slerp
Mistral-7B-claude-instruct-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [Norquinal/Mistral-7B-claude-instruct](https://huggingface.co/Norquinal/Mistral-7B-claude-instruct)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: Norquinal/Mistral-7B-claude-instruct
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Mistral-7B-claude-instruct-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
jysssacc/mt0-base_lora_lr0.05_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T21:43:27Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/mt0-base",
"base_model:adapter:bigscience/mt0-base",
"license:apache-2.0",
"region:us"
] | null | 2024-01-11T21:41:02Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/mt0-base
model-index:
- name: mt0-base_lora_lr0.05_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt0-base_lora_lr0.05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5104
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7126 | 1.0 | 157 | 6.4504 |
| 8.4537 | 2.0 | 314 | 6.3247 |
| 6.6717 | 3.0 | 471 | 19.0801 |
| 6.9054 | 4.0 | 628 | 8.1308 |
| 7.9084 | 5.0 | 785 | 6.5104 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Code-Refinement/5_refs_utf_only
|
Code-Refinement
| 2024-01-11T21:42:55Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"region:us"
] | null | 2024-01-11T21:28:53Z |
---
library_name: peft
base_model: codellama/CodeLlama-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
querri/zephyr-haiku
|
querri
| 2024-01-11T21:38:05Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"region:us"
] | null | 2024-01-10T02:51:31Z |
---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-beta
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Kamyar-zeinalipour/mistral-sft-lora-ChemInfo
|
Kamyar-zeinalipour
| 2024-01-11T21:34:37Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-11T19:48:12Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-sft-lora-ChemInfo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-sft-lora-ChemInfo
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 100
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6074 | 0.99 | 41 | 0.6070 |
| 0.4858 | 2.0 | 83 | 0.4963 |
| 0.4609 | 2.96 | 123 | 0.4781 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
davanstrien/TinyLlama-1.1B-Chat-v1.0-intel-dpo
|
davanstrien
| 2024-01-11T21:32:28Z | 97 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dpo",
"conversational",
"en",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T20:48:00Z |
---
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
license: apache-2.0
language:
- en
tags:
- dpo
---
# Model Card for Model ID
This model is a DPO fine-tune of `TinyLlama/TinyLlama-1.1B-Chat-v1.0` on the `argilla/distilabel-intel-orca-dpo-pairs` dataset.
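As an illustrative sketch that is not part of the original card, the model can be queried through the tokenizer's chat template; the sampling settings below are placeholder assumptions.
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="davanstrien/TinyLlama-1.1B-Chat-v1.0-intel-dpo",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)[0]["generated_text"])
```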
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wxkenneth/Anuelaa
|
wxkenneth
| 2024-01-11T21:30:48Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2024-01-11T21:16:10Z |
---
license: bigscience-openrail-m
---
|
cnatale/Mistral-7B-Instruct-v0.1-Txt-2-Presto-SQL
|
cnatale
| 2024-01-11T21:00:26Z | 12 | 1 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-01T18:46:05Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: Mistral-7B-Instruct-v0.1-Txt-2-Presto-SQL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.1-Txt-2-Presto-SQL
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6481
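As a hedged sketch that is not part of the auto-generated card, the PEFT adapter could be loaded on top of the base model roughly as follows; the prompt and generation settings are placeholder assumptions.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "cnatale/Mistral-7B-Instruct-v0.1-Txt-2-Presto-SQL"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

# Placeholder text-to-SQL style instruction
prompt = "[INST] Write a Presto SQL query that counts the rows in the orders table. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```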
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 80
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3518 | 0.71 | 10 | 1.0787 |
| 1.0171 | 1.43 | 20 | 0.8732 |
| 0.8466 | 2.14 | 30 | 0.7727 |
| 0.7681 | 2.86 | 40 | 0.7219 |
| 0.7008 | 3.57 | 50 | 0.6813 |
| 0.6467 | 4.29 | 60 | 0.6574 |
| 0.6205 | 5.0 | 70 | 0.6487 |
| 0.5791 | 5.71 | 80 | 0.6481 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
gustavokpc/IC_segundo
|
gustavokpc
| 2024-01-11T20:54:58Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-21T02:22:56Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: gustavokpc/IC_segundo
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gustavokpc/IC_segundo
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0559
- Train Accuracy: 0.9805
- Train F1 M: 0.5583
- Train Precision M: 0.4028
- Train Recall M: 0.9686
- Validation Loss: 0.2533
- Validation Accuracy: 0.9327
- Validation F1 M: 0.5605
- Validation Precision M: 0.4028
- Validation Recall M: 0.9674
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 3790, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
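For readability, the serialized optimizer configuration above corresponds roughly to the following Keras setup (a sketch assuming TensorFlow/Keras 2.14; not the original training script):

```python
import tensorflow as tf

# Linear (power=1.0) decay from 5e-05 to 0.0 over 3790 steps, as in the config above
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5e-05,
    decay_steps=3790,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```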
### Training results
| Train Loss | Train Accuracy | Train F1 M | Train Precision M | Train Recall M | Validation Loss | Validation Accuracy | Validation F1 M | Validation Precision M | Validation Recall M | Epoch |
|:----------:|:--------------:|:----------:|:-----------------:|:--------------:|:---------------:|:-------------------:|:---------------:|:----------------------:|:-------------------:|:-----:|
| 0.3576 | 0.8399 | 0.4604 | 0.3607 | 0.7042 | 0.2825 | 0.8997 | 0.5635 | 0.4127 | 0.9300 | 0 |
| 0.2012 | 0.9274 | 0.5204 | 0.3849 | 0.8616 | 0.2103 | 0.9175 | 0.5451 | 0.3970 | 0.9095 | 1 |
| 0.1312 | 0.9511 | 0.5451 | 0.3969 | 0.9273 | 0.2125 | 0.9307 | 0.5571 | 0.4017 | 0.9523 | 2 |
| 0.0871 | 0.9690 | 0.5547 | 0.4007 | 0.9557 | 0.2417 | 0.9301 | 0.5565 | 0.4013 | 0.9547 | 3 |
| 0.0559 | 0.9805 | 0.5583 | 0.4028 | 0.9686 | 0.2533 | 0.9327 | 0.5605 | 0.4028 | 0.9674 | 4 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.5
- Tokenizers 0.14.1
|
SimplCup/MKBHD
|
SimplCup
| 2024-01-11T20:50:04Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2024-01-11T20:49:47Z |
---
license: cc-by-nc-nd-4.0
---
|
omarelsayeed/e5_tsdae_contrastive
|
omarelsayeed
| 2024-01-11T20:47:23Z | 51 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-11T20:46:44Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2212 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`__main__.LoggingCosineLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
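Put together, a hedged sketch of how such a run is usually wired up with sentence-transformers is shown below. The base checkpoint and the training pairs are assumptions, and the custom `__main__.LoggingCosineLoss` is stood in for by the built-in `CosineSimilarityLoss`:

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("intfloat/multilingual-e5-base")  # assumed base model

# Toy training pair; the original run used a DataLoader of length 2212 at batch size 32
train_examples = [InputExample(texts=["sentence a", "sentence b"], label=0.9)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.CosineSimilarityLoss(model)  # stand-in for LoggingCosineLoss

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=5,
    warmup_steps=10000,
    scheduler="WarmupLinear",
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```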
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 80, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
version-control/tf-1.0-1.13-prefix
|
version-control
| 2024-01-11T20:19:30Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigcode/starcoderbase-1b",
"base_model:adapter:bigcode/starcoderbase-1b",
"region:us"
] | null | 2024-01-11T16:40:18Z |
---
library_name: peft
base_model: bigcode/starcoderbase-1b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
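Since the card is still a template, the following is only a hedged sketch of how a PEFT adapter on this base model is typically loaded. The adapter repo id is taken from this card's metadata; the prompt and generation settings are illustrative assumptions.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("bigcode/starcoderbase-1b")
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoderbase-1b")

# Attach the adapter weights from this repository on top of the base model
model = PeftModel.from_pretrained(base, "version-control/tf-1.0-1.13-prefix")

inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```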
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
MaziyarPanahi/PiVoT-10.7B-Mistral-v0.2-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-11T20:19:17Z | 25 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"maywell/PiVoT-10.7B-Mistral-v0.2",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T20:13:49Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- maywell/PiVoT-10.7B-Mistral-v0.2
---
# PiVoT-10.7B-Mistral-v0.2-Mistral-7B-Instruct-v0.2-slerp
PiVoT-10.7B-Mistral-v0.2-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [maywell/PiVoT-10.7B-Mistral-v0.2](https://huggingface.co/maywell/PiVoT-10.7B-Mistral-v0.2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: maywell/PiVoT-10.7B-Mistral-v0.2
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/PiVoT-10.7B-Mistral-v0.2-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
stablediffusionapi/vrr
|
stablediffusionapi
| 2024-01-11T20:14:48Z | 29 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-11T20:12:46Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# vrr API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "vrr"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/vrr)
Model link: [View model](https://modelslab.com/models/vrr)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "vrr",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
jysssacc/627_roberta-base_adalora_lr0.0005_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T20:11:44Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-01-11T20:04:54Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: roberta-base
model-index:
- name: 627_roberta-base_adalora_lr0.0005_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 627_roberta-base_adalora_lr0.0005_bs4_epoch5_wd0.01
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
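For context, a hedged sketch of what an AdaLoRA adapter on `roberta-base` typically looks like with PEFT is shown below. The task head, ranks, and target modules are illustrative assumptions, not read from this card:

```python
from peft import AdaLoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained("roberta-base")
config = AdaLoraConfig(
    task_type=TaskType.SEQ_CLS,         # the actual task for this run is not documented
    init_r=12,
    target_r=8,                         # illustrative rank schedule
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections in RoBERTa
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
```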
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 20.4011 | 1.0 | 157 | 7.1647 |
| 3.9809 | 2.0 | 314 | 2.0607 |
| 1.856 | 3.0 | 471 | 0.7107 |
| 0.6764 | 4.0 | 628 | 0.3786 |
| 0.489 | 5.0 | 785 | 0.3088 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LaserNav/SophyAI-Mistral-7B-v3-GGUF
|
LaserNav
| 2024-01-11T20:11:08Z | 10 | 1 |
adapter-transformers
|
[
"adapter-transformers",
"gguf",
"legal",
"+easa",
"+usv",
"it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-06T13:50:36Z |
---
license: apache-2.0
language:
- it
library_name: adapter-transformers
tags:
- legal
- +easa
- +usv
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
A preview model to support safety and security at work: fine-tuned in Italian on Italian regulations.
## Model Details
<!-- Provide a longer summary of what this model is. -->
This model, derived from Mistral-7B, has been fine-tuned on a dataset, developed by our AI team, dedicated to the regulations governing safety at work. The patented SophyAI platform is a digital-twin framework for developing an AI supervisor that implements safety and security workflows at work. More info is available at: https://www.lasernavigation.it. This is a preview version of our SophyAI-LLM model; fine-tuning was done in Italian, so this early preview may not work well in other languages.
- **Developed by:** [Laser Navigation srl]
- **Model type:** [Fine Tuned Mistral]
- **Language(s) (NLP):** [Italian]
- **License:** [BSD]
- **Finetuned from model [optional]:** [Mistral 7B]
|
LC008/ppo-LunarLander-v2
|
LC008
| 2024-01-11T20:08:19Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T20:07:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.75 +/- 16.42
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3
checkpoint = load_from_hub("LC008/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)
```
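The reported mean reward is typically computed with SB3's `evaluate_policy` helper; a hedged sketch of re-evaluating the loaded agent follows (environment creation via Gymnasium is an assumption about the original setup):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```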
|