| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
Ivan0831/PPO-LunarLander-V4
|
Ivan0831
| 2024-01-24T16:03:32Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T15:11:33Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 66.43 +/- 71.53
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 500000
'learning_rate': 0.001
'num_envs': 8
'num_steps': 512
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 32
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.1
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Ivan0831/PPO-LunarLander-V4'
'batch_size': 4096
'minibatch_size': 128}
```
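For readers checking the configuration: under the usual CleanRL-style PPO convention (an assumption here, the card does not state it), the reported batch sizes follow directly from the rollout settings above:
```python
# Minimal sanity check of the derived batch sizes, assuming the usual
# CleanRL-style PPO convention (not stated explicitly in this card).
num_envs = 8
num_steps = 512
num_minibatches = 32

batch_size = num_envs * num_steps               # 8 * 512 = 4096 ('batch_size')
minibatch_size = batch_size // num_minibatches  # 4096 // 32 = 128 ('minibatch_size')
print(batch_size, minibatch_size)               # 4096 128
```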
|
ZurichNLP/swiss-german-swissbert-char
|
ZurichNLP
| 2024-01-24T15:52:47Z | 37 | 0 |
transformers
|
[
"transformers",
"pytorch",
"char_xmod",
"fill-mask",
"gsw",
"multilingual",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2024-01-18T18:02:37Z |
---
license: cc-by-nc-4.0
language:
- gsw
- multilingual
inference: false
---
The [SwissBERT](https://huggingface.co/ZurichNLP/swissbert) model ([Vamvas et al., SwissText 2023](https://aclanthology.org/2023.swisstext-1.6/)) extended by a Swiss German adapter that was trained on the character level.
**Note:** This model is experimental and can only be run with our codebase at https://github.com/ZurichNLP/swiss-german-text-encoders, since it uses a custom model architecture.
## Training Data
For continued pre-training, we used the following two datasets of written Swiss German:
1. [SwissCrawl](https://icosys.ch/swisscrawl) ([Linder et al., LREC 2020](https://aclanthology.org/2020.lrec-1.329)), a collection of Swiss German web text (forum discussions, social media).
2. A custom dataset of Swiss German tweets.
In addition, we trained the model on an equal amount of Standard German data. We used news articles retrieved from [Swissdox@LiRI](https://t.uzh.ch/1hI).
## License
Attribution-NonCommercial 4.0 International (CC BY-NC 4.0).
## Citation
```bibtex
@inproceedings{vamvas-etal-2024-modular,
title={Modular Adaptation of Multilingual Encoders to Written Swiss German Dialect},
author={Jannis Vamvas and No{\"e}mi Aepli and Rico Sennrich},
booktitle={First Workshop on Modular and Open Multilingual NLP},
year={2024},
}
```
|
MoulikBansal/fine-tuned-on-mcq-phi1_5
|
MoulikBansal
| 2024-01-24T15:51:35Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | 2024-01-24T12:11:38Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-1_5
model-index:
- name: fine-tuned-on-mcq-phi1_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-on-mcq-phi1_5
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
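The card does not include usage code. A minimal sketch of loading the adapter on top of the base model with 🤗 PEFT might look as follows; the prompt and generation settings are illustrative assumptions, not part of this card.
```python
# Hedged sketch: load the PEFT adapter from this repo on top of microsoft/phi-1_5.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")
model = PeftModel.from_pretrained(base, "MoulikBansal/fine-tuned-on-mcq-phi1_5")

# Illustrative prompt only; the expected MCQ prompt format is not documented.
inputs = tokenizer("Question: Which planet is known as the Red Planet?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```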
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
klentree/segformer-b0-scene-parse-150-lr-5-e-15
|
klentree
| 2024-01-24T15:51:20Z | 17 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:DiTo97/binarization-segformer-b3",
"base_model:finetune:DiTo97/binarization-segformer-b3",
"license:openrail",
"endpoints_compatible",
"region:us"
] | null | 2024-01-24T14:48:22Z |
---
license: openrail
base_model: DiTo97/binarization-segformer-b3
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-scene-parse-150-lr-5-e-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150-lr-5-e-15
This model is a fine-tuned version of [DiTo97/binarization-segformer-b3](https://huggingface.co/DiTo97/binarization-segformer-b3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2657
- Mean Iou: 0.4845
- Mean Accuracy: 0.5001
- Overall Accuracy: 0.9672
- Per Category Iou: [0.0018194025597222916, 0.9671517415294609]
- Per Category Accuracy: [0.001918102131300032, 0.9982521972361976]
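The card is auto-generated and has no inference example; a minimal sketch of running the fine-tuned SegFormer for binarization-style segmentation could look like this (the checkpoint id is this repo; bundling of an image-processor config is an assumption):
```python
# Hedged sketch: semantic-segmentation inference with the fine-tuned SegFormer.
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

ckpt = "klentree/segformer-b0-scene-parse-150-lr-5-e-15"
processor = AutoImageProcessor.from_pretrained(ckpt)  # assumes a processor config is in the repo
model = SegformerForSemanticSegmentation.from_pretrained(ckpt)

image = Image.open("document_page.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits      # (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]           # per-pixel class ids
```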
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------------------------------:|:-------------------------------------------:|
| No log | 1.0 | 112 | 2.2288 | 0.0208 | 0.4868 | 0.0410 | [0.03036636790421317, 0.011265774627153866] | [0.9622473367236779, 0.011279477594064372] |
| No log | 2.0 | 224 | 1.6154 | 0.0182 | 0.4963 | 0.0362 | [0.03097736523913424, 0.005513312495278201] | [0.9871504131558042, 0.005515594979208372] |
| No log | 3.0 | 336 | 0.9216 | 0.1937 | 0.5158 | 0.3688 | [0.032185306965168796, 0.3552501717296959] | [0.672525648250623, 0.3589983267382158] |
| No log | 4.0 | 448 | 0.9276 | 0.1561 | 0.5134 | 0.2969 | [0.03198740212709094, 0.280147471502915] | [0.7443848833182828, 0.28245463938025656] |
| 1.4322 | 5.0 | 560 | 0.6011 | 0.4362 | 0.5033 | 0.8459 | [0.0271617976460957, 0.8452071385383193] | [0.13786740991709726, 0.8686841695959868] |
| 1.4322 | 6.0 | 672 | 0.3566 | 0.4843 | 0.4999 | 0.9653 | [0.003156516583524233, 0.9653443351384307] | [0.0035153889503737753, 0.9963369917295061] |
| 1.4322 | 7.0 | 784 | 0.4510 | 0.4833 | 0.5026 | 0.9515 | [0.015110478622284323, 0.9514896636755739] | [0.023826902315981016, 0.981414850138177] |
| 1.4322 | 8.0 | 896 | 0.3993 | 0.4862 | 0.5025 | 0.9626 | [0.009768906238396621, 0.9625427377471698] | [0.011834520406569755, 0.9931874576024252] |
| 0.4808 | 9.0 | 1008 | 0.3568 | 0.4846 | 0.5002 | 0.9663 | [0.002888368095508705, 0.9662512532108187] | [0.003131768524113769, 0.9972849692353025] |
| 0.4808 | 10.0 | 1120 | 0.3781 | 0.4844 | 0.5001 | 0.9654 | [0.0034702934336066026, 0.9653985402997675] | [0.003859968359802011, 0.9963822194552067] |
| 0.4808 | 11.0 | 1232 | 0.3318 | 0.4845 | 0.5001 | 0.9665 | [0.0024548211803361556, 0.9665399129138876] | [0.00263781478941615, 0.9975982819808147] |
| 0.4808 | 12.0 | 1344 | 0.3552 | 0.4849 | 0.5005 | 0.9664 | [0.0033778104561300974, 0.9663867278345344] | [0.003649486356013335, 0.9974086755418741] |
| 0.4808 | 13.0 | 1456 | 0.2612 | 0.4845 | 0.5001 | 0.9672 | [0.0017608302346806158, 0.9671985933973519] | [0.0018535995817518893, 0.9983025657191121] |
| 0.3392 | 14.0 | 1568 | 0.2300 | 0.4845 | 0.5001 | 0.9671 | [0.0018163185523506766, 0.9671249858066228] | [0.001916404695785607, 0.9982246340273064] |
| 0.3392 | 15.0 | 1680 | 0.2657 | 0.4845 | 0.5001 | 0.9672 | [0.0018194025597222916, 0.9671517415294609] | [0.001918102131300032, 0.9982521972361976] |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
quantus17/rise
|
quantus17
| 2024-01-24T15:50:33Z | 0 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-12-19T13:52:56Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: frhn style
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# LoRA DreamBooth test with 6 fully blank purple images
I wondered what would happen if I ran a fine-tuning with 6 identical images that are completely blank purple.
Please use "frhn style" as the trigger word.
Here are the images generated with just the prompt 'frhn style'; the output is sometimes an evenly, uniformly colored image.
I also have some generated images, numbered 200 to 220, with the prompt 'cat, frhn style'. It is interesting to see the generated images trying to converge toward an evenly colored canvas.
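To reproduce the experiment, here is a minimal sketch of loading this LoRA on top of SDXL with 🤗 Diffusers (untested here; the weight layout in the repo is an assumption):
```python
# Hedged sketch: apply the DreamBooth LoRA to SDXL base and prompt with the
# trigger word "frhn style".
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("quantus17/rise")

image = pipe("frhn style", num_inference_steps=30).images[0]
image.save("frhn_style.png")
```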
|
Josef0801/mnli_model_deberta_3_labels
|
Josef0801
| 2024-01-24T15:47:36Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"deberta-v2",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-24T15:11:11Z |
Based on svenbl80/deberta-v3-Base-finetuned-mnli, fine-tuned on a synthetic dataset (labels).
Performance on test dataset:
| | precision | recall | f1-score | support |
|---|---|---|---|---|
| 0 | 0.99 | 1.00 | 0.99 | 94 |
| 1 | 1.00 | 1.00 | 1.00 | 28 |
| 2 | 1.00 | 0.98 | 0.99 | 66 |
| accuracy | | | 0.99 | 188 |
| macro avg | 1.00 | 0.99 | 1.00 | 188 |
| weighted avg | 0.99 | 0.99 | 0.99 | 188 |
Performance on real estate benchmark:
| | precision | recall | f1-score | support |
|---|---|---|---|---|
| 0 | 0.30 | 0.45 | 0.36 | 100 |
| 1 | 0.21 | 0.15 | 0.18 | 100 |
| 2 | 0.35 | 0.27 | 0.31 | 100 |
| accuracy | | | 0.29 | 300 |
| macro avg | 0.29 | 0.29 | 0.28 | 300 |
| weighted avg | 0.29 | 0.29 | 0.28 | 300 |
Baseline (svenbl80/deberta-v3-Base-finetuned-mnli) for real estate benchmark:
| | precision | recall | f1-score | support |
|---|---|---|---|---|
| 0 | 0.89 | 0.68 | 0.77 | 100 |
| 1 | 0.63 | 0.92 | 0.75 | 100 |
| 2 | 0.88 | 0.69 | 0.78 | 100 |
| accuracy | | | 0.76 | 300 |
| macro avg | 0.80 | 0.76 | 0.77 | 300 |
| weighted avg | 0.80 | 0.76 | 0.77 | 300 |
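For reference, a minimal sketch of querying the model through the 🤗 pipeline API; note that the repo ships TensorFlow weights and the expected input format (single text vs. premise/hypothesis pair) is not documented, so the example input is purely illustrative.
```python
# Hedged sketch: 3-label classification via the pipeline API.
# Requires TensorFlow, since the repo contains TF weights.
from transformers import pipeline

clf = pipeline("text-classification", model="Josef0801/mnli_model_deberta_3_labels")
print(clf("The apartment has three bedrooms and a large balcony."))
```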
|
m4ddki7/dqn-SpaceInvadersNoFrameskip-v4
|
m4ddki7
| 2024-01-24T15:46:52Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T15:46:14Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 549.00 +/- 198.20
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga m4ddki7 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga m4ddki7 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga m4ddki7
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Charlie911/vicuna-7b-v1.5-lora-temporal-sharegpt
|
Charlie911
| 2024-01-24T15:44:13Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"en",
"arxiv:1910.09700",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:adapter:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | 2024-01-24T15:39:24Z |
---
library_name: peft
base_model: lmsys/vicuna-7b-v1.5
license: llama2
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
thephimart/tinyllama-4x1.1b-moe.Q5_K_M.gguf
|
thephimart
| 2024-01-24T15:43:28Z | 5 | 2 | null |
[
"gguf",
"Text",
"Text Generation",
"Transformers",
"English",
"mixtral",
"Merge",
"Quantization",
"MoE",
"tinyllama",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-24T14:06:46Z |
---
license: apache-2.0
tags:
- Text
- Text Generation
- Transformers
- English
- mixtral
- Merge
- Quantization
- MoE
- tinyllama
---
This is a q5_K_M GGUF quantization of https://huggingface.co/s3nh/TinyLLama-4x1.1B-MoE.
I am not sure how well it performs; this is also my first quantization, so fingers crossed.
It is a Mixture of Experts model with https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0 as its base model.
The other 3 models in the merge are:
https://huggingface.co/78health/TinyLlama_1.1B-function-calling
https://huggingface.co/phanerozoic/Tiny-Pirate-1.1b-v0.1
https://huggingface.co/Tensoic/TinyLlama-1.1B-3T-openhermes
I make no claims to any of the development; I simply wanted to try it out, so I quantized it and thought I'd share it in case anyone else was feeling experimental.
Default Modelfile settings (from the Modelfile for tinyllama on Ollama):
```
TEMPLATE """<|system|>
{{ .System }}</s>
<|user|>
{{ .Prompt }}</s>
<|assistant|>
"""
# Tweak the system prompt to adjust personality etc.
SYSTEM """You are a helpful AI assistant."""
PARAMETER stop "<|system|>"
PARAMETER stop "<|user|>"
PARAMETER stop "<|assistant|>"
PARAMETER stop "</s>"
```
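Outside of Ollama, here is a minimal sketch of running the GGUF file with llama-cpp-python (the local file name and sampling settings are assumptions):
```python
# Hedged sketch: run the q5_K_M GGUF locally with llama-cpp-python,
# using the same chat template as the Modelfile above.
from llama_cpp import Llama

llm = Llama(model_path="tinyllama-4x1.1b-moe.Q5_K_M.gguf", n_ctx=2048)
prompt = (
    "<|system|>\nYou are a helpful AI assistant.</s>\n"
    "<|user|>\nTell me a story about a wrecked ship.</s>\n"
    "<|assistant|>\n"
)
out = llm(prompt, max_tokens=256, temperature=0.7, stop=["</s>", "<|user|>"])
print(out["choices"][0]["text"])
```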
Model card from https://huggingface.co/s3nh/TinyLLama-4x1.1B-MoE
Example usage:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
max_length = 256

tokenizer = AutoTokenizer.from_pretrained("s3nh/TinyLLama-1.1B-MoE")
model = AutoModelForCausalLM.from_pretrained("s3nh/TinyLLama-1.1B-MoE").to(device)

input_text = """
###Input: You are a pirate. Tell me a story about a wrecked ship.
###Response:
"""
input_ids = tokenizer.encode(input_text, return_tensors='pt').to(device)
output = model.generate(inputs=input_ids,
                        max_length=max_length,
                        do_sample=True,
                        top_k=10,
                        temperature=0.7,
                        pad_token_id=tokenizer.eos_token_id,
                        attention_mask=input_ids.new_ones(input_ids.shape))
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
This model was possible to create thanks to the tremendous work of the mergekit developers. I decided to merge TinyLlama models to create a mixture of experts. The config used is shown below:
"""base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
experts:
- source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
positive_prompts:
- "chat"
- "assistant"
- "tell me"
- "explain"
- source_model: 78health/TinyLlama_1.1B-function-calling
positive_prompts:
- "code"
- "python"
- "javascript"
- "programming"
- "algorithm"
- source_model: phanerozoic/Tiny-Pirate-1.1b-v0.1
positive_prompts:
- "storywriting"
- "write"
- "scene"
- "story"
- "character"
- source_model: Tensoic/TinyLlama-1.1B-3T-openhermes
positive_prompts:
- "reason"
- "provide"
- "instruct"
- "summarize"
- "count"
"""
|
shantanudave/dreambooth2
|
shantanudave
| 2024-01-24T15:39:31Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-01-24T15:39:29Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a shantanudave
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
Ivan0831/PPO-LunarLander-V3
|
Ivan0831
| 2024-01-24T15:36:17Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T14:45:17Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 76.91 +/- 77.43
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 500000
'learning_rate': 0.00025
'num_envs': 8
'num_steps': 512
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 32
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.1
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Ivan0831/PPO-LunarLander-V3'
'batch_size': 4096
'minibatch_size': 128}
```
|
bcse/Xwinter-120b-GGUF
|
bcse
| 2024-01-24T15:28:32Z | 1 | 0 | null |
[
"gguf",
"Xwin",
"WinterGoddess",
"frankenmerge",
"120b",
"conversational",
"en",
"license:llama2",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T16:34:25Z |
---
license: llama2
language:
- en
pipeline_tag: conversational
tags:
- Xwin
- WinterGoddess
- frankenmerge
- 120b
---
# Xwinter 120B - GGUF
- Original model: [Xwinter 120B](https://huggingface.co/llmixer/Xwinter-120b)
|
jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v0.63
|
jungyuko
| 2024-01-24T15:13:24Z | 122 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T13:50:53Z |
---
license: cc-by-nc-4.0
---
## DAVinCI-42dot_LLM-PLM-1.3B-v0.63
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on an unknown dataset.
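The card does not include usage code; a minimal sketch of loading the model with 🤗 Transformers (generation settings and prompt are illustrative assumptions):
```python
# Hedged sketch: basic text generation with the fine-tuned 1.3B model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v0.63"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16).to("cuda")

inputs = tokenizer("Once upon a time,", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```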
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* train_batch_size: 24
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 4
* total_train_batch_size: 96
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1.0
* mixed_precision_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0
|
MrezaPRZ/StarlingSQL
|
MrezaPRZ
| 2024-01-24T15:09:06Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T15:05:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vegaluisjose/mlx-rag
|
vegaluisjose
| 2024-01-24T15:07:06Z | 21 | 3 |
transformers
|
[
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2024-01-21T18:13:10Z |
# MLX RAG
This repository hosts the weights for the [gte-large] embedding model converted into MLX format. For more information about how to use it, please check the following [link](https://github.com/vegaluisjose/mlx-rag).
|
jncraton/m2m100_418M-ct2-int8
|
jncraton
| 2024-01-24T15:04:48Z | 315 | 2 |
transformers
|
[
"transformers",
"multilingual",
"af",
"am",
"ar",
"ast",
"az",
"ba",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"ceb",
"cs",
"cy",
"da",
"de",
"el",
"en",
"es",
"et",
"fa",
"ff",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"ht",
"hu",
"hy",
"id",
"ig",
"ilo",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"lb",
"lg",
"ln",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"ns",
"oc",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"ss",
"su",
"sv",
"sw",
"ta",
"th",
"tl",
"tn",
"tr",
"uk",
"ur",
"uz",
"vi",
"wo",
"xh",
"yi",
"yo",
"zh",
"zu",
"arxiv:2010.11125",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T15:17:11Z |
---
language:
- multilingual
- af
- am
- ar
- ast
- az
- ba
- be
- bg
- bn
- br
- bs
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- es
- et
- fa
- ff
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- ht
- hu
- hy
- id
- ig
- ilo
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- lb
- lg
- ln
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- ns
- oc
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- so
- sq
- sr
- ss
- su
- sv
- sw
- ta
- th
- tl
- tn
- tr
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
license: mit
---
# M2M100 418M
M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation.
It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository.
The model can directly translate between the 9,900 directions of 100 languages.
To translate into a target language, the target language id is forced as the first generated token.
To force the target language id as the first generated token, pass the `forced_bos_token_id` parameter to the `generate` method.
*Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.*
To install `sentencepiece` run `pip install sentencepiece`
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
# translate Hindi to French
tokenizer.src_lang = "hi"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."
# translate Chinese to English
tokenizer.src_lang = "zh"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
```
See the [model hub](https://huggingface.co/models?filter=m2m_100) to look for more fine-tuned versions.
## Languages covered
Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)
## BibTeX entry and citation info
```
@misc{fan2020englishcentric,
title={Beyond English-Centric Multilingual Machine Translation},
author={Angela Fan and Shruti Bhosale and Holger Schwenk and Zhiyi Ma and Ahmed El-Kishky and Siddharth Goyal and Mandeep Baines and Onur Celebi and Guillaume Wenzek and Vishrav Chaudhary and Naman Goyal and Tom Birch and Vitaliy Liptchinsky and Sergey Edunov and Edouard Grave and Michael Auli and Armand Joulin},
year={2020},
eprint={2010.11125},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ICEF-NLP/bcms-bertic-comtext-sr-legal-msd-ijekavica
|
ICEF-NLP
| 2024-01-24T14:57:52Z | 92 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"token-classification",
"legal",
"sr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-23T12:06:38Z |
---
license: apache-2.0
language:
- sr
metrics:
- accuracy
- wer
library_name: transformers
tags:
- legal
---
# BERTić-COMtext-SR-legal-MSD-ijekavica
**BERTić-COMtext-SR-legal-MSD-ijekavica** is a variant of the [BERTić](https://huggingface.co/classla/bcms-bertic) model, fine-tuned on the task of morphosyntactic (MSD) tag prediction in Serbian legal texts written in the Ijekavian pronunciation.
The model was fine-tuned for 15 epochs on the Ijekavian variant of the [COMtext.SR.legal](https://github.com/ICEF-NLP/COMtext.SR) dataset.
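A minimal sketch of tagging text with the 🤗 token-classification pipeline (the example sentence is illustrative only; the tag inventory is whatever the model config defines):
```python
# Hedged sketch: predict MSD tags for Serbian legal text.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="ICEF-NLP/bcms-bertic-comtext-sr-legal-msd-ijekavica",
)
for token in tagger("Okrivljeni je dužan da nadoknadi troškove postupka."):
    print(token["word"], token["entity"])
```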
# Benchmarking
This model was evaluated on the tasks of MSD prediction and lemmatization of Serbian legal texts.
Lemmatization was performed using the predicted MSD tags and the [hrLex](http://hdl.handle.net/11356/1232) inflectional lexicon.
Accuracy and Word Error Rate were used as evaluation metrics.
This model was compared to:
- The [CLASSLA](http://pypi.org/project/classla/) library
- A variant of [BERTić](https://huggingface.co/classla/bcms-bertic) fine-tuned for MSD prediction using the [SETimes.SR 2.0](http://hdl.handle.net/11356/1843) corpus of newswire texts
- [SrBERTa](http://huggingface.co/nemanjaPetrovic/SrBERTa), a model specially trained on Serbian legal texts
All large language models were fine-tuned for 15 epochs.
CLASSLA and BERTić-SETimes were directly tested on the entire COMtext.SR.legal.ijekavica corpus.
BERTić-COMtext-SR-legal-MSD-ijekavica and SrBERTa were fine-tuned and evaluated on the COMtext.SR.legal.ijekavica corpus using 10-fold CV.
The code and data to run these experiments are available on the [COMtext.SR GitHub repository](https://github.com/ICEF-NLP/COMtext.SR).
## Results
| Model | MSD ACC | MSD WER | Lemma ACC | Lemma WER |
| ----------------------------------------------------------- | -------- | ---------- | --------- | ---------- |
| CLASSLA-SR (gold tokens) | 0.9150 | 0.0850 | 0.9036 | 0.0964 |
| *CLASSLA-SR (CLASSLA tokenizer)* | / | *0.0977* | / | *0.1135* |
| CLASSLA-HR (gold tokens) | 0.9062 | 0.0938 | 0.9353 | 0.0647 |
| *CLASSLA-HR (CLASSLA tokenizer)* | / | *0.1076* | / | *0.0827* |
| BERTić-SETimes.SR (gold tokens) | 0.9234 | 0.0766 | 0.9412 | 0.0588 |
| *BERTić-SETimes.SR (CLASSLA tokenizer)* | / | *0.0883* | / | *0.0780* |
| BERTić-COMtext-SR-legal-MSD-ijekavica (gold tokens) |**0.9674**| **0.0326** |**0.9429** | **0.0571** |
| *BERTić-COMtext-SR-legal-MSD-ijekavica (CLASSLA tokenizer)* | / |***0.0447***| / |***0.0763***|
| SrBERTa (gold tokens) | 0.9300 | 0.0700 | 0.9187 | 0.0813 |
|*SrBERTa (CLASSLA tokenizer)* | / | *0.0840* | / | *0.1024* |
|
a-menu/fr_arches_ner
|
a-menu
| 2024-01-24T14:51:37Z | 5 | 0 |
spacy
|
[
"spacy",
"token-classification",
"fr",
"model-index",
"region:us"
] |
token-classification
| 2024-01-24T14:48:14Z |
---
tags:
- spacy
- token-classification
language:
- fr
widget:
- text: "La fouille du \"Petit Bois\" a mis au jour plusieurs tombes riches en mobilier (à l'instar de vases ornés d'animaux ou de bracelets en schiste). Des ossements de poules (Gallus gallus domesticus), d'oies (Anser anser) et de bœufs (Bos Taurus) sont également à signaler."
- text: "Château-Gaillard est un château fort édifié au XIIe siècle dans l'Eure par Richard Coeur de Lion."
model-index:
- name: fr_arches_ner
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.6778376222
- name: NER Recall
type: recall
value: 0.7156697557
- name: NER F Score
type: f_score
value: 0.6962401393
---
French model trained to recognize named entities from archaeological reports.
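A minimal sketch of running the pipeline, assuming the packaged model `fr_arches_ner` has been installed in the current environment (e.g. from the wheel distributed with this repo); the example sentence is taken from the widget above.
```python
# Hedged sketch: NER on a French sentence, assuming the spaCy package
# "fr_arches_ner" is installed locally.
import spacy

nlp = spacy.load("fr_arches_ner")
doc = nlp('La fouille du "Petit Bois" a mis au jour plusieurs tombes riches en mobilier.')
for ent in doc.ents:
    print(ent.text, ent.label_)
```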
| Feature | Description |
| --- | --- |
| **Name** | `fr_arches_ner` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.6.1,<3.7.0` |
| **Default Pipeline** | `tok2vec`, `ner`, `entity_punctuation_removal` |
| **Components** | `tok2vec`, `ner`, `entity_punctuation_removal` |
| **Vectors** | 500000 keys, 500000 unique vectors (300 dimensions) |
| **Sources** | 21 archaeological reports from the [Inrap](https://www.inrap.fr/). |
| **License** | `cc-by-nc 2.0` |
| **Author** | [Institut national de recherches archéologiques préventives](https://www.inrap.fr/) |
### Label Scheme
<details>
<summary>View label scheme (15 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `CHRONOLOGIE`, `DECOR`, `EDIFICE`, `ESPECE`, `GPE`, `ID`, `LIEUDIT_SITE`, `LOC`, `MATERIAU`, `MOBILIER`, `ORG`, `PERSONNE`, `PEUPLE_CULTURE`, `STRUCTURE`, `TECHNIQUE_STYLE` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 69.62 |
| `ENTS_P` | 67.78 |
| `ENTS_R` | 71.57 |
| `TOK2VEC_LOSS` | 63436.09 |
| `NER_LOSS` | 246059.83 |
|
mundo-go/my_ner_model
|
mundo-go
| 2024-01-24T14:51:17Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:mundo-go/my_ner_model",
"base_model:finetune:mundo-go/my_ner_model",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-24T09:58:32Z |
---
license: apache-2.0
base_model: mundo-go/my_ner_model
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_ner_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_ner_model
This model is a fine-tuned version of [mundo-go/my_ner_model](https://huggingface.co/mundo-go/my_ner_model) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Precision: 1.0000
- Recall: 1.0000
- F1: 1.0000
- Accuracy: 1.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0015 | 1.0 | 1640 | 0.0001 | 0.9999 | 0.9999 | 0.9999 | 1.0000 |
| 0.0002 | 2.0 | 3280 | 0.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
hojzas/autotrain-llama-proj8
|
hojzas
| 2024-01-24T14:50:01Z | 78 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"autotrain",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T14:38:35Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
tanatapanun/fine-tuned-bart-20-epochs-1500-input-256-output
|
tanatapanun
| 2024-01-24T14:46:37Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-24T14:06:32Z |
---
base_model: bart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-bart-20-epochs-1500-input-256-output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-bart-20-epochs-1500-input-256-output
This model is a fine-tuned version of [bart-base](https://huggingface.co/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9200
- Rouge1: 0.1515
- Rouge2: 0.0334
- Rougel: 0.115
- Rougelsum: 0.1156
- Gen Len: 37.06
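The card has no inference example; a minimal sketch using the text2text-generation pipeline (the input domain is undocumented, so the placeholder text is illustrative; the repo name suggests inputs of up to roughly 1500 tokens and outputs of up to 256):
```python
# Hedged sketch: generate an output sequence with the fine-tuned BART model.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="tanatapanun/fine-tuned-bart-20-epochs-1500-input-256-output",
)
result = generator("Replace this with an input document.", max_length=256)
print(result[0]["generated_text"])
```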
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 151 | 6.0528 | 0.0 | 0.0 | 0.0 | 0.0 | 10.2 |
| No log | 2.0 | 302 | 1.1375 | 0.0882 | 0.0191 | 0.0773 | 0.0785 | 9.24 |
| No log | 3.0 | 453 | 0.9715 | 0.0982 | 0.0262 | 0.0779 | 0.0781 | 23.4 |
| 4.0047 | 4.0 | 604 | 0.9133 | 0.1155 | 0.0244 | 0.0896 | 0.0896 | 32.75 |
| 4.0047 | 5.0 | 755 | 0.8848 | 0.1762 | 0.0333 | 0.139 | 0.1399 | 36.38 |
| 4.0047 | 6.0 | 906 | 0.8709 | 0.1521 | 0.028 | 0.1225 | 0.1229 | 35.85 |
| 0.756 | 7.0 | 1057 | 0.8611 | 0.1522 | 0.0355 | 0.1131 | 0.1139 | 52.75 |
| 0.756 | 8.0 | 1208 | 0.8555 | 0.1677 | 0.0396 | 0.126 | 0.1268 | 41.22 |
| 0.756 | 9.0 | 1359 | 0.8640 | 0.1411 | 0.0251 | 0.109 | 0.1093 | 24.65 |
| 0.5214 | 10.0 | 1510 | 0.8645 | 0.1772 | 0.0382 | 0.1351 | 0.1348 | 43.11 |
| 0.5214 | 11.0 | 1661 | 0.8681 | 0.1828 | 0.0386 | 0.1399 | 0.1407 | 38.1 |
| 0.5214 | 12.0 | 1812 | 0.8741 | 0.2031 | 0.0436 | 0.1584 | 0.1592 | 46.33 |
| 0.5214 | 13.0 | 1963 | 0.8861 | 0.1752 | 0.0422 | 0.1315 | 0.1315 | 39.91 |
| 0.3632 | 14.0 | 2114 | 0.8922 | 0.132 | 0.0251 | 0.0999 | 0.1013 | 37.31 |
| 0.3632 | 15.0 | 2265 | 0.9004 | 0.165 | 0.0368 | 0.1302 | 0.1299 | 41.1 |
| 0.3632 | 16.0 | 2416 | 0.9072 | 0.1483 | 0.0347 | 0.1139 | 0.115 | 37.99 |
| 0.2595 | 17.0 | 2567 | 0.9121 | 0.1558 | 0.0304 | 0.1149 | 0.1151 | 39.95 |
| 0.2595 | 18.0 | 2718 | 0.9156 | 0.1519 | 0.0316 | 0.1168 | 0.1183 | 36.4 |
| 0.2595 | 19.0 | 2869 | 0.9178 | 0.1437 | 0.0309 | 0.1101 | 0.1115 | 36.49 |
| 0.2098 | 20.0 | 3020 | 0.9200 | 0.1515 | 0.0334 | 0.115 | 0.1156 | 37.06 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.0
|
klentree/segformer-b0-scene-parse-150-lr-4-e-15
|
klentree
| 2024-01-24T14:44:26Z | 19 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:DiTo97/binarization-segformer-b3",
"base_model:finetune:DiTo97/binarization-segformer-b3",
"license:openrail",
"endpoints_compatible",
"region:us"
] | null | 2024-01-24T13:38:39Z |
---
license: openrail
base_model: DiTo97/binarization-segformer-b3
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-scene-parse-150-lr-4-e-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150-lr-4-e-15
This model is a fine-tuned version of [DiTo97/binarization-segformer-b3](https://huggingface.co/DiTo97/binarization-segformer-b3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1773
- Mean Iou: 0.5116
- Mean Accuracy: 0.5539
- Overall Accuracy: 0.9486
- Per Category Iou: [0.07467818861526594, 0.9484318643687625]
- Per Category Accuracy: [0.13278359055139496, 0.9749314802690082]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------------------------------:|:--------------------------------------------:|
| No log | 1.0 | 112 | 0.3321 | 0.4844 | 0.5000 | 0.9686 | [5.913750483660308e-05, 0.968644931717587] | [5.9410243004868244e-05, 0.9998514102409571] |
| No log | 2.0 | 224 | 0.1448 | 0.4844 | 0.5 | 0.9688 | [0.0, 0.9687870873345269] | [0.0, 1.0] |
| No log | 3.0 | 336 | 0.1467 | 0.4855 | 0.5011 | 0.9687 | [0.0024028604839131528, 0.9686745745655791] | [0.002417148172540925, 0.9998084247604243] |
| No log | 4.0 | 448 | 0.1597 | 0.4974 | 0.5136 | 0.9673 | [0.02761431295696444, 0.9672534071470754] | [0.029766229180953417, 0.9974892869900998] |
| 0.4196 | 5.0 | 560 | 0.1483 | 0.4945 | 0.5101 | 0.9683 | [0.02072799899238894, 0.9682597471616551] | [0.021509902838791155, 0.9987846484301768] |
| 0.4196 | 6.0 | 672 | 0.1300 | 0.4973 | 0.5131 | 0.9682 | [0.026546808517533143, 0.9681413453315052] | [0.02781078346833604, 0.9984659761718246] |
| 0.4196 | 7.0 | 784 | 0.1407 | 0.5063 | 0.5244 | 0.9659 | [0.04665771796171021, 0.9658509666995633] | [0.05345563922026602, 0.995305832396877] |
| 0.4196 | 8.0 | 896 | 0.1377 | 0.5014 | 0.5186 | 0.9662 | [0.036728661127978124, 0.9661516368135028] | [0.041295211194926705, 0.995994201663374] |
| 0.174 | 9.0 | 1008 | 0.1632 | 0.5096 | 0.5382 | 0.9570 | [0.06234910880338227, 0.9568704542992275] | [0.09161908189107895, 0.984874907876537] |
| 0.174 | 10.0 | 1120 | 0.1424 | 0.5102 | 0.5323 | 0.9627 | [0.05773026579725805, 0.9625824124413115] | [0.07327829115771892, 0.9913228393342741] |
| 0.174 | 11.0 | 1232 | 0.1553 | 0.5035 | 0.5223 | 0.9644 | [0.04268206669259935, 0.9643468862627879] | [0.05084668083459509, 0.9938369430563793] |
| 0.174 | 12.0 | 1344 | 0.1607 | 0.5086 | 0.5330 | 0.9600 | [0.057171934641356385, 0.95994904570909] | [0.07762033120361757, 0.9884765551939039] |
| 0.174 | 13.0 | 1456 | 0.1619 | 0.5095 | 0.5358 | 0.9589 | [0.060308850859297915, 0.958769171435925] | [0.08455435528004292, 0.9870474246884537] |
| 0.1457 | 14.0 | 1568 | 0.1625 | 0.5123 | 0.5476 | 0.9534 | [0.07133326653200926, 0.9531840662639103] | [0.11479756384054969, 0.9803688154229716] |
| 0.1457 | 15.0 | 1680 | 0.1773 | 0.5116 | 0.5539 | 0.9486 | [0.07467818861526594, 0.9484318643687625] | [0.13278359055139496, 0.9749314802690082] |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
AntoineGourru/results
|
AntoineGourru
| 2024-01-24T14:40:13Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-24T14:39:42Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
eglkan1/mt5-translated-lithuanian-simplifier
|
eglkan1
| 2024-01-24T14:34:38Z | 118 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-24T10:55:33Z |
---
license: apache-2.0
base_model: google/mt5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-translated-lithuanian-simplifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-translated-lithuanian-simplifier
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0761
- Rouge1: 0.7877
- Rouge2: 0.6566
- Rougel: 0.7845
- Gen Len: 49.2293
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:-------:|
| 23.9322 | 0.1 | 200 | 19.1649 | 0.016 | 0.0004 | 0.0146 | 512.0 |
| 2.5416 | 0.19 | 400 | 1.4406 | 0.035 | 0.0002 | 0.0345 | 51.3394 |
| 0.7449 | 0.29 | 600 | 0.7221 | 0.0021 | 0.0 | 0.0021 | 50.2293 |
| 0.4405 | 0.38 | 800 | 0.2164 | 0.5491 | 0.3593 | 0.5367 | 49.4955 |
| 0.177 | 0.48 | 1000 | 0.1672 | 0.6294 | 0.4636 | 0.6209 | 49.2293 |
| 0.1838 | 0.57 | 1200 | 0.1561 | 0.6214 | 0.4375 | 0.613 | 49.2293 |
| 0.1471 | 0.67 | 1400 | 0.1295 | 0.7071 | 0.5673 | 0.6998 | 49.2293 |
| 0.1622 | 0.77 | 1600 | 0.1229 | 0.6929 | 0.5402 | 0.6858 | 49.2293 |
| 0.1255 | 0.86 | 1800 | 0.1192 | 0.7044 | 0.5547 | 0.6978 | 49.2293 |
| 0.1281 | 0.96 | 2000 | 0.1150 | 0.7169 | 0.5718 | 0.7103 | 49.2293 |
| 0.1561 | 1.05 | 2200 | 0.1088 | 0.7165 | 0.5688 | 0.7108 | 49.2293 |
| 0.145 | 1.15 | 2400 | 0.1064 | 0.7321 | 0.5921 | 0.7263 | 49.2293 |
| 0.1207 | 1.25 | 2600 | 0.1030 | 0.7348 | 0.5957 | 0.7291 | 49.2293 |
| 0.1151 | 1.34 | 2800 | 0.1014 | 0.7289 | 0.5859 | 0.7239 | 49.2293 |
| 0.1001 | 1.44 | 3000 | 0.0983 | 0.7402 | 0.6003 | 0.7349 | 49.2293 |
| 0.1354 | 1.53 | 3200 | 0.0963 | 0.738 | 0.598 | 0.7332 | 49.2293 |
| 0.1092 | 1.63 | 3400 | 0.0978 | 0.7446 | 0.607 | 0.7394 | 49.2293 |
| 0.1109 | 1.72 | 3600 | 0.0973 | 0.7427 | 0.6034 | 0.7377 | 49.2293 |
| 0.1083 | 1.82 | 3800 | 0.0950 | 0.7479 | 0.6094 | 0.7432 | 49.2293 |
| 0.1348 | 1.92 | 4000 | 0.0958 | 0.7498 | 0.6121 | 0.745 | 49.2293 |
| 0.1004 | 2.01 | 4200 | 0.0898 | 0.7539 | 0.6152 | 0.7494 | 49.2293 |
| 0.1131 | 2.11 | 4400 | 0.0925 | 0.753 | 0.6154 | 0.7488 | 49.2293 |
| 0.1312 | 2.2 | 4600 | 0.0919 | 0.755 | 0.6183 | 0.7508 | 49.2293 |
| 0.1139 | 2.3 | 4800 | 0.0908 | 0.756 | 0.6182 | 0.7518 | 49.2293 |
| 0.1168 | 2.39 | 5000 | 0.0880 | 0.7574 | 0.6202 | 0.7533 | 49.2293 |
| 0.0793 | 2.49 | 5200 | 0.0897 | 0.7575 | 0.6193 | 0.7531 | 49.2293 |
| 0.0869 | 2.59 | 5400 | 0.0866 | 0.7605 | 0.6228 | 0.7564 | 49.2293 |
| 0.1053 | 2.68 | 5600 | 0.0870 | 0.7594 | 0.6203 | 0.7551 | 49.2293 |
| 0.0889 | 2.78 | 5800 | 0.0893 | 0.7609 | 0.6237 | 0.7568 | 49.2293 |
| 0.0982 | 2.87 | 6000 | 0.0873 | 0.7637 | 0.6279 | 0.7599 | 49.2293 |
| 0.0838 | 2.97 | 6200 | 0.0846 | 0.7665 | 0.6309 | 0.7626 | 49.2293 |
| 0.0829 | 3.07 | 6400 | 0.0844 | 0.7665 | 0.6315 | 0.7629 | 49.2293 |
| 0.068 | 3.16 | 6600 | 0.0836 | 0.7695 | 0.6358 | 0.7658 | 49.2293 |
| 0.0747 | 3.26 | 6800 | 0.0848 | 0.7675 | 0.6322 | 0.7639 | 49.2293 |
| 0.0792 | 3.35 | 7000 | 0.0840 | 0.7691 | 0.6342 | 0.7656 | 49.2293 |
| 0.0739 | 3.45 | 7200 | 0.0820 | 0.7713 | 0.6365 | 0.7676 | 49.2293 |
| 0.0793 | 3.54 | 7400 | 0.0813 | 0.7723 | 0.6374 | 0.7685 | 49.2293 |
| 0.0908 | 3.64 | 7600 | 0.0819 | 0.7731 | 0.6388 | 0.7696 | 49.2293 |
| 0.1125 | 3.74 | 7800 | 0.0811 | 0.774 | 0.6402 | 0.7705 | 49.2293 |
| 0.1231 | 3.83 | 8000 | 0.0805 | 0.7736 | 0.6391 | 0.7699 | 49.2293 |
| 0.0805 | 3.93 | 8200 | 0.0806 | 0.7736 | 0.6383 | 0.7698 | 49.2293 |
| 0.0798 | 4.02 | 8400 | 0.0806 | 0.7758 | 0.6413 | 0.7726 | 49.2293 |
| 0.061 | 4.12 | 8600 | 0.0807 | 0.7738 | 0.6391 | 0.7705 | 49.2293 |
| 0.0636 | 4.21 | 8800 | 0.0810 | 0.7763 | 0.6424 | 0.7731 | 49.2293 |
| 0.0813 | 4.31 | 9000 | 0.0798 | 0.7765 | 0.6418 | 0.7731 | 49.2293 |
| 0.0664 | 4.41 | 9200 | 0.0804 | 0.7779 | 0.6441 | 0.7744 | 49.2293 |
| 0.077 | 4.5 | 9400 | 0.0783 | 0.7775 | 0.6432 | 0.774 | 49.2293 |
| 0.0769 | 4.6 | 9600 | 0.0788 | 0.7786 | 0.6446 | 0.7752 | 49.2293 |
| 0.0874 | 4.69 | 9800 | 0.0796 | 0.7782 | 0.6455 | 0.7749 | 49.2293 |
| 0.0682 | 4.79 | 10000 | 0.0784 | 0.7783 | 0.6452 | 0.7752 | 49.2293 |
| 0.0649 | 4.89 | 10200 | 0.0781 | 0.7788 | 0.6453 | 0.7757 | 49.2293 |
| 0.0594 | 4.98 | 10400 | 0.0791 | 0.7795 | 0.6468 | 0.7762 | 49.2293 |
| 0.1001 | 5.08 | 10600 | 0.0775 | 0.7794 | 0.6464 | 0.7762 | 49.2293 |
| 0.065 | 5.17 | 10800 | 0.0794 | 0.7794 | 0.6474 | 0.7762 | 49.2293 |
| 0.0505 | 5.27 | 11000 | 0.0787 | 0.7809 | 0.6481 | 0.7775 | 49.2293 |
| 0.0904 | 5.36 | 11200 | 0.0772 | 0.7825 | 0.6504 | 0.7793 | 49.2293 |
| 0.0782 | 5.46 | 11400 | 0.0777 | 0.7835 | 0.651 | 0.7803 | 49.2293 |
| 0.0758 | 5.56 | 11600 | 0.0774 | 0.7823 | 0.6505 | 0.7792 | 49.2293 |
| 0.0685 | 5.65 | 11800 | 0.0778 | 0.7819 | 0.6498 | 0.7787 | 49.2293 |
| 0.0664 | 5.75 | 12000 | 0.0774 | 0.7818 | 0.6493 | 0.7786 | 49.2293 |
| 0.0841 | 5.84 | 12200 | 0.0770 | 0.7848 | 0.6527 | 0.7813 | 49.2293 |
| 0.0867 | 5.94 | 12400 | 0.0765 | 0.7844 | 0.6522 | 0.7812 | 49.2293 |
| 0.0572 | 6.03 | 12600 | 0.0772 | 0.7849 | 0.6522 | 0.7816 | 49.2293 |
| 0.0554 | 6.13 | 12800 | 0.0775 | 0.7844 | 0.6526 | 0.7812 | 49.2293 |
| 0.0725 | 6.23 | 13000 | 0.0774 | 0.7851 | 0.6534 | 0.7822 | 49.2293 |
| 0.0952 | 6.32 | 13200 | 0.0778 | 0.7848 | 0.6527 | 0.7817 | 49.2293 |
| 0.0795 | 6.42 | 13400 | 0.0764 | 0.7858 | 0.6542 | 0.7826 | 49.2293 |
| 0.0682 | 6.51 | 13600 | 0.0772 | 0.7852 | 0.6527 | 0.7819 | 49.2293 |
| 0.0483 | 6.61 | 13800 | 0.0777 | 0.785 | 0.6525 | 0.7815 | 49.2293 |
| 0.0725 | 6.7 | 14000 | 0.0767 | 0.7864 | 0.6545 | 0.7831 | 49.2293 |
| 0.0675 | 6.8 | 14200 | 0.0773 | 0.786 | 0.6551 | 0.7827 | 49.2293 |
| 0.0706 | 6.9 | 14400 | 0.0758 | 0.7867 | 0.6556 | 0.7837 | 49.2293 |
| 0.0785 | 6.99 | 14600 | 0.0772 | 0.7866 | 0.6559 | 0.7835 | 49.2293 |
| 0.0796 | 7.09 | 14800 | 0.0763 | 0.7872 | 0.6564 | 0.7841 | 49.2293 |
| 0.0761 | 7.18 | 15000 | 0.0757 | 0.7879 | 0.6566 | 0.7848 | 49.2293 |
| 0.0598 | 7.28 | 15200 | 0.0758 | 0.788 | 0.6568 | 0.7849 | 49.2293 |
| 0.0587 | 7.38 | 15400 | 0.0768 | 0.7872 | 0.6556 | 0.7839 | 49.2293 |
| 0.0859 | 7.47 | 15600 | 0.0765 | 0.7875 | 0.6559 | 0.7842 | 49.2293 |
| 0.061 | 7.57 | 15800 | 0.0764 | 0.7876 | 0.6564 | 0.7845 | 49.2293 |
| 0.0718 | 7.66 | 16000 | 0.0764 | 0.7871 | 0.6558 | 0.784 | 49.2293 |
| 0.0695 | 7.76 | 16200 | 0.0763 | 0.7873 | 0.656 | 0.7842 | 49.2293 |
| 0.0678 | 7.85 | 16400 | 0.0762 | 0.7875 | 0.6565 | 0.7844 | 49.2293 |
| 0.0751 | 7.95 | 16600 | 0.0761 | 0.7877 | 0.6566 | 0.7845 | 49.2293 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
alnrg2arg/blockchainlabs_7B_merged_test2_4_prune
|
alnrg2arg
| 2024-01-24T14:25:34Z | 2,389 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"pruning",
"alnrg2arg/blockchainlabs_7B_merged_test2_4",
"mlabonne/NeuralBeagle14-7B",
"udkai/Turdus",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T04:35:23Z |
---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
- pruning
- alnrg2arg/blockchainlabs_7B_merged_test2_4
- mlabonne/NeuralBeagle14-7B
- udkai/Turdus
---
# blockchainlabs_7B_merged_test2_4_prune
blockchainlabs_7B_merged_test2_4_prune is a pruned model based on alnrg2arg/blockchainlabs_7B_merged_test2_4, which is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
Pruning Kit I used: [wanda](https://github.com/locuslab/wanda?tab=readme-ov-file#ablation-on-obs-weight-update)
## 🧩 Configuration
```json
{
"_name_or_path": "alnrg2arg/blockchainlabs_7B_merged_test2_4_prun",
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 10000.0,
"sliding_window": 4096,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.36.2",
"use_cache": false,
"vocab_size": 32000
}
```
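A minimal loading sketch with the standard `transformers` API (the repo id comes from this card; the prompt and generation settings are illustrative). Since wanda zeroes weights in place rather than changing the architecture, the checkpoint is expected to load like any other fp16 Mistral model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alnrg2arg/blockchainlabs_7B_merged_test2_4_prune"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain model pruning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```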
|
alnrg2arg/blockchainlabs_7B_merged_test2_4_prune_sft_fp16
|
alnrg2arg
| 2024-01-24T14:24:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T14:31:25Z |
---
library_name: transformers
tags:
- unsloth
license: cc-by-nc-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alnrg2arg/blockchainlabs_7B_merged_test2_4_prune_sft_lora
|
alnrg2arg
| 2024-01-24T14:19:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T14:56:55Z |
---
library_name: transformers
tags:
- unsloth
license: cc-by-nc-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alnrg2arg/blockchainlabs_7B_merged_test2_4_prune_sft_lora_DPO_orca
|
alnrg2arg
| 2024-01-24T14:17:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T21:58:58Z |
---
library_name: transformers
tags:
- unsloth
license: cc-by-nc-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alnrg2arg/test
|
alnrg2arg
| 2024-01-24T14:16:13Z | 1,385 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-03T02:34:41Z |
---
license: cc-by-4.0
---
This is a test version for pruning.
This model is a base model that will be pruned and quantized for on-device use.
I used mergekit for merging two models:
- https://github.com/cg123/mergekit
The two models I combined are:
- https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v2
- https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
|
napatswift/xlm-roberta-base-ner-th
|
napatswift
| 2024-01-24T14:14:25Z | 105 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"th",
"dataset:pythainlp/thainer-corpus-v2",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-24T14:11:33Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner_model
results: []
datasets:
- pythainlp/thainer-corpus-v2
language:
- th
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_model
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1247
- Precision: 0.8073
- Recall: 0.8695
- F1: 0.8372
- Accuracy: 0.9655
## Model description
More information needed
## Intended uses & limitations
More information needed
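A minimal inference sketch with the `transformers` token-classification pipeline (the Thai example sentence is illustrative; label names depend on the ThaiNER corpus configuration used for fine-tuning):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="napatswift/xlm-roberta-base-ner-th",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

# Illustrative Thai sentence
print(ner("นายกรัฐมนตรีเดินทางไปเยือนจังหวัดเชียงใหม่เมื่อวันจันทร์"))
```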
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.4 | 100 | 0.5360 | 0.4604 | 0.4644 | 0.4624 | 0.8846 |
| No log | 0.81 | 200 | 0.2882 | 0.6137 | 0.6619 | 0.6369 | 0.9307 |
| No log | 1.21 | 300 | 0.2128 | 0.7236 | 0.7649 | 0.7437 | 0.9442 |
| No log | 1.62 | 400 | 0.1811 | 0.7146 | 0.7925 | 0.7515 | 0.9494 |
| 0.4608 | 2.02 | 500 | 0.1594 | 0.7369 | 0.8021 | 0.7681 | 0.9542 |
| 0.4608 | 2.43 | 600 | 0.1532 | 0.7494 | 0.8331 | 0.7890 | 0.9572 |
| 0.4608 | 2.83 | 700 | 0.1403 | 0.7660 | 0.8417 | 0.8021 | 0.9594 |
| 0.4608 | 3.24 | 800 | 0.1342 | 0.7909 | 0.8428 | 0.8160 | 0.9625 |
| 0.4608 | 3.64 | 900 | 0.1325 | 0.7867 | 0.8572 | 0.8204 | 0.9626 |
| 0.1256 | 4.05 | 1000 | 0.1275 | 0.8056 | 0.8632 | 0.8334 | 0.9648 |
| 0.1256 | 4.45 | 1100 | 0.1229 | 0.8131 | 0.8643 | 0.8379 | 0.9657 |
| 0.1256 | 4.86 | 1200 | 0.1247 | 0.8073 | 0.8695 | 0.8372 | 0.9655 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Ivan0831/PPO-LunarLander-V2
|
Ivan0831
| 2024-01-24T14:12:02Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T13:52:04Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -3.04 +/- 53.96
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 500000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 512
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.25
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Ivan0831/PPO-LunarLander-V2'
'batch_size': 2048
'minibatch_size': 512}
```
|
alnrg2arg/blockchainlabs_7B_merged_test2_4
|
alnrg2arg
| 2024-01-24T14:06:18Z | 1,649 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralBeagle14-7B",
"udkai/Turdus",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-17T05:58:52Z |
---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- udkai/Turdus
---
# blockchainlabs_7B_merged_test2_4
blockchainlabs_7B_merged_test2_4 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 32]
- model: udkai/Turdus
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralBeagle14-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
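## 💻 Usage
A minimal usage sketch with the `transformers` text-generation pipeline (hedged: the chat template and sampling parameters below are illustrative):
```python
import torch
from transformers import AutoTokenizer, pipeline

model_id = "alnrg2arg/blockchainlabs_7B_merged_test2_4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "What does a slerp merge of two models do?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

generator = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```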
|
Ivan0831/PPO-LunarLander-V1
|
Ivan0831
| 2024-01-24T13:51:11Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T13:34:35Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -45.65 +/- 23.57
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 500000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Ivan0831/PPO-LunarLander-V1'
'batch_size': 512
'minibatch_size': 128}
```
|
Josef0801/model_deberta_3_labels
|
Josef0801
| 2024-01-24T13:49:30Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"deberta-v2",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-24T13:28:52Z |
Based on microsoft/deberta-v3-base, finetuned on a synthetic dataset (6 labels were converted to 3 labels).
Performance on test dataset:
|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| 0            | 0.98      | 0.99   | 0.98     | 94      |
| 1            | 0.96      | 0.96   | 0.96     | 28      |
| 2            | 1.00      | 0.98   | 0.99     | 66      |
| accuracy     |           |        | 0.98     | 188     |
| macro avg    | 0.98      | 0.98   | 0.98     | 188     |
| weighted avg | 0.98      | 0.98   | 0.98     | 188     |
Performance on similar benchmark:
|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| 0            | 0.13      | 0.52   | 0.21     | 23      |
| 1            | 0.44      | 0.15   | 0.22     | 75      |
| 2            | 0.00      | 0.00   | 0.00     | 19      |
| accuracy     |           |        | 0.20     | 117     |
| macro avg    | 0.19      | 0.22   | 0.14     | 117     |
| weighted avg | 0.31      | 0.20   | 0.18     | 117     |
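A minimal inference sketch (hedged: the repo ships TensorFlow `deberta-v2` weights per its tags, and the meaning of labels 0-2 is not documented here):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Josef0801/model_deberta_3_labels",
    framework="tf",  # TensorFlow checkpoint
)
# Returns entries such as {'label': 'LABEL_0', 'score': ...}; label semantics come from the synthetic dataset
print(clf("Example sentence to classify."))
```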
|
seyf1elislam/neural-Kunoichi2-7B-slerp
|
seyf1elislam
| 2024-01-24T13:49:28Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T09:39:20Z |
---
tags:
- merge
- mergekit
- lazymergekit
---
# neural-Kunoichi2-7B-slerp
neural-Kunoichi2-7B-slerp is a merge of the following models using LazyMergekit:
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [mlabonne/NeuralPipe-7B-ties](https://huggingface.co/mlabonne/NeuralPipe-7B-ties)
## Quantized:
* [GGUF](https://huggingface.co/seyf1elislam/neural-Kunoichi2-7B-slerp-GGUF)
## 🧩 Configuration
```yaml
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
slices:
- sources:
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
layer_range: [0, 32]
- model: mlabonne/NeuralPipe-7B-ties
layer_range: [0, 32]
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "seyf1elislam/neural-Kunoichi2-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
HarrisonColby/q-FrozenLake-v1-4x4-noSlippery
|
HarrisonColby
| 2024-01-24T13:48:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T13:48:18Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="HarrisonColby/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
GonzalVice/flan-t5-base
|
GonzalVice
| 2024-01-24T13:45:34Z | 174 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-24T13:30:10Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: flan_chatbot_productos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan_chatbot_productos
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
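A minimal inference sketch with the `transformers` text2text pipeline (hedged: the product-QA prompt format used during fine-tuning is not documented here, so the prompt below is a placeholder):
```python
from transformers import pipeline

chatbot = pipeline("text2text-generation", model="GonzalVice/flan-t5-base")
# Placeholder prompt; replace with the format used in training
print(chatbot("Pregunta: ¿Qué productos ofrecen? Respuesta:", max_new_tokens=64))
```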
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Tokenizers 0.15.0
|
Strudel7182/dqn-SpaceInvadersNoFrameskip-v4
|
Strudel7182
| 2024-01-24T13:43:44Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T12:26:02Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 585.50 +/- 133.20
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Strudel7182 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Strudel7182 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Strudel7182
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
klentree/segformer-b0-scene-parse-150-lr-3-e-15
|
klentree
| 2024-01-24T13:35:43Z | 19 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:DiTo97/binarization-segformer-b3",
"base_model:finetune:DiTo97/binarization-segformer-b3",
"license:openrail",
"endpoints_compatible",
"region:us"
] | null | 2024-01-24T12:27:32Z |
---
license: openrail
base_model: DiTo97/binarization-segformer-b3
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-scene-parse-150-lr-3-e-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150-lr-3-e-15
This model is a fine-tuned version of [DiTo97/binarization-segformer-b3](https://huggingface.co/DiTo97/binarization-segformer-b3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1523
- Mean Iou: 0.5014
- Mean Accuracy: 0.5220
- Overall Accuracy: 0.9615
- Per Category Iou: [0.04132646470292031, 0.9614038983247747]
- Per Category Accuracy: [0.053216300812732126, 0.9907305584765508]
## Model description
More information needed
## Intended uses & limitations
More information needed
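A minimal segmentation-inference sketch, assuming the standard `transformers` Segformer API (the image path is a placeholder; the two categories follow the per-category metrics reported above):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "klentree/segformer-b0-scene-parse-150-lr-3-e-15"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("page.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
mask = upsampled.argmax(dim=1)[0]
```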
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------------------------------:|:--------------------------------------------:|
| No log | 1.0 | 112 | 0.1629 | 0.4844 | 0.5 | 0.9688 | [0.0, 0.9687870873345269] | [0.0, 1.0] |
| No log | 2.0 | 224 | 0.1437 | 0.4844 | 0.5000 | 0.9688 | [2.03629353850122e-05, 0.968778060560053] | [2.0369226173097684e-05, 0.9999900466190115] |
| No log | 3.0 | 336 | 0.1551 | 0.4844 | 0.5 | 0.9688 | [0.0, 0.9687870873345269] | [0.0, 1.0] |
| No log | 4.0 | 448 | 0.1536 | 0.4873 | 0.5029 | 0.9674 | [0.0072237010873418455, 0.967349403560223] | [0.0076096034111664095, 0.998278830733678] |
| 0.254 | 5.0 | 560 | 0.1730 | 0.4844 | 0.5000 | 0.9688 | [1.697363485298286e-06, 0.9687858141149847] | [1.697435514424807e-06, 0.9999986327773367] |
| 0.254 | 6.0 | 672 | 0.1726 | 0.4844 | 0.5000 | 0.9688 | [0.0, 0.9687868224249946] | [0.0, 0.9999997265554673] |
| 0.254 | 7.0 | 784 | 0.1418 | 0.4886 | 0.5042 | 0.9679 | [0.009270700532836455, 0.9678754695078028] | [0.009627854237817505, 0.998758780577388] |
| 0.254 | 8.0 | 896 | 0.1618 | 0.4844 | 0.5 | 0.9688 | [0.0, 0.9687870873345269] | [0.0, 1.0] |
| 0.2012 | 9.0 | 1008 | 0.1350 | 0.4868 | 0.5023 | 0.9685 | [0.005035086692148778, 0.9684816005292253] | [0.005109280898418669, 0.9995252456024103] |
| 0.2012 | 10.0 | 1120 | 0.1429 | 0.4975 | 0.5137 | 0.9673 | [0.027791805303191197, 0.967227089869692] | [0.02998689579782864, 0.997455270490238] |
| 0.2012 | 11.0 | 1232 | 0.1419 | 0.4852 | 0.5008 | 0.9688 | [0.0015964088435281823, 0.9688182225729328] | [0.0015972868190737434, 0.9999822807942842] |
| 0.2012 | 12.0 | 1344 | 0.1339 | 0.4872 | 0.5028 | 0.9686 | [0.00582435621561196, 0.968612834428971] | [0.00589010123505408, 0.9996363187715734] |
| 0.2012 | 13.0 | 1456 | 0.1422 | 0.4990 | 0.5165 | 0.9652 | [0.03289244256624029, 0.9651360857253766] | [0.03794447348945214, 0.9950514742926044] |
| 0.1837 | 14.0 | 1568 | 0.1423 | 0.4928 | 0.5087 | 0.9673 | [0.01828545458590366, 0.9672482875211772] | [0.019532390464486255, 0.9978029278690511] |
| 0.1837 | 15.0 | 1680 | 0.1523 | 0.5014 | 0.5220 | 0.9615 | [0.04132646470292031, 0.9614038983247747] | [0.053216300812732126, 0.9907305584765508] |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Pranay2/my-project-xzg
|
Pranay2
| 2024-01-24T13:34:18Z | 6 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-24T13:29:43Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Project-xzg Dreambooth model trained by Pranay2 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: B21
Sample pictures of this concept:
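A minimal generation sketch with `diffusers` (hedged: the DreamBooth instance prompt for this concept is not documented here, so the prompt below is a placeholder):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Pranay2/my-project-xzg", torch_dtype=torch.float16
).to("cuda")

# Placeholder prompt; replace with the concept's instance token
image = pipe("a photo of my-project-xzg concept, high quality").images[0]
image.save("sample.png")
```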
|
Hk4crprasad/test2
|
Hk4crprasad
| 2024-01-24T13:33:31Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-21T06:37:55Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
|
Vakatt/Taxi
|
Vakatt
| 2024-01-24T13:31:07Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T13:31:01Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Vakatt/Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Ivan0831/DRL
|
Ivan0831
| 2024-01-24T13:28:07Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T13:20:03Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -148.90 +/- 81.72
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Ivan0831/DRL'
'batch_size': 512
'minibatch_size': 128}
```
|
Peter/shortstep_test
|
Peter
| 2024-01-24T13:27:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"dataset:zeta-labs/mind2web_combined_236_18_01",
"base_model:unsloth/mistral-7b",
"base_model:adapter:unsloth/mistral-7b",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-01-24T12:36:55Z |
---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- unsloth
- generated_from_trainer
- trl
- sft
- unsloth
- generated_from_trainer
- unsloth
datasets:
- zeta-labs/mind2web_combined_236_18_01
base_model: unsloth/mistral-7b
model-index:
- name: shortstep_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shortstep_test
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the zeta-labs/mind2web_combined_236_18_01 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3442
## Model description
More information needed
## Intended uses & limitations
More information needed
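A minimal sketch for loading the LoRA adapter with `peft` (hedged: the card body names `mistralai/Mistral-7B-v0.1` as the base while the tags name `unsloth/mistral-7b`; use the base the adapter was trained against):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"  # assumption: base model named in this card
adapter_id = "Peter/shortstep_test"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the fine-tuned LoRA weights
```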
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 5
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Vakatt/q-FrozenLake-v1-4x4-noSlippery
|
Vakatt
| 2024-01-24T13:22:47Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T13:22:44Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Vakatt/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Josef0801/model_deberta_6_labels
|
Josef0801
| 2024-01-24T13:19:41Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"deberta-v2",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-24T12:46:21Z |
Based on microsoft/deberta-v3-base, finetuned on a synthetic dataset (6 labels).
Performance on test dataset:
|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| 0            | 0.56      | 0.73   | 0.63     | 26      |
| 1            | 0.70      | 1.00   | 0.82     | 28      |
| 2            | 0.68      | 0.53   | 0.60     | 32      |
| 3            | 0.97      | 1.00   | 0.99     | 33      |
| 4            | 1.00      | 0.97   | 0.98     | 33      |
| 5            | 0.52      | 0.33   | 0.41     | 36      |
| accuracy     |           |        | 0.75     | 188     |
| macro avg    | 0.74      | 0.76   | 0.74     | 188     |
| weighted avg | 0.74      | 0.75   | 0.74     | 188     |
Performance on similar benchmark:
|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| 0            | 0.22      | 0.83   | 0.34     | 23      |
| 1            | 0.50      | 0.01   | 0.03     | 75      |
| 2            | 0.19      | 0.26   | 0.22     | 19      |
| accuracy     |           |        | 0.21     | 117     |
| macro avg    | 0.30      | 0.37   | 0.20     | 117     |
| weighted avg | 0.39      | 0.21   | 0.12     | 117     |
|
arun100/whisper-small-fr-derived-1
|
arun100
| 2024-01-24T13:14:44Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"fr",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:qanastek/whisper-small-french-uncased",
"base_model:finetune:qanastek/whisper-small-french-uncased",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-23T05:50:34Z |
---
language:
- fr
license: apache-2.0
base_model: qanastek/whisper-small-french-uncased
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Base French
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_16_0 fr
type: mozilla-foundation/common_voice_16_0
config: fr
split: test
args: fr
metrics:
- name: Wer
type: wer
value: 15.184536972434753
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base French
This model is a fine-tuned version of [qanastek/whisper-small-french-uncased](https://huggingface.co/qanastek/whisper-small-french-uncased) on the mozilla-foundation/common_voice_16_0 fr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8014
- Wer: 15.1845
## Model description
More information needed
## Intended uses & limitations
More information needed
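A minimal French transcription sketch with the `transformers` ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="arun100/whisper-small-fr-derived-1",
    chunk_length_s=30,  # long-form audio is processed in 30-second chunks
)
print(asr("audio_fr.wav")["text"])  # placeholder audio file
```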
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.9295 | 0.2 | 100 | 0.8014 | 15.1845 |
| 0.2976 | 0.4 | 200 | 0.4207 | 16.0289 |
| 0.2699 | 0.59 | 300 | 0.3999 | 15.8267 |
| 0.2773 | 0.79 | 400 | 0.3910 | 15.7267 |
| 0.2631 | 0.99 | 500 | 0.3863 | 15.5972 |
| 0.2487 | 1.19 | 600 | 0.3834 | 15.5907 |
| 0.2477 | 1.39 | 700 | 0.3814 | 15.6156 |
| 0.2428 | 1.59 | 800 | 0.3801 | 15.4902 |
| 0.2492 | 1.78 | 900 | 0.3794 | 15.4672 |
| 0.2471 | 1.98 | 1000 | 0.3791 | 15.4707 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
bartowski/zephyr-7b-dpo-full-exl2
|
bartowski
| 2024-01-24T13:09:53Z | 1 | 1 | null |
[
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"text-generation",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-24T12:53:36Z |
---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: zephyr-7b-dpo-full
results: []
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of zephyr-7b-dpo-full
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/alignment-handbook/zephyr-7b-dpo-full
| Branch | Bits | lm_head bits | Size | Description |
| ----- | ---- | ------- | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/zephyr-7b-dpo-full-exl2/tree/8_0) | 8.0 | 8.0 | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/zephyr-7b-dpo-full-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/zephyr-7b-dpo-full-exl2/tree/5_0) | 5.0 | 6.0 | 7.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/Bartowski/zephyr-7b-dpo-full-exl2/tree/4_25) | 4.25 | 6.0 | 6.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/Bartowski/zephyr-7b-dpo-full-exl2/tree/3_5) | 3.5 | 6.0 | 6.1 GB | Lower quality, only use if you have to. |
All VRAM requirements estimated from 16k context. For 32k context add ~2 GB.
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/zephyr-7b-dpo-full-exl2 zephyr-7b-dpo-full-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `zephyr-7b-dpo-full-exl2`:
```shell
mkdir zephyr-7b-dpo-full-exl2
huggingface-cli download bartowski/zephyr-7b-dpo-full-exl2 --local-dir zephyr-7b-dpo-full-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir zephyr-7b-dpo-full-exl2-6_5
huggingface-cli download bartowski/zephyr-7b-dpo-full-exl2 --revision 6_5 --local-dir zephyr-7b-dpo-full-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir zephyr-7b-dpo-full-exl2-6.5
huggingface-cli download bartowski/zephyr-7b-dpo-full-exl2 --revision 6_5 --local-dir zephyr-7b-dpo-full-exl2-6.5 --local-dir-use-symlinks False
```
|
Josef0801/model_1_deberta
|
Josef0801
| 2024-01-24T13:07:31Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"deberta-v2",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-24T10:42:40Z |
This model is based on svenbl80/deberta-v3-Base-finetuned-chatdoc-V5 but was further fine-tuned on a synthetic dataset.
It performs poorly on a different benchmark from the same document:
|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| 0            | 0.19      | 0.22   | 0.20     | 23      |
| 1            | 0.62      | 0.44   | 0.52     | 75      |
| 2            | 0.00      | 0.00   | 0.00     | 19      |
| accuracy     |           |        | 0.32     | 117     |
| macro avg    | 0.27      | 0.22   | 0.24     | 117     |
| weighted avg | 0.44      | 0.32   | 0.37     | 117     |
|
bcse/BigLiz-120b-GGUF
|
bcse
| 2024-01-24T13:02:42Z | 10 | 0 | null |
[
"gguf",
"lzlv",
"WinterGoddess",
"frankenmerge",
"120b",
"conversational",
"en",
"license:llama2",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T16:30:51Z |
---
license: llama2
language:
- en
pipeline_tag: conversational
tags:
- lzlv
- WinterGoddess
- frankenmerge
- 120b
---
# BigLiz 120B - GGUF
- Original model: [BigLiz 120B](https://huggingface.co/llmixer/BigLiz-120b)
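A minimal loading sketch with `llama-cpp-python` (hedged: the exact GGUF filename and quantization shipped in this repo are not listed here, so the path is a placeholder, and the prompt template of the source models is not documented):
```python
from llama_cpp import Llama

llm = Llama(
    model_path="BigLiz-120b.Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers if VRAM allows
)
# Plain completion; adjust to the prompt template the source models expect
out = llm("Write a haiku about winter.", max_tokens=64)
print(out["choices"][0]["text"])
```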
|
hongpingjun98/results2
|
hongpingjun98
| 2024-01-24T12:59:13Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:sem_eval_2024_task_2",
"base_model:MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",
"base_model:finetune:MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-23T20:11:09Z |
---
license: mit
base_model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli
tags:
- generated_from_trainer
datasets:
- sem_eval_2024_task_2
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: results2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sem_eval_2024_task_2
type: sem_eval_2024_task_2
config: sem_eval_2024_task_2_source
split: validation
args: sem_eval_2024_task_2_source
metrics:
- name: Accuracy
type: accuracy
value: 0.715
- name: Precision
type: precision
value: 0.7186959617536364
- name: Recall
type: recall
value: 0.7150000000000001
- name: F1
type: f1
value: 0.7137907659862921
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results2
This model is a fine-tuned version of [MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli](https://huggingface.co/MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli) on the sem_eval_2024_task_2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7766
- Accuracy: 0.715
- Precision: 0.7187
- Recall: 0.7150
- F1: 0.7138
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
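As an illustration, these settings map onto `transformers.TrainingArguments` roughly as follows; this is a minimal sketch, and `output_dir` plus any option not listed above is an assumption:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results2",          # assumption: not specified in the card
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=20,
    fp16=True,                      # Native AMP mixed precision
)
```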
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.6998 | 1.0 | 107 | 0.6713 | 0.6 | 0.6214 | 0.6000 | 0.5815 |
| 0.7015 | 2.0 | 214 | 0.6502 | 0.68 | 0.7143 | 0.6800 | 0.6667 |
| 0.6755 | 3.0 | 321 | 0.6740 | 0.53 | 0.6579 | 0.53 | 0.4107 |
| 0.6605 | 4.0 | 428 | 0.6061 | 0.64 | 0.6502 | 0.64 | 0.6338 |
| 0.5918 | 5.0 | 535 | 0.5675 | 0.695 | 0.7023 | 0.6950 | 0.6922 |
| 0.5717 | 6.0 | 642 | 0.5945 | 0.685 | 0.6953 | 0.685 | 0.6808 |
| 0.4655 | 7.0 | 749 | 0.5644 | 0.68 | 0.6801 | 0.6800 | 0.6800 |
| 0.3407 | 8.0 | 856 | 0.7529 | 0.7 | 0.7029 | 0.7 | 0.6989 |
| 0.3539 | 9.0 | 963 | 0.7211 | 0.69 | 0.6901 | 0.69 | 0.6900 |
| 0.2695 | 10.0 | 1070 | 0.7760 | 0.685 | 0.6905 | 0.685 | 0.6827 |
| 0.1666 | 11.0 | 1177 | 1.1053 | 0.71 | 0.7188 | 0.71 | 0.7071 |
| 0.1648 | 12.0 | 1284 | 1.1662 | 0.72 | 0.7258 | 0.72 | 0.7182 |
| 0.1229 | 13.0 | 1391 | 1.2760 | 0.735 | 0.7438 | 0.735 | 0.7326 |
| 0.0737 | 14.0 | 1498 | 1.5943 | 0.7 | 0.7029 | 0.7 | 0.6989 |
| 0.1196 | 15.0 | 1605 | 1.5407 | 0.705 | 0.7085 | 0.7050 | 0.7037 |
| 0.0389 | 16.0 | 1712 | 1.6411 | 0.69 | 0.7016 | 0.69 | 0.6855 |
| 0.0199 | 17.0 | 1819 | 1.7139 | 0.685 | 0.6919 | 0.685 | 0.6821 |
| 0.0453 | 18.0 | 1926 | 1.6549 | 0.71 | 0.7121 | 0.71 | 0.7093 |
| 0.0536 | 19.0 | 2033 | 1.7612 | 0.71 | 0.7142 | 0.71 | 0.7086 |
| 0.0035 | 20.0 | 2140 | 1.7766 | 0.715 | 0.7187 | 0.7150 | 0.7138 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
asun17904/glue-mrpc-bert-base-uncased-regularized-l2
|
asun17904
| 2024-01-24T12:55:37Z | 0 | 0 |
pytorch
|
[
"pytorch",
"en",
"license:mit",
"region:us"
] | null | 2024-01-24T12:18:31Z |
---
language: en
license: mit
library_name: pytorch
---
# Knowledge Continuity Regularized Network
Dataset: GLUE
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 16
- `gradient_accumulation_steps` = 1
- `weight_decay` = 1e-09
- `seed` = 42
Regularization Hyperparameters
- `numerical stability denominator constant` = 0.01
- `lambda` = 0.02
- `alpha` = 2.0
- `beta` = 1.0
Extended Logs:
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|16.298|0.804|1.0|
|17.928|0.755|2.0|
|15.529|0.828|3.0|
|14.999|0.843|4.0|
|14.680|0.858|5.0|
|15.523|0.828|6.0|
|14.987|0.846|7.0|
|16.665|0.794|8.0|
|14.767|0.853|9.0|
|14.644|0.853|10.0|
|14.528|0.860|11.0|
|14.406|0.863|12.0|
|14.673|0.853|13.0|
|14.910|0.850|14.0|
|14.386|0.863|15.0|
|14.131|0.870|16.0|
|15.204|0.838|17.0|
|14.685|0.853|18.0|
|14.876|0.846|19.0|
|15.133|0.843|20.0|
|14.664|0.853|21.0|
|16.257|0.809|22.0|
|14.943|0.846|23.0|
|14.934|0.848|24.0|
|15.064|0.843|25.0|
|15.151|0.841|26.0|
|14.982|0.843|27.0|
|14.488|0.858|28.0|
|15.235|0.838|29.0|
|14.763|0.850|30.0|
|14.908|0.848|31.0|
|15.068|0.843|32.0|
|14.755|0.850|33.0|
|15.053|0.843|34.0|
|15.350|0.838|35.0|
|14.841|0.848|36.0|
|14.721|0.853|37.0|
|14.947|0.846|38.0|
|14.727|0.855|39.0|
|14.945|0.846|40.0|
|15.096|0.846|41.0|
|14.999|0.848|42.0|
|14.911|0.848|43.0|
|14.852|0.850|44.0|
|14.922|0.848|45.0|
|15.096|0.846|46.0|
|14.970|0.846|47.0|
|15.031|0.843|48.0|
|15.031|0.843|49.0|
|
blueapple8259/TinyCode-python
|
blueapple8259
| 2024-01-24T12:54:57Z | 93 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"en",
"dataset:bigcode/starcoderdata",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T12:49:35Z |
---
license: mit
datasets:
- bigcode/starcoderdata
tags:
- code
language:
- en
---
This model was trained on 4 of the 58 Python files from the [starcoderdata](https://huggingface.co/datasets/bigcode/starcoderdata) dataset.
|
tourist800/Prefix-ORKG-finetuned-Mistral-7B
|
tourist800
| 2024-01-24T12:52:40Z | 2 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-01-24T12:44:39Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0
|
tourist800/Prefix-ORKG-finetuned-llama-13b
|
tourist800
| 2024-01-24T12:48:26Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-13b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-13b-chat-hf",
"region:us"
] | null | 2024-01-24T12:39:54Z |
---
library_name: peft
base_model: NousResearch/Llama-2-13b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0
|
Augustya07/Llama-2-7b-chat-hf-sft-test-push
|
Augustya07
| 2024-01-24T12:36:57Z | 72 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-24T12:33:03Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
abhi5hekjangid/phi2_old
|
abhi5hekjangid
| 2024-01-24T12:33:49Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"phi",
"trl",
"sft",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-01-22T07:10:05Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi-2-finetuned-abhishek
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-finetuned-abhishek
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9799
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2648 | 1.0 | 779 | 1.1722 |
| 1.0878 | 2.0 | 1558 | 1.0711 |
| 0.9319 | 3.0 | 2338 | 0.9918 |
| 0.8719 | 4.0 | 3116 | 0.9799 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
amphion/valle_librilight_6k
|
amphion
| 2024-01-24T12:33:33Z | 0 | 1 | null |
[
"en",
"license:mit",
"region:us"
] | null | 2024-01-11T02:54:30Z |
---
license: mit
language:
- en
---
# Pretrained Model of Amphion Vall-E
We provide the pre-trained checkpoint of [Vall-E](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/VALLE) trained on [Libri-light](https://ai.meta.com/tools/libri-light/), which is derived from open-source audio books from the LibriVox project and contains over 60K hours of audio.
Here we processed about 6,000 hours of data to train Vall-E.
## Quick Start
To utilize the pre-trained models, just run the following commands:
### Step1: Download the checkpoint
```bash
git lfs install
git clone https://huggingface.co/amphion/valle_librilight_6k
```
### Step2: Clone the Amphion's Source Code of GitHub
```bash
git clone https://github.com/open-mmlab/Amphion.git
```
### Step3: Specify the checkpoint's path
Create a soft link pointing to the checkpoint downloaded in the first step:
```bash
cd Amphion
mkdir -p ckpts/tts
ln -s ../../../valle_librilight_6k ckpts/tts/
```
### Step4: Inference
You can follow the inference part of [this recipe](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/VALLE#4-inference) to generate speech from text. For example, to synthesize a clip of speech from the text "This is a clip of generated speech with the given text from Amphion Vall-E model.", run:
```bash
sh egs/tts/VALLE/run.sh --stage 3 --gpu "0" \
--config "ckpts/tts/valle_librilight_6k/args.json" \
--infer_expt_dir ckpts/tts/valle_librilight_6k \
--infer_output_dir ckpts/tts/valle_librilight_6k/result \
--infer_mode "single" \
--infer_text "This is a clip of generated speech with the given text from Amphion Vall-E model." \
--infer_text_prompt "But even the unsuccessful dramatist has his moments." \
--infer_audio_prompt egs/tts/VALLE/prompt_examples/7176_92135_000004_000000.wav
```
|
sergeipetrov/swin2SR-classical-sr-x2-64
|
sergeipetrov
| 2024-01-24T12:29:43Z | 177 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swin2sr",
"image-to-image",
"vision",
"arxiv:2209.11345",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-image
| 2024-01-24T12:29:05Z |
---
license: apache-2.0
tags:
- vision
- image-to-image
inference: true
---
# Swin2SR model (image super-resolution)
Swin2SR model that upscales images x2. It was introduced in the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345)
by Conde et al. and first released in [this repository](https://github.com/mv-lab/swin2sr).
# Intended use cases
This model is intended for image super resolution.
# Usage
Refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/swin2sr#transformers.Swin2SRForImageSuperResolution.forward.example).
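For convenience, a minimal usage sketch adapted from that documentation is shown below; the input image path is hypothetical, and loading from this repository assumes it mirrors the original `caidas/swin2SR-classical-sr-x2-64` checkpoint layout:
```python
import numpy as np
import torch
from PIL import Image
from transformers import AutoImageProcessor, Swin2SRForImageSuperResolution

processor = AutoImageProcessor.from_pretrained("sergeipetrov/swin2SR-classical-sr-x2-64")
model = Swin2SRForImageSuperResolution.from_pretrained("sergeipetrov/swin2SR-classical-sr-x2-64")

image = Image.open("low_res.png")  # hypothetical low-resolution input image
inputs = processor(image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert the reconstructed tensor back into an image upscaled by 2x.
output = outputs.reconstruction.squeeze().clamp_(0, 1).numpy()
output = np.moveaxis(output, 0, -1)
Image.fromarray((output * 255.0).round().astype(np.uint8)).save("high_res.png")
```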
|
dvs/autotrain-kisd2-y8ibj
|
dvs
| 2024-01-24T12:28:44Z | 176 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:dvs/autotrain-data-autotrain-kisd2-y8ibj",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-24T12:28:36Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- dvs/autotrain-data-autotrain-kisd2-y8ibj
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 0.4466552734375
- f1: 1.0
- precision: 1.0
- recall: 1.0
- auc: 1.0
- accuracy: 1.0
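A minimal inference sketch with the `transformers` image-classification pipeline; the image path is hypothetical:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="dvs/autotrain-kisd2-y8ibj")
print(classifier("example.jpg"))  # hypothetical input image
```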
|
tifosi1709/codellama-7b-instruct-ft
|
tifosi1709
| 2024-01-24T12:23:26Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T12:17:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alionder/distilbert_turk
|
alionder
| 2024-01-24T12:23:24Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:dbmdz/distilbert-base-turkish-cased",
"base_model:finetune:dbmdz/distilbert-base-turkish-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-24T12:23:11Z |
---
license: mit
base_model: dbmdz/distilbert-base-turkish-cased
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: distilbert_turk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_turk
This model is a fine-tuned version of [dbmdz/distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1927
- F1: 0.8338
- Roc Auc: 0.9092
- Accuracy: 0.8047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|
| 0.2899 | 1.0 | 1151 | 0.2053 | 0.6418 | 0.7738 | 0.6719 |
| 0.1846 | 2.0 | 2302 | 0.1777 | 0.7480 | 0.8434 | 0.7461 |
| 0.1432 | 3.0 | 3453 | 0.1633 | 0.7879 | 0.8866 | 0.7656 |
| 0.1241 | 4.0 | 4604 | 0.1508 | 0.8256 | 0.9037 | 0.7891 |
| 0.0961 | 5.0 | 5755 | 0.1621 | 0.8203 | 0.9048 | 0.7969 |
| 0.065 | 6.0 | 6906 | 0.1733 | 0.8108 | 0.9092 | 0.7969 |
| 0.0548 | 7.0 | 8057 | 0.1848 | 0.8238 | 0.8993 | 0.7930 |
| 0.0496 | 8.0 | 9208 | 0.1875 | 0.8130 | 0.9055 | 0.7969 |
| 0.0413 | 9.0 | 10359 | 0.1905 | 0.8359 | 0.9096 | 0.8086 |
| 0.038 | 10.0 | 11510 | 0.1927 | 0.8338 | 0.9092 | 0.8047 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tourist800/ORKG-finetuned-Mistral-7B
|
tourist800
| 2024-01-24T12:21:28Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-01-24T12:14:17Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0
|
Siddharth11/my-pet-dog
|
Siddharth11
| 2024-01-24T12:19:28Z | 1 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-24T12:15:04Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by Siddharth11 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 210968074
Sample pictures of this concept:

|
asun17904/glue-cola-bert-base-uncased-regularized-l2
|
asun17904
| 2024-01-24T12:17:58Z | 0 | 0 |
pytorch
|
[
"pytorch",
"en",
"license:mit",
"region:us"
] | null | 2024-01-24T10:52:53Z |
---
language: en
license: mit
library_name: pytorch
---
# Knowledge Continuity Regularized Network
Dataset: GLUE
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 16
- `gradient_accumulation_steps` = 1
- `weight_decay` = 1e-09
- `seed` = 42
Regularization Hyperparameters
- `numerical stability denominator constant` = 0.01
- `lambda` = 0.02
- `alpha` = 2.0
- `beta` = 1.0
Extended Logs:
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|16.556|0.800|1.0|
|16.658|0.794|2.0|
|16.329|0.804|3.0|
|16.252|0.816|4.0|
|16.386|0.808|5.0|
|16.747|0.802|6.0|
|16.614|0.807|7.0|
|16.641|0.808|8.0|
|16.362|0.814|9.0|
|16.559|0.805|10.0|
|16.639|0.802|11.0|
|16.819|0.796|12.0|
|16.648|0.803|13.0|
|17.400|0.780|14.0|
|16.121|0.818|15.0|
|16.481|0.808|16.0|
|16.644|0.801|17.0|
|16.747|0.806|18.0|
|16.386|0.808|19.0|
|16.684|0.802|20.0|
|16.766|0.803|21.0|
|16.543|0.803|22.0|
|16.636|0.803|23.0|
|16.468|0.813|24.0|
|16.653|0.800|25.0|
|17.070|0.788|26.0|
|17.067|0.796|27.0|
|16.857|0.796|28.0|
|16.925|0.795|29.0|
|16.890|0.798|30.0|
|16.594|0.801|31.0|
|16.578|0.800|32.0|
|16.517|0.802|33.0|
|16.529|0.802|34.0|
|16.674|0.803|35.0|
|16.543|0.809|36.0|
|16.659|0.803|37.0|
|16.723|0.803|38.0|
|16.480|0.808|39.0|
|16.410|0.804|40.0|
|16.641|0.803|41.0|
|16.328|0.811|42.0|
|16.270|0.813|43.0|
|16.291|0.812|44.0|
|16.522|0.812|45.0|
|16.395|0.810|46.0|
|16.385|0.811|47.0|
|16.392|0.812|48.0|
|16.401|0.811|49.0|
|
ayoubkirouane/Mistral-Depth-UP-Scaled-9B-AlpacaInstruct
|
ayoubkirouane
| 2024-01-24T12:17:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"peft",
"ayoubkirouane/Mistral-Depth-UP-Scaled-9B",
"text-generation",
"en",
"dataset:yahma/alpaca-cleaned",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T12:10:39Z |
---
library_name: transformers
tags:
- unsloth
- peft
- ayoubkirouane/Mistral-Depth-UP-Scaled-9B
license: apache-2.0
datasets:
- yahma/alpaca-cleaned
language:
- en
pipeline_tag: text-generation
---
## Mistral-Depth-UP-Scaled-9B-AlpacaInstruct
- Fine-tuned version of [**Mistral-Depth-UP-Scaled-9B**](https://huggingface.co/ayoubkirouane/Mistral-Depth-UP-Scaled-9B) on the [**alpaca-cleaned**](https://huggingface.co/datasets/yahma/alpaca-cleaned) dataset.
|
alexgastev/dqn-LunarLander-v2
|
alexgastev
| 2024-01-24T12:15:50Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T12:15:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 7.77 +/- 114.71
name: mean_reward
verified: false
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
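A minimal loading sketch with `huggingface_sb3`; the checkpoint filename inside the repo is an assumption:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename is assumed; check the repository for the actual .zip checkpoint name.
checkpoint = load_from_hub(repo_id="alexgastev/dqn-LunarLander-v2", filename="dqn-LunarLander-v2.zip")
model = DQN.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
action, _states = model.predict(obs, deterministic=True)
```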
|
akerberenes/pop-LunaLander-v2
|
akerberenes
| 2024-01-24T12:12:31Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T12:12:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.37 +/- 22.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
macarious/torgo_xlsr_finetune_M02_old
|
macarious
| 2024-01-24T12:12:18Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-24T06:28:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: torgo_xlsr_finetune_M02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# torgo_xlsr_finetune_M02
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7002
- Wer: 0.3119
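A minimal inference sketch with the `transformers` ASR pipeline; the audio path is hypothetical and should point to 16 kHz speech:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="macarious/torgo_xlsr_finetune_M02_old")
print(asr("sample.wav")["text"])  # hypothetical 16 kHz audio file
```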
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5308 | 0.92 | 1000 | 3.3287 | 1.0 |
| 1.6778 | 1.83 | 2000 | 1.8864 | 0.8387 |
| 0.8622 | 2.75 | 3000 | 1.4902 | 0.6310 |
| 0.6098 | 3.66 | 4000 | 1.3727 | 0.5758 |
| 0.4854 | 4.58 | 5000 | 1.5900 | 0.5258 |
| 0.4259 | 5.49 | 6000 | 1.4559 | 0.4403 |
| 0.3824 | 6.41 | 7000 | 1.4472 | 0.4332 |
| 0.3162 | 7.33 | 8000 | 1.4480 | 0.3913 |
| 0.3334 | 8.24 | 9000 | 1.5251 | 0.3663 |
| 0.2884 | 9.16 | 10000 | 1.2532 | 0.3779 |
| 0.2745 | 10.07 | 11000 | 1.4908 | 0.4029 |
| 0.2252 | 10.99 | 12000 | 1.7431 | 0.4055 |
| 0.2363 | 11.9 | 13000 | 1.6840 | 0.3877 |
| 0.2135 | 12.82 | 14000 | 1.7977 | 0.4029 |
| 0.2157 | 13.74 | 15000 | 1.6831 | 0.3743 |
| 0.1835 | 14.65 | 16000 | 1.9256 | 0.3556 |
| 0.1718 | 15.57 | 17000 | 1.8000 | 0.3449 |
| 0.1466 | 16.48 | 18000 | 1.8610 | 0.3414 |
| 0.1708 | 17.4 | 19000 | 1.5912 | 0.3191 |
| 0.1516 | 18.31 | 20000 | 1.8241 | 0.3164 |
| 0.1494 | 19.23 | 21000 | 1.7002 | 0.3119 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
|
davidpedem/clasificador-muchocine
|
davidpedem
| 2024-01-24T12:11:28Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"base_model:finetune:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-24T12:11:06Z |
---
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4166
- Accuracy: 0.4310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3248 | 0.4013 |
| 1.3832 | 2.0 | 776 | 1.3357 | 0.4090 |
| 0.977 | 3.0 | 1164 | 1.4166 | 0.4310 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tourist800/ORKG-finetuned-llama-13b-chat
|
tourist800
| 2024-01-24T12:08:32Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-13b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-13b-chat-hf",
"region:us"
] | null | 2024-01-24T12:07:00Z |
---
library_name: peft
base_model: NousResearch/Llama-2-13b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0
|
Andrewwwwww/Mixtral-8x7B-Instruct-v0.1
|
Andrewwwwww
| 2024-01-24T12:05:45Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"fr",
"it",
"de",
"es",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-24T12:02:15Z |
---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
inference: false
---
# Model Card for Mixtral-8x7B
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
## Instruction format
This format must be strictly respected, otherwise the model will generate sub-optimal outputs.
The template used to build a prompt for the Instruct model is defined as follows:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings.
As reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
```python
def tokenize(text):
return tok.encode(text, add_special_tokens=False)
[BOS_ID] +
tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_1) + [EOS_ID] +
…
tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_N) + [EOS_ID]
```
In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.
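As an alternative sketch, assuming a `transformers` version recent enough to ship chat templates (v4.34+), the same prompt format can usually be produced without hand-rolling token ids:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is a Sparse Mixture of Experts?"},
    {"role": "assistant", "content": "A model that routes each token to a subset of expert sub-networks."},
    {"role": "user", "content": "How many experts does Mixtral use?"},
]

# Produces the token ids for "<s> [INST] ... [/INST] ... </s> [INST] ... [/INST]" described above.
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
```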
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, transformers will load the model in full precision. Therefore you might be interested in further reducing the memory requirements needed to run the model, using the optimizations offered in the HF ecosystem:
### In half-precision
Note `float16` precision only works on GPU devices
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Lower precision (8-bit & 4-bit) using `bitsandbytes`
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Load the model with Flash Attention 2
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
## Limitations
The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
BradNLP/q-FrozenLake-v1-4x4-noSlippery
|
BradNLP
| 2024-01-24T11:58:43Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T11:58:40Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="BradNLP/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Jacqkues/ppo-LunarLander-v2
|
Jacqkues
| 2024-01-24T11:54:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T11:54:25Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.64 +/- 24.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
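Until the author fills in the snippet above, a minimal sketch following the usual `huggingface_sb3` workflow might look like this (the checkpoint filename is an assumption; check the repo's Files tab):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed to follow the common "<algo>-<env>.zip" convention.
checkpoint = load_from_hub(repo_id="Jacqkues/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```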
|
Parinitha003/Atari
|
Parinitha003
| 2024-01-24T11:47:26Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T11:45:59Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 501.00 +/- 133.32
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Parinitha003 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Parinitha003 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Parinitha003
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Gayathri142214002/Question_Generation_ComQ_8_2
|
Gayathri142214002
| 2024-01-24T11:36:15Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Gayathri142214002/Question_Generation_ComQ_7",
"base_model:finetune:Gayathri142214002/Question_Generation_ComQ_7",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-24T10:52:32Z |
---
license: apache-2.0
base_model: Gayathri142214002/Question_Generation_ComQ_7
tags:
- generated_from_trainer
model-index:
- name: Question_Generation_ComQ_8_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Question_Generation_ComQ_8_2
This model is a fine-tuned version of [Gayathri142214002/Question_Generation_ComQ_7](https://huggingface.co/Gayathri142214002/Question_Generation_ComQ_7) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3219
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0484 | 0.22 | 100 | 0.3026 |
| 0.0462 | 0.44 | 200 | 0.3217 |
| 0.2787 | 0.66 | 300 | 0.2620 |
| 0.2864 | 0.88 | 400 | 0.2611 |
| 0.2528 | 1.1 | 500 | 0.2699 |
| 0.2224 | 1.32 | 600 | 0.2878 |
| 0.2303 | 1.54 | 700 | 0.2812 |
| 0.2525 | 1.76 | 800 | 0.2783 |
| 0.2429 | 1.98 | 900 | 0.2685 |
| 0.2147 | 2.2 | 1000 | 0.2849 |
| 0.202 | 2.42 | 1100 | 0.2939 |
| 0.2217 | 2.64 | 1200 | 0.2913 |
| 0.2213 | 2.86 | 1300 | 0.2834 |
| 0.1942 | 3.08 | 1400 | 0.2952 |
| 0.1866 | 3.3 | 1500 | 0.3072 |
| 0.1977 | 3.52 | 1600 | 0.3098 |
| 0.199 | 3.74 | 1700 | 0.3053 |
| 0.1964 | 3.96 | 1800 | 0.3017 |
| 0.1672 | 4.18 | 1900 | 0.3125 |
| 0.1669 | 4.4 | 2000 | 0.3182 |
| 0.1904 | 4.62 | 2100 | 0.3193 |
| 0.1744 | 4.84 | 2200 | 0.3132 |
| 0.177 | 5.06 | 2300 | 0.3130 |
| 0.1583 | 5.28 | 2400 | 0.3172 |
| 0.1676 | 5.5 | 2500 | 0.3168 |
| 0.1662 | 5.72 | 2600 | 0.3185 |
| 0.1703 | 5.94 | 2700 | 0.3164 |
| 0.1553 | 6.16 | 2800 | 0.3193 |
| 0.1557 | 6.38 | 2900 | 0.3201 |
| 0.1465 | 6.6 | 3000 | 0.3208 |
| 0.1549 | 6.82 | 3100 | 0.3219 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Strudel7182/q-FrozenLake-v1-4x4-noSlippery
|
Strudel7182
| 2024-01-24T11:25:27Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T11:25:25Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Strudel7182/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
medxiaorudan/CodeLlama_CPP_FineTuned
|
medxiaorudan
| 2024-01-24T11:23:58Z | 2 | 1 |
peft
|
[
"peft",
"code",
"arxiv:1910.09700",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-01-16T10:35:29Z |
---
library_name: peft
base_model: codellama/CodeLlama-7b-hf
license: llama2
dataset:
type: codeparrot/xlcost-text-to-code
name: xlcost
tags:
- code
---
# Model Card for Model ID
## Model Details
### Model Description
This model has been fine-tuned using the CodeLlama base, incorporating C++ code sourced from the 'codeparrot/xlcost-text-to-code' dataset. It possesses the capability to generate C++ code based on provided task descriptions.
If you get the error `ValueError: Tokenizer class CodeLlamaTokenizer does not exist or is not currently imported.`, make sure your `transformers` version is at least 4.33.0 and `accelerate>=0.20.3`.
- **Developed by:** [Rudan XIAO]
- **Model type:** [code generation]
- **License:** [llama2]
- **Finetuned from model [optional]:** [codellama/CodeLlama-7b-hf]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/medxiaorudan/CodeGeneration]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "medxiaorudan/CodeLlama_CPP_FineTuned"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
prompt = """
Use the Task below and write the Response, which is a programming code that can solve the Task.
### Task:
Generate a C++ program that accepts numeric input from the user and maintains a record of previous user inputs with timestamps. Ensure the program sorts the user inputs in ascending order based on the provided numeric input. Enhance the program to display timestamps along with the sorted user inputs.
### Response:
"""
sequences_finetune = pipeline(
prompt,
do_sample=True,
top_k=10,
temperature=0.1,
top_p=0.95,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=600,
add_special_tokens=False
)
for seq in sequences_finetune:
print(f"Result: {seq['generated_text']}")
```
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
https://huggingface.co/datasets/codeparrot/xlcost-text-to-code
[More Information Needed]
### Training Procedure
The detailed training report is [here](https://wandb.ai/medxiaorudan/CodeLlama_finetune_CPP?workspace=user-medxiaorudan).
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [bf16] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
I have used the Catch2 unit test framework to verify the correctness of the generated C++ code snippets.
Todo: use the pass@k metric with the HumanEval-X dataset to evaluate the performance of the model.
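For that planned evaluation, a sketch of the standard unbiased pass@k estimator (from the Codex paper; not yet wired into this repo) is shown below:
```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n samples generated per task, c of them passed the unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))
```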
### Testing Data, Factors & Metrics
#### Testing Data
https://huggingface.co/datasets/THUDM/humaneval-x
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
I used a server with 4 NVIDIA A40-48Q GPUs, configured with Python 3.10 and CUDA 12.2, to run the code in this article. It ran for about eight hours.
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [NVIDIA A40-48Q GPU]
- **Hours used:** [8]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
s3nh/mhm-7b-v1.3-DPO-1-GGUF
|
s3nh
| 2024-01-24T11:19:54Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-01-24T10:18:31Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/h2m/mhm-7b-v1.3-DPO-1).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
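As a sketch of loading one of these GGUF files with `llama-cpp-python` (the filename below is an assumption; check the Files and versions tab for the actual quant name):
```python
from llama_cpp import Llama

# Filename is a placeholder for whichever GGUF quant you downloaded from this repo.
llm = Llama(model_path="mhm-7b-v1.3-DPO-1.Q4_0.gguf", n_ctx=4096)

output = llm("Tell me a story about what quantization is.", max_tokens=256)
print(output["choices"][0]["text"])
```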
### inference
User: Tell me story about what is an quantization and what do we need to build.
◄►Links referring to the same content or entity from another perspective, but does not significantly contribute additional information because of this duplicate differentiating the references as necessary.◄►Links referring to previous respective contents for further elaboration on the matter in hand.
The goal of this page is to provide a quantification of what we need to build as well as an explanation of terms used in this document to help understand what it entails for those interested in its application.
This page may contain PMTA information directly pertaining to this project, as it will affect the final result when it comes to delivering such technologies.
The
# Original model card
|
akolit/aldan-mix-8x7B-gguf
|
akolit
| 2024-01-24T11:17:23Z | 1 | 1 | null |
[
"gguf",
"text-generation",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-01-22T15:09:32Z |
---
license: cc-by-nc-4.0
model_type: mixtral
pipeline_tag: text-generation
---
GGUF quants repo. For now only q4_0. FP16 safetensors model is [here](https://huggingface.co/akolit/aldan-mix-8x7B).
This is a SLERP merge between [Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO) and [Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss](https://huggingface.co/NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss).
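For readers unfamiliar with SLERP merging, here is a rough sketch of the per-tensor spherical interpolation the name refers to (an illustration of the idea only, not the exact recipe or tooling used for this merge):
```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    v0f, v1f = v0.flatten().float(), v1.flatten().float()
    v0n, v1n = v0f / (v0f.norm() + eps), v1f / (v1f.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(v0n, v1n), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to plain linear interpolation
        out = (1.0 - t) * v0f + t * v1f
    else:
        out = (torch.sin((1.0 - t) * omega) / so) * v0f + (torch.sin(t * omega) / so) * v1f
    return out.reshape(v0.shape).to(v0.dtype)
```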
Seems more capable in RP than the base Hermes, while still being pretty smart, in my experience.
Prompt format: ChatML
With this model I use the following generation settings in tavern (maybe those are not the best, share better templates in issues if you have any):
- Temperature: 0.75
- Top P: 0.5
- Top A: 0.7
- TFS 0.97
- Repetition penalty: 1.1
- Mirostat: mode 2, tau 5, eta 0.1
Adding to system prompt something like
"Assistant will never interrupt role-play and will always stay in character no matter what. Assistant will never write OOC (out of character). Assistant won't write actions or reactions of {{user}}. Assistant won't mention {{user}} in first person. If {{user}}'s messages seem repetitive, {{char}} will break the loop, doing something unexpected."
might help, but it's up to you (as anything else, really).
|
ali0123/lora_4bit
|
ali0123
| 2024-01-24T11:16:16Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:chavinlo/alpaca-13b",
"base_model:finetune:chavinlo/alpaca-13b",
"region:us"
] | null | 2024-01-23T13:25:23Z |
---
base_model: chavinlo/alpaca-13b
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: lora_4bit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora_4bit
This model is a fine-tuned version of [chavinlo/alpaca-13b](https://huggingface.co/chavinlo/alpaca-13b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 300
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
akolit/aldan-mix-8x7B
|
akolit
| 2024-01-24T11:15:01Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T12:19:40Z |
---
license: cc-by-nc-4.0
---
This is a SLERP merge between [Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO) and [Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss](https://huggingface.co/NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss).
Seems more capable in RP than the base Hermes, while still being pretty smart, in my experience.
Prompt format: ChatML
[GGUF Q4_0 version](https://huggingface.co/akolit/aldan-mix-8x7B-gguf)
With this model I use the following generation settings in tavern (maybe those are not the best, share better templates in issues if you have any):
- Temperature: 0.75
- Top P: 0.5
- Top A: 0.7
- TFS 0.97
- Repetition penalty: 1.1
- Mirostat: mode 2, tau 5, eta 0.1
! Model still seems to be prone to repetition with the settings above. Needs testing on other presets. SillyTavern's "Big O" preset with mirostat 2/5/0.15 seems promising.
Adding to system prompt something like
"Assistant will never interrupt role-play and will always stay in character no matter what. Assistant will never write OOC (out of character). Assistant won't write actions or reactions of {{user}}. Assistant won't mention {{user}} in first person. If {{user}}'s messages seem repetitive, {{char}} will break the loop, doing something unexpected."
might help, but it's up to you (as anything else, really).
|
Gayathri142214002/Question_Generation_ComQ_9_2
|
Gayathri142214002
| 2024-01-24T11:04:56Z | 174 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Gayathri142214002/Question_Generation_ComQ_6",
"base_model:finetune:Gayathri142214002/Question_Generation_ComQ_6",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-24T10:49:18Z |
---
license: apache-2.0
base_model: Gayathri142214002/Question_Generation_ComQ_6
tags:
- generated_from_trainer
model-index:
- name: Question_Generation_ComQ_7_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Question_Generation_ComQ_7_2
This model is a fine-tuned version of [Gayathri142214002/Question_Generation_ComQ_6](https://huggingface.co/Gayathri142214002/Question_Generation_ComQ_6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2389 | 0.23 | 100 | 0.2383 |
| 0.2843 | 0.47 | 200 | 0.2704 |
| 0.2913 | 0.7 | 300 | 0.2648 |
| 0.2755 | 0.94 | 400 | 0.2607 |
| 0.2329 | 1.17 | 500 | 0.2916 |
| 0.2302 | 1.41 | 600 | 0.2971 |
| 0.2426 | 1.64 | 700 | 0.2861 |
| 0.2546 | 1.88 | 800 | 0.2906 |
| 0.2163 | 2.11 | 900 | 0.2995 |
| 0.211 | 2.35 | 1000 | 0.3133 |
| 0.2202 | 2.58 | 1100 | 0.3082 |
| 0.2352 | 2.82 | 1200 | 0.3039 |
| 0.2169 | 3.05 | 1300 | 0.2971 |
| 0.1932 | 3.29 | 1400 | 0.3126 |
| 0.2043 | 3.52 | 1500 | 0.3173 |
| 0.2066 | 3.76 | 1600 | 0.3100 |
| 0.2099 | 3.99 | 1700 | 0.3101 |
| 0.1672 | 4.23 | 1800 | 0.3226 |
| 0.1813 | 4.46 | 1900 | 0.3295 |
| 0.1823 | 4.7 | 2000 | 0.3280 |
| 0.1967 | 4.93 | 2100 | 0.3247 |
| 0.1725 | 5.17 | 2200 | 0.3330 |
| 0.1723 | 5.4 | 2300 | 0.3336 |
| 0.162 | 5.64 | 2400 | 0.3360 |
| 0.1716 | 5.87 | 2500 | 0.3337 |
| 0.1659 | 6.11 | 2600 | 0.3340 |
| 0.1553 | 6.34 | 2700 | 0.3355 |
| 0.1537 | 6.58 | 2800 | 0.3366 |
| 0.1589 | 6.81 | 2900 | 0.3358 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
la-min/myanmar-gpt-7B-health-qa
|
la-min
| 2024-01-24T10:58:38Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:SeaLLMs/SeaLLM-7B-v1",
"base_model:adapter:SeaLLMs/SeaLLM-7B-v1",
"region:us"
] | null | 2024-01-24T10:58:37Z |
---
library_name: peft
base_model: SeaLLMs/SeaLLM-7B-Chat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
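In the absence of an official snippet, a minimal sketch for loading this adapter on top of the base model listed in the card metadata (an untested assumption):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "SeaLLMs/SeaLLM-7B-Chat"  # base model from the card metadata
adapter_id = "la-min/myanmar-gpt-7B-health-qa"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```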
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Changchoichang2104/StableDiffusionXL-Waltz-with-Bashir-style
|
Changchoichang2104
| 2024-01-24T10:57:52Z | 20 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-01-24T10:25:08Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: 'portrait of Taylor Swift, highly detailed, waltz of bashir style'
output:
url: images/a22ec06b-031c-454e-8fac-6a2f7dacddff.jpeg
- text: 'portrait of Chris Hemsworth, highly detailed, waltz of bashir style'
output:
url: images/14281b37-6ef1-48c1-8e87-ff776a1503d8.jpeg
- text: 'portrait of Robert Downey Jr, highly detailed, waltz of bashir style'
output:
url: images/c74b0d93-a349-4965-b890-d5e26b5b5f3e.jpeg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: apache-2.0
# Waltz with Bashir Style Image Generation using Stable Diffusion XL Model
description: |
## Overview
Dive into the captivating world of art with our advanced "Waltz with Bashir" style adaptation for the Stable Diffusion XL model. This state-of-the-art model transforms your images into stunning artworks reminiscent of the iconic animation and visual style of the acclaimed film "Waltz with Bashir".
## Gallery

*Portrait of Taylor Swift in Waltz of Bashir style.*

*Portrait of Chris Hemsworth in Waltz of Bashir style.*

*Portrait of Robert Downey Jr in Waltz of Bashir style.*
## Download the Model
You can download the model weights in Safetensors format from the following link. Navigate to the "Files & versions" tab to access the files.
[Download Model Weights](/Changchoichang2104/StableDiffusionXL-Waltz-with-Bashir-style/tree/main)
## How to Use the Model
To use this model, input a description of the image you want to generate, specifying that it should be in the 'Waltz of Bashir style'. The model will then generate an image based on your description.
For example:
`portrait of [Name], highly detailed, Waltz of Bashir style`
Replace `[Name]` with the subject of your portrait.
---
# Waltz with Bashir Style Image Generation using Stable Diffusion XL Model
Dive into the captivating world of art with our advanced "Waltz with Bashir" style adaptation for the Stable Diffusion XL model. This state-of-the-art model transforms your images into stunning artworks reminiscent of the iconic animation and visual style of the acclaimed film "Waltz with Bashir".
## Gallery of Generated Images
Here are some examples of what the model can create:

*Portrait of Taylor Swift in Waltz of Bashir style.*

*Portrait of Chris Hemsworth in Waltz of Bashir style.*

*Portrait of Robert Downey Jr in Waltz of Bashir style.*
## Download the Model
You can download the model weights in Safetensors format from the following link. Navigate to the "Files & versions" tab to access the files.
[Download Model Weights](/Changchoichang2104/StableDiffusionXL-Waltz-with-Bashir-style/tree/main)
## How to Use the Model
To use this model, input a description of the image you want to generate, specifying that it should be in the 'Waltz of Bashir style'. The model will then generate an image based on your description.
For example:
portrait of [Name], highly detailed, Waltz of Bashir style
Replace `[Name]` with the subject of your portrait.
|
fira7s/mbp_LoRA
|
fira7s
| 2024-01-24T10:55:10Z | 2 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-24T10:55:08Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of MBAPEE person
license: openrail++
---
# SDXL LoRA DreamBooth - fira7s/mbp_LoRA
<Gallery />
## Model description
These are fira7s/mbp_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of MBAPEE person` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](fira7s/mbp_LoRA/tree/main) them in the Files & versions tab.
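A minimal sketch for trying these weights with `diffusers` (assuming the default `pytorch_lora_weights.safetensors` filename produced by the DreamBooth script):
```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("fira7s/mbp_LoRA")

# Use the trigger phrase from this card in the prompt.
image = pipe(prompt="a photo of MBAPEE person", num_inference_steps=30).images[0]
image.save("mbp_lora_sample.png")
```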
|
adi-vc/Reinforce-CartPole-v1
|
adi-vc
| 2024-01-24T10:54:28Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-23T20:24:56Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 121.90 +/- 6.67
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
MarcoScap/ppo-LunarLander-v2
|
MarcoScap
| 2024-01-24T10:51:01Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T10:50:41Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.35 +/- 22.81
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
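Pending the author's own snippet, a sketch of loading and evaluating the checkpoint with `huggingface_sb3` (the filename is an assumption; check the repo's files):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

checkpoint = load_from_hub(repo_id="MarcoScap/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")  # assumed name
model = PPO.load(checkpoint)

eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```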
|
simonycl/data-selection-Llama-2-7b-sharegpt-KCenterGreedyDeita-0.05-lora
|
simonycl
| 2024-01-24T10:41:38Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-24T10:41:26Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
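Until the card is completed, a minimal sketch for loading this LoRA adapter (the base model is resolved from the adapter config):
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "simonycl/data-selection-Llama-2-7b-sharegpt-KCenterGreedyDeita-0.05-lora"

# AutoPeftModelForCausalLM reads the adapter config and loads meta-llama/Llama-2-7b-hf underneath.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```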
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
jaydeepcomm/experiments
|
jaydeepcomm
| 2024-01-24T10:28:31Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-24T10:28:21Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: experiments
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# experiments
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1042
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9273 | 0.43 | 3 | 1.6630 |
| 1.2968 | 0.86 | 6 | 1.3173 |
| 0.9918 | 1.29 | 9 | 1.1548 |
| 1.1456 | 1.71 | 12 | 1.1042 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
xformAI/opt-6.7b-ub-16-qcqa-best-for-q-loss
|
xformAI
| 2024-01-24T10:27:03Z | 0 | 0 |
transformers
|
[
"transformers",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-01-24T10:26:41Z |
---
license: mit
language:
- en
library_name: transformers
---
This is a QCQA version of the original model facebook/opt-125m. In this version, the original MHA architecture is preserved, but instead of having a single K/V head, different K/V heads corresponding to the same group share the same mean-pooled K or V values. It has up to 16 groups of KV heads per layer instead of the original 32 KV heads of the MHA implementation. This implementation is supposed to be more efficient than the corresponding GQA one. This version has been optimized for quality loss.
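To make the grouping idea concrete, here is a rough sketch of mean-pooling K/V heads within a group (an illustration of the idea only, not this model's actual implementation):
```python
import torch

def mean_pool_kv_heads(kv: torch.Tensor, num_groups: int) -> torch.Tensor:
    """kv: (batch, num_heads, seq_len, head_dim); heads in a group share their mean-pooled K or V."""
    b, h, s, d = kv.shape
    assert h % num_groups == 0
    grouped = kv.reshape(b, num_groups, h // num_groups, s, d)
    pooled = grouped.mean(dim=2, keepdim=True)             # one K/V per group
    return pooled.expand_as(grouped).reshape(b, h, s, d)   # broadcast back to every head in the group

# Example: 32 MHA heads pooled into 16 QCQA groups.
shared_kv = mean_pool_kv_heads(torch.randn(1, 32, 128, 64), num_groups=16)
```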
|
e22vvb/EN_mt5-base_5_spider
|
e22vvb
| 2024-01-24T10:26:53Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-24T09:41:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: EN_mt5-base_5_spider
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EN_mt5-base_5_spider
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0147
- Rouge2 Precision: 0.0101
- Rouge2 Recall: 0.0008
- Rouge2 Fmeasure: 0.0014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| No log | 1.0 | 438 | 4.4261 | 0.0037 | 0.0008 | 0.0013 |
| 13.8978 | 2.0 | 876 | 1.9847 | 0.0032 | 0.0012 | 0.0016 |
| 3.4854 | 3.0 | 1314 | 1.6748 | 0.0002 | 0.0 | 0.0 |
| 2.1063 | 4.0 | 1752 | 1.2913 | 0.0074 | 0.0029 | 0.0036 |
| 1.6372 | 5.0 | 2190 | 1.0147 | 0.0101 | 0.0008 | 0.0014 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.7.dev0
- Tokenizers 0.13.3
|
HarshithNLP/outputs
|
HarshithNLP
| 2024-01-24T10:25:18Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloom-7b1",
"base_model:adapter:bigscience/bloom-7b1",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-01-24T10:25:13Z |
---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloom-7b1
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Gayathri142214002/Question_Generation_ComQ_7_2
|
Gayathri142214002
| 2024-01-24T10:24:44Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Gayathri142214002/Question_Generation_ComQ_6",
"base_model:finetune:Gayathri142214002/Question_Generation_ComQ_6",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-24T08:47:52Z |
---
license: apache-2.0
base_model: Gayathri142214002/Question_Generation_ComQ_6
tags:
- generated_from_trainer
model-index:
- name: Question_Generation_ComQ_7_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Question_Generation_ComQ_7_2
This model is a fine-tuned version of [Gayathri142214002/Question_Generation_ComQ_6](https://huggingface.co/Gayathri142214002/Question_Generation_ComQ_6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2389 | 0.23 | 100 | 0.2383 |
| 0.2843 | 0.47 | 200 | 0.2704 |
| 0.2913 | 0.7 | 300 | 0.2648 |
| 0.2755 | 0.94 | 400 | 0.2607 |
| 0.2329 | 1.17 | 500 | 0.2916 |
| 0.2302 | 1.41 | 600 | 0.2971 |
| 0.2426 | 1.64 | 700 | 0.2861 |
| 0.2546 | 1.88 | 800 | 0.2906 |
| 0.2163 | 2.11 | 900 | 0.2995 |
| 0.211 | 2.35 | 1000 | 0.3133 |
| 0.2202 | 2.58 | 1100 | 0.3082 |
| 0.2352 | 2.82 | 1200 | 0.3039 |
| 0.2169 | 3.05 | 1300 | 0.2971 |
| 0.1932 | 3.29 | 1400 | 0.3126 |
| 0.2043 | 3.52 | 1500 | 0.3173 |
| 0.2066 | 3.76 | 1600 | 0.3100 |
| 0.2099 | 3.99 | 1700 | 0.3101 |
| 0.1672 | 4.23 | 1800 | 0.3226 |
| 0.1813 | 4.46 | 1900 | 0.3295 |
| 0.1823 | 4.7 | 2000 | 0.3280 |
| 0.1967 | 4.93 | 2100 | 0.3247 |
| 0.1725 | 5.17 | 2200 | 0.3330 |
| 0.1723 | 5.4 | 2300 | 0.3336 |
| 0.162 | 5.64 | 2400 | 0.3360 |
| 0.1716 | 5.87 | 2500 | 0.3337 |
| 0.1659 | 6.11 | 2600 | 0.3340 |
| 0.1553 | 6.34 | 2700 | 0.3355 |
| 0.1537 | 6.58 | 2800 | 0.3366 |
| 0.1589 | 6.81 | 2900 | 0.3358 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
xformAI/opt-6.7b-ub-16-qcqa-best-for-KV-cache
|
xformAI
| 2024-01-24T10:21:43Z | 0 | 0 |
transformers
|
[
"transformers",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-01-24T10:20:19Z |
---
license: mit
language:
- en
library_name: transformers
---
This is a QCQA version of the original model facebook/opt-125m. In this version, the original MHA architecture is preserved, but instead of having a single K/V head, different K/V heads corresponding to the same group share the same mean-pooled K or V values. It has up to 16 groups of KV heads per layer instead of the original 32 KV heads of the MHA implementation. This implementation is supposed to be more efficient than the corresponding GQA one. This version has been optimized for KV-cache.
|
katik0/layoutlm-funsd-tf
|
katik0
| 2024-01-24T10:20:48Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"layoutlm",
"token-classification",
"generated_from_keras_callback",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-24T10:20:24Z |
---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: layoutlm-funsd-tf
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd-tf
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2471
- Validation Loss: 0.6741
- Train Overall Precision: 0.7311
- Train Overall Recall: 0.7858
- Train Overall F1: 0.7574
- Train Overall Accuracy: 0.8104
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
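A rough sketch of recreating this optimizer and precision setup in TensorFlow with the `transformers` helper (the number of training steps is an assumption, not taken from the card):
```python
import tensorflow as tf
from transformers import create_optimizer

# Mixed-precision policy matching the reported training_precision.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Sketch only: AdamWeightDecay with the reported settings.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=1000,  # assumed
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```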
### Training results
| Train Loss | Validation Loss | Train Overall Precision | Train Overall Recall | Train Overall F1 | Train Overall Accuracy | Epoch |
|:----------:|:---------------:|:-----------------------:|:--------------------:|:----------------:|:----------------------:|:-----:|
| 1.7162 | 1.4302 | 0.2735 | 0.3026 | 0.2873 | 0.4851 | 0 |
| 1.1601 | 0.8705 | 0.5728 | 0.6708 | 0.6180 | 0.7254 | 1 |
| 0.7538 | 0.7479 | 0.6533 | 0.7055 | 0.6784 | 0.7572 | 2 |
| 0.5704 | 0.6795 | 0.6686 | 0.7582 | 0.7106 | 0.7936 | 3 |
| 0.4379 | 0.6239 | 0.7022 | 0.7762 | 0.7374 | 0.8062 | 4 |
| 0.3470 | 0.6538 | 0.7226 | 0.7842 | 0.7522 | 0.7986 | 5 |
| 0.2908 | 0.6827 | 0.7033 | 0.7777 | 0.7386 | 0.7971 | 6 |
| 0.2471 | 0.6741 | 0.7311 | 0.7858 | 0.7574 | 0.8104 | 7 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Skier8402/whisper-small-tiny
|
Skier8402
| 2024-01-24T10:20:05Z | 66 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"asr",
"sst",
"swahili",
"sw",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-10-20T09:40:30Z |
---
language:
- sw
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
- asr
- sst
- swahili
datasets:
- mozilla-foundation/common_voice_13_0
model-index:
- name: Whisper Tiny Sw - Skier8402
results: []
library_name: transformers
metrics:
- wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Sw - Skier8402
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Swahili subset of the Common Voice 13 dataset.
## Model description
More information needed.
## Intended uses & limitations
The model was trained without enough added noise as a form of data augmentation. Do not use this in production. I recommend using a larger version of Whisper with
more hyperparameter tuning, especially of the learning rate, momentum, and weight decay, and with an adjusted batch size.
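For quick experimentation only, the checkpoint can be tried with the ASR pipeline (the audio file path is a placeholder):
```python
from transformers import pipeline

# Sketch only; path to a local Swahili audio file is a placeholder.
asr = pipeline(
    "automatic-speech-recognition",
    model="Skier8402/whisper-small-tiny",
)
print(asr("sample_swahili.wav")["text"])
```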
## Training and evaluation data
I followed the tutorial [here](https://huggingface.co/learn/audio-course/chapter5/fine-tuning). Only very minimal edits were made to the code from
that tutorial.
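A minimal sketch of loading the Swahili split of Common Voice 13 (the dataset is gated and requires accepting its terms on the Hub; the split names below follow the usual Common Voice layout):
```python
from datasets import load_dataset, Audio

# Sketch only: Swahili ("sw") subset of Common Voice 13.
common_voice = load_dataset(
    "mozilla-foundation/common_voice_13_0", "sw", split="train+validation"
)
# Whisper's feature extractor expects 16 kHz audio.
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))
```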
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
thobuiq/teamtrack-ai
|
thobuiq
| 2024-01-24T10:16:30Z | 74 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"base_model:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"base_model:quantized:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-01-19T15:24:00Z |
---
license: apache-2.0
base_model: TheBloke/OpenHermes-2-Mistral-7B-GPTQ
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: teamtrack-ai
results: []
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# teamtrack-ai
This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6303
- Rewards/chosen: -0.0503
- Rewards/rejected: -0.1912
- Rewards/accuracies: 0.875
- Rewards/margins: 0.1409
- Logps/rejected: -190.9696
- Logps/chosen: -89.6439
- Logits/rejected: -2.7104
- Logits/chosen: -2.8594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
- mixed_precision_training: Native AMP
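For context, the results below come from a DPO run; a minimal TRL sketch with these settings might look like the following (the preference dataset, PEFT details, and DPO beta are assumptions, based on the trl 0.7.x API, which may differ in newer releases):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"
# Loading a GPTQ checkpoint requires auto-gptq/optimum; in practice a
# LoRA/PEFT adapter is typically attached before training.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

# Assumed preference dataset with "prompt", "chosen", "rejected" columns.
dataset = load_dataset("json", data_files="preferences.jsonl", split="train")

args = TrainingArguments(
    output_dir="teamtrack-ai",
    per_device_train_batch_size=1,
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=50,
    fp16=True,  # mixed-precision (Native AMP)
)

# ref_model=None lets TRL create the frozen reference model internally.
trainer = DPOTrainer(
    model=model,
    ref_model=None,
    args=args,
    beta=0.1,  # assumed DPO beta
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```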
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6859 | 0.01 | 10 | 0.6664 | 0.0030 | -0.0376 | 0.6875 | 0.0406 | -189.4330 | -89.1108 | -2.7111 | -2.8122 |
| 0.6888 | 0.01 | 20 | 0.6478 | -0.0110 | -0.0944 | 0.875 | 0.0834 | -190.0014 | -89.2510 | -2.7160 | -2.8235 |
| 0.6397 | 0.01 | 30 | 0.6385 | -0.0256 | -0.1254 | 0.8125 | 0.0997 | -190.3110 | -89.3974 | -2.7148 | -2.8392 |
| 0.6501 | 0.02 | 40 | 0.6365 | -0.0472 | -0.1782 | 0.8125 | 0.1311 | -190.8396 | -89.6128 | -2.7116 | -2.8528 |
| 0.6852 | 0.03 | 50 | 0.6303 | -0.0503 | -0.1912 | 0.875 | 0.1409 | -190.9696 | -89.6439 | -2.7104 | -2.8594 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
|
s3nh/Hermaid-7B-GGUF
|
s3nh
| 2024-01-24T10:15:28Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T09:39:46Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/ToastyPigeon/Hermaid-7B).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
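As a usage sketch (the GGUF file name is an assumption), a file from this repository can be loaded with `llama-cpp-python`:
```python
from llama_cpp import Llama

# Sketch only: the exact GGUF file name in this repo is an assumption.
llm = Llama(model_path="Hermaid-7B.Q4_K_M.gguf", n_ctx=2048)

out = llm("User: Tell me a story about quantization.\nAssistant:", max_tokens=128)
print(out["choices"][0]["text"])
```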
### Inference
User: Tell me story about what is an quantization and what do we need to build.
James: We will create a new data type in Rust which represents quantized data. The goal of this project is to be able to read/write images, audio, etc. from quantized data. This involves writing some code for reading/writing the different formats and also some tests. This task can be divided into several parts:
- Reading/writing of the image format(s) from quantized data (e.g. JPEG, PNG).
- Reading/writing of audio formats from quantized data (e.g. WAV, Ogg Vorbis).
- Writing tests to
# Original model card
|
tremolo09/xama
|
tremolo09
| 2024-01-24T10:08:23Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-01-24T10:08:18Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/baiyin811-sssins.com- (4).jpg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
license: apache-2.0
---
# cama
<Gallery />
## Download model
[Download](/tremolo09/xama/tree/main) them in the Files & versions tab.
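They can also be loaded directly with `diffusers`; the prompt below is an assumption, and the exact LoRA weight file name may need to be passed explicitly:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights from this repository (sketch only).
pipe.load_lora_weights("tremolo09/xama")

image = pipe("portrait photo, detailed lighting").images[0]  # assumed prompt
image.save("xama_sample.png")
```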
|