modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
ericp/mynewmodel
|
ericp
| 2024-02-02T02:47:18Z | 4 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-02-01T22:47:55Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
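Until this section is filled in, the following is a minimal loading sketch (not from the original card) based on the adapter metadata above; it assumes the adapter targets `meta-llama/Llama-2-7b-chat-hf` (a gated repo requiring access) and that `peft`, `transformers`, and `accelerate` are installed.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"  # base model listed in the adapter config
adapter_id = "ericp/mynewmodel"            # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the PEFT adapter
```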
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
jlbaker361/ddpo-stability-test-3
|
jlbaker361
| 2024-02-02T02:43:02Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-02T02:41:22Z |
---
{}
---
# DDPO trained model
Training hyperparameters:
- num_epochs=4
- train_gradient_accumulation_steps=1
- sample_num_steps=10
- sample_batch_size=4
- train_batch_size=4
- sample_num_batches_per_epoch=1

Based on stabilityai/stable-diffusion-2-base, then further trained from jlbaker361/ddpo-stability-test-2.
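As a rough starting point (not part of the original card), the repo is tagged `diffusers:StableDiffusionPipeline`, so it can presumably be loaded with the standard diffusers API:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DDPO-finetuned pipeline from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "jlbaker361/ddpo-stability-test-3", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("sample.png")
```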
|
asun17904/glue-qnli-bert-base-uncased-kd
|
asun17904
| 2024-02-02T02:41:15Z | 0 | 0 |
pytorch
|
[
"pytorch",
"en",
"license:mit",
"region:us"
] | null | 2024-02-02T02:17:49Z |
---
language: en
license: mit
library_name: pytorch
---
# Plainly Optimized Network
Dataset: GLUE
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 8
- `gradient_accumulation_steps` = 2
- `weight_decay` = 0.0
- `seed` = 42
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|
Samee-ur/DareTIES-7B
|
Samee-ur
| 2024-02-02T02:38:27Z | 0 | 0 | null |
[
"merge",
"mergekit",
"lazymergekit",
"samir-fama/SamirGPT-v1",
"abacusai/Slerp-CM-mist-dpo",
"EmbeddedLLM/Mistral-7B-Merge-14-v0.2",
"base_model:EmbeddedLLM/Mistral-7B-Merge-14-v0.2",
"base_model:merge:EmbeddedLLM/Mistral-7B-Merge-14-v0.2",
"base_model:abacusai/Slerp-CM-mist-dpo",
"base_model:merge:abacusai/Slerp-CM-mist-dpo",
"base_model:samir-fama/SamirGPT-v1",
"base_model:merge:samir-fama/SamirGPT-v1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-02T02:38:26Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- samir-fama/SamirGPT-v1
- abacusai/Slerp-CM-mist-dpo
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2
base_model:
- samir-fama/SamirGPT-v1
- abacusai/Slerp-CM-mist-dpo
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2
---
# DareTIES-7B
DareTIES-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1)
* [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo)
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.2](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.2)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: samir-fama/SamirGPT-v1
parameters:
density: 0.53
weight: 0.4
- model: abacusai/Slerp-CM-mist-dpo
parameters:
density: 0.53
weight: 0.3
- model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Samee-ur/DareTIES-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
PowerInfer/ReluLLaMA-70B-PowerInfer-GGUF
|
PowerInfer
| 2024-02-02T02:24:33Z | 50 | 9 |
transformers
|
[
"transformers",
"gguf",
"relullama",
"en",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2023-12-16T02:51:46Z |
---
license: llama2
language:
- en
---
# ReluLLaMA-70B-PowerInfer-GGUF
- Original model: [SparseLLM/ReluLLaMA-70B](https://huggingface.co/SparseLLM/ReluLLaMA-70B)
- Converted & distributed by: [PowerInfer](https://huggingface.co/PowerInfer)
This model is the downstream distribution of [SparseLLM/ReluLLaMA-70B](https://huggingface.co/SparseLLM/ReluLLaMA-70B) in PowerInfer GGUF format, consisting of both the LLM weights and the predictor weights.
|
zhuxunyu/etd-codet5p-770m-py
|
zhuxunyu
| 2024-02-02T02:24:07Z | 93 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"dataset:gsm8k",
"dataset:ChilleD/SVAMP",
"dataset:EleutherAI/asdiv",
"arxiv:2401.11864",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-22T09:29:57Z |
---
license: apache-2.0
datasets:
- gsm8k
- ChilleD/SVAMP
- EleutherAI/asdiv
metrics:
- accuracy
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
We use Ensemble Thoughts Distillation to distill mathematical reasoning ability from gpt-3.5-turbo into CodeT5+-770m-py.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Xunyu Zhu
- **Model type:** encoder-decoder
- **Language(s) (NLP):** python
- **License:** apache-2.0
- **Finetuned from model:** [Salesforce/codet5p-770m-py](https://huggingface.co/Salesforce/codet5p-770m-py)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model can be easily loaded using the `AutoModelForSeq2SeqLM` functionality and employs the same tokenizer as the original [Salesforce/codet5p-770m-py](https://huggingface.co/Salesforce/codet5p-770m-py).
When given a question, append the prompt "Let’s break down the code step by step" to the input to instruct the model to generate a program in PoT.
When given a question, append the prompt "Let's think step by step." to the input to instruct the model to generate a rationale in CoT.
When given a question, append the prompt "System of linear equations: (Do not simplify)" to the input to instruct the model to generate equations in EoT.
### PoT
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "zhuxunyu/etd-codet5p-770m-py"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint).to(device)

# Append the PoT prompt so the model generates a reasoning program.
question = "Question: Janet\u2019s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?\nLet’s break down the code step by step\n"
inputs = tokenizer(question, max_length=256, padding="max_length", truncation=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_length=256)
generation = tokenizer.decode(output[0], skip_special_tokens=True)
```
### CoT
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "zhuxunyu/etd-codet5p-770m-py"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint).to(device)

# Append the CoT prompt so the model generates a step-by-step rationale.
question = "Question: Janet\u2019s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?\nLet's think step by step.\n"
inputs = tokenizer(question, max_length=256, padding="max_length", truncation=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_length=256)
generation = tokenizer.decode(output[0], skip_special_tokens=True)
```
### EoT
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "zhuxunyu/etd-codet5p-770m-py"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint).to(device)

# Append the EoT prompt so the model generates a system of equations.
question = "Question: Janet\u2019s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?\nSystem of linear equations: (Do not simplify)\n"
inputs = tokenizer(question, max_length=256, padding="max_length", truncation=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_length=256)
generation = tokenizer.decode(output[0], skip_special_tokens=True)
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
We prompt gpt-3.5-turbo to generate reasoning processes for the questions in the GSM8K training set; each question is paired with 4 reasoning programs, 4 reasoning rationales, and 4 systems of reasoning equations. The GSM8K training questions and their corresponding reasoning processes are then assembled into a training dataset, which we use to fine-tune the LM.
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Results
| Dataset | GSM8K | ASDiv | SVAMP | MultiArith |
| :-----: | :---: | :---: | :---: | :--------: |
| PoT | 50.34 | 55.2 | 51.6 | 88.33 |
| EoT | 48.21 | 52.81 | 55.7 | 70.16 |
| CoT | 25.47 | 29.67 | 23.3 | 46.5 |
| Ensemble_all | 50.56 | 55.34 | 52.3 | 88.83 |
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@misc{zhu2024improving,
title={Distilling Mathematical Reasoning Capabilities into Small Language Models},
author={Xunyu Zhu and Jian Li and Yong Liu and Can Ma and Weiping Wang},
year={2024},
eprint={2401.11864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
asun17904/glue-qqp-t5-base-kd
|
asun17904
| 2024-02-02T02:01:22Z | 0 | 0 |
pytorch
|
[
"pytorch",
"en",
"license:mit",
"region:us"
] | null | 2024-02-02T00:36:07Z |
---
language: en
license: mit
library_name: pytorch
---
# Plainly Optimized Network
Dataset: GLUE
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 8
- `gradient_accumulation_steps` = 2
- `weight_decay` = 1e-09
- `seed` = 42
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|
asun17904/glue-qnli-gpt2-alum
|
asun17904
| 2024-02-02T01:58:55Z | 0 | 0 |
pytorch
|
[
"pytorch",
"en",
"license:mit",
"region:us"
] | null | 2024-02-01T16:26:20Z |
---
language: en
license: mit
library_name: pytorch
---
# Plainly Optimized Network
Dataset: GLUE
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 8
- `gradient_accumulation_steps` = 2
- `weight_decay` = 0.0
- `seed` = 42
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|0.463|0.851|1.0|
|0.454|0.857|2.0|
|
jlbaker361/ddpo-stability-test
|
jlbaker361
| 2024-02-02T01:57:56Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-01T21:56:39Z |
---
{}
---
# DDPO trained model
Training hyperparameters:
- num_epochs=6
- train_gradient_accumulation_steps=1
- sample_num_steps=10
- sample_batch_size=4
- train_batch_size=4
- sample_num_batches_per_epoch=1

Based on stabilityai/stable-diffusion-2-base, then further trained from None (i.e., no prior checkpoint).
|
alnrg2arg/blockchainlabs_test3_seminar
|
alnrg2arg
| 2024-02-02T01:55:01Z | 50 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"FelixChao/WestSeverus-7B-DPO-v2",
"macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T01:51:09Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- FelixChao/WestSeverus-7B-DPO-v2
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
---
# blockchainlabs_test3_seminar
blockchainlabs_test3_seminar is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: FelixChao/WestSeverus-7B-DPO-v2
layer_range: [0, 32]
- model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
layer_range: [0, 32]
merge_method: slerp
base_model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: #bfloat16 # bfloat16 is faster than float16 during training.
```
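For reference, a minimal generation sketch (not part of the original card); it assumes the merged weights load through the standard `transformers` API and that `accelerate` is installed for `device_map="auto"`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alnrg2arg/blockchainlabs_test3_seminar"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Simple generation from a short prompt.
inputs = tokenizer("Merging language models works by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```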
|
jlbaker361/ddpo-stability-good
|
jlbaker361
| 2024-02-02T01:46:29Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-02T01:10:31Z |
---
{}
---
# DDPO trained model
Training hyperparameters:
- num_epochs=2
- train_gradient_accumulation_steps=1
- sample_num_steps=30
- sample_batch_size=4
- train_batch_size=4
- sample_num_batches_per_epoch=1

Based on stabilityai/stable-diffusion-2-base, then further trained from None (i.e., no prior checkpoint).
|
wiusdy/VQA_fashion_hvar
|
wiusdy
| 2024-02-02T01:36:21Z | 0 | 0 | null |
[
"visual-question-answering-for-fashion-context",
"license:apache-2.0",
"region:us"
] | null | 2024-02-02T01:35:50Z |
---
tags:
- visual-question-answering-for-fashion-context
license: apache-2.0
widget:
- text: "Testing.."
src: "figure"
- text: "Testing again.."
src: "new figure"
---
# This is a simple VQA system using Hugging Face, PyTorch, and the Vision-and-Language Transformer (ViLT)
-------------
In this repository we created a simple VQA system capable of recognizing spatial and contextual information in fashion images (e.g., clothing color and details).
The project is based on the paper **FashionVQA: A Domain-Specific Visual Question Answering System** [[1]](#1).
## References
<a id="1">[1]</a>
Min Wang, Ata Mahjoubfar, and Anupama Joshi (2022). FashionVQA: A Domain-Specific Visual Question Answering System.
|
Kamaljp/gpt2-wiki
|
Kamaljp
| 2024-02-02T01:34:40Z | 203 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T01:04:28Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-wiki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wiki
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.3088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7678 | 1.0 | 1125 | 6.6381 |
| 6.4202 | 2.0 | 2250 | 6.3866 |
| 6.2486 | 3.0 | 3375 | 6.3088 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
DanielClough/Candle_SOLAR-10.7B-v1.0
|
DanielClough
| 2024-02-02T01:32:13Z | 53 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"dataset:upstage/SOLAR-10.7B-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-19T23:17:47Z |
---
datasets:
- upstage/SOLAR-10.7B-v1.0
language:
- en
pipeline_tag: text-generation
license: apache-2.0
---
This repo includes `.gguf` files built for HuggingFace/Candle.
Refer to the [original repo](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) for more details.
|
hannahbernstein/outputs
|
hannahbernstein
| 2024-02-02T01:30:24Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-02T01:30:01Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 120
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Ricardo54321/A2C-PandaReach
|
Ricardo54321
| 2024-02-02T01:24:48Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T01:20:22Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
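Until the snippet above is filled in, a minimal loading sketch (not from the original card) might look like the following; the checkpoint filename is hypothetical and should be checked against the repository's file list:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# NOTE: the filename is a guess; verify the actual .zip name in the repository files.
checkpoint = load_from_hub(repo_id="Ricardo54321/A2C-PandaReach", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```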
|
karawalla/aqmodel_20240128
|
karawalla
| 2024-02-02T01:05:00Z | 60 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-28T07:03:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Denox05/william
|
Denox05
| 2024-02-02T01:00:10Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-02-02T01:00:10Z |
---
license: other
license_name: rvc
license_link: LICENSE
---
|
LoneStriker/miquella-120b-3.5bpw-h6-exl2
|
LoneStriker
| 2024-02-02T00:53:07Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T00:30:54Z |
---
base_model: []
tags:
- mergekit
- merge
---
# Miquella 120B
## Model has been remade with the [fixed dequantization](https://huggingface.co/152334H/miqu-1-70b-sf) of miqu.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
An attempt at re-creating [goliath-120b](https://huggingface.co/alpindale/goliath-120b) using the new miqu-1-70b model instead of Xwin.
The merge ratios are the same as goliath's; the only change is that Xwin is swapped out for miqu.
### Models Merged
The following models were included in the merge:
* [miqu-1-70b](https://huggingface.co/alpindale/miqu-1-70b-fp16)
* [Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B)

Miquella the Unalloyed, by @eldrtchmoon
|
enricai/chat-es
|
enricai
| 2024-02-02T00:47:42Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"base_model:adapter:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"region:us"
] | null | 2024-01-31T11:10:00Z |
---
library_name: peft
base_model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
cloudyu/19B_TRUTH_DPO
|
cloudyu
| 2024-02-02T00:47:14Z | 48 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-30T12:13:39Z |
---
license: cc-by-nc-4.0
---
* [This is a DPO-improved version of cloudyu/Mixtral_11Bx2_MoE_19B](https://huggingface.co/cloudyu/Mixtral_11Bx2_MoE_19B)
* [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer)
* Metrics not tested yet.
|
BarraHome/zephyr-dpo-fast-gguf
|
BarraHome
| 2024-02-02T00:47:13Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"en",
"base_model:unsloth/zephyr-sft",
"base_model:quantized:unsloth/zephyr-sft",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-01T22:16:45Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/zephyr-sft
---
# Uploaded model
- **Developed by:** BarraHome
- **License:** apache-2.0
- **Finetuned from model:** unsloth/zephyr-sft
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
zhaoxinwind/QA_model2
|
zhaoxinwind
| 2024-02-02T00:37:12Z | 94 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-02-01T09:27:46Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: QA_model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA_model2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
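As an illustration (not part of the original card), the repo is tagged for extractive question answering with DistilBERT, so it can presumably be used through the standard `transformers` pipeline:
```python
from transformers import pipeline

# Load this checkpoint as a question-answering pipeline.
qa = pipeline("question-answering", model="zhaoxinwind/QA_model2")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```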
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0004 | 1.0 | 1202 | 0.0001 |
| 0.0001 | 2.0 | 2404 | 0.0001 |
| 0.0001 | 3.0 | 3606 | 0.0000 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cpu
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Chattiori/SagarMix
|
Chattiori
| 2024-02-02T00:36:58Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-01T06:45:21Z |
---
license: creativeml-openrail-m
---
|
stevugnin/llama-2-7b-bics-multi_woz_v22
|
stevugnin
| 2024-02-02T00:32:35Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T18:39:14Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
Patcas/plbart-nodocssnew-v3
|
Patcas
| 2024-02-02T00:30:13Z | 22 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"plbart",
"text2text-generation",
"generated_from_trainer",
"base_model:Patcas/plbart-works",
"base_model:finetune:Patcas/plbart-works",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-01T19:05:46Z |
---
base_model: Patcas/plbart-works
tags:
- generated_from_trainer
model-index:
- name: plbart-nodocssnew-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plbart-nodocssnew-v3
This model is a fine-tuned version of [Patcas/plbart-works](https://huggingface.co/Patcas/plbart-works) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 230 | 1.1271 |
| No log | 2.0 | 460 | 1.0330 |
| 0.9597 | 3.0 | 690 | 1.0390 |
| 0.9597 | 4.0 | 920 | 1.0419 |
| 0.356 | 5.0 | 1150 | 1.0621 |
| 0.356 | 6.0 | 1380 | 1.0592 |
| 0.1706 | 7.0 | 1610 | 1.0828 |
| 0.1706 | 8.0 | 1840 | 1.0934 |
| 0.1116 | 9.0 | 2070 | 1.0964 |
| 0.1116 | 10.0 | 2300 | 1.0991 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
noguchis/medusa-1.0-ELYZA-japanese-Llama-2-7b-instruct
|
noguchis
| 2024-02-02T00:24:34Z | 3 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:elyza/ELYZA-japanese-Llama-2-7b-instruct",
"base_model:quantized:elyza/ELYZA-japanese-Llama-2-7b-instruct",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-02T00:15:15Z |
---
base_model: elyza/ELYZA-japanese-Llama-2-7b-instruct
tags:
- generated_from_trainer
model-index:
- name: medusa-1.0-ELYZA-japanese-Llama-2-7b-instruct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/Open
Access-AI-Collective/axolotl)
# medusa-1.0-ELYZA-japanese-Llama-2-7b-instruct
This model is a fine-tuned version of [elyza/ELYZA-japanese-Llama-2-7b-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8838
## Model description
This is a Medusa-1 created using [Medusa](https://github.com/FasterDecoding/Medusa).
## Intended uses & limitations
- [【Orion-14B Series】 Models Community License Agreement](https://huggingface.co/OrionStarAI/Orion-14B-Chat/blob/main/ModelsCommunityLicenseAgreement)
## Training and evaluation data
- [shi3z/ja_conv_wikipedia_orion14B_100K](https://huggingface.co/datasets/shi3z/ja_conv_wikipedia_orion14B_100K)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4268 | 0.06 | 40 | 2.4129 |
| 2.204 | 0.11 | 80 | 2.2328 |
| 2.1341 | 0.17 | 120 | 2.1939 |
| 2.1774 | 0.23 | 160 | 2.1762 |
| 2.1331 | 0.28 | 200 | 2.1652 |
| 2.1485 | 0.34 | 240 | 2.1537 |
| 2.1608 | 0.4 | 280 | 2.1485 |
| 2.0998 | 0.45 | 320 | 2.1340 |
| 2.1478 | 0.51 | 360 | 2.1233 |
| 2.1374 | 0.57 | 400 | 2.1165 |
| 2.0657 | 0.62 | 440 | 2.0984 |
| 2.1227 | 0.68 | 480 | 2.0834 |
| 2.0573 | 0.74 | 520 | 2.0739 |
| 2.0501 | 0.79 | 560 | 2.0602 |
| 2.0664 | 0.85 | 600 | 2.0431 |
| 2.0231 | 0.91 | 640 | 2.0277 |
| 2.0263 | 0.96 | 680 | 2.0119 |
| 1.8635 | 1.02 | 720 | 1.9990 |
| 1.8844 | 1.07 | 760 | 1.9905 |
| 1.8585 | 1.13 | 800 | 1.9796 |
| 1.8018 | 1.19 | 840 | 1.9728 |
| 1.8134 | 1.24 | 880 | 1.9595 |
| 1.7795 | 1.3 | 920 | 1.9498 |
| 1.7603 | 1.36 | 960 | 1.9371 |
| 1.8302 | 1.41 | 1000 | 1.9262 |
| 1.7909 | 1.47 | 1040 | 1.9172 |
| 1.7787 | 1.53 | 1080 | 1.9083 |
| 1.7476 | 1.58 | 1120 | 1.9010 |
| 1.7897 | 1.64 | 1160 | 1.8945 |
| 1.7795 | 1.7 | 1200 | 1.8895 |
| 1.7329 | 1.75 | 1240 | 1.8864 |
| 1.7221 | 1.81 | 1280 | 1.8846 |
| 1.7624 | 1.87 | 1320 | 1.8840 |
| 1.793 | 1.92 | 1360 | 1.8838 |
| 1.7839 | 1.98 | 1400 | 1.8838 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.14.1
|
mlx-community/CodeLlama-13b-Instruct-hf-4bit-MLX
|
mlx-community
| 2024-02-02T00:18:18Z | 86 | 2 |
mlx
|
[
"mlx",
"llama",
"llama-2",
"text-generation",
"code",
"license:llama2",
"region:us"
] |
text-generation
| 2024-01-30T03:03:15Z |
---
language:
- code
license: llama2
tags:
- llama-2
- mlx
pipeline_tag: text-generation
---

# mlx-community/CodeLlama-13b-Instruct-hf-4bit-MLX
This model was converted to MLX format from [`codellama/CodeLlama-13b-Instruct-hf`](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf).
Refer to the [original model card](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/CodeLlama-13b-Instruct-hf-4bit-MLX")
response = generate(model, tokenizer, prompt="<s>[INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always do... If you are unsure about an answer, truthfully say \"I don't know\" <</SYS>> What's the meaning of life [/INST]", verbose=True)
```
|
mlx-community/CodeLlama-7b-Python-4bit-MLX
|
mlx-community
| 2024-02-02T00:17:41Z | 56 | 14 |
mlx
|
[
"mlx",
"llama",
"llama-2",
"text-generation",
"code",
"license:llama2",
"region:us"
] |
text-generation
| 2024-01-29T20:29:21Z |
---
language:
- code
license: llama2
tags:
- llama-2
- mlx
pipeline_tag: text-generation
---

# mlx-community/CodeLlama-7b-Python-4bit
This model was converted to MLX format from [`codellama/CodeLlama-7b-Python-hf`](https://huggingface.co/codellama/CodeLlama-7b-Python-hf).
Refer to the [original model card](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/CodeLlama-7b-Python-4bit")
response = generate(model, tokenizer, prompt="<s>[INST] <<SYS>> You are a helpful, respectful, and honest assistant. Always do... If you are unsure about an answer, truthfully say \"I don't know\" <</SYS>> What's the meaning of life [/INST]", verbose=True)
```
|
tompkinsguitar/bloom-560m_lora_chat_test
|
tompkinsguitar
| 2024-02-02T00:15:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-02T00:15:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CLMBR/pp-mod-subj-lstm-2
|
CLMBR
| 2024-02-02T00:12:31Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-26T10:11:20Z |
---
tags:
- generated_from_trainer
model-index:
- name: pp-mod-subj2-lstm-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pp-mod-subj2-lstm-2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7935 | 0.03 | 76320 | 4.8062 |
| 4.5109 | 1.03 | 152640 | 4.5276 |
| 4.364 | 0.03 | 228960 | 4.3927 |
| 4.2769 | 1.03 | 305280 | 4.3088 |
| 4.218 | 2.03 | 381600 | 4.2524 |
| 4.1632 | 0.03 | 457920 | 4.2110 |
| 4.122 | 1.03 | 534240 | 4.1790 |
| 4.0898 | 0.03 | 610560 | 4.1564 |
| 4.061 | 1.03 | 686880 | 4.1367 |
| 4.0412 | 0.03 | 763200 | 4.1209 |
| 4.017 | 1.03 | 839520 | 4.1068 |
| 4.0005 | 2.03 | 915840 | 4.0968 |
| 3.9871 | 0.03 | 992160 | 4.0886 |
| 3.9741 | 1.03 | 1068480 | 4.0804 |
| 3.9581 | 0.03 | 1144800 | 4.0726 |
| 3.9422 | 1.03 | 1221120 | 4.0674 |
| 3.9348 | 2.03 | 1297440 | 4.0626 |
| 3.9277 | 0.03 | 1373760 | 4.0584 |
| 3.9161 | 1.03 | 1450080 | 4.0541 |
| 3.9137 | 2.03 | 1526400 | 4.0513 |
| 3.9114 | 0.03 | 1602720 | 4.0470 |
| 3.9006 | 1.03 | 1679040 | 4.0451 |
| 3.8935 | 2.03 | 1755360 | 4.0424 |
| 3.8884 | 0.03 | 1831680 | 4.0403 |
| 3.8812 | 1.03 | 1908000 | 4.0383 |
| 3.8775 | 0.03 | 1984320 | 4.0364 |
| 3.8699 | 1.03 | 2060640 | 4.0352 |
| 3.8649 | 0.03 | 2136960 | 4.0336 |
| 3.8632 | 1.03 | 2213280 | 4.0325 |
| 3.8588 | 0.03 | 2289600 | 4.0305 |
| 3.8516 | 1.03 | 2365920 | 4.0299 |
| 3.8467 | 2.03 | 2442240 | 4.0294 |
| 3.8464 | 0.03 | 2518560 | 4.0286 |
| 3.8457 | 0.03 | 2594880 | 4.0275 |
| 3.8392 | 1.03 | 2671200 | 4.0269 |
| 3.841 | 0.03 | 2747520 | 4.0259 |
| 3.8415 | 1.03 | 2823840 | 4.0257 |
| 3.8387 | 0.03 | 2900160 | 4.0253 |
| 3.8357 | 1.03 | 2976480 | 4.0248 |
| 3.834 | 2.02 | 3052726 | 4.0244 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
LoneStriker/miquella-120b-4.5bpw-h6-exl2
|
LoneStriker
| 2024-02-02T00:11:44Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T23:43:16Z |
---
base_model: []
tags:
- mergekit
- merge
---
# Miquella 120B
## Model has been remade with the [fixed dequantization](https://huggingface.co/152334H/miqu-1-70b-sf) of miqu.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
An attempt at re-creating [goliath-120b](https://huggingface.co/alpindale/goliath-120b) using the new miqu-1-70b model instead of Xwin.
The merge ratios are the same as goliath's, except that Xwin is swapped for miqu.
### Models Merged
The following models were included in the merge:
* [miqu-1-70b](https://huggingface.co/alpindale/miqu-1-70b-fp16)
* [Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B)

Miquella the Unalloyed, by @eldrtchmoon
|
Overgrown7380/dqn-SpaceInvadersNoFrameskip-v4
|
Overgrown7380
| 2024-02-02T00:11:27Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T00:10:54Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 583.50 +/- 156.32
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Overgrown7380 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Overgrown7380 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Overgrown7380
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
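For programmatic use outside the RL Zoo scripts, a minimal loading sketch; the checkpoint filename is an assumption based on the usual RL Zoo naming convention, so check the repository's file list if it differs:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub and load it with SB3's DQN.
checkpoint = load_from_hub(
    repo_id="Overgrown7380/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed filename
)
model = DQN.load(checkpoint)
```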
|
nlewins/whisper-small-translate-X-gen2-examples-quality-step4-1e-6
|
nlewins
| 2024-02-02T00:06:06Z | 60 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ceb",
"dataset:nlewins/ceb-sentences-with-audio",
"dataset:nlewins/standard_dataset_nonsynthetic",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-01T01:06:22Z |
---
language:
- ceb
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- nlewins/ceb-sentences-with-audio
- nlewins/standard_dataset_nonsynthetic
model-index:
- name: Whisper finetuned for ceb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper finetuned for ceb
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Fleurs Ceb subset, LSK nonsynthetic, Onetalk Q&A, and Ceb sentences datasets.
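A minimal inference sketch; the audio path is a placeholder, and the default decoding options are assumed to be appropriate for this checkpoint:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="nlewins/whisper-small-translate-X-gen2-examples-quality-step4-1e-6",
)

# "sample.wav" is a placeholder for a Cebuano recording.
print(asr("sample.wav")["text"])
```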
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 20000
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
stablediffusionapi/hentai
|
stablediffusionapi
| 2024-02-02T00:04:57Z | 31 | 2 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-02-02T00:03:06Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# hentai API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "hentai".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/hentai)
Model link: [View model](https://modelslab.com/models/hentai)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "hentai",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
davibelo/autotrain-phi2-sft
|
davibelo
| 2024-02-02T00:00:44Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T00:00:41Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
coke0zero/poca-SoccerTwos
|
coke0zero
| 2024-02-01T23:55:08Z | 23 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-02-01T23:54:03Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: coke0zero/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
asun17904/glue-qnli-t5-base-kd
|
asun17904
| 2024-02-01T23:51:38Z | 0 | 0 |
pytorch
|
[
"pytorch",
"en",
"license:mit",
"region:us"
] | null | 2024-02-01T22:37:11Z |
---
language: en
license: mit
library_name: pytorch
---
# Plainly Optimized Network
Dataset: GLUE
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 8
- `gradient_accumulation_steps` = 2
- `weight_decay` = 1e-09
- `seed` = 42
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|13.281|0.897|1.0|
|12.993|0.905|2.0|
|
silmarillion/orca-2-7B-v01-fine-tuned-using-ludwig-4bit
|
silmarillion
| 2024-02-01T23:12:00Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:microsoft/Orca-2-7b",
"base_model:adapter:microsoft/Orca-2-7b",
"region:us"
] | null | 2024-02-01T19:44:03Z |
---
library_name: peft
base_model: microsoft/Orca-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
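While this section is still a placeholder, the metadata above names the base model and the adapter repository, so a minimal PEFT loading sketch looks like the following; the device and dtype choices are assumptions:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model named in the card metadata, then attach this LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Orca-2-7b", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "silmarillion/orca-2-7B-v01-fine-tuned-using-ludwig-4bit")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Orca-2-7b")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```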
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
Samee-ur/NeuralPipe-7B-slerp
|
Samee-ur
| 2024-02-01T22:59:01Z | 48 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T22:53:39Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: OpenPipe/mistral-ft-optimized-1218
        layer_range: [0, 32]
      - model: mlabonne/NeuralHermes-2.5-Mistral-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Samee-ur/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
jlbaker361/dcgan-cond-wikiart500-resized
|
jlbaker361
| 2024-02-01T22:34:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-02-01T03:52:45Z |
---
{}
---
Creative Adversarial Network

- epochs: 100
- dataset: jlbaker361/wikiart-balanced500
- n classes: 27
- batch_size: 32
- images were resized to 768 and then center cropped to 512
- used clip=False

discriminator parameters:
- init_dim: 32
- final_dim: 512

generator parameters:
- input noise_dim: 100
|
alpindale/miquella-120b
|
alpindale
| 2024-02-01T22:11:47Z | 44 | 20 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-29T12:45:17Z |
---
base_model: []
tags:
- mergekit
- merge
---
# Miquella 120B
## Model has been remade with the [fixed dequantization](https://huggingface.co/152334H/miqu-1-70b-sf) of miqu.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
An attempt at re-creating [goliath-120b](https://huggingface.co/alpindale/goliath-120b) using the new miqu-1-70b model instead of Xwin.
The merge ratios are the same as goliath's, except that Xwin is swapped for miqu.
### Models Merged
The following models were included in the merge:
* [miqu-1-70b](https://huggingface.co/152334H/miqu-1-70b-sf)
* [Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B)

Miquella the Unalloyed, by @eldrtchmoon
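For completeness, a minimal loading sketch with 🤗 Transformers; a 120B merge needs multiple GPUs or aggressive offloading, so `device_map="auto"` and bfloat16 are assumptions about the target hardware:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("alpindale/miquella-120b")
model = AutoModelForCausalLM.from_pretrained(
    "alpindale/miquella-120b", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Tell me a short story about a llama.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```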
|
mlx-community/Mistral7B-Inst-v0.2-4bit-mlx-distilabel-capybara-dpo-7k
|
mlx-community
| 2024-02-01T22:10:10Z | 13 | 6 |
mlx
|
[
"mlx",
"safetensors",
"mistral",
"finetuned",
"text-generation",
"conversational",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-02-01T07:16:27Z |
---
license: apache-2.0
tags:
- finetuned
- mlx
pipeline_tag: text-generation
inference: false
---

# Mistral7B-Inst-v0.2-4bit-mlx-distilabel-capybara-dpo-7k
This model was converted to MLX format from [`mlx-community/Mistral-7B-Instruct-v0.2-8-bit-mlx`](https://huggingface.co/mlx-community/Mistral-7B-Instruct-v0.2-8-bit-mlx).
Refer to the [original model card](https://huggingface.co/mlx-community/Mistral-7B-Instruct-v0.2-8-bit-mlx) for more details on the model.
Fine-tuned using a DPO dataset by Argilla: [distilabel-capybara-dpo-7k-binarized](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized).
## Use with mlx
```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model mlx-community/Mistral7B-Inst-v0.2-4bit-mlx-distilabel-capybara-dpo-7k --prompt "What weighs more, 1kg of feathers or 0.5kg of steel?"
```
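Alternatively, the standalone `mlx-lm` package exposes a Python API; this is a sketch under the assumption that `pip install mlx-lm` can load this repository's MLX weights:
```python
from mlx_lm import load, generate

# Load the quantized weights and tokenizer directly from the Hub.
model, tokenizer = load("mlx-community/Mistral7B-Inst-v0.2-4bit-mlx-distilabel-capybara-dpo-7k")
response = generate(
    model,
    tokenizer,
    prompt="Which weighs more, 1 kg of feathers or 0.5 kg of steel?",
    verbose=True,
)
```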
|
mtc/mistralai-Mistral-7B-v0.1-7b-xnli-100-5-epoch-lora-full
|
mtc
| 2024-02-01T22:00:59Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-02-01T22:00:32Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
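As with the other adapters in this series, the card metadata names the base model, so a minimal sketch for attaching the adapter follows; 4-bit loading is an assumption, not something the card specifies:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Quantize the base model to 4-bit to fit on a single consumer GPU (assumption).
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "mtc/mistralai-Mistral-7B-v0.1-7b-xnli-100-5-epoch-lora-full")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```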
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
renatex333/ppo-LunarLander-v2
|
renatex333
| 2024-02-01T21:42:13Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-01T21:41:51Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 207.74 +/- 71.48
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename is an assumption based on the usual SB3 Hub naming convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is assumed; check the repository's file list if it differs.
checkpoint = load_from_hub(repo_id="renatex333/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
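Once loaded, the policy can be scored locally. A minimal evaluation sketch, assuming a Stable-Baselines3 version that uses Gymnasium and that `gymnasium[box2d]` is installed:
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Run 10 deterministic episodes and report the mean episodic return.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```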
|
achimoraites/flan-t5-base-samsum
|
achimoraites
| 2024-02-01T21:30:30Z | 108 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"en",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-02-18T21:05:46Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- summarization
datasets:
- samsum
metrics:
- rouge
widget:
- text: 'Olivia: Hey Carter, are you still developing that restaurant business? Carter:
Hi Olivia Carter: Yes, we want to launch next month :) Olivia: Next month? That''s
soon! Congrats :) Carter: thanks, I''m a bit nervous but I seriously believe we''re
delivering something innovative and needed Olivia: I think it''s a great concept
and I am sure you''ll do great! Olivia: I am currently involved with a new restaurant
in the city centre Carter: Which one? Olivia: Spicy and chilled Carter: I heard
about it :) Is it any good? ;) Olivia: I love the restaurant and really like working
there Carter: good for you! Olivia: and here''s the question - are you still looking
for restaurant to include in your discount app? Carter: sure, but I think it would
be better to discuss it in person - would you like to meet up? Olivia: That would
be great!'
example_title: Dialogue 1
- text: 'Chad: Elton John is goat Eva: what do you mean by goat? Frank: greatest of
all time Chad: indeed Eva: ahh... it makes sense now :P'
example_title: Dialogue 2
- text: 'Astonishing X-Men is the name of four X-Men comic book series from Marvel
Comics, the first two of which were limited series. The third volume, an ongoing
series, began in 2004, with its first run written by Joss Whedon and art by John
Cassaday. It was then written by Warren Ellis with art by Simone Bianchi and Phil
Jimenez.[1] Daniel Way and Christos Gage then took over the title writing alternating
stories. They were followed by James Asmus who wrote one issue, then Greg Pak,
who took over for four issues in November 2011.[2] Marjorie Liu wrote the final
21 issues of the series until its end at issue #68 in 2013. The title''s fourth
volume and second ongoing series launched in 2017 during the "ResurrXion" storyline.[3]
The first run was written by Charles Soule and illustrated by a rotating cast
of artists. Matthew Rosenberg and artist Greg Land would then take over the series
before its end in 2018. The original Astonishing X-Men was a four-issue limited
series that replaced Uncanny X-Men during the 1995 alternate universe storyline
"Age of Apocalypse", in which all X-titles were given new names and issue numbers.
In the storyline, Professor X was murdered 20 years in the past by his own son,
Legion. Magneto, witnessing his friend''s death, committed himself to Xavier''s
dream and created his own team of X-Men. However, he was unable to prevent the
rise of the despotic Apocalypse and hence the series primarily dealt with the
X-Men''s battle against him. Astonishing X-Men, written by Scott Lobdell and illustrated
by Joe Madureira, featured a team of X-Men led by Rogue and consisted of Sunfire,
Blink, Morph, Sabretooth and Wildchild. source: https://en.wikipedia.org/wiki/Astonishing_X-Men'
example_title: Wikipedia Article
model-index:
- name: flan-t5-base-samsum
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- type: rouge
value: 46.8876
name: Rouge1
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- type: rouge
value: 47.1604
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzAzNjBhZmU3ZWE1Nzg2OGNmNWQxZTRkMWI3MGJmY2U3NzdiN2NhMzA2ZGY2N2VmOGQzNThkZDg5YmI1NTQzMCIsInZlcnNpb24iOjF9.fj5dvLTJmdTud-r9NBx468b_q7128WFc84Oa_ogUq1YuHfgK9KRBJl0V8YVP-UrVOB-5Mwcy_kVo2gqUq2fQCA
- type: rouge
value: 23.5947
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2ExZTYyMDMzYjQyZWU0NjY4YWZiN2NjMjAyNzUwMzU3ZjQxOTdjZDdhNjE0MDE1NDVmY2Y5MDEyZTI5ODA5ZCIsInZlcnNpb24iOjF9.4XKnhKi4PtU0KnyXnBDRop-tWwDvAgJqbWkuPAVUPThcCjVrpjLiSgTWP49NEK-l3QBaLluoh7M-OF8OTwasBQ
- type: rouge
value: 39.7299
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWZiMDU1ODY4Y2ViOWJlZjhhZTAzNjY4NDhjYzdlYzg1MDRmZDM2ZDFkZGVhNjQzMmZjZDA3OWEzYjUzOTU0NCIsInZlcnNpb24iOjF9.EctQIDlK_ksR7NiCtHsxnWWzUF8WNmZ58JIsTUTjQPqmf8Igm82tihK78S4nit7IF24lug5_Ua7X5gWzMHDvDA
- type: rouge
value: 43.3052
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzQwYTMyOGNlNzJiNDEzMjQ5NzEwMzMyZmRhZDAxOGNhMWNkZjA0YWEyM2NkZGU3ODU3ZDU4ZWFhODkyNzNkOCIsInZlcnNpb24iOjF9.nsQAnUdVTySov7ZkNYJjMbIjb7V87D1w0HFLdOzSq5gaKuZmkAXmh14c_bL4Fbyf3AV_skLCDCJZEnsJHN7mDQ
- type: loss
value: 1.3786224126815796
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDgzMTkxY2EwOWU5MDgyODM3ZjE3MzBiN2Q0YmQ5MDI2MjI2NWNmMjUwZDY4MjZkZDg4ODcwMzVkN2Q4NTRmNSIsInZlcnNpb24iOjF9.vV700h6j3hdlzf-CEDIR3C9XND1jH3nW0r6Njgw0qB3Avfsq6zywr8ip2sxoo6aFCCQcmmcnmHiy7x1_xdwYAA
- type: gen_len
value: 17.3443
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTNjYjFiYjgzNjBlMDY2MWUwZTVmY2Y1OWMwNGZkZTg0Mzc5ZmU2MWIwOWZlYWMzZGM1YWI0NTJjOTFhOTU2YiIsInZlcnNpb24iOjF9.-RshHr8uVG0B4qGh5Tr3bgqqai9R_Xho0M9iQyd5g0fyQJlYhIT12cUkcy2_NKUJEqu_JxSul723UWpiZgBHAQ
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-samsum
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3709
- Rouge1: 46.8876
- Rouge2: 23.2689
- Rougel: 39.5369
- Rougelsum: 43.1602
- Gen Len: 17.2027
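A minimal usage sketch for dialogue summarization with this checkpoint (the dialogue is shortened from the widget example above):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="achimoraites/flan-t5-base-samsum")

dialogue = (
    "Olivia: Hey Carter, are you still developing that restaurant business?\n"
    "Carter: Yes, we want to launch next month :)"
)
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```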
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4403 | 1.0 | 1842 | 1.3829 | 46.5321 | 23.0912 | 39.4008 | 42.8993 | 17.0977 |
| 1.3534 | 2.0 | 3684 | 1.3732 | 47.1111 | 23.4456 | 39.5462 | 43.2534 | 17.4554 |
| 1.2795 | 3.0 | 5526 | 1.3709 | 46.8876 | 23.2689 | 39.5369 | 43.1602 | 17.2027 |
| 1.2313 | 4.0 | 7368 | 1.3736 | 47.4418 | 23.701 | 39.9856 | 43.6294 | 17.2198 |
| 1.1934 | 5.0 | 9210 | 1.3772 | 47.4656 | 23.9199 | 40.0284 | 43.7039 | 17.3162 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jiagaoxiang/stable-video-diffusion-img2vid
|
jiagaoxiang
| 2024-02-01T21:26:34Z | 3 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"diffusers:StableVideoDiffusionPipeline",
"region:us"
] | null | 2024-02-01T21:15:45Z |
All model components are saved from the fp16 variant of stabilityai/stable-video-diffusion-img2vid, except that the `vae` folder is replaced with the fp32 variant of the same checkpoint. This may help solve the black-image issue caused by the VAE.
More context: for SDXL, converting the VAE to fp16 causes NaNs, which result in black images, because of overflowing values inside the VAE weights. Link: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
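A minimal sketch, assuming the components in this repository load like the upstream SVD pipeline; the dtype is left as saved so the fp32 VAE is preserved, and the input image path is a placeholder:
```python
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

pipe = StableVideoDiffusionPipeline.from_pretrained("jiagaoxiang/stable-video-diffusion-img2vid")
pipe.to("cuda")

# "input.png" is a placeholder conditioning image; SVD expects roughly 1024x576.
image = load_image("input.png").resize((1024, 576))
frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```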
|
amaandhada/mistral-7b-easter-egg-finetuned
|
amaandhada
| 2024-02-01T21:23:33Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T21:23:18Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
MaziyarPanahi/CodeLlama-34b-hf-GGUF
|
MaziyarPanahi
| 2024-02-01T21:22:49Z | 62 | 3 |
transformers
|
[
"transformers",
"gguf",
"llama",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"pytorch",
"safetensors",
"text-generation",
"llama-2",
"code",
"arxiv:2308.12950",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us",
"base_model:codellama/CodeLlama-34b-hf",
"base_model:quantized:codellama/CodeLlama-34b-hf",
"license:apache-2.0"
] |
text-generation
| 2024-02-01T13:00:40Z |
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- pytorch
- safetensors
- llama
- text-generation
- llama-2
- code
- arxiv:2308.12950
- license:llama2
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
model_name: CodeLlama-34b-hf-GGUF
base_model: codellama/CodeLlama-34b-hf
inference: false
model_creator: codellama
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/CodeLlama-34b-hf-GGUF](https://huggingface.co/MaziyarPanahi/CodeLlama-34b-hf-GGUF)
- Model creator: [codellama](https://huggingface.co/codellama)
- Original model: [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf)
## Description
[MaziyarPanahi/CodeLlama-34b-hf-GGUF](https://huggingface.co/MaziyarPanahi/CodeLlama-34b-hf-GGUF) contains GGUF format model files for [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/CodeLlama-34b-hf-GGUF](https://huggingface.co/MaziyarPanahi/CodeLlama-34b-hf-GGUF) and below it, a specific filename to download, such as: CodeLlama-34b-hf-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/CodeLlama-34b-hf-GGUF CodeLlama-34b-hf-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download [MaziyarPanahi/CodeLlama-34b-hf-GGUF](https://huggingface.co/MaziyarPanahi/CodeLlama-34b-hf-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/CodeLlama-34b-hf-GGUF CodeLlama-34b-hf-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m CodeLlama-34b-hf-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, set the CMAKE_ARGS variable in PowerShell like this; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./CodeLlama-34b-hf-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
    """<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant""",  # Prompt (triple-quoted so the multi-line template is valid Python)
    max_tokens=512,  # Generate up to 512 tokens
    stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True        # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./CodeLlama-34b-hf-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
|
mtc/mistralai-Mistral-7B-v0.1-7b-xsum-75-5-epoch-lora-full
|
mtc
| 2024-02-01T21:20:43Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-02-01T21:20:19Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
jlbaker361/dcgan-lazy-wikiart1000-clip-resized
|
jlbaker361
| 2024-02-01T21:17:01Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-02-01T14:17:12Z |
---
{}
---
Creative Adversarial Network

- epochs: 2
- dataset: jlbaker361/wikiart-balanced1000
- n classes: 27
- batch_size: 4
- images were resized to 768 and then center cropped to 512
- used clip=True

discriminator parameters:
- init_dim: 32
- final_dim: 512

generator parameters:
- input noise_dim: 100
|
mtc/mistralai-Mistral-7B-v0.1-7b-xnli-100-lora-full
|
mtc
| 2024-02-01T21:16:28Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-02-01T21:16:03Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
mtc/mistralai-Mistral-7B-v0.1-7b-xsum-100-lora-full
|
mtc
| 2024-02-01T21:11:12Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-02-01T21:10:49Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
mtc/mistralai-Mistral-7B-v0.1-7b-xsum-with-explanation-100-lora-full
|
mtc
| 2024-02-01T21:10:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-02-01T21:10:26Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
HazemHM/Reinforce-Pixelcopter
|
HazemHM
| 2024-02-01T21:08:18Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-01T18:22:20Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 54.90 +/- 54.56
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
birgermoell/BeagleCatMunin-Flashback-Bellman
|
birgermoell
| 2024-02-01T20:52:14Z | 31 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"birgermoell/Flashback-Bellman",
"base_model:birgermoell/Flashback-Bellman",
"base_model:finetune:birgermoell/Flashback-Bellman",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T20:47:47Z |
---
tags:
- merge
- mergekit
- lazymergekit
- birgermoell/Flashback-Bellman
base_model:
- birgermoell/Flashback-Bellman
---
# BeagleCatMunin-Flashback-Bellman
BeagleCatMunin-Flashback-Bellman is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [birgermoell/Flashback-Bellman](https://huggingface.co/birgermoell/Flashback-Bellman)
## 🧩 Configuration
```yaml
models:
- model: timpal0l/BeagleCatMunin
# No parameters necessary for base model
- model: birgermoell/Flashback-Bellman
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: timpal0l/BeagleCatMunin
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "birgermoell/BeagleCatMunin-Flashback-Bellman"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
cartesigpt/cartesigpt
|
cartesigpt
| 2024-02-01T20:37:27Z | 58 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-v0.1-GPTQ",
"base_model:quantized:TheBloke/Mistral-7B-v0.1-GPTQ",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-01-31T11:40:43Z |
---
license: apache-2.0
base_model: TheBloke/Mistral-7B-v0.1-GPTQ
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: cartesi-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cartesi-finetuned
This model is a fine-tuned version of [TheBloke/Mistral-7B-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
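For readers wanting to reproduce a comparable setup, the listed settings map onto `transformers.TrainingArguments` roughly as sketched below; this is an illustrative, hedged mapping, not the script that was actually used, and the GPTQ base model, SFT dataset, and collator setup are omitted.
```python
from transformers import TrainingArguments

# Rough mapping of the hyperparameters listed above (illustration only)
training_args = TrainingArguments(
    output_dir="cartesi-finetuned",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=3,   # effective train batch size: 4 * 3 = 12
    max_steps=250,
    lr_scheduler_type="cosine",
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```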
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.14.7
- Tokenizers 0.14.1
|
sjonas50/sft_zephyr
|
sjonas50
| 2024-02-01T20:29:57Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-alpha",
"base_model:adapter:HuggingFaceH4/zephyr-7b-alpha",
"license:mit",
"region:us"
] | null | 2024-02-01T20:29:40Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: HuggingFaceH4/zephyr-7b-alpha
model-index:
- name: sft_zephyr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_zephyr
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
asun17904/glue-qnli-bert-base-uncased-alum
|
asun17904
| 2024-02-01T20:18:24Z | 0 | 0 |
pytorch
|
[
"pytorch",
"en",
"license:mit",
"region:us"
] | null | 2024-02-01T18:03:01Z |
---
language: en
license: mit
library_name: pytorch
---
# Plainly Optimized Network
Dataset: GLUE
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 8
- `gradient_accumulation_steps` = 2
- `weight_decay` = 0.0
- `seed` = 42
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|0.423|0.886|1.0|
|0.412|0.899|2.0|
|
gizemsoylutr/sap-sustainability-ai
|
gizemsoylutr
| 2024-02-01T20:16:57Z | 0 | 0 | null |
[
"en",
"license:wtfpl",
"region:us"
] | null | 2024-02-01T20:15:57Z |
---
license: wtfpl
language:
- en
---
|
ryusangwon/xsum_1677_bart-base
|
ryusangwon
| 2024-02-01T20:16:00Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-31T13:17:56Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: xsum_1677_bart-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xsum_1677_bart-base
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6469
- Rouge1: 0.3879
- Rouge2: 0.1787
- Rougel: 0.3238
- Rougelsum: 0.3238
- Gen Len: 19.6644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.8336 | 0.31 | 500 | 0.7274 | 0.3493 | 0.139 | 0.2847 | 0.2847 | 19.511 |
| 0.7963 | 0.63 | 1000 | 0.6994 | 0.3637 | 0.1506 | 0.2977 | 0.2976 | 19.6179 |
| 0.7543 | 0.94 | 1500 | 0.6876 | 0.365 | 0.1531 | 0.2999 | 0.2999 | 19.5356 |
| 0.7461 | 1.25 | 2000 | 0.6795 | 0.3709 | 0.1584 | 0.3052 | 0.3051 | 19.6224 |
| 0.7193 | 1.57 | 2500 | 0.6739 | 0.3684 | 0.1593 | 0.3048 | 0.3047 | 19.5721 |
| 0.7225 | 1.88 | 3000 | 0.6666 | 0.371 | 0.16 | 0.3063 | 0.3063 | 19.5672 |
| 0.6779 | 2.2 | 3500 | 0.6660 | 0.3745 | 0.1632 | 0.31 | 0.31 | 19.5619 |
| 0.673 | 2.51 | 4000 | 0.6618 | 0.3763 | 0.1653 | 0.3117 | 0.3117 | 19.6738 |
| 0.6848 | 2.82 | 4500 | 0.6578 | 0.3803 | 0.168 | 0.3145 | 0.3145 | 19.6308 |
| 0.6526 | 3.14 | 5000 | 0.6581 | 0.3803 | 0.1679 | 0.3141 | 0.3141 | 19.6503 |
| 0.6497 | 3.45 | 5500 | 0.6555 | 0.3776 | 0.1681 | 0.3132 | 0.3133 | 19.643 |
| 0.6483 | 3.76 | 6000 | 0.6520 | 0.3803 | 0.17 | 0.3153 | 0.3152 | 19.6666 |
| 0.6249 | 4.08 | 6500 | 0.6535 | 0.383 | 0.1736 | 0.3186 | 0.3185 | 19.6371 |
| 0.628 | 4.39 | 7000 | 0.6531 | 0.3825 | 0.1728 | 0.3181 | 0.318 | 19.6159 |
| 0.6288 | 4.7 | 7500 | 0.6495 | 0.3827 | 0.1727 | 0.3181 | 0.3181 | 19.6695 |
| 0.5921 | 5.02 | 8000 | 0.6509 | 0.3825 | 0.173 | 0.318 | 0.318 | 19.6447 |
| 0.6003 | 5.33 | 8500 | 0.6513 | 0.3833 | 0.1742 | 0.3198 | 0.3197 | 19.6866 |
| 0.5922 | 5.65 | 9000 | 0.6482 | 0.3837 | 0.1737 | 0.3195 | 0.3195 | 19.719 |
| 0.5878 | 5.96 | 9500 | 0.6483 | 0.3824 | 0.1737 | 0.3185 | 0.3185 | 19.6156 |
| 0.5646 | 6.27 | 10000 | 0.6503 | 0.3851 | 0.1754 | 0.3203 | 0.3204 | 19.6693 |
| 0.5753 | 6.59 | 10500 | 0.6473 | 0.3855 | 0.1761 | 0.3206 | 0.3206 | 19.6873 |
| 0.579 | 6.9 | 11000 | 0.6467 | 0.3861 | 0.1769 | 0.3223 | 0.3223 | 19.6635 |
| 0.5865 | 7.21 | 11500 | 0.6480 | 0.3862 | 0.176 | 0.3213 | 0.3212 | 19.7016 |
| 0.5746 | 7.53 | 12000 | 0.6480 | 0.3878 | 0.1785 | 0.3235 | 0.3236 | 19.6531 |
| 0.5678 | 7.84 | 12500 | 0.6460 | 0.3868 | 0.1776 | 0.3221 | 0.322 | 19.7039 |
| 0.5584 | 8.15 | 13000 | 0.6485 | 0.3875 | 0.178 | 0.3233 | 0.3233 | 19.6565 |
| 0.5484 | 8.47 | 13500 | 0.6477 | 0.3867 | 0.1777 | 0.3223 | 0.3224 | 19.6937 |
| 0.558 | 8.78 | 14000 | 0.6468 | 0.3873 | 0.1781 | 0.323 | 0.323 | 19.6823 |
| 0.5482 | 9.1 | 14500 | 0.6475 | 0.3878 | 0.1787 | 0.3231 | 0.3232 | 19.6896 |
| 0.5551 | 9.41 | 15000 | 0.6475 | 0.388 | 0.1783 | 0.3238 | 0.3237 | 19.666 |
| 0.5488 | 9.72 | 15500 | 0.6469 | 0.3879 | 0.1787 | 0.3238 | 0.3238 | 19.6644 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
mrzeiss/Rafale-PA10
|
mrzeiss
| 2024-02-01T20:12:24Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T19:47:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wknehrboss/autotrain-rdceb-j7r69
|
wknehrboss
| 2024-02-01T20:11:32Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"phi",
"text-generation",
"autotrain",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T19:52:15Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
momori-chegg/masa_model_test
|
momori-chegg
| 2024-02-01T19:28:13Z | 0 | 0 | null |
[
"moe",
"fr",
"it",
"de",
"es",
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-01-04T23:01:06Z |
---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
tags:
- moe
---
| Name | Age |
|-------------------|-------|
| Alice raamaowiejb | 24 |
| Bob | 19 |
# Model Card for Mixtral-8x7B
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.
|
sanchit-gandhi/distil-zephyr-1.5b-ssft
|
sanchit-gandhi
| 2024-02-01T19:26:22Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:sanchit-gandhi/Mistral-1.5B-v0.1",
"base_model:finetune:sanchit-gandhi/Mistral-1.5B-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T17:10:34Z |
---
base_model: sanchit-gandhi/Mistral-7B-v0.1-6-layer
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: sanchit-gandhi/Mistral-7B-v0.1-6-layer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sanchit-gandhi/Mistral-7B-v0.1-6-layer
This model is a fine-tuned version of [sanchit-gandhi/Mistral-7B-v0.1-6-layer](https://huggingface.co/sanchit-gandhi/Mistral-7B-v0.1-6-layer) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 512
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.8342 | 1.0 | 273 | 4.7379 |
| 3.3301 | 2.0 | 546 | 3.2846 |
| 2.4158 | 3.0 | 819 | 2.4134 |
| 2.1322 | 4.0 | 1092 | 2.1637 |
| 2.0369 | 5.0 | 1365 | 2.1183 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.0
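The card does not include a usage snippet, so here is a minimal, hedged generation sketch; it assumes the tokenizer ships a Zephyr-style chat template, which should be verified against the repository.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sanchit-gandhi/distil-zephyr-1.5b-ssft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain knowledge distillation in one sentence."}]
# Assumes a chat template is bundled with the tokenizer (Zephyr-style SFT)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```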
|
jlbaker361/dcgan-lazy-wikiart500-clip-resized
|
jlbaker361
| 2024-02-01T19:25:35Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-02-01T13:58:21Z |
---
{}
---
Creative Adversarial Network
epochs: 2
dataset: jlbaker361/wikiart-balanced500
n_classes: 27
batch_size: 4
images were resized to 768 and then center cropped to 512
used clip=True
discriminator parameters:
init_dim: 32
final_dim: 512
generator parameters:
input noise_dim: 100
|
Katelie/q-taxi
|
Katelie
| 2024-02-01T19:25:10Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-01T18:20:34Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # the Deep RL Course notebooks use gymnasium; classic `gym` also provides Taxi-v3

# `load_from_hub` is the small helper from the Deep RL Course materials that downloads and unpickles the model dict
model = load_from_hub(repo_id="Katelie/q-taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
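Continuing from the snippet above, the downloaded dictionary can be used to run a greedy episode roughly as sketched below; the `"qtable"` key name and dict layout are assumptions based on the Deep RL Course format, so inspect the pickle before relying on them.
```python
import numpy as np

# Assumed layout: {"env_id": ..., "qtable": array of shape (n_states, n_actions), ...}
qtable = model["qtable"]

state, _ = env.reset()            # gymnasium API: reset() returns (obs, info)
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))                 # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```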
|
Weni/WeniGPT-2.3.3-Zephyr-7B-alpaca-prompt-step3742-merge-LLM_Base_2.0.3_SFT_reduction_variation
|
Weni
| 2024-02-01T19:24:52Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T19:22:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Randomtalks/dome
|
Randomtalks
| 2024-02-01T19:21:42Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"endpoints_compatible",
"region:us"
] | null | 2024-02-01T20:04:04Z |
# DOME wrapper for docstring intent classification
This wrapper allows you to:
* split docstrings into sentences
* convert to required DOME inputs
* predict class for each sentence in docstring
## Model architecture
The architecture is based on https://github.com/ICSE-DOME/DOME.
## Usage
```python
docstring = "sentences of docstring"
dome = DOME("dome_location")
sentences, predictions = dome.predict(docstring)
```
## Dependencies
```
spacy
torch
transformers
```
## Code of the model
````python
"""
Model is based on replication package for ICSE23 Paper Developer-Intent Driven Code Comment Generation.
Initial solution: https://github.com/ICSE-DOME/DOME
Pipeline consists of several parts:
* split docstring into sentences
* prepare input data for DOMEBertForClassification
* predict class
How to use:
```python
docstring = "sentences of docstring"
dome = DOME("dome_location")
sentences, predictions = dome.predict(docstring)
```
"""
import re
from typing import Tuple, List
import spacy
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoTokenizer, RobertaConfig, RobertaModel
MAX_LENGTH_BERT = 510
class DOME:
"""
End-to-end pipeline for docstring classification
* split sentences
* prepare inputs
* classify
"""
def __init__(self, pretrained_model: str):
"""
:param pretrained_model: location of pretrained model
"""
self.model = DOMEBertForClassification.from_pretrained(pretrained_model)
self.tokenizer = AutoTokenizer.from_pretrained(pretrained_model)
self.docstring2sentences = Docstring2Sentences()
def predict(self, docstring: str) -> Tuple[List[str], List[str]]:
"""
Predict DOME classes for each sentence in docstring.
:param docstring: docstring to process
:return: tuple with list of sentences and list of predictions for each sentence.
"""
sentences = self.docstring2sentences.docstring2sentences(docstring)
predictions = [self.model.predict(*dome_preprocess(tokenizer=self.tokenizer, comment=sentence))
for sentence in sentences]
return sentences, predictions
class DOMEBertForClassification(RobertaModel):
"""
A custom classification model based on the RobertaModel for intent classification.
This model extends the RobertaModel with additional linear layers to incorporate
comment length as an additional feature for classification tasks.
"""
DOME_CLASS_NAMES = ["what", "why", "how-to-use", "how-it-is-done", "property", "others"]
def __init__(self, config: RobertaConfig):
"""
Initialize the DOMEBertForClassification model.
:param config: The configuration information for the RobertaModel.
"""
super().__init__(config)
# I omit possibility to configure number of classes and so on because we need to load pretrained model
# DOME layers for intent classification:
self.fc1 = nn.Linear(768 + 1, 768 // 3)
self.fc2 = nn.Linear(768 // 3, 6)
self.dropout = nn.Dropout(0.2)
def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor = None, comment_len: torch.Tensor = None) \
-> torch.Tensor:
"""
Forward pass for the DOMEBertForClassification model.
:param input_ids: Tensor of token ids to be fed to a model.
:param attention_mask: Mask to avoid performing attention on padding token indices. Always equals 1.
:param comment_len: Tensor representing the length of comments. Equal 1 if comment has less than 3 words,
0 otherwise.
:return: The logits after passing through the model.
"""
# Use the parent class's forward method to get the base outputs
outputs = super().forward(
input_ids=input_ids,
attention_mask=attention_mask
)
# Extract the pooled output (last hidden state of the [CLS] token)
pooled_output = outputs.pooler_output
# DOME custom layers:
comment_len = comment_len.view(-1, 1).float() # Ensure comment_len is correctly shaped
# DOME use comment len as additional feature
combined_input = torch.cat([pooled_output, comment_len], dim=-1)
x = self.dropout(F.relu(self.fc1(self.dropout(combined_input))))
logits = self.fc2(x)
return logits
def predict(self, input_ids: torch.Tensor, attention_mask: torch.Tensor = None, comment_len: torch.Tensor = None) \
-> str:
"""
Predict class for tokenized docstring.
:param input_ids: Tensor of token ids to be fed to a model.
:param attention_mask: Mask to avoid performing attention on padding token indices. Always equals 1.
:param comment_len: Tensor representing the length of comments. Equal 1 if comment has less than 3 words,
0 otherwise.
:return: class
"""
logits = self.forward(input_ids=input_ids, attention_mask=attention_mask, comment_len=comment_len)
return self.DOME_CLASS_NAMES[int(torch.argmax(logits, 1))]
def dome_preprocess(tokenizer, comment):
"""
DOME preprocessor - returns all required values for "DOMEBertForClassification.forward".
This function limits maximum number of tokens to fit into BERT.
:param tokenizer: tokenizer to use.
:param comment: text of sentence from docstring/comment that should be classified by DOMEBertForClassification.
:return: tuple with (input_ids, attention_mask, comment_len).
"""
input_ids = tokenizer.convert_tokens_to_ids([tokenizer.cls_token] + tokenizer.tokenize(comment) +
[tokenizer.sep_token])[:MAX_LENGTH_BERT]
attention_mask = [1] * len(input_ids)
if len(comment.strip().split()) < 3:
comment_len = 1
else:
comment_len = 0
return (torch.tensor(input_ids).unsqueeze(0), torch.tensor(attention_mask).unsqueeze(0),
torch.tensor(comment_len).unsqueeze(0))
class Docstring2Sentences:
"""Helper class to split docstrings into sentences"""
def __init__(self):
self.spacy_nlp = spacy.load("en_core_web_sm")
@staticmethod
def split_docstring(docstring: str, delimiters: List[Tuple[str, str]]):
"""
Splits the docstring into separate parts of text and code blocks, preserving the original formatting.
:param docstring: The docstring to split.
:param delimiters: A list of tuples, each containing start and end delimiters for code blocks.
:return: A list of strings, each either a text block or a code block.
"""
# Escape delimiter parts for regex and create a combined pattern
escaped_delimiters = [tuple(map(re.escape, d)) for d in delimiters]
combined_pattern = '|'.join([f'({start}.*?{end})' for start, end in escaped_delimiters])
# Split using the combined pattern, preserving the delimiters
parts = re.split(combined_pattern, docstring, flags=re.DOTALL)
# Filter out empty strings
parts = [part for part in parts if part]
return parts
@staticmethod
def is_only_spaces_and_newlines(string):
"""
Check if the given string contains only spaces and newlines.
:param string: The string to check.
:return: True if the string contains only spaces and newlines, False otherwise.
"""
return bool(re.match(r'^[\s\n]+$', string))
def docstring2sentences(self, docstring):
"""
Splits a docstring into individual sentences, preserving code blocks.
This function uses `docstring2parts` to split the docstring into parts based on predefined code block delimiters.
It then utilizes a SpaCy NLP model to split the non-code text parts into sentences.
Code blocks are kept intact as single elements.
:param docstring: The docstring to be processed, which may contain both regular text and code blocks.
:return: A list containing individual sentences and intact code blocks.
"""
delimiters = [("@code", "@endcode"), ("\code", "\endcode")]
parts = self.split_docstring(docstring=docstring, delimiters=delimiters)
sentences = []
for part in parts:
if part[1:5] == "code" and part[-7:] == "endcode":
# code block
sentences.append(part)
else:
sentences.extend(sentence.text for sentence in self.spacy_nlp(part).sents)
return [sentence for sentence in sentences if not self.is_only_spaces_and_newlines(sentence)]
````
|
hsan512/ppo-LunarLander-v2
|
hsan512
| 2024-02-01T19:17:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-30T15:39:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.45 +/- 22.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
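The card leaves the usage snippet as a TODO; a minimal, hedged sketch with stable-baselines3 and huggingface_sb3 follows. The checkpoint filename `ppo-LunarLander-v2.zip` is an assumption — check the repository's file list for the actual name.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename assumed; verify it in the repo)
checkpoint = load_from_hub(repo_id="hsan512/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```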
|
ludis/tsukasa-8x7b-qlora-gptq
|
ludis
| 2024-02-01T19:12:23Z | 3 | 0 |
transformers
|
[
"transformers",
"mixtral",
"text-generation",
"dataset:PygmalionAI/PIPPA",
"dataset:lemonilia/LimaRP",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-15T01:04:08Z |
---
datasets:
- PygmalionAI/PIPPA
- lemonilia/LimaRP
---
## Gen Settings & Prompting
https://rentry.org/tsukasamodel
## Training
[axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) was used for training on a 4x NVIDIA A100 GPU cluster.
The A100 GPU cluster has been graciously provided by [lloorree](https://huggingface.co/lloorree).
Rank 16 QLoRA (all modules) tune.
The base model mistralai/Mixtral-8x7B-v0.1 was tuned on koishi commit 6e675d1 for one epoch,
then tuned on pippa 6412b0c for one epoch (metharme completion),
then tuned on limarp Version 2023-10-19 for 2 epochs in metharme completion format with limit_data_length set to 32768 in dataprepare-templates.py.
|
ludis/tsukasa-8x7b-qlora
|
ludis
| 2024-02-01T19:11:36Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mixtral",
"text-generation",
"dataset:PygmalionAI/PIPPA",
"dataset:lemonilia/LimaRP",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-15T01:04:04Z |
---
datasets:
- PygmalionAI/PIPPA
- lemonilia/LimaRP
---
## Gen Settings & Prompting
https://rentry.org/tsukasamodel
## Training
[axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) was used for training on a 4x NVIDIA A100 GPU cluster.
The A100 GPU cluster has been graciously provided by [lloorree](https://huggingface.co/lloorree).
Rank 16 QLoRA (all modules) tune.
The base model mistralai/Mixtral-8x7B-v0.1 was tuned on koishi commit 6e675d1 for one epoch,
then tuned on pippa 6412b0c for one epoch (metharme completion),
then tuned on limarp Version 2023-10-19 for 2 epochs in metharme completion format.
|
ludis/tsukasa-120b-qlora-gptq
|
ludis
| 2024-02-01T19:11:13Z | 4 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"dataset:PygmalionAI/PIPPA",
"dataset:lemonilia/LimaRP",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-29T14:54:49Z |
---
datasets:
- PygmalionAI/PIPPA
- lemonilia/LimaRP
---
## Gen Settings & Prompting
https://rentry.org/tsukasamodel
## GPTQ
- sequence length: 2048
- dataset: wikitext
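A minimal, hedged loading sketch is shown below; it assumes a recent transformers release with the GPTQ backend (optimum + auto-gptq) installed, and it does not encode the recommended prompt format — see the rentry page linked above for generation settings and prompting.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ludis/tsukasa-120b-qlora-gptq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Requires the GPTQ backend (optimum + auto-gptq) to be installed
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Placeholder prompt; follow https://rentry.org/tsukasamodel for the recommended format
inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```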
## Training
[axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) was used for training on an 8x NVIDIA A100 GPU cluster.
The A100 GPU cluster has been graciously provided by [lloorree](https://huggingface.co/lloorree).
Rank 8 QLoRA (all modules) tune.
The base model alpindale/goliath-120b was tuned on koishi commit 6e675d1 for one epoch,
then tuned on pippa 6412b0c for one epoch (metharme completion),
then tuned on limarp (without ponyville, lolicit, all the fallen, and eka's portal subsets) Version 2023-10-19 for 2 epochs in metharme completion format.
|
Tanor/sr_pln_tesla_dbmu
|
Tanor
| 2024-02-01T19:10:31Z | 1 | 0 |
spacy
|
[
"spacy",
"token-classification",
"sr",
"license:cc-by-sa-3.0",
"model-index",
"region:us"
] |
token-classification
| 2024-02-01T18:59:33Z |
---
tags:
- spacy
- token-classification
language:
- sr
license: cc-by-sa-3.0
model-index:
- name: sr_pln_tesla_dbmu
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9465813405
- name: NER Recall
type: recall
value: 0.9506156136
- name: NER F Score
type: f_score
value: 0.9485941877
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9815057009
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.9797101778
---
sr_pln_tesla_dbmu is a spaCy model meticulously fine-tuned for Part-of-Speech Tagging, Lemmatization, and Named Entity Recognition in Serbian language texts. This advanced model incorporates a transformer layer based on distilbert/distilbert-base-multilingual-cased, enhancing its analytical capabilities. It is proficient in identifying 7 distinct categories of entities: PERS (persons), ROLE (professions), DEMO (demonyms), ORG (organizations), LOC (locations), WORK (artworks), and EVENT (events). Detailed information about these categories is available in the accompanying table. The development of this model has been made possible through the support of the Science Fund of the Republic of Serbia, under grant #7276, for the project 'Text Embeddings - Serbian Language Applications - TESLA'.
| Feature | Description |
| --- | --- |
| **Name** | `sr_pln_tesla_dbmu` |
| **Version** | `1.0.0` |
| **spaCy** | `>=3.7.2,<3.8.0` |
| **Default Pipeline** | `transformer`, `tagger`, `trainable_lemmatizer`, `ner` |
| **Components** | `transformer`, `tagger`, `trainable_lemmatizer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | `CC BY-SA 3.0` |
| **Author** | [Milica Ikonić Nešić, Saša Petalinkar, Mihailo Škorić, Ranka Stanković](https://tesla.rgf.bg.ac.rs/) |
### Label Scheme
<details>
<summary>View label scheme (23 labels for 2 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `VERB`, `X` |
| **`ner`** | `DEMO`, `EVENT`, `LOC`, `ORG`, `PERS`, `ROLE`, `WORK` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 98.15 |
| `LEMMA_ACC` | 97.97 |
| `ENTS_F` | 94.86 |
| `ENTS_P` | 94.66 |
| `ENTS_R` | 95.06 |
| `TRANSFORMER_LOSS` | 604959.76 |
| `TAGGER_LOSS` | 359950.85 |
| `TRAINABLE_LEMMATIZER_LOSS` | 466065.88 |
| `NER_LOSS` | 175653.69 |
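The card does not include a usage snippet; assuming the packaged pipeline has been installed into the current environment (e.g. from the released wheel), loading it follows the standard spaCy pattern sketched below.
```python
import spacy

# Assumes the sr_pln_tesla_dbmu package is installed in this environment
nlp = spacy.load("sr_pln_tesla_dbmu")

doc = nlp("Никола Тесла је рођен у Смиљану.")
print([(token.text, token.tag_, token.lemma_) for token in doc])   # POS tags and lemmas
print([(ent.text, ent.label_) for ent in doc.ents])                # named entities (PERS, LOC, ...)
```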
|
spep/ppo-LunarLander-v2
|
spep
| 2024-02-01T19:02:32Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-01T19:02:14Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.78 +/- 13.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
LoneStriker/limarp-miqu-1-70b-6.0bpw-h6-exl2
|
LoneStriker
| 2024-02-01T19:00:50Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"llama 2",
"en",
"dataset:lemonilia/LimaRP",
"region:us"
] | null | 2024-02-01T18:38:10Z |
---
library_name: peft
tags:
- generated_from_trainer
- llama
- llama 2
model-index:
- name: volume/limarp-70b-qlora
results: []
datasets:
- lemonilia/LimaRP
language:
- en
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: models/miqu-1-70b-sf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: train-all-max-alpaca-llama.jsonl
type: completion
dataset_prepared_path:
val_set_size: 0.0
output_dir: ./volume/limarp-70b-qlora
adapter: qlora
lora_model_dir:
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: 70b-lora
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0001
train_on_inputs: true
group_by_length: false
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
eval_steps:
eval_table_size:
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# limarp-miqu-1-70b-qlora
Experimental LimaRP QLoRA trained at 16384 context length (greater than the length of the longest LimaRP sample when tokenized with Llama's tokenizer) on the fixed, dequantized miqu-1-70b model by 152334H.
I wasn't particularly happy with the results I got when I tried applying the lora at varying weights to the miqu-1-70b model. It's possible that this is related to the fact that the model was dequantized from Q5_K_M GGUF, or perhaps due to it already being an instruct-tuned model.
However, I decided to go ahead and release this in case someone else finds a use for it. Provided as-is and YMMV.
## Model description
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
Inspired by the previously named "Roleplay" preset in SillyTavern, with this version of LimaRP it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The lengths used during training are:
`micro`, `tiny`, `short`, `medium`, `long`, `massive`, `huge`, `enormous`, `humongous`, `unlimited`.
**The recommended starting length is medium**. Keep in mind that the AI can ramble or impersonate
the user with very long messages.
The length control effect is reproducible, but the messages will not necessarily match the requested
lengths very precisely; rather, they follow certain ranges on average, as seen in this table
with data from tests made with one reply at the beginning of the conversation:

Response length control also appears to work well deep into the conversation. **By omitting
the modifier, the model will choose the most appropriate response length** (although it might
not necessarily be what the user desires).
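For illustration, a small sketch that assembles this prompt format programmatically (the helper name and the persona/scenario strings are hypothetical):
```python
def build_limarp_prompt(bot_persona: str, user_persona: str, scenario: str,
                        turns: list[tuple[str, str]], length: str = "medium") -> str:
    """Assemble a LimaRP v3 Alpaca-style prompt with a response-length modifier."""
    prompt = (
        "### Instruction:\n"
        f"Character's Persona: {bot_persona}\n"
        f"User's Persona: {user_persona}\n"
        f"Scenario: {scenario}\n"
        "Play the role of Character. Taking the above information into consideration, "
        "you must engage in a roleplaying chat with User below this line. "
        "Do not write dialogues and narration for User.\n"
    )
    for user_msg, char_msg in turns:
        prompt += f"### Input:\nUser: {user_msg}\n### Response: (length = {length})\nCharacter: {char_msg}"
    return prompt

# The model is expected to continue after the final "Character:" line.
print(build_limarp_prompt("a witty librarian", "a curious traveler", "a chance meeting at a library", [("Hello!", "")]))
```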
## Intended uses & limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model.
## Training and evaluation data
For more details about LimaRP, see the dataset page.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
wliu88/blip2-opt-2.7b-peft-slider
|
wliu88
| 2024-02-01T18:47:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-01T18:10:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/limarp-miqu-1-70b-5.0bpw-h6-exl2
|
LoneStriker
| 2024-02-01T18:38:08Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"llama 2",
"en",
"dataset:lemonilia/LimaRP",
"region:us"
] | null | 2024-02-01T18:19:29Z |
---
library_name: peft
tags:
- generated_from_trainer
- llama
- llama 2
model-index:
- name: volume/limarp-70b-qlora
results: []
datasets:
- lemonilia/LimaRP
language:
- en
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: models/miqu-1-70b-sf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: train-all-max-alpaca-llama.jsonl
type: completion
dataset_prepared_path:
val_set_size: 0.0
output_dir: ./volume/limarp-70b-qlora
adapter: qlora
lora_model_dir:
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: 70b-lora
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0001
train_on_inputs: true
group_by_length: false
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
eval_steps:
eval_table_size:
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# limarp-miqu-1-70b-qlora
Experimental LimaRP QLoRA trained at 16384 context length (greater than the length of the longest LimaRP sample when tokenized with Llama's tokenizer) on the fixed, dequantized miqu-1-70b model by 152334H.
I wasn't particularly happy with the results I got when I tried applying the lora at varying weights to the miqu-1-70b model. It's possible that this is related to the fact that the model was dequantized from Q5_K_M GGUF, or perhaps due to it already being an instruct-tuned model.
However, I decided to go ahead and release this in case someone else finds a use for it. Provided as-is and YMMV.
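For reference, a minimal sketch of applying the adapter with 🤗 PEFT and Transformers; the two paths come from the axolotl config above and are local placeholders (point them at your own copies of the dequantized base model and the trained LoRA directory):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Local placeholder paths taken from the training config (base_model / output_dir).
base = AutoModelForCausalLM.from_pretrained(
    "models/miqu-1-70b-sf", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("models/miqu-1-70b-sf")
model = PeftModel.from_pretrained(base, "volume/limarp-70b-qlora")
merged = model.merge_and_unload()  # optionally merge the LoRA weights into the base model
```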
## Model description
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
Inspired by the previously named "Roleplay" preset in SillyTavern, with this version of LimaRP it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The lengths used during training are:
`micro`, `tiny`, `short`, `medium`, `long`, `massive`, `huge`, `enormous`, `humongous`, `unlimited`.
**The recommended starting length is medium**. Keep in mind that the AI can ramble or impersonate
the user with very long messages.
The length control effect is reproducible, but the messages will not necessarily match the requested
lengths very precisely; rather, they follow certain ranges on average, as seen in this table
with data from tests made with one reply at the beginning of the conversation:

Response length control also appears to work well deep into the conversation. **By omitting
the modifier, the model will choose the most appropriate response length** (although it might
not necessarily be what the user desires).
## Intended uses & limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model.
## Training and evaluation data
For more details about LimaRP, see the dataset page.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
binbin83/setfit-MiniLM-dialog-themes-13-nov
|
binbin83
| 2024-02-01T18:37:56Z | 49 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"setfit",
"text-classification",
"fr",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2024-02-01T16:59:48Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
language:
- fr
metrics:
- f1
---
# binbin83/setfit-MiniLM-dialog-themes-13-nov
The model is a multi-class, multi-label text classifier for distinguishing the different dialog acts in semi-structured interviews. The data used for fine-tuning were in French.
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("binbin83/setfit-MiniLM-dialog-themes-13-nov")
label_dict = {'CauseConsequences': 0, 'PersonalExperience': 1, 'Connaissance': 2, 'Other': 3, 'Reconstitution': 4, 'Temps': 5, 'Reaction': 6, 'Nouvelle': 7, 'Media': 8, 'Lieux': 9}
# Run inference
preds = model(["Vous pouvez continuer", "Pouvez-vous me dire précisément quel a été l'odre chronologique des événements ?"])
labels = [[name for name, p in zip(label_dict, pred) if p] for pred in preds]  # map each multi-label prediction back to label names
```
## Labels and training data
Based on the interview guide, the themes evoked in the interviews were:
['CauseConsequences', 'PersonalExperience', 'Connaissance', 'Other', 'Reconstitution', 'Temps', 'Reaction', 'Nouvelle', 'Media', 'Lieux']
We labeled a small amount of data:
('Other', 50), ('Reaction', 46), ('PersonalExperience', 41), ('CauseConsequences', 41), ('Media', 27), ('Lieux', 13), ('Nouvelle', 10), ('Temps', 9), ('Reconstitution', 7), ('Connaissance', 3)
and fine-tuned a SetFit model on it.
## Training and Performances
We finetune: "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
using SetFit with CosineLossSimilarity and this parapeters: epochs = 10, batch_size=32, num_iterations = 20
On our test dataset, we get this results:
{'f1': 0.639, 'f1_micro': 0.6808510638297872, 'f1_sample': 0.6666666666666666, 'accuracy': 0.6086956521739131}
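For reference, a minimal training sketch following the classic `SetFitTrainer` API with the parameters above (the toy dataset and the multi-label strategy are assumptions):
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Toy multi-label example; the real training data are the annotated interview segments described above.
train_ds = Dataset.from_dict({
    "text": ["Vous pouvez continuer"],
    "label": [[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]],  # one-hot vector over the 10 themes
})

model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
    multi_target_strategy="one-vs-rest",  # multi-label classification head
)
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,
    batch_size=32,
    num_iterations=20,
    num_epochs=10,
)
trainer.train()
```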
## BibTeX entry and citation info
To cite the current study:
```bibtex
@article{
doi = {conference paper},
url = {https://arxiv.org/abs/2209.11055},
author = {Quillivic Robin, Charles Payet},
keywords = {NLP, JADT},
title = {Semi-Structured Interview Analysis: A French NLP Toolbox for Social Sciences},
publisher = {JADT},
year = {2024},
copyright = {Creative Commons Attribution 4.0 International}
}
```
To cite the setFit paper:
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
AmrutaMuthal/mero_controlnet_scaled_thick_box_lr2
|
AmrutaMuthal
| 2024-02-01T18:33:38Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-02-01T17:25:44Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-AmrutaMuthal/mero_controlnet_scaled_thick_box_lr2
These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
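A minimal inference sketch with 🤗 Diffusers; the conditioning image is a placeholder and must match the (unspecified) conditioning type these weights were trained on:
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "AmrutaMuthal/mero_controlnet_scaled_thick_box_lr2", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Placeholder conditioning image URL; replace with an image in the expected conditioning format.
conditioning = load_image("https://example.com/conditioning.png")
image = pipe("a photo of an object", image=conditioning, num_inference_steps=30).images[0]
image.save("output.png")
```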
|
LoneStriker/limarp-miqu-1-70b-3.5bpw-h6-exl2
|
LoneStriker
| 2024-02-01T18:19:27Z | 0 | 1 |
peft
|
[
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"llama 2",
"en",
"dataset:lemonilia/LimaRP",
"region:us"
] | null | 2024-02-01T18:06:16Z |
---
library_name: peft
tags:
- generated_from_trainer
- llama
- llama 2
model-index:
- name: volume/limarp-70b-qlora
results: []
datasets:
- lemonilia/LimaRP
language:
- en
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: models/miqu-1-70b-sf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: train-all-max-alpaca-llama.jsonl
type: completion
dataset_prepared_path:
val_set_size: 0.0
output_dir: ./volume/limarp-70b-qlora
adapter: qlora
lora_model_dir:
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: 70b-lora
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0001
train_on_inputs: true
group_by_length: false
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
eval_steps:
eval_table_size:
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# limarp-miqu-1-70b-qlora
Experimental LimaRP QLoRA trained at 16384 context length (greater than the length of the longest LimaRP sample when tokenized with Llama's tokenizer) on the fixed, dequantized miqu-1-70b model by 152334H.
I wasn't particularly happy with the results I got when I tried applying the lora at varying weights to the miqu-1-70b model. It's possible that this is related to the fact that the model was dequantized from Q5_K_M GGUF, or perhaps due to it already being an instruct-tuned model.
However, I decided to go ahead and release this in case someone else finds a use for it. Provided as-is and YMMV.
## Model description
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
Inspired by the previously named "Roleplay" preset in SillyTavern, with this version of LimaRP it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The lengths used during training are:
`micro`, `tiny`, `short`, `medium`, `long`, `massive`, `huge`, `enormous`, `humongous`, `unlimited`.
**The recommended starting length is medium**. Keep in mind that the AI can ramble or impersonate
the user with very long messages.
The length control effect is reproducible, but the messages will not necessarily match the requested
lengths very precisely; rather, they follow certain ranges on average, as seen in this table
with data from tests made with one reply at the beginning of the conversation:

Response length control also appears to work well deep into the conversation. **By omitting
the modifier, the model will choose the most appropriate response length** (although it might
not necessarily be what the user desires).
## Intended uses & limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model.
## Training and evaluation data
For more details about LimaRP, see the dataset page.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Kouskousi/mistral_7b_finetuned_eval_2
|
Kouskousi
| 2024-02-01T18:16:23Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-01T18:12:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Katelie/q-FrozenLake-v1-4x4-noSlippery
|
Katelie
| 2024-02-01T18:16:06Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-01T18:16:03Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the environment library was not imported in the original snippet

# `load_from_hub` is the small download helper used in the Hugging Face Deep RL course; see the sketch below.
model = load_from_hub(repo_id="Katelie/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
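The snippet above assumes a `load_from_hub` helper; a minimal sketch of one using `huggingface_hub` (the pickle keys such as `env_id` and `qtable` follow the Deep RL course format and are assumptions here):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model from the Hub and return the stored dict."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)

# Greedy action selection with the loaded Q-table (assumes a "qtable" key):
# state, _ = env.reset()
# action = int(model["qtable"][state].argmax())
```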
|
SeanWu25/Mixtral_8x7b_WuKurtz
|
SeanWu25
| 2024-02-01T18:09:00Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-01T04:32:33Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mixtral-8x7B-v0.1
model-index:
- name: Mixtral_8x7b_WuKurtz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mixtral_8x7b_WuKurtz
The model is fine-tuned on the nephrology 80k dataset that we curated, injected into Mixtral 8x7B Instruct.
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on the generator dataset.
## Model description
Mixtral 8x7b WuKurtz was created by Sean Wu, Michael Koo, Andy Black, Lesley Blum, Fabien Scalzo, and Ira Kurtz at Pepperdine and UCLA.
Arxiv paper out soon!
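A minimal sketch for loading the adapter on top of the base model with 🤗 PEFT and Transformers (the 4-bit flag and the example prompt are assumptions):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    load_in_4bit=True,  # assumption: keeps memory manageable; requires bitsandbytes
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")
model = PeftModel.from_pretrained(base, "SeanWu25/Mixtral_8x7b_WuKurtz")

inputs = tokenizer("What are the main functions of the kidney?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```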
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data out soon!
## Training procedure
Parameter-efficient fine-tuning (PEFT).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.8.1
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
sbulut/distilbert-base-uncased
|
sbulut
| 2024-02-01T18:06:49Z | 93 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-01T15:57:21Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased
results: []
datasets:
- imdb
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2250
- Accuracy: 0.9322
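A minimal inference sketch with the 🤗 Transformers `pipeline` API (the label names depend on the saved config and may be generic `LABEL_0`/`LABEL_1`):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="sbulut/distilbert-base-uncased")
# Returns a list of dicts with "label" and "score" keys.
print(classifier("This movie was an absolute delight from start to finish."))
```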
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2256 | 1.0 | 1563 | 0.2599 | 0.9039 |
| 0.1528 | 2.0 | 3126 | 0.2250 | 0.9322 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
LoneStriker/limarp-miqu-1-70b-3.0bpw-h6-exl2
|
LoneStriker
| 2024-02-01T18:06:12Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"llama 2",
"en",
"dataset:lemonilia/LimaRP",
"region:us"
] | null | 2024-02-01T17:54:47Z |
---
library_name: peft
tags:
- generated_from_trainer
- llama
- llama 2
model-index:
- name: volume/limarp-70b-qlora
results: []
datasets:
- lemonilia/LimaRP
language:
- en
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: models/miqu-1-70b-sf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: train-all-max-alpaca-llama.jsonl
type: completion
dataset_prepared_path:
val_set_size: 0.0
output_dir: ./volume/limarp-70b-qlora
adapter: qlora
lora_model_dir:
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: 70b-lora
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0001
train_on_inputs: true
group_by_length: false
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
eval_steps:
eval_table_size:
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# limarp-miqu-1-70b-qlora
Experimental LimaRP QLoRA trained at 16384 context length (greater than the length of the longest LimaRP sample when tokenized with Llama's tokenizer) on the fixed, dequantized miqu-1-70b model by 152334H.
I wasn't particularly happy with the results I got when I tried applying the lora at varying weights to the miqu-1-70b model. It's possible that this is related to the fact that the model was dequantized from Q5_K_M GGUF, or perhaps due to it already being an instruct-tuned model.
However, I decided to go ahead and release this in case someone else finds a use for it. Provided as-is and YMMV.
## Model description
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
Inspired by the previously named "Roleplay" preset in SillyTavern, with this version of LimaRP it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The lengths used during training are:
`micro`, `tiny`, `short`, `medium`, `long`, `massive`, `huge`, `enormous`, `humongous`, `unlimited`.
**The recommended starting length is medium**. Keep in mind that the AI can ramble or impersonate
the user with very long messages.
The length control effect is reproducible, but the messages will not necessarily match the requested
lengths very precisely; rather, they follow certain ranges on average, as seen in this table
with data from tests made with one reply at the beginning of the conversation:

Response length control also appears to work well deep into the conversation. **By omitting
the modifier, the model will choose the most appropriate response length** (although it might
not necessarily be what the user desires).
## Intended uses & limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model.
## Training and evaluation data
For more details about LimaRP, see the dataset page.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mzbac/Kunpeng-4x7B-mistral
|
mzbac
| 2024-02-01T18:00:14Z | 46 | 0 |
transformers
|
[
"transformers",
"mixtral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-30T14:02:53Z |
---
license: apache-2.0
---
# Kunpeng-4x7B-mistral
## Architecture: Mixture of Experts (MoE)
A Moe Model of "Mistral-7B-Instruct-v0.2", "Mistral-7B-v0.1", "Starling-LM-7B-alpha", and "Mistral-7B-Instruct-v0.1" then fine-tuned with "WizardLM_evol_instruct_70k" for q_proj, v_proj, and gate.
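A minimal text-generation sketch with 🤗 Transformers, assuming the tokenizer ships a chat template (the dtype and device placement are assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mzbac/Kunpeng-4x7B-mistral")
model = AutoModelForCausalLM.from_pretrained(
    "mzbac/Kunpeng-4x7B-mistral", torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain what a mixture-of-experts model is in one paragraph."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```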
|
djomo/MISTRALllux2000-7b-v3
|
djomo
| 2024-02-01T17:55:55Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-31T14:00:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/limarp-miqu-1-70b-2.4bpw-h6-exl2
|
LoneStriker
| 2024-02-01T17:54:44Z | 0 | 1 |
peft
|
[
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"llama 2",
"en",
"dataset:lemonilia/LimaRP",
"region:us"
] | null | 2024-02-01T17:45:29Z |
---
library_name: peft
tags:
- generated_from_trainer
- llama
- llama 2
model-index:
- name: volume/limarp-70b-qlora
results: []
datasets:
- lemonilia/LimaRP
language:
- en
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: models/miqu-1-70b-sf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: train-all-max-alpaca-llama.jsonl
type: completion
dataset_prepared_path:
val_set_size: 0.0
output_dir: ./volume/limarp-70b-qlora
adapter: qlora
lora_model_dir:
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: 70b-lora
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0001
train_on_inputs: true
group_by_length: false
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
eval_steps:
eval_table_size:
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# limarp-miqu-1-70b-qlora
Experimental LimaRP QLoRA trained at 16384 context length (greater than the length of the longest LimaRP sample when tokenized with Llama's tokenizer) on the fixed, dequantized miqu-1-70b model by 152334H.
I wasn't particularly happy with the results I got when I tried applying the lora at varying weights to the miqu-1-70b model. It's possible that this is related to the fact that the model was dequantized from Q5_K_M GGUF, or perhaps due to it already being an instruct-tuned model.
However, I decided to go ahead and release this in case someone else finds a use for it. Provided as-is and YMMV.
## Model description
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
Inspired by the previously named "Roleplay" preset in SillyTavern, with this version of LimaRP it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The lengths used during training are:
`micro`, `tiny`, `short`, `medium`, `long`, `massive`, `huge`, `enormous`, `humongous`, `unlimited`.
**The recommended starting length is medium**. Keep in mind that the AI can ramble or impersonate
the user with very long messages.
The length control effect is reproducible, but the messages will not necessarily match the requested
lengths very precisely; rather, they follow certain ranges on average, as seen in this table
with data from tests made with one reply at the beginning of the conversation:

Response length control also appears to work well deep into the conversation. **By omitting
the modifier, the model will choose the most appropriate response length** (although it might
not necessarily be what the user desires).
## Intended uses & limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model.
## Training and evaluation data
For more details about LimaRP, see the dataset page.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
weijie210/zephyr-7b-teacher
|
weijie210
| 2024-02-01T17:53:18Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T16:21:24Z |
---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: zephyr-7b-teacher
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-teacher
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7019
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7076 | 1.0 | 212 | 0.7019 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
|
BarraHome/zephyr-dpo-16bit-fast
|
BarraHome
| 2024-02-01T17:47:57Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/zephyr-sft",
"base_model:finetune:unsloth/zephyr-sft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T17:32:00Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/zephyr-sft
---
# Uploaded model
- **Developed by:** BarraHome
- **License:** apache-2.0
- **Finetuned from model:** unsloth/zephyr-sft
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LoneStriker/limarp-miqu-1-70b-4.25bpw-h6-exl2
|
LoneStriker
| 2024-02-01T17:45:26Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"llama 2",
"en",
"dataset:lemonilia/LimaRP",
"region:us"
] | null | 2024-02-01T17:29:28Z |
---
library_name: peft
tags:
- generated_from_trainer
- llama
- llama 2
model-index:
- name: volume/limarp-70b-qlora
results: []
datasets:
- lemonilia/LimaRP
language:
- en
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: models/miqu-1-70b-sf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: train-all-max-alpaca-llama.jsonl
type: completion
dataset_prepared_path:
val_set_size: 0.0
output_dir: ./volume/limarp-70b-qlora
adapter: qlora
lora_model_dir:
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: 70b-lora
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0001
train_on_inputs: true
group_by_length: false
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
eval_steps:
eval_table_size:
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# limarp-miqu-1-70b-qlora
Experimental LimaRP QLoRA trained at 16384 context length (greater than the length of the longest LimaRP sample when tokenized with Llama's tokenizer) on the fixed, dequantized miqu-1-70b model by 152334H.
I wasn't particularly happy with the results I got when I tried applying the lora at varying weights to the miqu-1-70b model. It's possible that this is related to the fact that the model was dequantized from Q5_K_M GGUF, or perhaps due to it already being an instruct-tuned model.
However, I decided to go ahead and release this in case someone else finds a use for it. Provided as-is and YMMV.
## Model description
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
Inspired by the previously named "Roleplay" preset in SillyTavern, with this version of LimaRP it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The lengths used during training are:
`micro`, `tiny`, `short`, `medium`, `long`, `massive`, `huge`, `enormous`, `humongous`, `unlimited`.
**The recommended starting length is medium**. Keep in mind that the AI can ramble or impersonate
the user with very long messages.
The length control effect is reproducible, but the messages will not necessarily match the requested
lengths very precisely; rather, they follow certain ranges on average, as seen in this table
with data from tests made with one reply at the beginning of the conversation:

Response length control also appears to work well deep into the conversation. **By omitting
the modifier, the model will choose the most appropriate response length** (although it might
not necessarily be what the user desires).
## Intended uses & limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model.
## Training and evaluation data
For more details about LimaRP, see the dataset page.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Shalie/BlendSHideriKanzakiPonyXL
|
Shalie
| 2024-02-01T17:44:27Z | 4 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"dataset:Hunko/BlendSHideriKanzaki-Dataset",
"base_model:AstraliteHeart/pony-diffusion-v6",
"base_model:adapter:AstraliteHeart/pony-diffusion-v6",
"license:other",
"region:us"
] |
text-to-image
| 2024-02-01T17:42:38Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime,
<lora:spblendsKanzakiHideriXL:1> hideridef, otoko no ko, hair bow, black
hairband, dress, short sleeves, frills, waist apron, frilled apron, skirt,
white gloves, white thighhighs
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/02211-4248957210-score_9, score_8_up, score_7_up, uncensored,
source_anime, _lora_spblendsKanzakiHideriXL_1_ hideridef, otoko no ko,
hair bow, bl.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime,
<lora:spblendsKanzakiHideriXL:1> hideridef, otoko no ko, hair bow, black
hairband, dress, short sleeves, frills, waist apron, frilled apron, skirt,
white gloves, white thighhighs
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/02209-2588563061-score_9, score_8_up, score_7_up, uncensored,
source_anime, _lora_spblendsKanzakiHideriXL_1_ hideridef, otoko no ko,
hair bow, bl.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime,
<lora:spblendsKanzakiHideriXL:1> hideridef, otoko no ko, hair bow, black
hairband, dress, short sleeves, frills, waist apron, frilled apron, skirt,
white gloves, white thighhighs
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/02207-2588563061-score_9, score_8_up, score_7_up, uncensored,
source_anime, _lora_spblendsKanzakiHideriXL_1_ hideridef, otoko no ko,
hair bow, bl.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime,
<lora:spblendsKanzakiHideriXL:1> hideridef, otoko no ko, hair bow, black
hairband, dress, short sleeves, frills, waist apron, frilled apron, skirt,
white gloves, white thighhighs
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/02206-2658691489-score_9, score_8_up, score_7_up, uncensored,
source_anime, _lora_spblendsKanzakiHideriXL_1_ hideridef, otoko no ko,
hair bow, bl.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime,
<lora:spblendsKanzakiHideriXL:1> hideridef, otoko no ko, hair bow, black
hairband, dress, short sleeves, frills, waist apron, frilled apron, skirt,
white gloves, white thighhighs, sitting, desk, upper body
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/02205-1372833793-score_9, score_8_up, score_7_up, uncensored,
source_anime, _lora_spblendsKanzakiHideriXL_1_ hideridef, otoko no ko,
hair bow, bl.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime,
<lora:spblendsKanzakiHideriXL:1> hideridef, otoko no ko, hair bow, black
hairband, dress, short sleeves, frills, waist apron, frilled apron, skirt,
white gloves, white thighhighs, sitting, desk, upper body
parameters:
negative_prompt: >-
worst quality, low quality, 3d, realistic, sketch, normal quality, jpeg
artifacts, depth of field, blurry, bloom, messy drawing, amateur drawing,
fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry
output:
url: >-
images/02204-2981146249-score_9, score_8_up, score_7_up, uncensored,
source_anime, _lora_spblendsKanzakiHideriXL_1_ hideridef, otoko no ko,
hair bow, bl.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime,
<lora:spblendsKanzakiHideriXL:1> hideridef, otoko no ko, hair bow, black
hairband, dress, short sleeves, frills, waist apron, frilled apron, skirt,
white gloves, white thighhighs, holding, holding weapon, looking at viewer,
open mouth, smile, solo, v, weapon
parameters:
negative_prompt: >-
worst quality, low quality, simple background, white background, covered
navel, thick thighs, 3d, realistic, sketch, normal quality, jpeg
artifacts, muscular, depth of field, blurry, bloom, messy drawing, amateur
drawing, fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry, source_cartoon
output:
url: >-
images/02196-345681128-score_9, score_8_up, score_7_up, uncensored,
source_anime, _lora_spblendsKanzakiHideriXL_1_ hideridef, otoko no ko,
hair bow, bl.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime,
<lora:spblendsKanzakiHideriXL:1> hideridef, otoko no ko, hair bow, black
hairband, dress, short sleeves, frills, waist apron, frilled apron, skirt,
white gloves, white thighhighs, cafe, sitting, open-mouth smile
parameters:
negative_prompt: >-
worst quality, low quality, simple background, white background, covered
navel, thick thighs, 3d, realistic, sketch, normal quality, jpeg
artifacts, muscular, depth of field, blurry, bloom, messy drawing, amateur
drawing, fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry, source_cartoon
output:
url: >-
images/02180-1416696227-score_9, score_8_up, score_7_up, uncensored,
source_anime, _lora_spblendsKanzakiHideriXL_1_ hideridef, otoko no ko,
hair bow, bl.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime,
<lora:spblendsKanzakiHideriXL:1> hideridef, otoko no ko, hair bow, black
hairband, dress, short sleeves, frills, waist apron, frilled apron, skirt,
white gloves, white thighhighs
parameters:
negative_prompt: >-
worst quality, low quality, simple background, white background, covered
navel, thick thighs, 3d, realistic, sketch, normal quality, jpeg
artifacts, muscular, depth of field, blurry, bloom, messy drawing, amateur
drawing, fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry, source_cartoon
output:
url: >-
images/02179-2464231425-score_9, score_8_up, score_7_up, uncensored,
source_anime, _lora_spblendsKanzakiHideriXL_1_ hideridef, otoko no ko,
hair bow, bl.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime,
<lora:spblendsKanzakiHideriXL:1> hideridef, otoko no ko, hair bow, black
hairband, dress, short sleeves, frills, waist apron, frilled apron, skirt,
white gloves, white thighhighs, aircraft, airplane, blue sky, blurry, blurry
foreground, cloud, cloudy sky, contrail, day, flower, hibiscus, outdoors,
pink flower, red flower, sky, tree, upper body, watermark
parameters:
negative_prompt: >-
worst quality, low quality, simple background, white background, covered
navel, thick thighs, 3d, realistic, sketch, normal quality, jpeg
artifacts, muscular, depth of field, blurry, bloom, messy drawing, amateur
drawing, fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry, source_cartoon
output:
url: >-
images/02177-366975212-score_9, score_8_up, score_7_up, uncensored,
source_anime, _lora_spblendsKanzakiHideriXL_1_ hideridef, otoko no ko,
hair bow, bl.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime,
<lora:spblendsKanzakiHideriXL:1> hideridef, otoko no ko, hair bow, black
hairband, dress, short sleeves, frills, waist apron, frilled apron, skirt,
white gloves, white thighhighs, fish, flower, goldfish, underwater, water,
white flower
parameters:
negative_prompt: score_5,score_6,source_pony,source_furry
output:
url: >-
images/02174-209050874-score_9, score_8_up, score_7_up, uncensored,
source_anime, _lora_spblendsKanzakiHideriXL_1_ hideridef, otoko no ko,
hair bow, bl.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1boy,
<lora:spblendsKanzakiHideriXL:1> hideridef, otoko no ko, hair bow, black
hairband, dress, short sleeves, frills, waist apron, frilled apron, skirt,
white gloves, white thighhighs, card, chromatic aberration, english text,
floating, floating hair, from behind, full body, glitch, grand piano,
instrument, piano, plant, wind, blush, hands up, looking at viewer, parted
lips, solo
parameters:
negative_prompt: >-
worst quality, low quality, simple background, white background, covered
navel, thick thighs, 3d, realistic, sketch, normal quality, jpeg
artifacts, muscular, depth of field, blurry, bloom, messy drawing, amateur
drawing, fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry, source_cartoon
output:
url: >-
images/02167-2004545213-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1boy, _lora_spblendsKanzakiHideriXL_1_ hideridef, otoko no
ko, hair b.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1boy,
<lora:spblendsKanzakiHideriXL:1> hideridef, otoko no ko, hair bow, black
hairband, dress, short sleeves, frills, waist apron, frilled apron, skirt,
white gloves, white thighhighs, doughnut, english text, food, speech bubble,
hand up, looking at viewer, parted lips, smile, solo
parameters:
negative_prompt: >-
worst quality, low quality, simple background, white background, covered
navel, thick thighs, 3d, realistic, sketch, normal quality, jpeg
artifacts, muscular, depth of field, blurry, bloom, messy drawing, amateur
drawing, fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry, source_cartoon
output:
url: >-
images/02166-4005128592-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1boy, _lora_spblendsKanzakiHideriXL_1_ hideridef, otoko no
ko, hair b.png
- text: >-
score_9, score_8_up, score_7_up, uncensored, source_anime, 1boy,
<lora:spblendsKanzakiHideriXL:1> hideridef, otoko no ko, hair bow, black
hairband, dress, short sleeves, frills, waist apron, frilled apron, skirt,
white gloves, white thighhighs, bird, branch, bug, butterfly, drawer,
floating hair, flower, sparkle, white bird, window, backpack, blush, closed
mouth, grin, holding, holding bag, looking at viewer, smile, solo, standing
parameters:
negative_prompt: >-
worst quality, low quality, simple background, white background, covered
navel, thick thighs, 3d, realistic, sketch, normal quality, jpeg
artifacts, muscular, depth of field, blurry, bloom, messy drawing, amateur
drawing, fewer digits, extra digits, greyscale, monochrome, source_pony,
source_furry, source_cartoon
output:
url: >-
images/02165-355754941-score_9, score_8_up, score_7_up, uncensored,
source_anime, 1boy, _lora_spblendsKanzakiHideriXL_1_ hideridef, otoko no
ko, hair b.png
base_model: AstraliteHeart/pony-diffusion-v6
instance_prompt: null
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
datasets:
- Hunko/BlendSHideriKanzaki-Dataset
pipeline_tag: text-to-image
---
# Hideri Kanzaki
<Gallery />
## Model description
Hideri Kanzaki from Blend S!
Trained on 1 outfit; each outfit has a trigger word corresponding to the character's appearance, plus suggested prompts that summon related clothes and accessories.
Works well at a LoRA weight of 0.7-1.0.
## Trigger words
First Outfit: `hideridef, otoko no ko, hair bow, black hairband, dress, short sleeves, frills, waist apron, frilled apron, skirt, white gloves, white thighhighs`
## Download model
Weights for this model are available in Safetensors format.
[Download](/Hunko/BlendSHideriKanzakiPonyXL/tree/main) them in the Files & versions tab.
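For reference, a minimal diffusers sketch of loading this LoRA on top of the Pony Diffusion V6 XL base named in the card metadata; whether that base repo loads directly with `from_pretrained` is an assumption, and many users will instead load the .safetensors file in a UI such as ComfyUI or A1111:
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Hypothetical usage sketch; the base repo id comes from the card metadata and is
# assumed here to be loadable with from_pretrained.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "AstraliteHeart/pony-diffusion-v6",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("Shalie/BlendSHideriKanzakiPonyXL")

prompt = (
    "score_9, score_8_up, score_7_up, source_anime, hideridef, otoko no ko, "
    "hair bow, black hairband, dress, short sleeves, frills, waist apron, "
    "frilled apron, skirt, white gloves, white thighhighs"
)
negative = "score_5, score_6, source_pony, source_furry"
image = pipe(prompt, negative_prompt=negative).images[0]
image.save("hideri.png")
```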
### License
This LoRA model is provided under the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) license.
## Restrictions:
- **Usage in Generation Services**: You are not allowed to use the model in any generation services without proper permission from the original creator.
- **Commercial Usage**: The sale of the model or any commercial usage is strictly prohibited without explicit written permission from the original creator.
|
AliRiza/kramer_face_lora_sdxl
|
AliRiza
| 2024-02-01T17:27:23Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-02-01T17:27:20Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of kramer person
license: openrail++
---
# SDXL LoRA DreamBooth - AliRiza/kramer_face_lora_sdxl
<Gallery />
## Model description
These are AliRiza/kramer_face_lora_sdxl LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of kramer person` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](AliRiza/kramer_face_lora_sdxl/tree/main) them in the Files & versions tab.
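As a usage illustration (not from the original card), a minimal diffusers sketch that loads the SDXL base together with these LoRA weights and uses the trigger phrase above; the scene description is a placeholder:
```python
import torch
from diffusers import AutoPipelineForText2Image

# Hypothetical sketch: SDXL base plus the DreamBooth LoRA weights from this repo.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("AliRiza/kramer_face_lora_sdxl")

# The trigger phrase "a photo of kramer person" is the part that matters.
image = pipe("a photo of kramer person, portrait, natural lighting").images[0]
image.save("kramer.png")
```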
|
ThuyNT03/SOMD-train-xlm-v1
|
ThuyNT03
| 2024-02-01T17:22:45Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-30T18:59:46Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: SOMD-train-xlm-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SOMD-train-xlm-v1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- F1: 0.9963
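For reference, a minimal inference sketch with the transformers pipeline; the label set is not documented in this card, so the entity types in the output are whatever the model was trained with:
```python
from transformers import pipeline

# Hypothetical usage sketch for this token-classification checkpoint.
ner = pipeline(
    "token-classification",
    model="ThuyNT03/SOMD-train-xlm-v1",
    aggregation_strategy="simple",  # merge sub-word pieces into whole spans
)
print(ner("We trained the model with PyTorch and evaluated it using scikit-learn."))
```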
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 1.0 | 1243 | 0.0069 | 0.6471 |
| No log | 2.0 | 2486 | 0.0147 | 0.4535 |
| No log | 3.0 | 3729 | 0.0030 | 0.8179 |
| No log | 4.0 | 4972 | 0.0014 | 0.9087 |
| No log | 5.0 | 6215 | 0.0007 | 0.9353 |
| No log | 6.0 | 7458 | 0.0004 | 0.9664 |
| No log | 7.0 | 8701 | 0.0002 | 0.9867 |
| No log | 8.0 | 9944 | 0.0001 | 0.9918 |
| No log | 9.0 | 11187 | 0.0001 | 0.9954 |
| No log | 10.0 | 12430 | 0.0001 | 0.9963 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
minhocas/convnextv2-tiny-1k-224-finetuned-eurosat-albumentations
|
minhocas
| 2024-02-01T17:22:18Z | 174 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/convnextv2-tiny-1k-224",
"base_model:finetune:facebook/convnextv2-tiny-1k-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-31T22:39:15Z |
---
license: apache-2.0
base_model: facebook/convnextv2-tiny-1k-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnextv2-tiny-1k-224-finetuned-eurosat-albumentations
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.05309734513274336
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-tiny-1k-224-finetuned-eurosat-albumentations
This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.0531
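For completeness, a minimal inference sketch; given the NaN training loss and roughly 5% accuracy reported above, this checkpoint is unlikely to produce useful predictions, so the example only shows the loading mechanics:
```python
from PIL import Image
from transformers import pipeline

# Hypothetical usage sketch; "example.jpg" is a placeholder image path.
classifier = pipeline(
    "image-classification",
    model="minhocas/convnextv2-tiny-1k-224-finetuned-eurosat-albumentations",
)
print(classifier(Image.open("example.jpg")))
```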
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 8 | nan | 0.0531 |
| 0.0 | 2.0 | 16 | nan | 0.0531 |
| 0.0 | 3.0 | 24 | nan | 0.0531 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
anikettty/openhermes-mistral-dpo-gptq
|
anikettty
| 2024-02-01T17:17:03Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"base_model:finetune:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-02-01T17:02:26Z |
---
license: apache-2.0
base_model: TheBloke/OpenHermes-2-Mistral-7B-GPTQ
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: openhermes-mistral-dpo-gptq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openhermes-mistral-dpo-gptq
This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6183
- Rewards/chosen: 0.0311
- Rewards/rejected: -0.0695
- Rewards/accuracies: 0.625
- Rewards/margins: 0.1006
- Logps/rejected: -143.5769
- Logps/chosen: -125.5347
- Logits/rejected: -2.7201
- Logits/chosen: -2.8454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
- mixed_precision_training: Native AMP
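For context, a minimal sketch (under assumptions, not taken from this repo) of how a DPO run with these settings is commonly wired up with trl's `DPOTrainer`; the dataset, LoRA configuration, and `beta` value below are placeholders:
```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# A preference dataset with "prompt", "chosen" and "rejected" columns is assumed.
dataset = load_dataset("some/preference-dataset", split="train")

args = TrainingArguments(
    output_dir="openhermes-mistral-dpo-gptq",
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=50,
    fp16=True,          # "Native AMP" mixed precision
    logging_steps=10,
)

peft_config = LoraConfig(  # assumed adapter settings; a GPTQ base is trained via LoRA
    r=16, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,      # with a PEFT adapter, the frozen base acts as the reference
    args=args,
    beta=0.1,            # assumed; not reported in the card
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```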
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6852 | 0.01 | 10 | 0.6765 | 0.0305 | 0.0058 | 0.5625 | 0.0247 | -142.8238 | -125.5402 | -2.7089 | -2.8446 |
| 0.7058 | 0.01 | 20 | 0.6604 | 0.0370 | -0.0005 | 0.5625 | 0.0375 | -142.8867 | -125.4757 | -2.7121 | -2.8454 |
| 0.6407 | 0.01 | 30 | 0.6319 | 0.0537 | -0.0265 | 0.6875 | 0.0802 | -143.1462 | -125.3082 | -2.7146 | -2.8457 |
| 0.6445 | 0.02 | 40 | 0.6210 | 0.0345 | -0.0659 | 0.625 | 0.1004 | -143.5407 | -125.5005 | -2.7173 | -2.8463 |
| 0.6847 | 0.03 | 50 | 0.6183 | 0.0311 | -0.0695 | 0.625 | 0.1006 | -143.5769 | -125.5347 | -2.7201 | -2.8454 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.1
|
NeuNav/Reinforce-PixelCopter-1
|
NeuNav
| 2024-02-01T17:11:49Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-01T17:11:45Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 24.20 +/- 13.53
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
LoneStriker/limarp-miqu-1-70b-GGUF
|
LoneStriker
| 2024-02-01T17:09:53Z | 18 | 4 |
peft
|
[
"peft",
"gguf",
"generated_from_trainer",
"llama",
"llama 2",
"en",
"dataset:lemonilia/LimaRP",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-01T13:23:17Z |
---
library_name: peft
tags:
- generated_from_trainer
- llama
- llama 2
model-index:
- name: volume/limarp-70b-qlora
results: []
datasets:
- lemonilia/LimaRP
language:
- en
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: models/miqu-1-70b-sf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: train-all-max-alpaca-llama.jsonl
type: completion
dataset_prepared_path:
val_set_size: 0.0
output_dir: ./volume/limarp-70b-qlora
adapter: qlora
lora_model_dir:
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: 70b-lora
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0001
train_on_inputs: true
group_by_length: false
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
eval_steps:
eval_table_size:
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# limarp-miqu-1-70b-qlora
Experimental LimaRP QLoRA trained at 16384 context length (greater than the length of the longest LimaRP sample when tokenized with Llama's tokenizer) on the fixed, dequantized miqu-1-70b model by 152334H.
I wasn't particularly happy with the results I got when applying the LoRA at varying weights to the miqu-1-70b model. This may be because the model was dequantized from a Q5_K_M GGUF, or because it is already an instruct-tuned model.
However, I decided to release it in case someone else finds a use for it. Provided as-is; YMMV.
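As an illustration (not an official recipe), here is a minimal PEFT sketch of applying a LimaRP-style adapter to the dequantized base; the repo ids and adapter path are assumptions, and this GGUF repo itself is intended for llama.cpp-style runtimes:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "152334H/miqu-1-70b-sf"            # dequantized base named in the card (repo id assumed)
adapter_id = "path/or/repo/of/limarp-qlora"  # placeholder adapter location (PEFT/safetensors format)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Optionally bake the adapter into the base weights before quantizing or exporting.
model = model.merge_and_unload()
```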
## Model description
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
Inspired by the preset formerly named "Roleplay" in SillyTavern, this version of LimaRP makes it possible to append a length modifier to the response instruction sequence, like this:
```
### Input:
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The lengths used during training are:
`micro`, `tiny`, `short`, `medium`, `long`, `massive`, `huge`, `enormous`, `humongous`, `unlimited`.
**The recommended starting length is medium**. Keep in mind that the AI can ramble or impersonate
the user with very long messages.
The length control effect is reproducible, but the messages will not necessarily match the requested
lengths precisely; rather, they fall within certain ranges on average, as shown in this table
with data from tests made with one reply at the beginning of the conversation:

Response length control also appears to work well deep into the conversation. **By omitting
the modifier, the model will choose the most appropriate response length** (although it might
not necessarily be what the user desires).
## Intended uses & limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model.
## Training and evaluation data
For more details about LimaRP, see the dataset page.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
indischepartij/OpenMia-Adapter-Ep2
|
indischepartij
| 2024-02-01T17:03:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-01T17:03:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.