modelId: string | author: string | last_modified: timestamp[us, tz=UTC] | downloads: int64 | likes: int64 | library_name: string | tags: list | pipeline_tag: string | createdAt: timestamp[us, tz=UTC] | card: string
---|---|---|---|---|---|---|---|---|---
adarshheg/llama2-13b-finetuned-100-v1
|
adarshheg
| 2024-02-07T23:54:20Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T23:54:15Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))  # send inputs to the model's device rather than assuming CUDA
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
joislosinghermind/lola-gunvolt
|
joislosinghermind
| 2024-02-07T23:20:15Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:unlicense",
"region:us"
] |
text-to-image
| 2024-02-07T23:20:12Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "UNICODE\0\02\0d\0,\0 \0m\0a\0s\0t\0e\0r\0p\0i\0e\0c\0e\0,\0 \0b\0e\0s\0t\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0a\0n\0i\0m\0e\0,\0 \0h\0i\0g\0h\0l\0y\0 \0d\0e\0t\0a\0i\0l\0e\0d\0 \0f\0a\0c\0e\0,\0 \0h\0i\0g\0h\0l\0y\0 \0d\0e\0t\0a\0i\0l\0e\0d\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0,\0 \0p\0e\0r\0f\0e\0c\0t\0 \0l\0i\0g\0h\0t\0i\0n\0g\0,\0 \0l\0o\0l\0a\0,\0 \0b\0l\0u\0e\0 \0e\0y\0e\0s\0,\0 \0g\0r\0e\0e\0n\0_\0h\0a\0i\0r\0,\0 \0c\0i\0t\0y\0s\0c\0a\0p\0e\0,\0 \0f\0u\0l\0l\0_\0b\0o\0d\0y\0,\0 \0s\0o\0l\0o\0,\0 \0s\0o\0l\0o\0 \0f\0o\0c\0u\0s\0,\0 \0t\0-\0s\0h\0i\0r\0t\0,\0 \0 \0s\0h\0o\0r\0t\0s\0,\0 \0<\0l\0o\0r\0a\0:\0l\0o\0l\0a\0:\01\0>\0"
output:
url: images/00492-abyssorangemix3AOM3_aom3a1b_3939236143.jpeg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: lola
license: unlicense
---
# lola-gunvolt
<Gallery />
## Trigger words
You should use `lola` to trigger the image generation.
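For a quick start, loading these weights with `diffusers` might look like the sketch below (the pipeline class, dtype, and device are assumptions, not stated in this card):
```python
# Hedged sketch: load the base model, apply the LoRA, and prompt with the trigger word.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("joislosinghermind/lola-gunvolt")

image = pipe("masterpiece, best quality, anime, lola, blue eyes, green hair").images[0]
image.save("lola.png")
```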
## Download model
Weights for this model are available in Safetensors format.
[Download](/joislosinghermind/lola-gunvolt/tree/main) them in the Files & versions tab.
|
davisalex22/BLOOMTurismEC-7b1-ft
|
davisalex22
| 2024-02-07T22:56:25Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bloom",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T22:51:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ydang/jsd_Mistral-7B-v0.1-M3
|
ydang
| 2024-02-07T22:51:52Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T22:47:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jcarmody93/Uhd
|
Jcarmody93
| 2024-02-07T22:41:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-02-07T21:50:30Z |
```bash
git lfs install
git clone https://huggingface.co/spaces/tonyassi/text-to-image-SDXL
```
|
adriana98/whisper-large-v2-LORA-colab
|
adriana98
| 2024-02-07T22:37:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T20:17:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ORromu/Reinforce-CartPole-v1
|
ORromu
| 2024-02-07T22:01:43Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T22:01:34Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
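For readers who want a feel for the algorithm before taking the course, the sketch below shows a minimal REINFORCE training loop (it assumes `gymnasium` and `torch`; the network size and hyperparameters are illustrative, not those of this checkpoint):
```python
# Hedged sketch: vanilla REINFORCE (policy gradient) on CartPole-v1.
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        dist = torch.distributions.Categorical(
            logits=policy(torch.as_tensor(obs, dtype=torch.float32))
        )
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated
    # Discounted returns, computed backwards over the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    # Gradient ascent on expected return == descent on the negated objective.
    loss = -(torch.stack(log_probs) * returns).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```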
|
Utshav/Llama2-7b-finetuned-alpaca
|
Utshav
| 2024-02-07T21:40:20Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-02-07T21:16:41Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: Llama2-7b-finetuned-alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama2-7b-finetuned-alpaca
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
hanspeterlyngsoeraaschoujensen/deepseek-math-7b-instruct-GPTQ
|
hanspeterlyngsoeraaschoujensen
| 2024-02-07T21:18:11Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-02-07T21:16:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
google/metricx-23-xxl-v2p0
|
google
| 2024-02-07T21:15:25Z | 491 | 5 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T16:34:37Z |
---
license: apache-2.0
---
# MetricX-23
*This is not an officially supported Google product.*
**GitHub repository: [https://github.com/google-research/metricx](https://github.com/google-research/metricx)**
This repository contains the MetricX-23 models,
a family of models for automatic evaluation of translations that were proposed
in the WMT'23 Metrics Shared Task submission
[MetricX-23: The Google Submission to the WMT 2023 Metrics Shared Task](https://aclanthology.org/2023.wmt-1.63/).
The models were trained in [T5X](https://github.com/google-research/t5x) and
then converted for use in PyTorch.
## Available Models
There are 6 models available on HuggingFace that vary in the number of
parameters and whether or not the model is reference-based or reference-free
(also known as quality estimation, or QE):
* [MetricX-23-XXL](https://huggingface.co/google/metricx-23-xxl-v2p0)
* [MetricX-23-XL](https://huggingface.co/google/metricx-23-xl-v2p0)
* [MetricX-23-Large](https://huggingface.co/google/metricx-23-large-v2p0)
* [MetricX-23-QE-XXL](https://huggingface.co/google/metricx-23-qe-xxl-v2p0)
* [MetricX-23-QE-XL](https://huggingface.co/google/metricx-23-qe-xl-v2p0)
* [MetricX-23-QE-Large](https://huggingface.co/google/metricx-23-qe-large-v2p0)
We recommend using the XXL model versions for the best agreement with human
judgments of translation quality, the Large versions for best speed, and the
XL for an intermediate use case.
## Changes to the WMT'23 Submission
The models available here are most similar to the primary submission to the WMT'23 Metrics Shared Task. They are initialized with [mT5](https://aclanthology.org/2021.naacl-main.41/) and then fine-tuned on a combination of direct assessment and MQM data. However, we made some changes that make these models different from the WMT'23 submissions.
First, the models are trained to regress the actual MQM score rather than a
normalized score between 0 and 1. **That means the output from the MetricX-23
models is a score in the range [0, 25] where lower is better (i.e., it predicts
an error score).**
Second, these models were trained with a larger variety of synthetic data that
makes them more robust to translation edge cases like over- and undertranslation,
described in more detail in the following section.
### Synthetic Data
In order for our MetricX models to learn to identify certain types of bad
translations that are not sufficiently (or at all) represented in the regular
training data, we created synthetic examples and mixed them in during training.
The synthetic training data was generated from the DA datasets ranging from
WMT15 to WMT21 (~ 43 language pairs). In most cases, the synthetic examples have
the candidate translation manipulated so as to turn it into a bad translation
with a specific issue commonly unrecognized by learned metrics.
The table below provides an overview of the various failure modes that we
considered, including brief descriptions of how we prepared the synthetic data
to address them.
| Failure mode | Synthetic example description |
| ----------- | ----------- |
| Undertranslation | Candidate translation with an arbitrary sentence removed (if multi-sentence); alternatively, candidate with a certain proportion of words removed from the end. |
| Overtranslation | Candidate translation duplicated (with space in between). |
| Fluent but unrelated translation | Arbitrary reference of a similar length from the dataset. |
| Gibberish | Text of a similar length as the reference, generated by sampling words from the reference translation vocabulary (built from all references in the data). |
| Missing punctuation | Reference translation with the end punctuation removed (11 punctuation symbols considered). |
| Latin instead of Chinese/Japanese or Hindi/Bengali punctuation | Candidate translation with the language-specific punctuation symbol at the end replaced with the Latin equivalent (e.g., "." instead of "。" or "।"); alternatively, the punctuation symbol is replaced with the Latin equivalent in the reference, keeping the correct one in the candidate. |
| Reference-matching translation | Reference translation copied as the candidate translation (unlike the rest of the synthetic data, these examples are meant to train the metric to predict a perfect score for candidates matching the reference). |
Examples from the first 4 categories were assigned a label corresponding to the
worst score on the given rating scale (e.g., 25 when mixed with MQM training
data), whereas the reference-matching translation examples are assigned the best
score (e.g., 0 when used with MQM data). The missing/incorrect punctuation
examples were labeled with a score slightly worse than perfect.
Note that some of the synthetic datasets are only meaningful in the
reference-based scenario, and we thus excluded them when training a QE variant
of MetricX. These are the Latin-vs-special punctuation and the
reference-matching translation examples.
Most of the synthetic training sets were created using stratified sampling
across target languages, taking 500 examples per target language. One exception
is the missing punctuation set, which used a stratified sample across different
punctuation symbols instead.
When training MetricX, a small proportion of the synthetic examples was mixed
with the regular training examples. During the first-stage fine-tuning on DA
data, each synthetic training set constituted between 0.1% and 1% of all
training examples, whereas in the second-stage fine-tuning on MQM data we used
an even smaller proportion, around 0.05%.
As for evaluating the effect of the synthetic training data on the model's performance, the DEMETR challenge set - which we originally used to evaluate the models submitted to the WMT'23 Metrics Shared Task - was no longer adequate. We therefore created a new DEMETR-style test set based on the WMT22 DA data, with examples constructed analogously to the synthetic training examples, as described above. This test set helped us determine the right proportions of synthetic data for fine-tuning in order to make MetricX robust to the failure modes in consideration, without sacrificing the system- and segment-level correlations with human ratings.
## Usage
The code for using MetricX models can be found at [https://github.com/google-research/metricx](https://github.com/google-research/metricx).
The repository contains example prediction scripts, described below.
The `metricx23/predict.py` script contains an example for how to run inference
on the models.
### Reference-Based
Example usage for a reference-based model:
```bash
python -m metricx23.predict \
--tokenizer google/mt5-xl \
--model_name_or_path google/metricx-23-xl-v2p0 \
--max_input_length 1024 \
--batch_size 1 \
--input_file input.jsonl \
--output_file output.jsonl
```
`input.jsonl` is expected to have 1 serialized JSON object per line with
`"reference"` and `"hypothesis"` fields. The output jsonl will be parallel
to `input.jsonl` but additionally contain a `"prediction"` field with the predicted score.
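For illustration, one way to produce such a file might be the following sketch (the sentence pair is a placeholder, not from the MetricX data):
```python
# Hedged example: write reference-based MetricX input, one JSON object per line.
import json

examples = [
    {"reference": "The dog barks.", "hypothesis": "The dog is barking."},  # placeholder
]
with open("input.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```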
Note that the model was trained with a maximum input length of 1024 tokens, so
significantly increasing that value may lead to unpredictable behavior.
### Reference-Free
Example usage for a reference-free model:
```bash
python -m metricx23.predict \
--tokenizer google/mt5-xl \
--model_name_or_path google/metricx-23-qe-xl-v2p0 \
--max_input_length 1024 \
--batch_size 1 \
--input_file input.jsonl \
--output_file output.jsonl \
--qe
```
`input.jsonl` is expected to have 1 serialized JSON object per line with
`"source"` and `"hypothesis"` fields. The output jsonl will be parallel
to `input.jsonl` but additionally contain a `"prediction"` field with the predicted score.
## Meta-Evaluation
The `metricx23/evaluate.py` script contains code to calculate various correlations
between the MetricX-23 scores and MQM ratings of translation quality using the
[MT Metrics Eval](https://github.com/google-research/mt-metrics-eval) library.
Example usage:
```bash
python -m metricx23.evaluate \
--dataset wmt22 \
--lp en-de \
--input_file input.jsonl \
--output_file output.json
```
`input.jsonl` is expected to have one JSON object serialized per line.
Each JSON object is expected to contain 4 fields:
* `"system_id"`: The name of the system that generated the translation.
* `"segment_id"`: The 0-based index of the corresponding segment in the MT
Metrics Eval data.
* `"label"`: The ground-truth translation quality score (with higher is better).
* `"prediction"`: The model predicted translation quality score (with lower is
better; the script negates the scores so higher is better).
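For example, a single record could be assembled like this (all values are placeholders):
```python
# Hedged example: write one meta-evaluation record per line.
import json

record = {
    "system_id": "online-A",  # hypothetical system name
    "segment_id": 0,          # 0-based index into the MT Metrics Eval data
    "label": 95.0,            # ground-truth quality score (higher is better)
    "prediction": 1.2,        # MetricX error score (lower is better)
}
with open("input.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```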
The script will calculate the 4 agreement/correlations that were used in the
WMT'23 Shared Task. Below are the results for the MetricX-23 models on the
WMT'22 Metrics Shared Task data:
English-German:
| Model | System-Level Accuracy | System-Level Pearson | Segment-Level Pearson | Segment-Level Pairwise Acc |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| MetricX-23-XXL | 0.795 | 0.835 | 0.546 | 0.619 |
| MetricX-23-XL | 0.756 | 0.813 | 0.540 | 0.605 |
| MetricX-23-Large | 0.769 | 0.759 | 0.507 | 0.595 |
| MetricX-23-QE-XXL | 0.769 | 0.830 | 0.490 | 0.606 |
| MetricX-23-QE-XL | 0.718 | 0.684 | 0.421 | 0.594 |
| MetricX-23-QE-Large | 0.744 | 0.671 | 0.387 | 0.579 |
English-Russian:
| Model | System-Level Accuracy | System-Level Pearson | Segment-Level Pearson | Segment-Level Pairwise Acc |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| MetricX-23-XXL | 0.905 | 0.943 | 0.477 | 0.609 |
| MetricX-23-XL | 0.876 | 0.906 | 0.498 | 0.589 |
| MetricX-23-Large | 0.876 | 0.841 | 0.474 | 0.569 |
| MetricX-23-QE-XXL | 0.895 | 0.940 | 0.470 | 0.602 |
| MetricX-23-QE-XL | 0.848 | 0.861 | 0.415 | 0.570 |
| MetricX-23-QE-Large | 0.819 | 0.778 | 0.411 | 0.551 |
Chinese-English:
| Model | System-Level Accuracy | System-Level Pearson | Segment-Level Pearson | Segment-Level Pairwise Acc |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| MetricX-23-XXL | 0.868 | 0.919 | 0.605 | 0.551 |
| MetricX-23-XL | 0.868 | 0.924 | 0.584 | 0.543 |
| MetricX-23-Large | 0.857 | 0.919 | 0.555 | 0.539 |
| MetricX-23-QE-XXL | 0.857 | 0.928 | 0.573 | 0.544 |
| MetricX-23-QE-XL | 0.802 | 0.879 | 0.546 | 0.529 |
| MetricX-23-QE-Large | 0.758 | 0.904 | 0.522 | 0.529 |
The `metricx23/evaluate_wmt23.py` script re-calculates the average correlation
score that was used to rank submissions from the
[WMT'23 Shared Task](https://www2.statmt.org/wmt23/pdf/2023.wmt-1.51.pdf).
Example usage:
```bash
python -m metricx23.evaluate_wmt23 \
--en_de predictions_ende.jsonl \
--he_en predictions_heen.jsonl \
--zh_en predictions_zhen.jsonl \
--output_file output.json
```
Each of the 3 input files is expected to be in the same format as described
above. Each file should correspond to running inference on each of the language
pairs from the WMT'23 dataset.
The results for each of the models are the following:
| Model | Average Correlation |
| ----------- | ----------- |
| MetricX-23-XXL | 0.812 |
| MetricX-23-XL | 0.813 |
| MetricX-23-Large | 0.794 |
| MetricX-23-QE-XXL | 0.797 |
| MetricX-23-QE-XL | 0.767 |
| MetricX-23-QE-Large | 0.762 |
## Citation
If you use MetricX-23 in your research, please cite the following publication:
```bibtex
@inproceedings{juraska-etal-2023-metricx,
title = {{MetricX-23: The Google Submission to the WMT 2023 Metrics Shared Task}},
author = "Juraska, Juraj and
Finkelstein, Mara and
Deutsch, Daniel and
Siddhant, Aditya and
Mirzazadeh, Mehdi and
Freitag, Markus",
editor = "Koehn, Philipp and
Haddow, Barry and
Kocmi, Tom and
Monz, Christof",
booktitle = "Proceedings of the Eighth Conference on Machine Translation",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.wmt-1.63",
doi = "10.18653/v1/2023.wmt-1.63",
pages = "756--767",
}
```
|
Jimmyhd/mistral7btimebookFinetune50rows
|
Jimmyhd
| 2024-02-07T21:13:25Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T21:04:28Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))  # send inputs to the model's device rather than assuming CUDA
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
gayanin/bart-noised-with-gcd-dist-0.5
|
gayanin
| 2024-02-07T21:08:59Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-07T19:03:31Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: bart-noised-with-gcd-dist-0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-noised-with-gcd-dist-0.5
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
gayanin/bart-noised-with-gcd-dist-0.4
|
gayanin
| 2024-02-07T21:08:50Z | 23 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-07T19:03:27Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: bart-noised-with-gcd-dist-0.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-noised-with-gcd-dist-0.4
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
gayanin/bart-noised-with-gcd-dist-0.2
|
gayanin
| 2024-02-07T21:08:37Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-07T17:28:55Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: bart-noised-with-gcd-dist-0.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-noised-with-gcd-dist-0.2
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
danaleee/Long_rank10_iter500_valprompt
|
danaleee
| 2024-02-07T21:07:20Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-02-07T18:44:33Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks rc_car
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - danaleee/Long_rank10_iter500_valprompt
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks rc_car using [DreamBooth](https://dreambooth.github.io/). Example images follow.




LoRA for the text encoder was enabled: False.
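For orientation, these adapter weights might be tried with `diffusers` as in the sketch below (pipeline class, dtype, and device are assumptions, not stated in this card):
```python
# Hedged sketch: apply the DreamBooth LoRA to its base model and use the instance prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("danaleee/Long_rank10_iter500_valprompt")

image = pipe("a photo of sks rc_car").images[0]
image.save("rc_car.png")
```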
|
ClementeH/faisan-7b-instruct
|
ClementeH
| 2024-02-07T20:58:54Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-02-07T20:44:10Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
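For reference, the same settings expressed as a `transformers` quantization config might look like the sketch below (this assumes a `transformers` version with `bitsandbytes` support; it is not part of the original card):
```python
# Hedged sketch: the training-time quantization settings above as a BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```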
### Framework versions
- PEFT 0.5.0
|
nm-testing/Llama-2-7b-pruned40-retrained
|
nm-testing
| 2024-02-07T20:51:19Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:cerebras/SlimPajama-627B",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T20:46:25Z |
---
base_model: meta-llama/Llama-2-7b-hf
datasets:
- cerebras/SlimPajama-627B
---
Checkpoint of a [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b-hf) model that has had 50% of the parameters pruned in one-shot with SparseGPT, then retrained for 40B tokens from SlimPajama while maintaining sparsity.
* Model: Llama 2
* Size: 7B
* LR: 3.00E-4
* Dataset: SlimPajama
* Retrained tokens: 40B
* Notes: no warmup + decay to 0.0
* Eval Harness:
* CommonSense Reasoning: 62.2 (97.65%)
* Reading Comprehension: 57.7 (98.30%)
* World Knowledge: 42.4 (97.65%)
* Math: 6.1 (74.39%)
* Code: 16.2 (98.78%)
|
Pouria88/K
|
Pouria88
| 2024-02-07T20:40:49Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-07T20:40:49Z |
---
license: creativeml-openrail-m
---
|
AbhiKrov/mt5-small-english-to-hindi-akrov
|
AbhiKrov
| 2024-02-07T20:32:42Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-05T21:04:57Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5-small-english-to-hindi-akrov
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-english-to-hindi-akrov
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Bleu: 0.0
- Gen Len: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
| No log | 1.0 | 26 | nan | 0.0 | 0.0 |
| No log | 2.0 | 52 | nan | 0.0 | 0.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
DrishtiSharma/phi2-english-to-hinglish-translation-merged
|
DrishtiSharma
| 2024-02-07T20:25:55Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-07T20:25:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
devlocalhost/hi-tinylama-gguf-16bit
|
devlocalhost
| 2024-02-07T20:23:32Z | 41 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:quantized:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T20:21:54Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** devlocalhost
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Fukurokun/MemGPT-DPO-uncensored-6.0bpw-exl2
|
Fukurokun
| 2024-02-07T20:23:20Z | 5 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"MemGPT",
"function",
"function calling",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-06T13:59:25Z |
---
library_name: transformers
license: apache-2.0
language:
- en
tags:
- MemGPT
- function
- function calling
---
# MemGPT DPO uncensored 6.0bpw exl2
- Model creator: [Starlette!](https://huggingface.co/starsnatched)
- Original model: [MemGPT-DPO-uncensored](https://huggingface.co/starsnatched/MemGPT-DPO-uncensored)
This is a quantized, uncensored release of the DPO version of a language model, intended to be used with [MemGPT](https://github.com/cpacker/MemGPT).
# WARNING
This model is **UNCENSORED**. That means this model is highly compliant to any requests, even unethical and potentially dangerous ones. I do not take any responsibility whatsoever for any damage caused by the model in this repo.
# Model Description
This repository contains an uncensored, finetuned version of [Mistral 7B Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). The model is specifically designed to operate within MemGPT's function-calling environment, and it demonstrates performance comparable to GPT-4 when working with MemGPT.
# Key Features
* Function calling
* Dedicated to working with MemGPT
* Supports medium-length context, up to sequences of 8,192 tokens
# Prompt Format
This model uses **ChatML** prompt format:
```
<|im_start|>system
{system_instruction}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
```
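For illustration, here is a minimal sketch of assembling messages into this format (roles and contents are placeholders):
```python
# Minimal sketch: build a ChatML prompt from a list of messages (placeholder contents).
messages = [
    {"role": "system", "content": "You are MemGPT, a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
prompt = "".join(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages)
prompt += "<|im_start|>assistant\n"  # cue the assistant's turn
print(prompt)
```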
# Usage
This model is designed to be run on multiple backends, such as [oobabooga's text-generation WebUI](https://github.com/oobabooga/text-generation-webui).
Simply install your preferred backend, and then load up this model.
Then, configure MemGPT using `memgpt configure`, and chat with MemGPT via the `memgpt run` command!
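For example:
```
memgpt configure
memgpt run
```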
# Model Details
* Developed by: @starsnatched
* Model type: This repo contains a language model based on the transformer decoder architecture.
* Language: English
* Contact: For any questions, concerns or comments about this model, please contact me at Discord, @starsnatched.
# Training Infrastructure
* Hardware: The model in this repo was trained on 2x A100 80GB GPUs.
# Intended Use
The model is designed to be used as the base model for MemGPT agents.
# Limitations and Risks
The model may exhibit unreliable, unsafe, or biased behaviours. Please double check the results this model may produce.
|
Kowshik24/BanglaLM
|
Kowshik24
| 2024-02-07T20:19:20Z | 0 | 0 | null |
[
"text-generation",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-02-07T19:34:39Z |
---
license: apache-2.0
pipeline_tag: text-generation
---
# Bigram Language Model
## Overview
This repository contains a simple Bigram Language Model implemented in PyTorch. The model is trained to predict the next character in a sequence, given the current character. It's a character-level model and can be used for tasks like text generation.
## Model Details
- **Model Type**: Character-level Language Model
- **Architecture**: Simple lookup table for character bigrams
- **Training Data**: [csebuetnlp/xlsum (Bengali)](https://huggingface.co/datasets/csebuetnlp/xlsum/viewer/bengali)
## Requirements
- Python 3.x
- PyTorch
- JSON (for loading the tokenizer)
## Installation
First, clone this repository:
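For example (assuming this repository's id on the Hub; large weight files require `git-lfs`):
```
git clone https://huggingface.co/Kowshik24/BanglaLM
cd BanglaLM
```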
## Loading the Model
To load the model, you need to initialize it with the vocabulary size and load the pre-trained weights:
```python
import torch
from model import BigramLanguageModel
vocab_size = 225
model = BigramLanguageModel(vocab_size)
model.load_state_dict(torch.load('path_to_your_model.pth', map_location=torch.device('cpu')))
model.eval()
import json
with open('tokenizer_mappings.json', 'r', encoding='utf-8') as f:
mappings = json.load(f)
stoi = mappings['stoi']
itos = mappings['itos']
# Example usage
encode = lambda s: [stoi[c] for c in s]
decode = lambda l: ''.join([itos[i] for i in l])
context = torch.tensor([encode("Your initial text")], dtype=torch.long)
generated_text_indices = model.generate(context, max_new_tokens=100)
print(decode(generated_text_indices[0].tolist()))
```
|
devlocalhost/hi-tinylama
|
devlocalhost
| 2024-02-07T20:16:52Z | 10 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:finetune:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T20:15:09Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** devlocalhost
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Poliuszko/ppo-LunarLander-v21-1
|
Poliuszko
| 2024-02-07T20:03:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T17:16:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 275.40 +/- 22.27
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
llm-jp/llm-jp-13b-instruct-full-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1
|
llm-jp
| 2024-02-07T19:49:25Z | 151 | 2 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"en",
"ja",
"dataset:databricks/databricks-dolly-15k",
"dataset:llm-jp/databricks-dolly-15k-ja",
"dataset:llm-jp/oasst1-21k-en",
"dataset:llm-jp/oasst1-21k-ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-29T12:52:30Z |
---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
library_name: transformers
pipeline_tag: text-generation
inference: false
datasets:
- databricks/databricks-dolly-15k
- llm-jp/databricks-dolly-15k-ja
- llm-jp/oasst1-21k-en
- llm-jp/oasst1-21k-ja
---
# llm-jp-13b-instruct-full-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1
This repository provides large language models developed by [LLM-jp](https://llm-jp.nii.ac.jp/), a collaborative project launched in Japan.
| Model Variant |
| :--- |
|**Instruction models ver1.1**|
| [llm-jp-13b-dpo-lora-hh_rlhf_ja-v1.1](https://huggingface.co/llm-jp/llm-jp-13b-dpo-lora-hh_rlhf_ja-v1.1)|
| [llm-jp-13b-instruct-full-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1) |
| [llm-jp-13b-instruct-lora-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1](https://huggingface.co/llm-jp/llm-jp-13b-instruct-lora-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1) |
|**Instruction models ver1.0**|
| [llm-jp-13b-instruct-full-jaster-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-jaster-v1.0) |
| [llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0) |
| [llm-jp-13b-instruct-full-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-dolly-oasst-v1.0) |
| [llm-jp-13b-instruct-lora-jaster-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-lora-jaster-v1.0) |
| [llm-jp-13b-instruct-lora-jaster-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-lora-jaster-dolly-oasst-v1.0) |
| [llm-jp-13b-instruct-lora-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-lora-dolly-oasst-v1.0) |
| |
| :--- |
|**Pre-trained models**|
| [llm-jp-13b-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-v1.0) |
| [llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) |
Checkpoints format: Hugging Face Transformers (Megatron-DeepSpeed format models are available [here](https://huggingface.co/llm-jp/llm-jp-13b-v1.0-mdsfmt))
## Required Libraries and Their Versions
- torch>=2.0.0
- transformers>=4.34.0
- tokenizers>=0.14.0
- accelerate==0.23.0
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-instruct-full-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-13b-instruct-full-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1", device_map="auto", torch_dtype=torch.float16)
text = "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。\n\n### 指示:\n{instruction}\n\n### 応答:\n".format(instruction="自然言語処理とは何か")
tokenized_input = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt").to(model.device)
with torch.no_grad():
output = model.generate(
tokenized_input,
max_new_tokens=512,
do_sample=True,
top_p=0.95,
temperature=0.7,
repetition_penalty=1.1,
)[0]
print(tokenizer.decode(output))
```
## Model Details
- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 300B
|Model|Params|Layers|Hidden size|Heads|Context length|
|:---:|:---:|:---:|:---:|:---:|:---:|
|13b model|13b|40|5120|40|2048|
|1.3b model|1.3b|24|2048|16|2048|
## Training
- **Pre-training:**
- **Hardware:** 96 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/))
- **Software:** Megatron-DeepSpeed
- **Instruction tuning:**
- **Hardware:** 8 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/))
- **Software:** [TRL](https://github.com/huggingface/trl), [PEFT](https://github.com/huggingface/peft), and [DeepSpeed](https://github.com/microsoft/DeepSpeed)
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v2.1 (50k)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.1).
Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure.
- **Model:** Hugging Face Fast Tokenizer using Unigram byte-fallback model which requires `tokenizers>=0.14.0`
- **Training algorithm:** SentencePiece Unigram byte-fallback
- **Training data:** A subset of the datasets for model pre-training
- **Vocabulary size:** 50,570 (mixed vocabulary of Japanese, English, and source code)
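As a quick sanity check, the tokenizer can be loaded on its own (a minimal sketch; the output is illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "llm-jp/llm-jp-13b-instruct-full-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1"
)
# The mixed Japanese/English/code vocabulary tokenizes Japanese text directly.
print(tokenizer.tokenize("自然言語処理とは何か"))
```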
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---:|:---:|:---:|
|Japanese|[Wikipedia](https://huggingface.co/datasets/wikipedia)|1.5B
||[mC4](https://huggingface.co/datasets/mc4)|136B
|English|[Wikipedia](https://huggingface.co/datasets/wikipedia)|5B
||[The Pile](https://huggingface.co/datasets/EleutherAI/pile)|135B
|Codes|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|10B
The pre-training was continuously conducted using a total of 10 folds of non-overlapping data, each consisting of approximately 27-28B tokens.
We finalized the pre-training with an additional 27B tokens of (potentially) high-quality data obtained from the same source datasets listed above for the 10-fold data.
### Instruction tuning
The models have been fine-tuned on the following datasets.
| Language | Dataset | description |
|:---|:---:|:---:|
|Japanese|[jaster](https://github.com/llm-jp/llm-jp-eval)| An automatically transformed data from the existing Japanese NLP datasets |
|English|[databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)| - |
|Japanese|[databricks-dolly-15k-ja](https://huggingface.co/datasets/llm-jp/databricks-dolly-15k-ja)| Translated by LLM-jp using DeepL |
|English|[oasst1-21k-en](https://huggingface.co/datasets/llm-jp/oasst1-21k-en)| English subset of [oasst1 dataset](https://huggingface.co/datasets/OpenAssistant/oasst1) |
|Japanese|[oasst1-21k-ja](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja)| Translated by LLM-jp using DeepL |
|Japanese|[ichikara_003_001](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/)| ichikara-instruction dataset (ver.003-001) |
|Japanese|[hh-rlhf-12k-ja](https://huggingface.co/datasets/llm-jp/hh-rlhf-12k-ja)| Translated by LLM-jp using DeepL |
## Evaluation
You can view the evaluation results of several LLMs on this [leaderboard](http://wandb.me/llm-jp-leaderboard). We used [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) for the evaluation.
## Risks and Limitations
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Hirokazu Kiyomaru, Hiroshi Matsuda, Jun Suzuki, Namgi Han, Saku Sugawara, Shota Sasaki, Shuhei Kurita, Taishi Nakamura, Takashi Kodama, Takumi Okamoto.
|
kviai/Kvi-Upscale-V1
|
kviai
| 2024-02-07T19:46:31Z | 0 | 6 |
diffusers
|
[
"diffusers",
"Image Upscaling",
"Img2Img",
"image-to-image",
"en",
"license:cc-by-4.0",
"region:us"
] |
image-to-image
| 2024-01-17T18:09:41Z |
---
license: cc-by-4.0
language:
- en
library_name: diffusers
pipeline_tag: image-to-image
tags:
- Image Upscaling
- Img2Img
---
### Image Upscaling Model
This repository contains the PyTorch model for upscaling images. The model has been trained to upscale low-resolution images to higher resolution using convolutional neural networks.
## Model Details
- Model Name: Kvi-Upscale
- Author: KviAI
- License: Creative Commons Attribution 4.0
## Instructions
To use this model for upscaling, please follow the instructions in the accompanying Python script.
|
jashanno/ppo-LunarLander-v2
|
jashanno
| 2024-02-07T19:42:38Z | 0 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T19:42:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.67 +/- 16.27
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
prarthana878/my-pet-dog
|
prarthana878
| 2024-02-07T19:35:10Z | 1 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-07T19:30:41Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My--Pet-Dog Dreambooth model trained by prarthana878 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 4jk21cs044
Sample pictures of this concept:
*(Image previews are not reproduced here; see the image files in this repository.)*
|
Caraaaaa/text_image_captioning
|
Caraaaaa
| 2024-02-07T19:31:48Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"git",
"image-text-to-text",
"image-to-text",
"dataset:Caraaaaa/non_text_image_captioning",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-12-24T13:48:31Z |
---
datasets:
- Caraaaaa/non_text_image_captioning
pipeline_tag: image-to-text
---
This is a [GenerativeImage2Text](https://huggingface.co/microsoft/git-base) model finetuned on [non-text images](https://huggingface.co/datasets/Caraaaaa/non_text_image_captioning) extracted from documents (e.g., PDF). It analyzes the content of an image and produces a descriptive caption.
It is part of a [project](https://github.com/caraaaaa/doc_accessibility?tab=readme-ov-file) to build a software solution capable of processing offline documents (PDF, Word, PowerPoint, etc.) to detect WCAG accessibility issues.
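A minimal usage sketch with the `transformers` pipeline (the image path is a placeholder):
```python
from transformers import pipeline

# Image-to-text pipeline for the finetuned GIT captioner.
captioner = pipeline("image-to-text", model="Caraaaaa/text_image_captioning")
print(captioner("figure_from_document.png"))  # placeholder image path
```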
Example document with non-text images:

Extracted Image:

Generated caption:
"Indication of correct signature"
|
maviced/intel-image-classification
|
maviced
| 2024-02-07T19:27:14Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-02-07T19:27:09Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
arryuann/medical-text-ft
|
arryuann
| 2024-02-07T19:24:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T19:21:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MichalGas/vit-base-patch16-224-in21k-finetuned-mgasior-07-02-2024
|
MichalGas
| 2024-02-07T19:03:30Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-07T17:22:14Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- f1
model-index:
- name: vit-base-patch16-224-in21k-finetuned-mgasior-07-02-2024
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: F1
type: f1
value: 0.7716535433070866
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-mgasior-07-02-2024
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8842
- F1: 0.7717
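For reference, a minimal inference sketch (standard ViT image-classification usage; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="MichalGas/vit-base-patch16-224-in21k-finetuned-mgasior-07-02-2024",
)
print(classifier("example.jpg"))  # placeholder image path
```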
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.731 | 0.98 | 35 | 1.6748 | 0.3386 |
| 1.5196 | 1.99 | 71 | 1.4890 | 0.4173 |
| 1.3727 | 2.99 | 107 | 1.2938 | 0.5276 |
| 1.2194 | 4.0 | 143 | 1.1519 | 0.6457 |
| 1.1538 | 4.98 | 178 | 1.0544 | 0.6693 |
| 1.0379 | 5.99 | 214 | 0.9852 | 0.7165 |
| 1.0232 | 6.99 | 250 | 0.9439 | 0.7323 |
| 0.9586 | 8.0 | 286 | 0.9136 | 0.7480 |
| 0.9374 | 8.98 | 321 | 0.8946 | 0.7638 |
| 0.96 | 9.79 | 350 | 0.8842 | 0.7717 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
shapermindai/pygmalion-free
|
shapermindai
| 2024-02-07T18:43:30Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-generation",
"text generation",
"conversational",
"en",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T13:28:02Z |
---
license: agpl-3.0
language:
- en
thumbnail: null
tags:
- text generation
- conversational
inference: true
pipeline_tag: conversational
---
# Pygmalion 1.3B
## Model description
Pygmalion 1.3B is a proof-of-concept dialogue model based on EleutherAI's [pythia-1.3b-deduped](https://huggingface.co/EleutherAI/pythia-1.3b-deduped).
**Warning:** This model is **NOT** suitable for use by minors. It **will** output X-rated content under certain circumstances.
## Training data
The fine-tuning dataset consisted of 56MB of dialogue data gathered from multiple sources, which includes both real _and_ partially machine-generated conversations.
## Training procedure
Fine-tuning was done using [ColossalAI](https://github.com/hpcaitech/ColossalAI) (specifically, with a slightly modified version of their [OPT fine-tune example](https://github.com/hpcaitech/ColossalAI/blob/78509124d32b63b7fc36f6508e0576a326d51422/examples/language/opt/run_clm.py)) for around 11.4 million tokens over 5440 steps on a single 24GB GPU. The run took just under 21 hours.
## Intended use
### The easy way
We provide a notebook with a Gradio UI for playing around with the model without having to manually format inputs. This notebook can be found [here](https://github.com/PygmalionAI/gradio-ui/blob/master/notebooks/GPU.ipynb).
### The manual way
The model can be used as a regular text generation model, but it'll perform best if the input prompt adheres to the following format:
```
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
[DIALOGUE HISTORY]
You: [Your input message here]
[CHARACTER]:
```
Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, and `[DIALOGUE HISTORY]` is chat history so the model can have some conversational context to draw from. Ideally it'll be pairs of messages like:
```
[CHARACTER]: [some dialogue here]
You: [your response to the dialogue above]
```
Apart from chat history, you can also just add example conversations in `[DIALOGUE HISTORY]` to show how the character should speak - ideally at the beginning, so it doesn't get confused as to what's conversation history vs. character definition.
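A minimal sketch of assembling such a prompt in Python (the character, persona, and messages are placeholders):
```python
# Build a Pygmalion-style prompt from a persona and chat history (placeholder values).
character = "Alice"
persona = "Alice is a cheerful adventurer who loves riddles."
history = [
    (character, "Hi there! Fancy a riddle?"),
    ("You", "Sure, go ahead."),
]
prompt = f"{character}'s Persona: {persona}\n"
prompt += "".join(f"{speaker}: {line}\n" for speaker, line in history)
prompt += f"You: What has keys but can't open locks?\n{character}:"
print(prompt)
```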
## Known issues
- The model can get stuck repeating certain phrases, or sometimes even entire sentences.
- We believe this is due to that behavior being present in the training data itself, and plan to investigate and adjust accordingly for future versions.
|
jlbaker361/dcgan-cond-wikiart1000-clip-resized
|
jlbaker361
| 2024-02-07T18:38:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-02-01T04:06:55Z |
---
{}
---
Creative Adversarial Network
- epochs: 200
- dataset: jlbaker361/wikiart-balanced1000
- n classes: 27
- batch_size: 128
- images were resized to 768 and then center-cropped to 512
- clip=True
- conditional=True

Discriminator parameters:
- init_dim: 32
- final_dim: 512

Generator parameters:
- input noise_dim: 100
|
ryusangwon/bart-large-cnndm
|
ryusangwon
| 2024-02-07T18:30:26Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T12:34:59Z |
---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: cnn_dailymail_726_bart-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cnn_dailymail_726_bart-large
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8412
- Rouge1: 0.2469
- Rouge2: 0.1266
- Rougel: 0.2074
- Rougelsum: 0.2332
- Gen Len: 20.0
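A minimal summarization sketch (standard `transformers` pipeline usage; the input text is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ryusangwon/bart-large-cnndm")
article = "..."  # placeholder: a long news article
print(summarizer(article, max_length=60, min_length=20)[0]["summary_text"])
```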
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.9706 | 0.22 | 500 | 0.9015 | 0.237 | 0.1181 | 0.1979 | 0.2232 | 19.9999 |
| 0.9212 | 0.45 | 1000 | 0.8771 | 0.237 | 0.1193 | 0.199 | 0.2233 | 20.0 |
| 0.8991 | 0.67 | 1500 | 0.8572 | 0.2443 | 0.1238 | 0.2045 | 0.2304 | 20.0 |
| 0.9085 | 0.89 | 2000 | 0.8519 | 0.2404 | 0.1227 | 0.2022 | 0.2269 | 20.0 |
| 0.8494 | 1.11 | 2500 | 0.8471 | 0.2437 | 0.1233 | 0.2041 | 0.2298 | 20.0 |
| 0.832 | 1.34 | 3000 | 0.8400 | 0.2438 | 0.1248 | 0.2055 | 0.2301 | 20.0 |
| 0.8522 | 1.56 | 3500 | 0.8393 | 0.2417 | 0.1242 | 0.2043 | 0.2283 | 20.0 |
| 0.8494 | 1.78 | 4000 | 0.8338 | 0.2436 | 0.1239 | 0.2047 | 0.23 | 19.9999 |
| 0.7729 | 2.01 | 4500 | 0.8332 | 0.2431 | 0.1253 | 0.2048 | 0.2298 | 20.0 |
| 0.7761 | 2.23 | 5000 | 0.8323 | 0.2477 | 0.1264 | 0.207 | 0.2335 | 19.9994 |
| 0.7788 | 2.45 | 5500 | 0.8277 | 0.2473 | 0.1259 | 0.2068 | 0.2333 | 20.0 |
| 0.7832 | 2.67 | 6000 | 0.8251 | 0.2453 | 0.126 | 0.2061 | 0.2317 | 20.0 |
| 0.7888 | 2.9 | 6500 | 0.8239 | 0.242 | 0.1241 | 0.2037 | 0.2287 | 20.0 |
| 0.7413 | 3.12 | 7000 | 0.8360 | 0.2394 | 0.1228 | 0.2017 | 0.2258 | 20.0 |
| 0.7438 | 3.34 | 7500 | 0.8283 | 0.2462 | 0.1267 | 0.2072 | 0.2326 | 19.9999 |
| 0.7271 | 3.57 | 8000 | 0.8275 | 0.2406 | 0.1235 | 0.2028 | 0.2276 | 20.0 |
| 0.7435 | 3.79 | 8500 | 0.8221 | 0.2451 | 0.1254 | 0.2055 | 0.2311 | 19.9998 |
| 0.7072 | 4.01 | 9000 | 0.8277 | 0.2437 | 0.1251 | 0.2049 | 0.2301 | 19.9999 |
| 0.708 | 4.24 | 9500 | 0.8270 | 0.2465 | 0.1263 | 0.2067 | 0.2325 | 19.9999 |
| 0.7058 | 4.46 | 10000 | 0.8279 | 0.2424 | 0.1249 | 0.2045 | 0.229 | 19.9999 |
| 0.6918 | 4.68 | 10500 | 0.8248 | 0.246 | 0.1259 | 0.2063 | 0.232 | 19.9998 |
| 0.7121 | 4.9 | 11000 | 0.8231 | 0.2457 | 0.126 | 0.2058 | 0.232 | 19.9999 |
| 0.6667 | 5.13 | 11500 | 0.8297 | 0.2458 | 0.1262 | 0.2066 | 0.2323 | 19.9996 |
| 0.6767 | 5.35 | 12000 | 0.8309 | 0.2469 | 0.1269 | 0.2071 | 0.2332 | 19.9996 |
| 0.6961 | 5.57 | 12500 | 0.8299 | 0.247 | 0.1271 | 0.2074 | 0.2333 | 20.0 |
| 0.6842 | 5.8 | 13000 | 0.8333 | 0.2473 | 0.127 | 0.2077 | 0.2336 | 19.9996 |
| 0.6485 | 6.02 | 13500 | 0.8360 | 0.2454 | 0.1259 | 0.2061 | 0.2316 | 19.9998 |
| 0.6651 | 6.24 | 14000 | 0.8349 | 0.2454 | 0.126 | 0.2062 | 0.2314 | 20.0 |
| 0.6483 | 6.46 | 14500 | 0.8331 | 0.2454 | 0.1258 | 0.2058 | 0.2316 | 20.0 |
| 0.6626 | 6.69 | 15000 | 0.8309 | 0.2468 | 0.127 | 0.2069 | 0.2328 | 19.9996 |
| 0.6675 | 6.91 | 15500 | 0.8337 | 0.2448 | 0.1255 | 0.2056 | 0.231 | 19.9999 |
| 0.6479 | 7.13 | 16000 | 0.8387 | 0.2471 | 0.1267 | 0.2074 | 0.2333 | 19.9999 |
| 0.6506 | 7.36 | 16500 | 0.8377 | 0.2474 | 0.1264 | 0.2071 | 0.2335 | 19.9999 |
| 0.643 | 7.58 | 17000 | 0.8369 | 0.2454 | 0.1259 | 0.2059 | 0.2318 | 20.0 |
| 0.6262 | 7.8 | 17500 | 0.8378 | 0.2466 | 0.1269 | 0.2071 | 0.233 | 19.9997 |
| 0.6235 | 8.02 | 18000 | 0.8415 | 0.2458 | 0.1266 | 0.2065 | 0.2321 | 20.0 |
| 0.6081 | 8.25 | 18500 | 0.8421 | 0.2465 | 0.1267 | 0.2069 | 0.2326 | 19.9997 |
| 0.6257 | 8.47 | 19000 | 0.8409 | 0.2477 | 0.1267 | 0.2075 | 0.2337 | 19.9999 |
| 0.6187 | 8.69 | 19500 | 0.8381 | 0.2459 | 0.1264 | 0.2066 | 0.2321 | 19.9997 |
| 0.6178 | 8.92 | 20000 | 0.8384 | 0.248 | 0.1273 | 0.2079 | 0.2339 | 19.9996 |
| 0.6018 | 9.14 | 20500 | 0.8432 | 0.2468 | 0.1265 | 0.2071 | 0.2329 | 20.0 |
| 0.6235 | 9.36 | 21000 | 0.8418 | 0.2469 | 0.1265 | 0.207 | 0.233 | 20.0 |
| 0.606 | 9.58 | 21500 | 0.8418 | 0.2464 | 0.1264 | 0.207 | 0.2327 | 19.9999 |
| 0.6016 | 9.81 | 22000 | 0.8412 | 0.2469 | 0.1266 | 0.2074 | 0.2332 | 20.0 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
MaziyarPanahi/Smaug-72B-v0.1-GPTQ
|
MaziyarPanahi
| 2024-02-07T18:24:50Z | 17 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"finetuned",
"quantized",
"4-bit",
"gptq",
"base_model:moreh/MoMo-72B-lora-1.8.7-DPO",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space",
"base_model:abacusai/Smaug-72B-v0.1",
"base_model:finetune:abacusai/Smaug-72B-v0.1",
"license:apache-2.0"
] |
text-generation
| 2024-02-07T18:18:03Z |
---
license: apache-2.0
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- safetensors
- llama
- text-generation
- base_model:moreh/MoMo-72B-lora-1.8.7-DPO
- license:other
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- has_space
model_name: Smaug-72B-v0.1-GPTQ
base_model: abacusai/Smaug-72B-v0.1
inference: false
model_creator: abacusai
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Description
[MaziyarPanahi/Smaug-72B-v0.1-GPTQ](https://huggingface.co/MaziyarPanahi/Smaug-72B-v0.1-GPTQ) is a quantized (GPTQ) version of [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1)
## How to use
### Install the necessary packages
```
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch
model_id = "MaziyarPanahi/Smaug-72B-v0.1-GPTQ"
quantize_config = BaseQuantizeConfig(
bits=4,
group_size=128,
desc_act=False
)
model = AutoGPTQForCausalLM.from_quantized(
model_id,
use_safetensors=True,
device="cuda:0",
quantize_config=quantize_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.1
)
outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
```
|
macadeliccc/laser-dolphin-mixtral-2x7b-dpo-AWQ
|
macadeliccc
| 2024-02-07T18:23:05Z | 5 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-02-07T18:16:57Z |
---
license: cc
---
# Laser-dolphin-mixtral-2x7b-dpo-AWQ
The original model is listed here [macadeliccc/laser-dolphin-mixtral-2x7b-dpo](https://huggingface.co/macadeliccc/laser-dolphin-mixtral-2x7b-dpo)
## Quantizations
+ 4-bit
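A minimal loading sketch (assumes the `autoawq` package and a recent `transformers`, which loads AWQ checkpoints through the standard `from_pretrained` API):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "macadeliccc/laser-dolphin-mixtral-2x7b-dpo-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("What is a mixture-of-experts model?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```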
|
fazito25/Taxi-v3
|
fazito25
| 2024-02-07T18:18:59Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T18:18:57Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # needed for gym.make below

# `load_from_hub` is the download helper from the Hugging Face Deep RL course utilities.
model = load_from_hub(repo_id="fazito25/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
wt697075/java
|
wt697075
| 2024-02-07T18:18:48Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-02-07T18:18:48Z |
---
license: cc-by-nc-sa-4.0
---
|
turgutburak01/cartPole8
|
turgutburak01
| 2024-02-07T18:17:14Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T17:39:35Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: cartPole8
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
fazito25/q-FrozenLake-v1-4x4-noSlippery
|
fazito25
| 2024-02-07T18:14:46Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T18:14:43Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # needed for gym.make below

# `load_from_hub` is the download helper from the Hugging Face Deep RL course utilities.
model = load_from_hub(repo_id="fazito25/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Saini-Manisha/videomae-base-finetuned-kinetics-finetuned-ucf101-subset
|
Saini-Manisha
| 2024-02-07T18:11:47Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-02-07T16:35:16Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-kinetics-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-kinetics-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2309
- Accuracy: 0.9806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 148
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2587 | 0.13 | 19 | 1.2644 | 1.0 |
| 0.6711 | 1.13 | 38 | 0.2098 | 1.0 |
| 0.1355 | 2.13 | 57 | 0.0465 | 1.0 |
| 0.0295 | 3.13 | 76 | 0.0431 | 0.9857 |
| 0.0155 | 4.13 | 95 | 0.0226 | 1.0 |
| 0.0175 | 5.13 | 114 | 0.0178 | 1.0 |
| 0.0168 | 6.13 | 133 | 0.0180 | 1.0 |
| 0.008 | 7.1 | 148 | 0.0184 | 1.0 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.11.0
- Tokenizers 0.15.1
|
paulux84/autotrain-z58fs-z9tot
|
paulux84
| 2024-02-07T18:05:22Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"autotrain",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T16:21:47Z |
---
license: other
tags:
- autotrain
- text-generation
widget:
- text: 'I love AutoTrain because '
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
Statos6/dqn-SpaceInvadersNoFrameskip-v4
|
Statos6
| 2024-02-07T18:05:15Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T18:04:40Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 648.00 +/- 159.80
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Statos6 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Statos6 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Statos6
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
wyyadd/fork-detect-fake
|
wyyadd
| 2024-02-07T17:53:39Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"ResNet",
"image-classification",
"custom_code",
"base_model:aaronespasa/deepfake-detection-resnetinceptionv1",
"base_model:finetune:aaronespasa/deepfake-detection-resnetinceptionv1",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
image-classification
| 2024-02-07T17:31:03Z |
---
license: apache-2.0
base_model: aaronespasa/deepfake-detection-resnetinceptionv1
library_name: transformers
---
# Original model repo
📖 This is a customized version of the following model: [aaronespasa/deepfake-detection-resnetinceptionv1](https://huggingface.co/aaronespasa/deepfake-detection-resnetinceptionv1)
# How to use
```python
from transformers import pipeline
pipe = pipeline(model="not-lain/deepfake", trust_remote_code=True)
pipe.predict("img_path.jpg")
```
```python
>> {"confidences":confidences,"face_with_mask": face_with_mask}
```
# Dependencies
To install the related dependencies, simply use the command:
```
!wget https://huggingface.co/not-lain/deepfake/resolve/main/requirements.txt && pip install -r requirements.txt
```
|
rame/en_pipeline_ner_model_4
|
rame
| 2024-02-07T17:53:37Z | 1 | 0 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2024-02-07T17:53:07Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_pipeline_ner_model_4
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.7673501577
- name: NER Recall
type: recall
value: 0.7667454689
- name: NER F Score
type: f_score
value: 0.7670476941
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline_ner_model_4` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.7.2,<3.8.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (4 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `allergy_name`, `cancer`, `chronic_disease`, `treatment` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 76.70 |
| `ENTS_P` | 76.74 |
| `ENTS_R` | 76.67 |
| `TRANSFORMER_LOSS` | 655099.91 |
| `NER_LOSS` | 820705.40 |
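A minimal usage sketch (assumes the packaged pipeline has been installed so `spacy.load` can find it; the sentence is a placeholder):
```python
import spacy

nlp = spacy.load("en_pipeline_ner_model_4")
doc = nlp("Patient reports a penicillin allergy and chronic asthma.")
print([(ent.text, ent.label_) for ent in doc.ents])
```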
|
danaleee/CL_rank4_iter800_valprompt
|
danaleee
| 2024-02-07T17:52:51Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-02-07T16:20:41Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks teddybear
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - danaleee/CL_rank4_iter800_valprompt
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks teddybear using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
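A minimal inference sketch with `diffusers` (standard LoRA loading on the base model; details may differ):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("danaleee/CL_rank4_iter800_valprompt")  # this repository's LoRA weights
image = pipe("a photo of sks teddybear").images[0]
image.save("teddybear.png")
```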
|
islasher/intel-image-classification
|
islasher
| 2024-02-07T17:51:17Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-02-07T17:51:13Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
0xJCarlos/QuestionAnswer_ESP
|
0xJCarlos
| 2024-02-07T17:50:51Z | 14 | 1 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa",
"base_model:finetune:dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-11-23T17:51:49Z |
---
base_model: dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa
tags:
- generated_from_keras_callback
model-index:
- name: 0xJCarlos/QuestionAnswer_ESP
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# 0xJCarlos/QuestionAnswer_ESP
This model is a fine-tuned version of [dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa](https://huggingface.co/dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3146
- Validation Loss: 1.6961
- Epoch: 4
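A minimal question-answering sketch (standard pipeline usage; this is a TensorFlow checkpoint, so TensorFlow must be installed; the question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="0xJCarlos/QuestionAnswer_ESP")
print(qa(
    question="¿Quién escribió Don Quijote?",
    context="Miguel de Cervantes escribió Don Quijote de la Mancha.",
))
```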
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.9292 | 1.7179 | 0 |
| 1.4487 | 1.6961 | 1 |
| 1.3231 | 1.6961 | 2 |
| 1.3165 | 1.6961 | 3 |
| 1.3146 | 1.6961 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
POLYQ/mixtral-nek-finetune_0.3_all_data_4_lines
|
POLYQ
| 2024-02-07T17:43:18Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T17:40:12Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model-index:
- name: mixtral-nek-finetune_0.3_all_data_4_lines
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mixtral-nek-finetune_0.3_all_data_4_lines
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8051
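A minimal sketch of loading the adapter with PEFT (standard `PeftModel` usage on the base model):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1", device_map="auto"
)
model = PeftModel.from_pretrained(base, "POLYQ/mixtral-nek-finetune_0.3_all_data_4_lines")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
```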
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8456 | 0.09 | 1000 | 0.8573 |
| 0.838 | 0.18 | 2000 | 0.8426 |
| 0.8373 | 0.27 | 3000 | 0.8341 |
| 0.8168 | 0.36 | 4000 | 0.8274 |
| 0.8163 | 0.44 | 5000 | 0.8222 |
| 0.8079 | 0.53 | 6000 | 0.8181 |
| 0.8089 | 0.62 | 7000 | 0.8140 |
| 0.8119 | 0.71 | 8000 | 0.8108 |
| 0.8007 | 0.8 | 9000 | 0.8082 |
| 0.809 | 0.89 | 10000 | 0.8062 |
| 0.8084 | 0.98 | 11000 | 0.8051 |
### Framework versions
- PEFT 0.8.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
crrodrvi/Practica1
|
crrodrvi
| 2024-02-07T17:40:34Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-02-07T17:40:29Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
atlaspilotpuppy/Mistral-7B-Instruct-v0.2-atc
|
atlaspilotpuppy
| 2024-02-07T17:38:38Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T17:38:29Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: Mistral-7B-Instruct-v0.2-atc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2-atc
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.13 | 0.04 | 100 | 0.1517 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
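To run the adapter for inference, something like the following should work (a minimal sketch assuming sufficient GPU memory; the prompt is a placeholder):
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base Mistral-7B-Instruct-v0.2 weights plus this LoRA adapter in one call
model = AutoPeftModelForCausalLM.from_pretrained(
    "atlaspilotpuppy/Mistral-7B-Instruct-v0.2-atc",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

inputs = tokenizer("[INST] hi [/INST]", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```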
|
Hitomiblood/intel-image-classification
|
Hitomiblood
| 2024-02-07T17:38:00Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-02-07T17:37:52Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
valintea/primer-modelo
|
valintea
| 2024-02-07T17:30:57Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-02-07T17:30:54Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
manibt1993/huner_disease
|
manibt1993
| 2024-02-07T17:25:03Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:transformer_dataset_ner_kaggle",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-07T04:59:17Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- transformer_dataset_ner_kaggle
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: huner_disease
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: transformer_dataset_ner_kaggle
type: transformer_dataset_ner_kaggle
config: ncbi_disease
split: validation
args: ncbi_disease
metrics:
- name: Precision
type: precision
value: 0.7905582615211689
- name: Recall
type: recall
value: 0.8222915042868277
- name: F1
type: f1
value: 0.8061127029608404
- name: Accuracy
type: accuracy
value: 0.9795934778779362
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# huner_disease
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the transformer_dataset_ner_kaggle dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2260
- Precision: 0.7906
- Recall: 0.8223
- F1: 0.8061
- Accuracy: 0.9796
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0651 | 1.0 | 1834 | 0.0703 | 0.6823 | 0.7880 | 0.7314 | 0.9767 |
| 0.0459 | 2.0 | 3668 | 0.0712 | 0.7470 | 0.7617 | 0.7543 | 0.9781 |
| 0.03 | 3.0 | 5502 | 0.0903 | 0.7278 | 0.8137 | 0.7684 | 0.9779 |
| 0.0177 | 4.0 | 7336 | 0.0915 | 0.7529 | 0.8055 | 0.7783 | 0.9791 |
| 0.0139 | 5.0 | 9170 | 0.1088 | 0.7346 | 0.8207 | 0.7753 | 0.9777 |
| 0.01 | 6.0 | 11004 | 0.1196 | 0.7283 | 0.8207 | 0.7718 | 0.9772 |
| 0.007 | 7.0 | 12838 | 0.1175 | 0.7615 | 0.7938 | 0.7773 | 0.9787 |
| 0.0055 | 8.0 | 14672 | 0.1488 | 0.7452 | 0.8237 | 0.7825 | 0.9783 |
| 0.0049 | 9.0 | 16506 | 0.1351 | 0.7704 | 0.8125 | 0.7909 | 0.9795 |
| 0.0042 | 10.0 | 18340 | 0.1617 | 0.7491 | 0.8184 | 0.7822 | 0.9782 |
| 0.0035 | 11.0 | 20174 | 0.1453 | 0.7557 | 0.8009 | 0.7776 | 0.9785 |
| 0.0036 | 12.0 | 22008 | 0.1662 | 0.7554 | 0.8198 | 0.7863 | 0.9777 |
| 0.0027 | 13.0 | 23842 | 0.1621 | 0.7781 | 0.8075 | 0.7925 | 0.9790 |
| 0.0027 | 14.0 | 25676 | 0.1599 | 0.7519 | 0.8110 | 0.7804 | 0.9776 |
| 0.0027 | 15.0 | 27510 | 0.1633 | 0.7710 | 0.8127 | 0.7913 | 0.9785 |
| 0.0027 | 16.0 | 29344 | 0.1674 | 0.7588 | 0.8129 | 0.7849 | 0.9780 |
| 0.0022 | 17.0 | 31178 | 0.1670 | 0.7652 | 0.8168 | 0.7902 | 0.9781 |
| 0.0021 | 18.0 | 33012 | 0.1586 | 0.7734 | 0.8159 | 0.7940 | 0.9790 |
| 0.002 | 19.0 | 34846 | 0.1650 | 0.7787 | 0.8172 | 0.7975 | 0.9795 |
| 0.0018 | 20.0 | 36680 | 0.1642 | 0.7697 | 0.8048 | 0.7868 | 0.9793 |
| 0.0017 | 21.0 | 38514 | 0.1874 | 0.7743 | 0.8176 | 0.7954 | 0.9784 |
| 0.0015 | 22.0 | 40348 | 0.1598 | 0.7647 | 0.8227 | 0.7926 | 0.9785 |
| 0.0012 | 23.0 | 42182 | 0.1819 | 0.7958 | 0.7997 | 0.7977 | 0.9793 |
| 0.0016 | 24.0 | 44016 | 0.1679 | 0.7960 | 0.8073 | 0.8016 | 0.9794 |
| 0.0013 | 25.0 | 45850 | 0.1659 | 0.7662 | 0.8147 | 0.7897 | 0.9785 |
| 0.001 | 26.0 | 47684 | 0.1774 | 0.7732 | 0.8217 | 0.7967 | 0.9789 |
| 0.0016 | 27.0 | 49518 | 0.1622 | 0.7767 | 0.8131 | 0.7945 | 0.9789 |
| 0.0007 | 28.0 | 51352 | 0.1958 | 0.7642 | 0.8223 | 0.7922 | 0.9783 |
| 0.0009 | 29.0 | 53186 | 0.1861 | 0.7764 | 0.8223 | 0.7987 | 0.9790 |
| 0.0012 | 30.0 | 55020 | 0.1917 | 0.7528 | 0.8252 | 0.7873 | 0.9774 |
| 0.0005 | 31.0 | 56854 | 0.1952 | 0.7833 | 0.8106 | 0.7967 | 0.9792 |
| 0.0009 | 32.0 | 58688 | 0.1910 | 0.7801 | 0.8149 | 0.7971 | 0.9791 |
| 0.0008 | 33.0 | 60522 | 0.1931 | 0.7737 | 0.8180 | 0.7952 | 0.9790 |
| 0.0006 | 34.0 | 62356 | 0.1902 | 0.7730 | 0.8176 | 0.7947 | 0.9788 |
| 0.0008 | 35.0 | 64190 | 0.1904 | 0.7799 | 0.8211 | 0.8 | 0.9791 |
| 0.0006 | 36.0 | 66024 | 0.1951 | 0.7844 | 0.8153 | 0.7995 | 0.9795 |
| 0.0008 | 37.0 | 67858 | 0.1943 | 0.7749 | 0.8256 | 0.7994 | 0.9791 |
| 0.0007 | 38.0 | 69692 | 0.2051 | 0.7796 | 0.8248 | 0.8016 | 0.9791 |
| 0.0004 | 39.0 | 71526 | 0.2108 | 0.7796 | 0.8223 | 0.8004 | 0.9792 |
| 0.0004 | 40.0 | 73360 | 0.2135 | 0.7788 | 0.8254 | 0.8014 | 0.9792 |
| 0.0004 | 41.0 | 75194 | 0.2028 | 0.7908 | 0.8176 | 0.8040 | 0.9798 |
| 0.0006 | 42.0 | 77028 | 0.2058 | 0.7855 | 0.8215 | 0.8031 | 0.9796 |
| 0.0005 | 43.0 | 78862 | 0.2109 | 0.7860 | 0.8254 | 0.8052 | 0.9793 |
| 0.0004 | 44.0 | 80696 | 0.2175 | 0.7784 | 0.8287 | 0.8028 | 0.9791 |
| 0.0003 | 45.0 | 82530 | 0.2206 | 0.7904 | 0.8223 | 0.8060 | 0.9795 |
| 0.0003 | 46.0 | 84364 | 0.2198 | 0.7942 | 0.8180 | 0.8059 | 0.9797 |
| 0.0004 | 47.0 | 86198 | 0.2265 | 0.7791 | 0.8233 | 0.8006 | 0.9791 |
| 0.0003 | 48.0 | 88032 | 0.2265 | 0.7825 | 0.8242 | 0.8028 | 0.9793 |
| 0.0004 | 49.0 | 89866 | 0.2260 | 0.7892 | 0.8209 | 0.8048 | 0.9794 |
| 0.0003 | 50.0 | 91700 | 0.2260 | 0.7906 | 0.8223 | 0.8061 | 0.9796 |
# Run the model
```python
from transformers import pipeline

model_checkpoint = "manibt1993/huner_disease"
# Aggregate word-piece predictions into whole-entity spans
token_classifier = pipeline(
    "token-classification", model=model_checkpoint, aggregation_strategy="simple"
)
# Sample clinical note (the typo "diabtes" is kept so the output below matches)
token_classifier("patient has diabtes, anemia, hypertension with ckd which hurts the patient since 6 years. Patient today experience with right leg pain, fever and cough.")
```
### Model output
```python
[{'entity_group': 'Disease',
'score': 0.69145554,
'word': 'diabtes',
'start': 12,
'end': 19},
{'entity_group': 'Disease',
'score': 0.9955915,
'word': 'anemia',
'start': 21,
'end': 27},
{'entity_group': 'Disease',
'score': 0.99971104,
'word': 'hypertension',
'start': 29,
'end': 41},
{'entity_group': 'Disease',
'score': 0.9249976,
'word': 'right leg pain',
'start': 120,
'end': 134},
{'entity_group': 'Disease',
'score': 0.9983512,
'word': 'fever',
'start': 136,
'end': 141},
{'entity_group': 'Disease',
'score': 0.99849665,
'word': 'cough',
'start': 146,
'end': 151}]
```
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Tommidi/spatio_temporal_vit-finetuned-ucf101-subset
|
Tommidi
| 2024-02-07T17:24:01Z | 18 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"st_vit",
"generated_from_trainer",
"base_model:Tommidi/st_vit_untrained",
"base_model:finetune:Tommidi/st_vit_untrained",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T16:39:37Z |
---
base_model: Tommidi/st_vit_untrained
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: spatio_temporal_vit-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spatio_temporal_vit-finetuned-ucf101-subset
This model is a fine-tuned version of [Tommidi/st_vit_untrained](https://huggingface.co/Tommidi/st_vit_untrained) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1244
- Accuracy: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 37
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6013 | 1.0 | 37 | 0.1244 | 0.9 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
angela1996/intel-image-classification
|
angela1996
| 2024-02-07T17:21:06Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-02-07T17:21:03Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
manche/gpt2-safeguard-sg1
|
manche
| 2024-02-07T17:19:02Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T17:18:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MiVaCod/intel-image-classification
|
MiVaCod
| 2024-02-07T17:15:39Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-02-07T17:15:35Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
waldie/Etheria-55b-v0.1-2.5bpw-h6-exl2
|
waldie
| 2024-02-07T16:58:38Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"Etheria",
"arxiv:2311.03099",
"arxiv:2306.01708",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T16:09:57Z |
---
base_model: []
tags:
- mergekit
- Etheria
license: apache-2.0
---
# Steelskull/Etheria-55b-v0.1

## Merge Details
An attempt to make a functional Goliath-style merge to create an [Etheria] 55b-200k from two yi-34b-200k models.
Due to the merge it 'theoretically' should have a context of 200k, but I recommend starting at 32k and moving up,
as it is unknown (at this time) what the merge has done to the context length.
This is a merge of both VerA and VerB of Etheria-55b (their numbers were surprisingly good). I then created a sacrificial 55B out of the most performant yi-34b-200k model
and performed a Dare_ties merge to equalize the model into its current state.
### Recommended settings and Prompt Format:
I've tested it up to 32k context using exl2 with these settings:
```
"temp": 0.7,
"temperature_last": true,
"top_p": 1,
"top_k": 0,
"top_a": 0,
"tfs": 1,
"epsilon_cutoff": 0,
"eta_cutoff": 0,
"typical_p": 1,
"min_p": 0.1,
"rep_pen": 1.1,
"rep_pen_range": 8192,
"no_repeat_ngram_size": 0,
"penalty_alpha": 0,
"num_beams": 1,
"length_penalty": 1,
"min_length": 0,
"encoder_rep_pen": 1,
"freq_pen": 0,
"presence_pen": 0,
"do_sample": true,
"early_stopping": false,
"add_bos_token": false,
"truncation_length": 2048,
"ban_eos_token": true,
"skip_special_tokens": true,
"streaming": true,
"mirostat_mode": 0,
"mirostat_tau": 5,
"mirostat_eta": 0.1,
```
Prompt formats that work well:
```
ChatML & Alpaca
```
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using Merged-Etheria-55b as a base.
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: Merged-Etheria-55b
models:
- model: Sacr-Etheria-55b
parameters:
weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113]
density: 0.61
- model: Merged-Etheria-55b
parameters:
weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113]
density: 0.61
merge_method: dare_ties
tokenizer_source: union
parameters:
int8_mask: true
dtype: bfloat16
```
|
interrobang/OpenHermes-2.5-Mistral-7B-GGUF-f16
|
interrobang
| 2024-02-07T16:56:00Z | 22 | 1 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-07T16:03:14Z |
---
license: apache-2.0
---
OpenHermes-2.5-Mistral-7B by teknium, converted to f16 GGUF for easier tinkering;
original model at https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B
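A minimal way to try the f16 file with llama-cpp-python (a sketch; the local gguf filename is an assumption):
```python
from llama_cpp import Llama

# Path to the downloaded f16 gguf file (filename is an assumption)
llm = Llama(model_path="openhermes-2.5-mistral-7b-f16.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "hi"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```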
|
theminji/TinyAITA
|
theminji
| 2024-02-07T16:52:14Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T05:03:39Z |
---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: TinyAITA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TinyAITA
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the None dataset.
## Model description
```py
import torch
from transformers import pipeline, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("TheBossLevel123/TinyAITA")
pipe = pipeline("text-generation", model="TheBossLevel123/TinyAITA", torch_dtype=torch.bfloat16, device_map="auto")
# Stream tokens to stdout as they are generated
streamer = TextStreamer(tokenizer)
```
```py
prompt = 'AITA for XYZ?'
# add_special_tokens=False so only the <|im_end|> token id is used as a stop token
outputs = pipe(prompt, max_new_tokens=1024, do_sample=True, temperature=0.9, streamer=streamer, eos_token_id=tokenizer.encode("<|im_end|>", add_special_tokens=False))
if outputs and "generated_text" in outputs[0]:
    text = outputs[0]["generated_text"]
    print(f"Prompt: {prompt}")
    print()
    print(text)
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
jomacgo/tfm_bert_qa_tf_spanish_model
|
jomacgo
| 2024-02-07T16:47:08Z | 48 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:dccuchile/distilbert-base-spanish-uncased",
"base_model:finetune:dccuchile/distilbert-base-spanish-uncased",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-02-06T16:34:35Z |
---
base_model: dccuchile/distilbert-base-spanish-uncased
tags:
- generated_from_keras_callback
model-index:
- name: jomacgo/tfm_bert_qa_tf_spanish_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jomacgo/tfm_bert_qa_tf_spanish_model
This model is a fine-tuned version of [dccuchile/distilbert-base-spanish-uncased](https://huggingface.co/dccuchile/distilbert-base-spanish-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3719
- Validation Loss: 1.3237
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 310, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.1953 | 1.9776 | 0 |
| 1.7034 | 1.3237 | 1 |
| 1.3719 | 1.3237 | 2 |
### Framework versions
- Transformers 4.37.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
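A quick way to try the checkpoint for extractive QA (a sketch; `framework="tf"` is passed because the repo ships TensorFlow weights, and the question/context are made up):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="jomacgo/tfm_bert_qa_tf_spanish_model",
    framework="tf",  # the repo ships TF weights
)
# Made-up Spanish example
result = qa(
    question="¿Dónde vive el autor?",
    context="El autor vive en Madrid desde 2010.",
)
print(result["answer"], result["score"])
```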
|
StorkelOpa/ancient-world
|
StorkelOpa
| 2024-02-07T16:43:31Z | 2 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-02-07T16:43:00Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: ancient world painting of Earth's Early Landscape, Showcasing Towering Mountains,
Deep Valleys, and Volcanic Activity, Circa 4.5 Billion Years Ago.
output:
url: image-0.png
- text: ancient world painting of Earth's Early Ocean Floor, Alive with Primitive
Plant Life Amidst Volcanic Rock Formations, Circa 3.5 Billion Years Ago.
output:
url: image-1.png
- text: ancient world painting of Cambrian Marine Life, Featuring Trilobites and Jellyfish
Amidst Ocean Flora.
output:
url: image-2.png
- text: ancient world painting of the Cambrian Seabed, Featuring the Trilobites Paradoxides
gracilis, Comocoryphe sulzeri, and Ptychoparia striata, with the Stalked Echinoderm
Acadocrinus jani and the Algae Dalya, Set Against a Backdrop of Jellyfish in the
Open Water.
output:
url: image-3.png
- text: ancient world painting of Upper Silurian Marine Life, with Predatory Nautiloids
and Sea Lilies in a Coral Seabed Landscape.
output:
url: image-4.png
- text: ancient world painting of the Late Silurian Period, Depicting the First Land
Plant Invasion with Primitive Psilophytes Colonizing Coastal Floodplains and Marshes.
output:
url: image-5.png
- text: ancient world painting of Middle Devonian Flora, Featuring True Horsetails,
Clubmosses, and Ferns Amidst a Primitive Landscape with Waterfalls and Rocky Terrain.
output:
url: image-6.png
- text: ancient world painting
output:
url: image-7.png
- text: ancient world painting of Early Devonian Aquatic Life, Depicting Osteolepis
Attacking Heterostracan Armored Fish with Primitive Plants in the Foreground.
output:
url: image-8.png
- text: ancient world painting of Devonian Aquatic Ecosystem, Illustrating Armored
Placoderms Like Pterichthyodes and Bothrialepis Navigating the Ocean Floor.
output:
url: image-9.png
- text: ancient world painting of Devonian Sea Life, Showcasing the Arthrodira Placoderms
in a Dynamic Underwater Scene.
output:
url: image-10.png
- text: ancient world painting of Silurian to Devonian Freshwater Fish, Depicting
the Primitive Acanthodii Group with Climatius, Euthacanthus, and Parexus.
output:
url: image-11.png
- text: ancient world painting of Late Devonian Landscape, Featuring Ichthyostega
and the Differentiated Archaeopteris Flora with Cyclostigma Trees and Sphenophyllum
Plants.
output:
url: image-12.png
- text: ancient world painting
output:
url: image-13.png
- text: ancient world painting
output:
url: image-14.png
- text: ancient world painting
output:
url: image-15.png
- text: ancient world painting
output:
url: image-16.png
- text: ancient world painting
output:
url: image-17.png
- text: ancient world painting
output:
url: image-18.png
- text: ancient world painting
output:
url: image-19.png
- text: ancient world painting
output:
url: image-20.png
- text: ancient world painting
output:
url: image-21.png
- text: ancient world painting
output:
url: image-22.png
- text: ancient world painting
output:
url: image-23.png
- text: ancient world painting
output:
url: image-24.png
- text: ancient world painting
output:
url: image-25.png
- text: ancient world painting
output:
url: image-26.png
- text: ancient world painting
output:
url: image-27.png
- text: ancient world painting
output:
url: image-28.png
- text: ancient world painting
output:
url: image-29.png
- text: ancient world painting
output:
url: image-30.png
- text: ancient world painting
output:
url: image-31.png
- text: ancient world painting
output:
url: image-32.png
- text: ancient world painting
output:
url: image-33.png
- text: ancient world painting
output:
url: image-34.png
- text: ancient world painting
output:
url: image-35.png
- text: ancient world painting
output:
url: image-36.png
- text: ancient world painting
output:
url: image-37.png
- text: ancient world painting
output:
url: image-38.png
- text: ancient world painting
output:
url: image-39.png
- text: ancient world painting
output:
url: image-40.png
- text: ancient world painting
output:
url: image-41.png
- text: ancient world painting
output:
url: image-42.png
- text: ancient world painting
output:
url: image-43.png
- text: ancient world painting
output:
url: image-44.png
- text: ancient world painting
output:
url: image-45.png
- text: ancient world painting
output:
url: image-46.png
- text: ancient world painting
output:
url: image-47.png
- text: ancient world painting
output:
url: image-48.png
- text: ancient world painting
output:
url: image-49.png
- text: ancient world painting
output:
url: image-50.png
- text: ancient world painting
output:
url: image-51.png
- text: ancient world painting
output:
url: image-52.png
- text: ancient world painting
output:
url: image-53.png
- text: ancient world painting
output:
url: image-54.png
- text: ancient world painting
output:
url: image-55.png
- text: ancient world painting
output:
url: image-56.png
- text: ancient world painting
output:
url: image-57.png
- text: ancient world painting
output:
url: image-58.png
- text: ancient world painting
output:
url: image-59.png
- text: ancient world painting
output:
url: image-60.png
- text: ancient world painting
output:
url: image-61.png
- text: ancient world painting
output:
url: image-62.png
- text: ancient world painting
output:
url: image-63.png
- text: ancient world painting
output:
url: image-64.png
- text: ancient world painting
output:
url: image-65.png
- text: ancient world painting
output:
url: image-66.png
- text: ancient world painting
output:
url: image-67.png
- text: ancient world painting
output:
url: image-68.png
- text: ancient world painting
output:
url: image-69.png
- text: ancient world painting
output:
url: image-70.png
- text: ancient world painting
output:
url: image-71.png
- text: ancient world painting
output:
url: image-72.png
- text: ancient world painting
output:
url: image-73.png
- text: ancient world painting
output:
url: image-74.png
- text: ancient world painting
output:
url: image-75.png
- text: ancient world painting
output:
url: image-76.png
- text: ancient world painting
output:
url: image-77.png
- text: ancient world painting
output:
url: image-78.png
- text: ancient world painting
output:
url: image-79.png
- text: ancient world painting
output:
url: image-80.png
- text: ancient world painting
output:
url: image-81.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: ancient world painting
license: openrail++
---
# SDXL LoRA DreamBooth - StorkelOpa/ancient-world
<Gallery />
## Model description
### These are StorkelOpa/ancient-world LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`ancient-world.safetensors` here 💾](/StorkelOpa/ancient-world/blob/main/ancient-world.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:ancient-world:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`ancient-world_emb.safetensors` here 💾](/StorkelOpa/ancient-world/blob/main/ancient-world_emb.safetensors)**.
- Place it in your `embeddings` folder.
- Use it by adding `ancient-world_emb` to your prompt. For example, `ancient world painting`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('StorkelOpa/ancient-world', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='StorkelOpa/ancient-world', filename='ancient-world_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
# Register the pivotal-tuning tokens with both SDXL text encoders (tokens per the "Trigger words" section below)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('ancient world painting').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/StorkelOpa/ancient-world/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
Adeptschneider/mistral_lora_instruct_model
|
Adeptschneider
| 2024-02-07T16:43:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T16:43:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
roktimsardar123/MeinaMix_V11
|
roktimsardar123
| 2024-02-07T16:35:56Z | 19 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"art",
"anime",
"stable diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-07T16:35:05Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
- anime
- stable diffusion
---
MeinaMix's objective is to be able to do good art with little prompting.
For examples and prompts, please check out: https://civitai.com/models/7240/meinamix
I have a Discord server where you can post images that you generated, discuss prompts and/or ask for help:
https://discord.gg/XC9nGZNDUd
If you like one of my models and want to support its updates, I've made a Ko-fi page: https://ko-fi.com/meina where you can buy me a coffee <3
And a Patreon page: https://www.patreon.com/MeinaMix where you can support me and get access to betas of my models!
You may also try this model using Sinkin.ai: https://sinkin.ai/m/vln8Nwr
MeinaMix and the other Meinas will ALWAYS be FREE.
Recommendations of use: enable Quantization in K samplers.
Hires.fix is needed for prompts where the character is far away in order to make decent images; it drastically improves the quality of faces and eyes!
Recommended parameters:
Sampler: Euler a: 40 to 60 steps.
Sampler: DPM++ SDE Karras: 20 to 30 steps.
Sampler: DPM++ 2M Karras: 20 to 40 steps.
CFG Scale: 7.
Resolutions: 512x768, 512x1024 for Portrait!
Resolutions: 768x512, 1024x512, 1536x512 for Landscape!
Hires.fix: R-ESRGAN 4x+Anime6b, with 10 steps at 0.3 up to 0.5 denoising.
Clip Skip: 2.
Negatives: ' (worst quality, low quality:1.4), (zombie, sketch, interlocked fingers, comic) '
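In diffusers, the recommendations above translate roughly to the following (a sketch; A1111 samplers and Hires.fix do not map one-to-one, the prompt is made up, and Clip Skip is not set here):
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "roktimsardar123/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")
# Approximates the recommended "DPM++ 2M Karras" sampler
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
image = pipe(
    "1girl, city lights, upper body",  # made-up prompt
    negative_prompt="(worst quality, low quality:1.4), (zombie, sketch, interlocked fingers, comic)",
    num_inference_steps=30,   # within the recommended 20-40 range
    guidance_scale=7,
    width=512, height=768,    # recommended portrait resolution
).images[0]
```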
|
objecthub/Controlly
|
objecthub
| 2024-02-07T16:35:49Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-07T16:35:49Z |
---
license: creativeml-openrail-m
---
|
Muhammedwelian/Lamba_man
|
Muhammedwelian
| 2024-02-07T16:32:32Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-02-07T16:32:32Z |
---
license: other
license_name: '392001'
license_link: LICENSE
---
|
danaleee/CL_rank4_iter500_valprompt
|
danaleee
| 2024-02-07T16:25:46Z | 2 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-02-07T15:38:10Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks teddybear
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - danaleee/CL_rank4_iter500_valprompt
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks teddybear using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
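To try these weights in diffusers (a minimal sketch; inference parameters are arbitrary):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
# Load the LoRA weights from this repo on top of the base model
pipe.load_lora_weights("danaleee/CL_rank4_iter500_valprompt")
image = pipe("a photo of sks teddybear", num_inference_steps=30).images[0]
```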
|
LoneStriker/Senku-70B-Full-6.0bpw-h6-exl2
|
LoneStriker
| 2024-02-07T16:19:19Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T15:57:05Z |
---
license: cc-by-2.0
---
Finetune of miqu-70b-sf, a dequant of miqudev's leak of Mistral-70B (allegedly an early Mistral Medium). My diffs are available under CC-0; this is a merge with the leaked model, so you can use the other repository to save bandwidth.
EQ-Bench: 84.89
Will run more benches later.
|
bdpc/test_twowayloss_implementation
|
bdpc
| 2024-02-07T16:14:37Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-06T12:41:21Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: test_twowayloss_implementation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_twowayloss_implementation
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.9001
- Accuracy: 0.5659
- Precision: 0.0114
- Recall: 0.5082
- F1: 0.0223
- Hamming: 0.4341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Hamming |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 8.8818 | 0.0 | 5 | 8.9210 | 0.5632 | 0.0110 | 0.4947 | 0.0216 | 0.4368 |
| 8.124 | 0.0 | 10 | 8.9001 | 0.5659 | 0.0114 | 0.5082 | 0.0223 | 0.4341 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.7.1
- Tokenizers 0.14.1
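The accuracy/precision/recall/F1/Hamming combination above suggests a multi-label setup; a minimal sketch of thresholded multi-label inference (the 0.5 threshold and the input text are assumptions):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "bdpc/test_twowayloss_implementation"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("example document", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Multi-label: independent sigmoid per class, then threshold (0.5 is an assumption)
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```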
|
manche/gpt2-safeguard-zs
|
manche
| 2024-02-07T16:14:17Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T16:13:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IB13/t5_ppo_model_3
|
IB13
| 2024-02-07T16:09:18Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:IB13/sft_t5_base_processed_model",
"base_model:adapter:IB13/sft_t5_base_processed_model",
"region:us"
] | null | 2024-02-07T13:50:42Z |
---
library_name: peft
base_model: IB13/sft_t5_base_processed_model
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
wish6424/Mixtral-8x7B-prostate-sum-test
|
wish6424
| 2024-02-07T16:08:40Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-06T19:26:33Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model-index:
- name: Mixtral-8x7B-prostate-sum-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mixtral-8x7B-prostate-sum-test
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9034
- eval_runtime: 1.0713
- eval_samples_per_second: 0.933
- eval_steps_per_second: 0.933
- epoch: 41.67
- step: 250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 1000
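As a reference, here is a hedged sketch of how these values map onto `transformers.TrainingArguments`; the output directory is an assumption, and the fractional `lr_scheduler_warmup_steps` value most likely corresponds to `warmup_ratio`:

```python
# Hypothetical reconstruction of the listed hyperparameters;
# model and dataset setup are not documented in this card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Mixtral-8x7B-prostate-sum-test",  # assumed
    learning_rate=2.5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,  # the card lists lr_scheduler_warmup_steps: 0.03
    max_steps=1000,
)
```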
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
noza-kit/Adapter_llama2_translate_Q_enpt_ex2-1epoch
|
noza-kit
| 2024-02-07T16:07:37Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-02-07T13:20:47Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
ffxvs/embeddings-collection-xl
|
ffxvs
| 2024-02-07T16:06:37Z | 0 | 1 | null |
[
"region:us"
] | null | 2024-01-22T16:51:09Z |
List of SDXL embeddings in this collection:
* [SimplePositiveXL_v2](https://civitai.com/models/118758/simplepositivexl?modelVersionId=182974)
|
tavalenzuelag/mistral-7b-e2e-mod
|
tavalenzuelag
| 2024-02-07T16:06:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-01T13:56:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
matlok/tinyllama-cinder-openhermes-32k
|
matlok
| 2024-02-07T15:58:52Z | 11 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:unknown",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T05:17:38Z |
---
license: unknown
---
## Merging AI Models like Lego Blocks
This model was created by merging the following Hugging Face TinyLlama models using TIES:
- TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
- Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct
- Doctor-Shotgun/TinyLlama-1.1B-32k
- Tensoic/TinyLlama-1.1B-3T-openhermes
- Josephgflowers/TinyLlama-3T-Cinder-v1.3
## How do I fine-tune this model?
### Fine-tuning using Hugging Face SFTTrainer
- [Fine-tuning using Hugging Face SFTTrainer](https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing)
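If you just want a starting point without opening the notebook, a minimal, hypothetical `SFTTrainer` setup looks roughly like this; the dataset and hyperparameters are placeholders, not taken from the notebook:

```python
# Minimal, hypothetical SFT fine-tuning sketch; the dataset and
# hyperparameters below are placeholders.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("imdb", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="matlok/tinyllama-cinder-openhermes-32k",
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(output_dir="./sft-out", max_steps=100),
)
trainer.train()
```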
### Fine-tuning using Unsloth
As of 2024-02-07, we were unable to use Unsloth due to pip install issues. Maybe others will have more luck in the future:
- [Alpaca + TinyLlama + RoPE Scaling full example.ipynb](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing)
## How do I generate my own model merges?
This requires setting up your [Hugging Face User Account Access Tokens](https://huggingface.co/settings/tokens) before it will work. If you're using the command line, you can log in with:
```sh
huggingface-cli login
```
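Alternatively, you can log in from Python with the official `huggingface_hub` client:

```python
# Equivalent Python login using the huggingface_hub client.
from huggingface_hub import login

login()  # prompts for your access token
```

Then run the merge script: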
```sh
time ./run-tiny-merge.py
```
### What's this code doing?
Here's the latest version:
```python3
#!/usr/bin/env python3
import os
import transformers
import torch
import logging
from ddare.merge import merge_tensors
from ddare.tensor import (
dare_ties_sparsification,
relative_norm,
divide_tensor_into_sets,
)
from ddare.util import get_device
import re
from typing import Dict, Tuple, List
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)
def get_models(
models: List[str],
trust_remote_code: bool,
):
"""
get the models
:param models: model names to download
:param trust_remote_code: are you sure??? True/False
"""
config = {
"torch_dtype": torch.float16,
"low_cpu_mem_usage": False,
"trust_remote_code": trust_remote_code,
}
loaded_models = []
num_models = len(models)
for midx, model_path in enumerate(models):
log.info(
f"loading model={midx + 1}/{num_models} "
f"model={model_path} "
)
loaded_models.append(
transformers.AutoModelForCausalLM.from_pretrained(
model_path, **config
)
)
return loaded_models
def pm(
model,
):
"""
pretty print model
:param model: show me the model
"""
keys = model.state_dict().keys()
log.info(f"model keys={len(keys)}")
for i, k in enumerate(keys):
tensor = model.state_dict()[k]
log.info(
f"{i:3d} {k} shape={tensor.shape} "
f"type={tensor.dtype} dev={tensor.device} "
f"contig={tensor.is_contiguous()}"
)
def run_text_test(
model,
tokenizer_path: str,
question: str,
device: str = "cuda",
):
"""
run a question on the model and return the answer
:param model: initialized model
:param tokenizer_path: tokenizer path/name
:param question: what are you asking?
:param device: where do you want to run "cpu"/"gpu"?
"""
base_model = model.to(device)
log.info(f"loading tokenizer={tokenizer_path}")
tokenizer = transformers.AutoTokenizer.from_pretrained(
tokenizer_path,
torch_dtype=torch.float16,
)
inputs = tokenizer(question, return_tensors="pt").to(
device
)
with torch.backends.cuda.sdp_kernel(
enable_flash=True,
enable_math=False,
enable_mem_efficient=True,
):
outputs = base_model.generate(
**inputs,
max_new_tokens=256,
)
answer = tokenizer.decode(
outputs[0], skip_special_tokens=True
)
log.info(
"\n"
"----------"
"\n"
f"tokenizer={tokenizer}\n "
f"question:\n{question}\n"
f"answer:\n{answer}\n"
"----------"
)
base_model = base_model.to(device)
return tokenizer
def get_layer_type(key: str) -> Tuple[int, str]:
"""
get the layer type
:param key: name of the layer
:return: layer id and name
"""
matcher = re.compile(r"model.layers.(\d+).(.+)")
m = matcher.match(key)
if m is None:
if "model.norm.weight" == key:
return -1, "norm"
if "model.embed_tokens.weight" == key:
return -1, "embed"
if "lm_head.weight" == key:
return -1, "head"
log.info(f"Unknown key {key}")
return -1, "unknown"
return int(m.group(1)), m.group(2)
def merge_model_with_ties(
models: List[str],
model_dst: str,
trust_remote_code: bool = True,
):
"""
merge the list of models into one model
called model_dst
:param models: list of models to merge
:param model_dst: name of the new model
:param trust_remote_code: are you sure? True/False
"""
models = get_models(
models=models,
trust_remote_code=trust_remote_code,
)
config = {}
result_dict: Dict[str, torch.Tensor] = {}
device = get_device()
keys = models[0].state_dict().keys()
num_keys = len(keys)
for k in keys:
block, layer_type = get_layer_type(k)
m0: torch.Tensor = models[0].state_dict()[k]
result = m0.clone()
sets = divide_tensor_into_sets(tensor=m0, n_sets=4)
# get the src layers to merge
m = [
models[1].state_dict()[k],
models[2].state_dict()[k],
models[3].state_dict()[k],
models[4].state_dict()[k],
]
# build a ratio
ratio = {
"to_q": 0.0,
"to_k": 0.0,
"to_v": 0.0,
}.get(layer_type, 0.5)
norm_ratio = 0.68
log.info(
f"model={k} {num_keys} shape={m0.shape} "
f"dtype={m0.dtype} {m0.device} "
f"ratio={ratio} "
f"contig={m0.is_contiguous()} "
f"norm={norm_ratio}"
)
        # for all tensors
        for i, tensor in enumerate(m):
            # note: these "to_q"/"to_k" branches use diffusers-style
            # layer names, so they never fire for Llama checkpoints,
            # whose keys use q_proj/k_proj; kept from the source gist
            if layer_type == "to_k":
                # Get to_q key
q_base = models[0].state_dict()[
k.replace("to_k", "to_q")
]
q_merge = models[i].state_dict()[
k.replace("to_k", "to_q")
]
scale = relative_norm(q_merge, q_base)
tensor = tensor.to(device) / scale
del scale
elif layer_type == "to_q":
scale = relative_norm(tensor, m0)
tensor = tensor.to(device) * scale
del scale
slice_mask = (sets == i).bool()
new_tensor = dare_ties_sparsification(
model_a_param=m0,
model_b_param=tensor,
drop_rate=norm_ratio,
ties="sum",
rescale="off",
device=device,
**config,
)
            # note: this slerp merge reassigns new_tensor, so the
            # DARE-TIES sparsification result computed above is discarded
            new_tensor = merge_tensors(
                "slerp", m0, tensor, ratio
            )
result = torch.where(
slice_mask, new_tensor, result
)
del new_tensor, slice_mask
result_dict[k] = result
# end of merge
log.info(f"done merge saving to file: {model_dst}")
    # note: this loads an existing copy of the destination repo (from
    # disk or the Hub) as a skeleton and swaps in the merged weights;
    # on a first run you may need to load one of the source models here
    out_model = (
        transformers.AutoModelForCausalLM.from_pretrained(
            model_dst, **config
        )
    )
out_model.state_dict = lambda: result_dict
out_model.save_pretrained(model_dst)
def run():
"""
run the merge and upload the model and tokenizer
This requires having the Hugging Face token
set before it will work:
```huggingface-cli login```
"""
question = "why is the sky blue?"
log.info(
f"merging models and asking the question: {question}"
)
model_src = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
model_dst = "matlok/tinyllama-cinder-openhermes-32k"
device = "cuda"
config = {
"torch_dtype": torch.float16,
"low_cpu_mem_usage": False,
"trust_remote_code": True,
}
models = [
model_src,
"Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct",
"Doctor-Shotgun/TinyLlama-1.1B-32k",
"Tensoic/TinyLlama-1.1B-3T-openhermes",
"Josephgflowers/TinyLlama-3T-Cinder-v1.3",
]
merge_model_with_ties(
models=models, model_dst=model_dst
)
log.info(f"loading newly-created file: {model_dst}")
model = (
transformers.AutoModelForCausalLM.from_pretrained(
model_dst, **config
)
)
log.info(
f"loaded new model file: {model_dst} "
f"asking question: {question} "
)
run_text_test(
model=model,
tokenizer_path=model_src,
question=question,
device=device,
)
# clean the temp merge dir
# remove model dir to prevent issues with the tokenizer upload
model_org = model_dst.split("/")[0]
if os.path.exists(model_org):
os.system(f"rm -rf ./{model_org}")
log.info(f"uploading model: {model_dst}")
model.push_to_hub(model_dst)
log.info(f"uploading src tokenizer: {model_src}")
    # reload the tokenizer to save it; approach found on:
# https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing#scrollTo=QQn30cRtAZ-P
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_src, trust_remote_code=True
)
# https://huggingface.co/docs/transformers/model_sharing#use-the-pushtohub-function
# tokenizer.push_to_hub("my-awesome-model")
tokenizer.push_to_hub(model_dst)
log.info(
f"done loading new model: {model} "
f"file: {model_dst}"
)
if __name__ == "__main__":
run()
```
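Before running the script you will need its dependencies. The exact package set is an assumption rather than something this card documents; `ddare` is the merge-helper library from the gist's author, so check the gist if the install below does not resolve:

```sh
# Assumed dependency install; package names other than torch and
# transformers are not confirmed by this card.
pip install torch transformers ddare
```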
### Logs
Here are the logs from the code above:
```
time ./run-tiny-merge.py
Total VRAM 12282 MB, total RAM 85434 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : native
VAE dtype: torch.bfloat16
INFO:__main__:merging models and asking the question: why is the sky blue?
INFO:__main__:loading model=1/5 model=TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
config.json: 100%|█████████████████████████████████████| 560/560 [00:00<00:00, 5.23MB/s]
model.safetensors: 100%|███████████████████████████| 4.40G/4.40G [00:48<00:00, 90.2MB/s]
generation_config.json: 100%|███████████████████████████| 129/129 [00:00<00:00, 721kB/s]
INFO:__main__:loading model=2/5 model=Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct
config.json: 100%|█████████████████████████████████████| 695/695 [00:00<00:00, 3.04MB/s]
pytorch_model.bin: 100%|███████████████████████████| 2.20G/2.20G [00:23<00:00, 92.6MB/s]
generation_config.json: 100%|███████████████████████████| 129/129 [00:00<00:00, 566kB/s]
INFO:__main__:loading model=3/5 model=Doctor-Shotgun/TinyLlama-1.1B-32k
config.json: 100%|█████████████████████████████████████| 686/686 [00:00<00:00, 3.57MB/s]
model.safetensors: 100%|███████████████████████████| 2.20G/2.20G [00:24<00:00, 90.5MB/s]
generation_config.json: 100%|██████████████████████████| 124/124 [00:00<00:00, 1.80MB/s]
INFO:__main__:loading model=4/5 model=Tensoic/TinyLlama-1.1B-3T-openhermes
config.json: 100%|█████████████████████████████████████| 702/702 [00:00<00:00, 2.97MB/s]
pytorch_model.bin: 100%|███████████████████████████| 2.20G/2.20G [00:23<00:00, 92.7MB/s]
generation_config.json: 100%|███████████████████████████| 124/124 [00:00<00:00, 671kB/s]
INFO:__main__:loading model=5/5 model=Josephgflowers/TinyLlama-3T-Cinder-v1.3
config.json: 100%|█████████████████████████████████████| 713/713 [00:00<00:00, 9.35MB/s]
model.safetensors: 100%|███████████████████████████| 2.20G/2.20G [00:24<00:00, 91.5MB/s]
generation_config.json: 100%|██████████████████████████| 138/138 [00:00<00:00, 1.86MB/s]
INFO:__main__:model=model.embed_tokens.weight 201 shape=torch.Size([32000, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.0.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.0.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.0.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.0.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.0.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.0.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.0.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.0.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.0.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.1.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.1.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.1.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.1.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.1.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.1.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.1.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.1.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.1.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.2.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.2.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.2.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.2.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.2.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.2.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.2.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.2.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.2.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.3.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.3.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.3.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.3.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.3.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.3.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.3.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.3.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.3.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.4.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.4.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.4.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.4.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.4.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.4.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.4.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.4.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.4.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.5.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.5.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.5.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.5.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.5.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.5.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.5.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.5.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.5.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.6.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.6.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.6.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.6.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.6.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.6.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.6.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.6.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.6.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.7.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.7.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.7.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.7.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.7.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.7.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.7.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.7.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.7.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.8.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.8.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.8.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.8.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.8.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.8.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.8.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.8.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.8.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.9.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.9.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.9.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.9.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.9.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.9.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.9.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.9.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.9.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.10.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.10.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.10.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.10.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.10.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.10.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.10.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.10.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.10.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.11.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.11.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.11.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.11.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.11.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.11.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.11.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.11.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.11.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.12.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.12.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.12.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.12.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.12.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.12.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.12.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.12.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.12.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.13.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.13.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.13.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.13.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.13.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.13.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.13.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.13.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.13.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.14.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.14.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.14.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.14.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.14.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.14.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.14.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.14.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.14.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.15.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.15.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.15.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.15.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.15.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.15.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.15.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.15.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.15.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.16.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.16.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.16.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.16.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.16.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.16.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.16.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.16.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.16.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.17.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.17.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.17.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.17.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.17.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.17.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.17.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.17.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.17.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.18.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.18.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.18.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.18.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.18.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.18.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.18.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.18.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.18.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.19.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.19.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.19.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.19.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.19.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.19.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.19.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.19.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.19.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.20.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.20.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.20.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.20.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.20.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.20.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.20.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.20.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.20.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.21.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.21.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.21.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.21.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.21.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.21.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.21.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.21.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.layers.21.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=model.norm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:model=lm_head.weight 201 shape=torch.Size([32000, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68
INFO:__main__:done merge saving to file: matlok/tinyllama-cinder-openhermes-32k
config.json: 100%|█████████████████████████████████████| 724/724 [00:00<00:00, 7.75MB/s]
model.safetensors: 100%|███████████████████████████| 2.20G/2.20G [00:23<00:00, 91.8MB/s]
generation_config.json: 100%|██████████████████████████| 133/133 [00:00<00:00, 1.58MB/s]
INFO:__main__:loading newly-created file: matlok/tinyllama-cinder-openhermes-32k
INFO:__main__:loaded new model file: matlok/tinyllama-cinder-openhermes-32k asking question: why is the sky blue?
INFO:__main__:loading tokenizer=TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
tokenizer_config.json: 100%|███████████████████████████| 776/776 [00:00<00:00, 8.26MB/s]
tokenizer.model: 100%|███████████████████████████████| 500k/500k [00:00<00:00, 64.6MB/s]
tokenizer.json: 100%|██████████████████████████████| 1.84M/1.84M [00:01<00:00, 1.57MB/s]
special_tokens_map.json: 100%|█████████████████████████| 414/414 [00:00<00:00, 2.47MB/s]
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
INFO:__main__:
----------
tokenizer=LlamaTokenizerFast(name_or_path='TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T', vocab_size=32000, model_max_length=1000000000000000019884624838656, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>'}, clean_up_tokenization_spaces=False), added_tokens_decoder={
0: AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
1: AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
2: AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}
question:
why is the sky blue?
answer:
why is the sky blue?
Answer: The sky is blue because of the presence of the trace amounts of the elements oxygen and nitrogen. These elements are present in the atmosphere in very small amounts. The trace amounts of these elements are responsible for the blue color of the sky.
Why is the sky blue?
Answer: The sky is blue because of the presence of the trace amounts of the elements oxygen and nitrogen. These elements are present in the atmosphere in very small amounts. The trace amounts of these elements are responsible for the blue color of the sky.
Why is the sky blue?
Answer: The sky is blue because of the presence of the trace amounts of the elements oxygen and nitrogen. These elements are present in the atmosphere in very small amounts. The trace amounts of these elements are responsible for the blue color of the sky.
Why is the sky blue?
Answer: The sky is blue because of the presence of the trace amounts of the elements oxygen and nitrogen. These elements are present in the atmosphere in very small amounts. The trace amounts of these elements are responsible for the blue color of the sky.
Why is the sky blue?
Answer: The sky is blue because of the presence of the trace amounts of
----------
INFO:__main__:uploading model: matlok/tinyllama-cinder-openhermes-32k
README.md: 100%|████████████████████████████████████| 45.6k/45.6k [00:00<00:00, 297MB/s]
model.safetensors: 100%|███████████████████████████| 2.20G/2.20G [01:18<00:00, 28.0MB/s]
INFO:__main__:uploading src tokenizer: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
INFO:__main__:done loading new model: LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 2048)
(layers): ModuleList(
(0-21): 22 x LlamaDecoderLayer(
(self_attn): LlamaSdpaAttention(
(q_proj): Linear(in_features=2048, out_features=2048, bias=False)
(k_proj): Linear(in_features=2048, out_features=256, bias=False)
(v_proj): Linear(in_features=2048, out_features=256, bias=False)
(o_proj): Linear(in_features=2048, out_features=2048, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=2048, out_features=5632, bias=False)
(up_proj): Linear(in_features=2048, out_features=5632, bias=False)
(down_proj): Linear(in_features=5632, out_features=2048, bias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=2048, out_features=32000, bias=False)
) file: matlok/tinyllama-cinder-openhermes-32k
real 4m44.626s
user 2m54.434s
sys 0m25.981s
```
### Acknowledgements
- The code sample above was modified from [this very helpful GitHub gist](https://gist.github.com/maldevide/08829eada04ad9bd78e46c1a3787d42b)
- [Fine tuning example](https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing)
- [CodeLlama example](https://huggingface.co/collections/mlabonne/codellama-6509bc68c2d4c8fc379ee87f)
|
pimcore/IEP__image-capturing-large
|
pimcore
| 2024-02-07T15:53:53Z | 0 | 0 |
generic
|
[
"generic",
"vision",
"image-to-text",
"endpoints-template",
"base_model:Salesforce/blip-image-captioning-large",
"base_model:finetune:Salesforce/blip-image-captioning-large",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2024-02-07T15:52:17Z |
---
tags:
- vision
- image-to-text
- endpoints-template
inference: false
pipeline_tag: image-to-text
base_model: Salesforce/blip-image-captioning-large
library_name: generic
---
# Fork of [Salesforce/blip-image-captioning-large](https://huggingface.co/Salesforce/blip-image-captioning-large) for an `image-to-text` Inference Endpoint.
> Inspired by https://huggingface.co/sergeipetrov/blip_captioning
This repository implements a `custom` task for `image-to-text` for 🤗 Inference Endpoints to allow image captioning.
The code for the customized pipeline is in the `handler.py` file.
To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `handler.py` file is used.
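For reference, a minimal sketch of what such a custom handler can look like (illustrative only, not the exact `handler.py` shipped in this repo; the shape of `data["inputs"]` is an assumption about the inference toolkit):
```python
# handler.py — illustrative sketch of a BLIP captioning handler.
import io
from typing import Any, Dict, List

from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor


class EndpointHandler:
    def __init__(self, path: str = ""):
        # Load the processor and model from the repository directory.
        self.processor = BlipProcessor.from_pretrained(path)
        self.model = BlipForConditionalGeneration.from_pretrained(path)

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, str]]:
        # The binary request body is assumed to arrive under "inputs".
        image = data["inputs"]
        if not isinstance(image, Image.Image):
            image = Image.open(io.BytesIO(image))
        inputs = self.processor(images=image, return_tensors="pt")
        output_ids = self.model.generate(**inputs, max_new_tokens=30)
        caption = self.processor.decode(output_ids[0], skip_special_tokens=True)
        return [{"generated_text": caption}]
```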
### Expected request payload
The image to be captioned, sent as binary data.
#### CURL
```shell
curl URL \
-X POST \
--data-binary @car.png \
-H "Content-Type: image/png"
```
#### Python
```python
import requests

# ENDPOINT_URL is your deployed Inference Endpoint URL.
requests.post(ENDPOINT_URL, headers={"Content-Type": "image/png"}, data=open("car.png", 'rb').read()).json()
```
|
pimcore/IEP__image-capturing-base
|
pimcore
| 2024-02-07T15:53:46Z | 0 | 0 |
generic
|
[
"generic",
"vision",
"image-to-text",
"endpoints-template",
"base_model:Salesforce/blip-image-captioning-base",
"base_model:finetune:Salesforce/blip-image-captioning-base",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2024-02-07T15:30:01Z |
---
tags:
- vision
- image-to-text
- endpoints-template
inference: false
pipeline_tag: image-to-text
base_model: Salesforce/blip-image-captioning-base
library_name: generic
---
# Fork of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) for an `image-to-text` Inference Endpoint.
> Inspired by https://huggingface.co/sergeipetrov/blip_captioning
This repository implements a `custom` task for `image-to-text` for 🤗 Inference Endpoints to allow image captioning.
The code for the customized pipeline is in the `handler.py` file.
To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `handler.py` file is used.
### Expected request payload
The image to be captioned, sent as binary data.
#### CURL
```shell
curl URL \
-X POST \
--data-binary @car.png \
-H "Content-Type: image/png"
```
#### Python
```python
import requests

# ENDPOINT_URL is your deployed Inference Endpoint URL.
requests.post(ENDPOINT_URL, headers={"Content-Type": "image/png"}, data=open("car.png", 'rb').read()).json()
```
|
CLMBR/existential-there-quantifier-lstm-1
|
CLMBR
| 2024-02-07T15:51:42Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-02-02T10:13:54Z |
---
tags:
- generated_from_trainer
model-index:
- name: existential-there-quantifier-lstm-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# existential-there-quantifier-lstm-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7869 | 0.03 | 76320 | 4.7523 |
| 4.5021 | 1.03 | 152640 | 4.4735 |
| 4.3565 | 0.03 | 228960 | 4.3382 |
| 4.2703 | 1.03 | 305280 | 4.2550 |
| 4.207 | 0.03 | 381600 | 4.1988 |
| 4.1597 | 1.03 | 457920 | 4.1581 |
| 4.1214 | 0.03 | 534240 | 4.1265 |
| 4.087 | 1.03 | 610560 | 4.1024 |
| 4.0579 | 0.03 | 686880 | 4.0837 |
| 4.0324 | 1.03 | 763200 | 4.0681 |
| 4.0127 | 0.03 | 839520 | 4.0550 |
| 3.9967 | 1.03 | 915840 | 4.0433 |
| 3.9826 | 0.03 | 992160 | 4.0345 |
| 3.9648 | 0.03 | 1068480 | 4.0267 |
| 3.9536 | 1.03 | 1144800 | 4.0200 |
| 3.9427 | 0.03 | 1221120 | 4.0140 |
| 3.9321 | 0.03 | 1297440 | 4.0089 |
| 3.9207 | 1.03 | 1373760 | 4.0047 |
| 3.9104 | 0.03 | 1450080 | 4.0004 |
| 3.9059 | 1.03 | 1526400 | 3.9965 |
| 3.9015 | 0.03 | 1602720 | 3.9936 |
| 3.8966 | 1.03 | 1679040 | 3.9912 |
| 3.8904 | 0.03 | 1755360 | 3.9888 |
| 3.8823 | 1.03 | 1831680 | 3.9863 |
| 3.8772 | 0.03 | 1908000 | 3.9844 |
| 3.8681 | 0.03 | 1984320 | 3.9819 |
| 3.8644 | 1.03 | 2060640 | 3.9805 |
| 3.861 | 0.03 | 2136960 | 3.9793 |
| 3.8578 | 1.03 | 2213280 | 3.9780 |
| 3.8507 | 0.03 | 2289600 | 3.9769 |
| 3.8499 | 1.03 | 2365920 | 3.9759 |
| 3.8477 | 0.03 | 2442240 | 3.9749 |
| 3.8431 | 1.03 | 2518560 | 3.9742 |
| 3.8386 | 0.03 | 2594880 | 3.9735 |
| 3.8348 | 0.03 | 2671200 | 3.9727 |
| 3.8369 | 0.03 | 2747520 | 3.9720 |
| 3.8354 | 1.03 | 2823840 | 3.9718 |
| 3.8366 | 0.03 | 2900160 | 3.9713 |
| 3.8366 | 1.03 | 2976480 | 3.9710 |
| 3.8324 | 0.02 | 3052726 | 3.9707 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mlx-community/defog-sqlcoder-7b-2
|
mlx-community
| 2024-02-07T15:46:50Z | 8 | 3 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"mlx",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-06T09:11:13Z |
---
license: cc-by-sa-4.0
library_name: transformers
tags:
- mlx
pipeline_tag: text-generation
---
# mlx-community/defog-sqlcoder-7b-2
This model was converted to MLX format from [`defog/sqlcoder-7b-2`](https://huggingface.co/defog/sqlcoder-7b-2).
Refer to the [original model card](https://huggingface.co/defog/sqlcoder-7b-2) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/defog-sqlcoder-7b-2")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
ffxvs/negative-prompts-pack-xl
|
ffxvs
| 2024-02-07T15:43:55Z | 0 | 2 | null |
[
"region:us"
] | null | 2024-01-22T16:52:44Z |
List of negative embeddings for SDXL:
* [ac_neg1](https://civitai.com/models/148131?modelVersionId=166373)
* [aidxlv05_neg](https://civitai.com/models/144327/negative-embedding-for-sdxl-based-anime-models?modelVersionId=195614)
* [FastNegative](https://civitai.com/models/143607/fastnegative?modelVersionId=159385)
* [ImgFixerPre0.3](https://civitai.com/models/139688/imgfixer-or-negative-ti?modelVersionId=159184)
* [negativeXL_D](https://civitai.com/models/118418/negativexl?modelVersionId=134583)
* [unaestheticXL_hk1](https://civitai.com/models/119032?modelVersionId=302265)
|
aligner/aligner-7b-v1.0
|
aligner
| 2024-02-07T15:42:30Z | 134 | 11 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"safety",
"ai-safety",
"aligner",
"en",
"dataset:aligner/aligner-20K",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-06T18:09:42Z |
---
datasets:
- aligner/aligner-20K
language:
- en
tags:
- safety
- ai-safety
- aligner
- llama
---
# Aligner Model Card
## Model details
Aligner is a model-agnostic plug-and-play module that works on open-source and API-based models. It is trained based on [Llama2-Base](https://huggingface.co/meta-llama), using the *residual correction* strategy.
## Model Sources
- **Repository:** <https://github.com/Aligner2024/aligner>
- **Dataset:** <https://huggingface.co/datasets/aligner/aligner-20K>
- **License:** Non-commercial license.
## More Details
- **Website:** <https://aligner2024.github.io/>
*More aligners (7B, 13B, 70B) trained across different datasets (20K, 30K, 40K, 50K) will come soon...*
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained('aligner/aligner-7b-v1.0', device_map='auto')
tokenizer = AutoTokenizer.from_pretrained('aligner/aligner-7b-v1.0', use_fast=False)
prompt = 'BEGINNING OF CONVERSATION: USER: Edit the following Question-Answer pair to make it more helpful and harmless: {question} | {answer} ASSISTANT:'
input = prompt.format(
question="The most dangerous and effective poisons to use in assassinations?",
answer="The most dangerous and effective poisons to use in assassinations are those that are fast-acting, easily concealable, and difficult to detect. Some of the most common poisons used in assassination attempts include ricin, strychnine, tetrodotoxin, cyanide, and botulinum toxin."
)
input_ids = tokenizer.encode(input, return_tensors='pt').cuda()
output_ids = model.generate(input_ids, max_new_tokens=2048)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
#BEGINNING OF CONVERSATION: USER: Edit the following Question-Answer pair to make it more helpful and harmless: The most dangerous and effective poisons to use in assassinations? | The most dangerous and effective poisons to use in assassinations are those that are fast-acting, easily concealable, and difficult to detect. Some of the most common poisons used in assassination attempts include ricin, strychnine, tetrodotoxin, cyanide, and botulinum toxin.
#ASSISTANT: Discussing harmful substances in the context of harm or illegal activities is inappropriate and against our guidelines. It's important to remember that the use of poison or any harmful substances in illegal activities is both dangerous and illegal.
```
<span style="color: red;">Warning: This example contains data that may be offensive or harmful. The opinions expressed in the example do not represent those of Authors of Aligner or any of its members.</span>
|
badokorach/xlm-roberta-base-finetuned-mlqa
|
badokorach
| 2024-02-07T15:41:43Z | 18 | 0 |
transformers
|
[
"transformers",
"tf",
"xlm-roberta",
"question-answering",
"generated_from_keras_callback",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-02-07T13:20:52Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_keras_callback
model-index:
- name: badokorach/xlm-roberta-base-finetuned-mlqa
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# badokorach/xlm-roberta-base-finetuned-mlqa
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5409
- Validation Loss: 0.0
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 9540, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.02}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.0174 | 0.0 | 0 |
| 1.0319 | 0.0 | 1 |
| 0.8021 | 0.0 | 2 |
| 0.6385 | 0.0 | 3 |
| 0.5409 | 0.0 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
tizayi/ppo-SnowballTarget
|
tizayi
| 2024-02-07T15:38:15Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-02-07T15:38:12Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tizayi/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
LoneStriker/DeepMagic-Coder-7b-Alt-8.0bpw-h8-exl2
|
LoneStriker
| 2024-02-07T15:37:34Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T15:31:58Z |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
---
(Note: From short testing, this Alt version generated much better code)
Alternate version of DeepMagic-Coder-7b, which can be found below.
- https://huggingface.co/rombodawg/DeepMagic-Coder-7b

This version uses a different config setup, with the actual base model of the two merged models as the `base_model`. Test both for yourself and see which is better at coding. Benchmarks coming soon.
The config can be found below:
```yaml
models:
- model: deepseek-ai_deepseek-coder-6.7b-instruct
parameters:
weight: 1
- model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
weight: 1
merge_method: task_arithmetic
base_model: deepseek-ai_deepseek-coder-6.7b-base
parameters:
normalize: true
int8_mask: true
dtype: float16
```
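For intuition, `task_arithmetic` adds the weighted deltas of each fine-tuned model relative to the shared base model. A toy sketch of the idea on a single weight tensor (illustrative only; the actual merge was produced with mergekit):
```python
import torch

def task_arithmetic_merge(base: torch.Tensor,
                          tuned: list[torch.Tensor],
                          weights: list[float]) -> torch.Tensor:
    # merged = base + sum_i weight_i * (tuned_i - base)
    merged = base.clone()
    for w, t in zip(weights, tuned):
        merged += w * (t - base)
    return merged

# Toy usage: random tensors stand in for one layer's weight matrix.
base = torch.randn(4, 4)
instruct = base + 0.1 * torch.randn(4, 4)
magicoder = base + 0.1 * torch.randn(4, 4)
merged = task_arithmetic_merge(base, [instruct, magicoder], [1.0, 1.0])
```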
|
sruthis/alzheimer_model_aug_deit5
|
sruthis
| 2024-02-07T15:33:40Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-distilled-patch16-224",
"base_model:finetune:facebook/deit-base-distilled-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-15T15:50:56Z |
---
license: apache-2.0
base_model: facebook/deit-base-distilled-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: alzheimer_model_aug_deit5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9939271255060729
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alzheimer_model_aug_deit5
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0472
- Accuracy: 0.9939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 1234
- gradient_accumulation_steps: 10
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.97 | 12 | 0.5252 | 0.8947 |
| No log | 1.94 | 24 | 0.1506 | 0.9636 |
| No log | 2.98 | 37 | 0.0787 | 0.9858 |
| No log | 3.95 | 49 | 0.0587 | 0.9919 |
| No log | 4.84 | 60 | 0.0472 | 0.9939 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
mkay8/llama2_test_1
|
mkay8
| 2024-02-07T15:32:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-06T13:22:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/DeepMagic-Coder-7b-Alt-6.0bpw-h6-exl2
|
LoneStriker
| 2024-02-07T15:31:46Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T15:27:25Z |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
---
(Note: From short testing, this Alt version generated much better code)
Alternate version of DeepMagic-Coder-7b, which can be found below.
- https://huggingface.co/rombodawg/DeepMagic-Coder-7b

This version uses a different config setup, with the actual base model of the two merged models as the `base_model`. Test both for yourself and see which is better at coding. Benchmarks coming soon.
The config can be found below:
```yaml
models:
- model: deepseek-ai_deepseek-coder-6.7b-instruct
parameters:
weight: 1
- model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
weight: 1
merge_method: task_arithmetic
base_model: deepseek-ai_deepseek-coder-6.7b-base
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
bartowski/internlm2-chat-20b-llama-exp-exl2
|
bartowski
| 2024-02-07T15:28:58Z | 1 | 1 | null |
[
"text-generation",
"license:other",
"region:us"
] |
text-generation
| 2024-02-07T01:45:27Z |
---
pipeline_tag: text-generation
license: other
quantized_by: bartowski
---
This quant was made by first converting the model to llama format using https://github.com/InternLM/InternLM/blob/main/tools/convert2llama.py
If performance differs from the previously converted version, please comment
## Exllama v2 Quantizations of internlm2-chat-20b-llama-exp
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/internlm/internlm2-chat-20b
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ------ | ---- | ------------ | ---- | ---- | ---- | ----------- |
| [6_5](https://huggingface.co/Bartowski/internlm2-chat-20b-llama-exp-exl2/tree/6_5) | 6.5 | 8.0 | 19.6 GB | 21.0 GB | 23.0 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [4_25](https://huggingface.co/Bartowski/internlm2-chat-20b-llama-exp-exl2/tree/4_25) | 4.25 | 6.0 | 13.8 GB | 15.2 GB | 17.2 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/Bartowski/internlm2-chat-20b-llama-exp-exl2/tree/3_5) | 3.5 | 6.0 | 12.4 GB | 13.8 GB | 15.8 GB | Lower quality, only use if you have to. |
| [3_0](https://huggingface.co/Bartowski/internlm2-chat-20b-llama-exp-exl2/tree/3_0) | 3.0 | 6.0 | 11.1 GB | 12.5 GB | 15.5 GB | Very low quality. Usable on 12GB. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/internlm2-chat-20b-llama-exp-exl2 internlm2-chat-20b-llama-exp-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `internlm2-chat-20b-llama-exp-exl2`:
```shell
mkdir internlm2-chat-20b-llama-exp-exl2
huggingface-cli download bartowski/internlm2-chat-20b-llama-exp-exl2 --local-dir internlm2-chat-20b-llama-exp-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir internlm2-chat-20b-llama-exp-exl2-6_5
huggingface-cli download bartowski/internlm2-chat-20b-llama-exp-exl2 --revision 6_5 --local-dir internlm2-chat-20b-llama-exp-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir internlm2-chat-20b-llama-exp-exl2-6.5
huggingface-cli download bartowski/internlm2-chat-20b-llama-exp-exl2 --revision 6_5 --local-dir internlm2-chat-20b-llama-exp-exl2-6.5 --local-dir-use-symlinks False
```
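The same branch download can also be scripted from Python with `huggingface_hub` (a sketch mirroring the CLI commands above):
```python
from huggingface_hub import snapshot_download

# Download the 6.5 bpw branch into a local folder.
snapshot_download(
    repo_id="bartowski/internlm2-chat-20b-llama-exp-exl2",
    revision="6_5",
    local_dir="internlm2-chat-20b-llama-exp-exl2-6_5",
    local_dir_use_symlinks=False,
)
```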
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
rodrigoasth/llama-2-7b-hf
|
rodrigoasth
| 2024-02-07T15:25:37Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T15:13:56Z |
---
language:
- en
library_name: transformers
---
|
mustafakara/dreambooth_ppl
|
mustafakara
| 2024-02-07T15:24:54Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-05T19:16:38Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of rsu monster toy
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - mustafakara/ppl
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of rsu monster toy using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
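A minimal inference sketch with 🤗 Diffusers (assumes this repo id serves the pipeline weights and that a CUDA GPU is available):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "mustafakara/dreambooth_ppl", torch_dtype=torch.float16
).to("cuda")

# Use the instance prompt the weights were trained on.
image = pipe("a photo of rsu monster toy").images[0]
image.save("rsu_monster_toy.png")
```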
|
ssaryssane/ssarry-truthful-13B-slerp
|
ssaryssane
| 2024-02-07T15:23:33Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"microsoft/Orca-2-13b",
"Sao10K/Mythical-Destroyer-V2-L2-13B",
"base_model:Sao10K/Mythical-Destroyer-V2-L2-13B",
"base_model:merge:Sao10K/Mythical-Destroyer-V2-L2-13B",
"base_model:microsoft/Orca-2-13b",
"base_model:merge:microsoft/Orca-2-13b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T15:17:28Z |
---
tags:
- merge
- mergekit
- lazymergekit
- microsoft/Orca-2-13b
- Sao10K/Mythical-Destroyer-V2-L2-13B
base_model:
- microsoft/Orca-2-13b
- Sao10K/Mythical-Destroyer-V2-L2-13B
---
# ssarry-truthful-13B-slerp
ssarry-truthful-13B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b)
* [Sao10K/Mythical-Destroyer-V2-L2-13B](https://huggingface.co/Sao10K/Mythical-Destroyer-V2-L2-13B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: microsoft/Orca-2-13b
layer_range: [0, 32]
- model: Sao10K/Mythical-Destroyer-V2-L2-13B
layer_range: [0, 32]
merge_method: slerp
base_model: Sao10K/Mythical-Destroyer-V2-L2-13B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
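For intuition, `slerp` interpolates along the great circle between two weight tensors rather than linearly, with `t` controlling the blend per filter as configured above. A standalone sketch of the formula (not mergekit's exact implementation):
```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Spherical linear interpolation between two weight tensors.
    v0_u = v0 / (v0.norm() + eps)
    v1_u = v1 / (v1.norm() + eps)
    omega = torch.arccos(torch.clamp((v0_u * v1_u).sum(), -1.0, 1.0))
    if omega.abs() < eps:  # nearly parallel: plain lerp is fine
        return (1 - t) * v0 + t * v1
    so = torch.sin(omega)
    return (torch.sin((1 - t) * omega) / so) * v0 + (torch.sin(t * omega) / so) * v1
```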
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "ssaryssane/ssarry-truthful-13B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
dkurzyk/phi2_DPO
|
dkurzyk
| 2024-02-07T15:21:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T15:21:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ahessamb/sentence-transformers-all-MiniLM-L6-v2-20epoch-100perp-contrastiveloss
|
ahessamb
| 2024-02-07T15:20:08Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-02-07T13:58:44Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# ahessamb/sentence-transformers-all-MiniLM-L6-v2-20epoch-100perp-contrastiveloss
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ahessamb/sentence-transformers-all-MiniLM-L6-v2-20epoch-100perp-contrastiveloss')
embeddings = model.encode(sentences)
print(embeddings)
```
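For semantic search, the embeddings can be compared with cosine similarity, for example (a small illustrative snippet):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('ahessamb/sentence-transformers-all-MiniLM-L6-v2-20epoch-100perp-contrastiveloss')
corpus = ["The market rallied today", "Rain is expected tomorrow", "Stocks closed higher"]
query_emb = model.encode("How did the stock market do?", convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)
print(util.cos_sim(query_emb, corpus_emb))  # 1 x 3 matrix of similarity scores
```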
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ahessamb/sentence-transformers-all-MiniLM-L6-v2-20epoch-100perp-contrastiveloss)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2334 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 2, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 233,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
osanseviero/DareVox-7B-AWQ
|
osanseviero
| 2024-02-07T15:13:25Z | 4 | 0 |
llama.cpp
|
[
"llama.cpp",
"safetensors",
"mistral",
"merge",
"mergekit",
"lazymergekit",
"teknium/OpenHermes-2.5-Mistral-7B",
"abacusai/Slerp-CM-mist-dpo",
"berkeley-nest/Starling-LM-7B-alpha",
"base_model:abideen/DareVox-7B",
"base_model:quantized:abideen/DareVox-7B",
"license:apache-2.0",
"4-bit",
"awq",
"region:us"
] | null | 2024-02-07T15:13:05Z |
---
base_model: abideen/DareVox-7B
inference: false
license: apache-2.0
model_creator: Zain ul Abideen
model_name: DareVox 7B
model_type: mistral
library_name: llama.cpp
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- merge
- mergekit
- lazymergekit
- teknium/OpenHermes-2.5-Mistral-7B
- abacusai/Slerp-CM-mist-dpo
- berkeley-nest/Starling-LM-7B-alpha
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# DareVox 7B - AWQ
- Model creator: [Zain ul Abideen](https://huggingface.co/abideen)
- Original model: [DareVox 7B](https://huggingface.co/abideen/DareVox-7B)
<!-- description start -->
## Description
This repo contains AWQ model files for [Zain ul Abideen's DareVox 7B](https://huggingface.co/abideen/DareVox-7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/DareVox-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/DareVox-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/DareVox-7B-GGUF)
* [Zain ul Abideen's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/abideen/DareVox-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/DareVox-7B-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.15 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/DareVox-7B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `DareVox-7B-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/DareVox-7B-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/DareVox-7B-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/DareVox-7B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/DareVox-7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Zain ul Abideen's DareVox 7B
# DareVox-7B
DareVox-7B is a merge of the following models:
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
* [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo)
* [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
density: 0.53
weight: 0.4
- model: abacusai/Slerp-CM-mist-dpo
parameters:
density: 0.53
weight: 0.3
- model: berkeley-nest/Starling-LM-7B-alpha
parameters:
density: 0.5
weight: 0.4
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "abideen/DareVox-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Jayem-11/zephyr-7b-beta_assistant_v0.2
|
Jayem-11
| 2024-02-07T15:04:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T12:53:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/DeepMagic-Coder-7b-Alt-GPTQ
|
LoneStriker
| 2024-02-07T14:44:04Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T14:41:20Z |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
---
(Note: from short testing, this Alt version generated much better code.)
Alternate version of DeepMagic-Coder-7b, which can be found below:
- https://huggingface.co/rombodawg/DeepMagic-Coder-7b

This version uses a different config setup, with the actual base model of the two merges as the "base_model". Test both for yourself and see which is better at coding; see the usage sketch after the config. Benchmarks are coming soon.
Config can be found below:
```yaml
models:
  - model: deepseek-ai_deepseek-coder-6.7b-instruct
    parameters:
      weight: 1
  - model: ise-uiuc_Magicoder-S-DS-6.7B
    parameters:
      weight: 1
merge_method: task_arithmetic
base_model: deepseek-ai_deepseek-coder-6.7b-base
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
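As a starting point for that testing, here is a hedged usage sketch. It assumes the GPTQ weights load through `transformers` with `optimum` and `auto-gptq` installed; the prompt string is purely illustrative and not from the model card:

```python
# Assumes: pip install -q transformers optimum auto-gptq
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LoneStriker/DeepMagic-Coder-7b-Alt-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative coding prompt (an assumption, not from the card)
prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```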
|