| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
alexgrigore/videomae-base-finetuned-gesturePhasev2 | alexgrigore | 2024-05-31T13:07:10Z | 62 | 0 | transformers | ["transformers", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | video-classification | 2024-05-31T11:24:50Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-gesturePhasev2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-gesturePhasev2
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8172
- Accuracy: 0.7633
## Model description
More information needed
## Intended uses & limitations
More information needed
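Pending the author's documentation, here is a minimal inference sketch (assumptions, not from the card: the checkpoint keeps videomae-base's default 16-frame, 224×224 clip input, and the class labels live in `config.id2label`):
```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

repo = "alexgrigore/videomae-base-finetuned-gesturePhasev2"
processor = VideoMAEImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

# Dummy clip: 16 RGB frames of 224x224 (replace with real video frames).
video = list(np.random.randint(0, 256, (16, 3, 224, 224), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```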
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.2
- training_steps: 376
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.1665 | 0.125 | 47 | 1.0642 | 0.775 |
| 0.7316 | 1.1263 | 95 | 0.7826 | 0.7875 |
| 0.7259 | 2.125 | 142 | 0.8042 | 0.7875 |
| 0.6643 | 3.1263 | 190 | 0.8023 | 0.7875 |
| 0.761 | 4.125 | 237 | 0.8077 | 0.7875 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Wartortle/npine0055_2 | Wartortle | 2024-05-31T12:59:10Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2024-05-31T12:01:14Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: CompVis/stable-diffusion-v1-4
inference: true
instance_prompt: an illustration of npine0055
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - Wartortle/npine0055_2
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on an illustration of npine0055 using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
A minimal sketch, not from the card: it assumes a CUDA device, 🤗 Diffusers installed, and sampling with the instance prompt above.
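```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical usage: load this DreamBooth checkpoint and sample its instance prompt.
pipeline = StableDiffusionPipeline.from_pretrained(
    "Wartortle/npine0055_2", torch_dtype=torch.float16
).to("cuda")
image = pipeline("an illustration of npine0055").images[0]
image.save("npine0055.png")
```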
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
Amna100/fold_4 | Amna100 | 2024-05-31T12:53:47Z | 6 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "deberta", "token-classification", "generated_from_trainer", "base_model:Amna100/PreTraining-MLM", "base_model:finetune:Amna100/PreTraining-MLM", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2024-01-04T13:01:22Z |
---
license: mit
base_model: Amna100/PreTraining-MLM
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: fold_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-change2/runs/zkyqf4w8)
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-change2/runs/n6lnsbeg)
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-change2/runs/k9jhon43)
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-change2/runs/67sviuwh)
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-change2/runs/e4zmtw0z)
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-change2/runs/ykmsii48)
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-change2/runs/hrdcpnd9)
# fold_4
This model is a fine-tuned version of [Amna100/PreTraining-MLM](https://huggingface.co/Amna100/PreTraining-MLM) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0104
- Precision: 0.6792
- Recall: 0.5870
- F1: 0.6297
- Accuracy: 0.9993
- Roc Auc: 0.9967
- Pr Auc: 0.9999
## Model description
More information needed
## Intended uses & limitations
More information needed
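In the meantime, a hypothetical inference sketch (the example sentence, the aggregation setting, and the assumption that this is an adverse-drug-event tagger, suggested by the linked W&B project, are mine; the label set is undocumented):
```python
from transformers import pipeline

# Hypothetical usage of this DeBERTa token classifier.
ner = pipeline("token-classification", model="Amna100/fold_4", aggregation_strategy="simple")
print(ner("The patient developed a rash after taking amoxicillin."))
```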
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Roc Auc | Pr Auc |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------:|:------:|
| 0.0252 | 1.0 | 711 | 0.0159 | 0.4538 | 0.6413 | 0.5315 | 0.9988 | 0.9944 | 0.9998 |
| 0.0095 | 2.0 | 1422 | 0.0104 | 0.6792 | 0.5870 | 0.6297 | 0.9993 | 0.9967 | 0.9999 |
| 0.003 | 3.0 | 2133 | 0.0106 | 0.6432 | 0.6957 | 0.6684 | 0.9993 | 0.9973 | 0.9999 |
| 0.0024 | 4.0 | 2844 | 0.0126 | 0.7006 | 0.6739 | 0.6870 | 0.9994 | 0.9960 | 0.9999 |
| 0.0004 | 5.0 | 3555 | 0.0148 | 0.7239 | 0.6413 | 0.6801 | 0.9994 | 0.9954 | 0.9999 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
slavamarcin/saiga3-8b-yuraz28-dataset-IA3 | slavamarcin | 2024-05-31T12:51:13Z | 1 | 0 | peft | ["peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:IlyaGusev/saiga_llama3_8b", "base_model:adapter:IlyaGusev/saiga_llama3_8b", "license:other", "region:us"] | null | 2024-05-31T12:51:10Z |
---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: IlyaGusev/saiga_llama3_8b
model-index:
- name: saiga3-8b-yuraz28-dataset-IA3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# saiga3-8b-yuraz28-dataset-IA3
This model is a fine-tuned version of [IlyaGusev/saiga_llama3_8b](https://huggingface.co/IlyaGusev/saiga_llama3_8b) on an unknown dataset.
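The repository holds a PEFT adapter (IA3, per the name and tags) rather than full model weights. A minimal loading sketch, assumed rather than taken from the card:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "IlyaGusev/saiga_llama3_8b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")

# Attach the adapter weights from this repo on top of the base model.
model = PeftModel.from_pretrained(base, "slavamarcin/saiga3-8b-yuraz28-dataset-IA3")
```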
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.11.2.dev0
- Transformers 4.38.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
Essacheez/Phi-3-mini-4k-instruct-finetune-translation-10k-system-prompt-style | Essacheez | 2024-05-31T12:50:21Z | 7 | 0 | transformers | ["transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-05-30T12:16:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
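As a placeholder until the author fills this in, a hedged sketch (the chat formatting and the translation-style prompt are assumptions based on the repo name; `trust_remote_code=True` mirrors the `custom_code` tag):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Essacheez/Phi-3-mini-4k-instruct-finetune-translation-10k-system-prompt-style"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

# Hypothetical prompt; the actual system-prompt style is undocumented.
messages = [{"role": "user", "content": "Translate to English: Bonjour tout le monde."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```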
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MohamedAcadys/PointConImageModelV1-4 | MohamedAcadys | 2024-05-31T12:48:48Z | 5 | 1 | diffusers | ["diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers-training", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2024-05-20T12:16:11Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
base_model: CompVis/stable-diffusion-v1-4
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text-to-image finetuning - MohamedAcadys/PointConImageModelV1-4
This pipeline was fine-tuned from **CompVis/stable-diffusion-v1-4** on the **Acadys/PointConImagesV2** dataset. Below are some example images generated with the fine-tuned pipeline using the prompt: ["Un patron en costume donne un dossier à un employé dans le style 'Edition point Con'"] (roughly: "A boss in a suit hands a file to an employee in the 'Edition point Con' style"):

## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pretrained("MohamedAcadys/PointConImageModelV1-4", torch_dtype=torch.float16).to("cuda")  # fp16 weights need a GPU device
prompt = "Un patron en costume donne un dossier à un employé dans le style 'Edition point Con'"
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 200
* Learning rate: 1e-05
* Batch size: 2
* Gradient accumulation steps: 4
* Image resolution: 512
* Mixed-precision: fp16
More information on all the CLI arguments and the environment is available on your [`wandb` run page](https://wandb.ai/acadys-sadadou/text2image-fine-tune/runs/hflztcbt).
## Intended uses & limitations
#### How to use
See the pipeline snippet under "Pipeline usage" above.
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
Sudipta1995/Llama-2-13b-acadftacadreviews | Sudipta1995 | 2024-05-31T12:41:16Z | 4 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-05-05T12:08:05Z |
This is a conference review model.
|
TheSleepyJo/31052024_p25_class_model | TheSleepyJo | 2024-05-31T12:41:09Z | 168 | 0 | transformers | ["transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2024-05-31T12:40:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
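As a stopgap, a hypothetical sketch (the tags mark this as a ViT image classifier; the image path is a placeholder):
```python
from transformers import pipeline

# Hypothetical usage; the label set is undocumented.
classifier = pipeline("image-classification", model="TheSleepyJo/31052024_p25_class_model")
print(classifier("example.jpg"))  # replace with a real image path or PIL.Image
```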
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jinaai/jina-bert-flash-implementation | jinaai | 2024-05-31T12:41:05Z | 233,407 | 5 | transformers | ["transformers", "bert", "custom_code", "endpoints_compatible", "region:eu"] | null | 2024-02-21T11:19:46Z |
# BERT with Flash-Attention
### Installing dependencies
To run the model on GPU, you need to install Flash Attention.
You may install it either from PyPI (which may not work with fused-dense) or from source.
To install from source, clone the GitHub repository:
```console
git clone git@github.com:Dao-AILab/flash-attention.git
```
The code provided here should work with commit `43950dd`.
Change to the cloned repo and install:
```console
cd flash-attention && python setup.py install
```
This will compile the flash-attention kernel, which will take some time.
If you would like to use fused MLPs (e.g. to use activation checkpointing),
you may install fused-dense also from source:
```console
cd csrc/fused_dense_lib && python setup.py install
```
### Configuration
The config adds several new parameters; a loading sketch follows the list:
- `use_flash_attn`: If `True`, always use flash attention. If `None`, use flash attention when GPU is available. If `False`, never use flash attention (works on CPU).
- `window_size`: Size (left and right) of the local attention window. If `(-1, -1)`, use global attention
- `dense_seq_output`: If true, we only need to pass the hidden states for the masked-out tokens (around 15%) to the classifier heads. I set this to true for pretraining.
- `fused_mlp`: Whether to use fused-dense. Useful to reduce VRAM in combination with activation checkpointing
- `mlp_checkpoint_lvl`: One of `{0, 1, 2}`. Increasing this increases the amount of activation checkpointing within the MLP. Keep this at 0 for pretraining and use gradient accumulation instead. For embedding training, increase this as much as needed.
- `last_layer_subset`: If true, we only need to compute the last layer for a subset of tokens. I left this as false.
- `use_qk_norm`: Whether or not to use QK-normalization
- `num_loras`: Number of LoRAs to use when initializing a `BertLoRA` model. Has no effect on other models.
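A hedged loading sketch (assumptions: these flags can be passed as config overrides through `AutoConfig`, and `trust_remote_code=True` is required for the custom implementation):
```python
from transformers import AutoConfig, AutoModel

# CPU-safe settings shown; set use_flash_attn=None to enable flash attention on GPU.
config = AutoConfig.from_pretrained(
    "jinaai/jina-bert-flash-implementation",
    use_flash_attn=False,
    trust_remote_code=True,
)
model = AutoModel.from_pretrained(
    "jinaai/jina-bert-flash-implementation", config=config, trust_remote_code=True
)
```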
|
javidanaslanli/tiny-az-tokenizer-22k | javidanaslanli | 2024-05-31T12:40:23Z | 0 | 0 | transformers | ["transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-05-31T12:40:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
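A hedged sketch (assumption: the repo name suggests a small Azerbaijani tokenizer with a ~22k vocabulary):
```python
from transformers import AutoTokenizer

# Hypothetical usage; the repo documents nothing beyond its name.
tokenizer = AutoTokenizer.from_pretrained("javidanaslanli/tiny-az-tokenizer-22k")
print(tokenizer.tokenize("Salam dünya"))  # "Hello world" in Azerbaijani
```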
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
taoki/deepseek-coder-6.7b-it-jmultiwoz-dolly-amenokaku-alpaca_jp_python | taoki | 2024-05-31T12:39:13Z | 11 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "trl", "deepseek", "conversational", "ja", "dataset:sakusakumura/databricks-dolly-15k-ja-scored", "dataset:nu-dialogue/jmultiwoz", "dataset:kunishou/amenokaku-code-instruct", "dataset:HachiML/alpaca_jp_python", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "base_model:finetune:deepseek-ai/deepseek-coder-6.7b-instruct", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-05-30T15:41:55Z |
---
language:
- ja
license: other
tags:
- text-generation-inference
- transformers
- trl
- deepseek
datasets:
- sakusakumura/databricks-dolly-15k-ja-scored
- nu-dialogue/jmultiwoz
- kunishou/amenokaku-code-instruct
- HachiML/alpaca_jp_python
license_name: deepseek
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
---
# Uploaded model
- **Developed by:** taoki
- **License:** deepseek
- **Finetuned from model:** deepseek-ai/deepseek-coder-6.7b-instruct
# Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained(
"taoki/deepseek-coder-6.7b-it-jmultiwoz-dolly-amenokaku-alpaca_jp_python"
)
model = AutoModelForCausalLM.from_pretrained(
"taoki/deepseek-coder-6.7b-it-jmultiwoz-dolly-amenokaku-alpaca_jp_python"
)
if torch.cuda.is_available():
    model = model.to("cuda")
prompt="""あなたはプログラミングアシスタントです。
### Instruction:
OpenCVを用いて定点カメラから画像を保存するコードを示してください。
### Response:
"""
input_ids = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**input_ids,
max_new_tokens=512,
do_sample=True,
top_p=0.9,
temperature=0.2,
repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0]))
```
# Output
````
<|begin▁of▁sentence|>あなたはプログラミングアシスタントです。
### Instruction:
OpenCVを用いて定点カメラから画像を保存するコードを示してください。
### Response:
以下のPythonコードは、OpenCVを使用して定点カメラから画像を取得し、保存する方法を示します。
```python
import cv2
# カメラ設定
camera_id = 0 # 通常のカメラIDは0
cap = cv2.VideoCapture(camera_id)
if not cap.isOpened():
print("Failed to open camera")
else:
ret, frame = cap.read()
if ret:
# 画像を保存する
filename = 'image.jpg'
cv2.imwrite(filename, frame)
print('Image saved as', filename)
else:
print('Failed to capture image')
cap.release()
cv2.destroyAllWindows()
```
上記のコードでは、`cv2.VideoCapture()`関数でカメラを指定し、`cap.read()`で画像を読み込みます。成功した場合はTrueが返り、画像がframeとして返されます。
次に、`cv2.imwrite()`関数で画像を保存します。第一引数には保存先のパスを指定し、第二引数には保存する画像を指定します。
最後に、`cap.release()`でカメラを解放し、`cv2.destroyAllWindows()`で全てのウィンドウを破棄します。
<|EOT|>
````
|
kamilmelloukepfl/ref-pythia160m-dpo | kamilmelloukepfl | 2024-05-31T12:35:46Z | 152 | 0 | transformers | ["transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-05-30T15:58:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
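A hypothetical sketch (the prompt is arbitrary; the card documents no intended usage):
```python
from transformers import pipeline

# Hypothetical usage of this Pythia-160m variant.
generator = pipeline("text-generation", model="kamilmelloukepfl/ref-pythia160m-dpo")
print(generator("The quick brown fox", max_new_tokens=40)[0]["generated_text"])
```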
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
harveybro/molt5-augmented-default-0-large-caption2smiles | harveybro | 2024-05-31T12:35:41Z | 108 | 0 | transformers | ["transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2024-05-31T12:33:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
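A hypothetical sketch (assumption from the repo name: a MolT5-style model mapping a natural-language caption to a SMILES string; the caption is an invented example):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "harveybro/molt5-augmented-default-0-large-caption2smiles"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical caption-to-SMILES query.
inputs = tokenizer("The molecule is a common solvent and fuel additive.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```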
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
worldboss/ft-phi-3-on-linux-orpo | worldboss | 2024-05-31T12:33:03Z | 160 | 0 | transformers | ["transformers", "safetensors", "phi3", "text-generation", "autotrain", "text-generation-inference", "peft", "conversational", "custom_code", "dataset:argilla/distilabel-capybara-dpo-7k-binarized", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-05-31T04:22:37Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- argilla/distilabel-capybara-dpo-7k-binarized
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
Ramikan-BR/tinyllama-coder-py-v15 | Ramikan-BR | 2024-05-31T12:30:52Z | 186 | 0 | transformers | ["transformers", "pytorch", "gguf", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/tinyllama-chat-bnb-4bit", "base_model:quantized:unsloth/tinyllama-chat-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-05-31T12:09:56Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/tinyllama-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ParZiVal04/gemma-2b-patch-gen | ParZiVal04 | 2024-05-31T12:24:32Z | 152 | 1 | transformers | ["transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-05-31T12:17:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
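A hedged sketch (the `conversational` tag suggests chat formatting; the prompt is an assumption, since the card does not explain what "patch-gen" expects):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ParZiVal04/gemma-2b-patch-gen"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical prompt; adjust to the model's actual patch-generation format.
messages = [{"role": "user", "content": "Write a patch that fixes an off-by-one error."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```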
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
legraphista/neo_7b-IMat-GGUF | legraphista | 2024-05-31T12:21:47Z | 391 | 0 | gguf | ["gguf", "quantized", "GGUF", "imatrix", "quantization", "imat", "static", "16bit", "8bit", "6bit", "5bit", "4bit", "3bit", "2bit", "1bit", "text-generation", "base_model:m-a-p/neo_7b", "base_model:quantized:m-a-p/neo_7b", "license:apache-2.0", "region:us", "conversational"] | text-generation | 2024-05-31T11:11:07Z |
---
base_model: m-a-p/neo_7b
inference: false
library_name: gguf
license: apache-2.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
- imat
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---
# neo_7b-IMat-GGUF
_Llama.cpp imatrix quantization of m-a-p/neo_7b_
Original Model: [m-a-p/neo_7b](https://huggingface.co/m-a-p/neo_7b)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3051](https://github.com/ggerganov/llama.cpp/releases/tag/b3051)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)
- [Files](#files)
    - [IMatrix](#imatrix)
    - [Common Quants](#common-quants)
    - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
    - [Chat template with system prompt](#chat-template-with-system-prompt)
    - [Simple chat template](#simple-chat-template)
    - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
    - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
    - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [neo_7b.Q8_0.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q8_0.gguf) | Q8_0 | 8.28GB | ✅ Available | ⚪ Static | 📦 No
| [neo_7b.Q6_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q6_K.gguf) | Q6_K | 6.40GB | ✅ Available | ⚪ Static | 📦 No
| [neo_7b.Q4_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q4_K.gguf) | Q4_K | 4.74GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.Q3_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q3_K.gguf) | Q3_K | 3.79GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.Q2_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q2_K.gguf) | Q2_K | 2.92GB | ✅ Available | 🟢 IMatrix | 📦 No
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [neo_7b.BF16.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.BF16.gguf) | BF16 | 15.59GB | ✅ Available | ⚪ Static | 📦 No
| [neo_7b.FP16.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.FP16.gguf) | F16 | 15.59GB | ✅ Available | ⚪ Static | 📦 No
| [neo_7b.Q8_0.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q8_0.gguf) | Q8_0 | 8.28GB | ✅ Available | ⚪ Static | 📦 No
| [neo_7b.Q6_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q6_K.gguf) | Q6_K | 6.40GB | ✅ Available | ⚪ Static | 📦 No
| [neo_7b.Q5_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q5_K.gguf) | Q5_K | 5.54GB | ✅ Available | ⚪ Static | 📦 No
| [neo_7b.Q5_K_S.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q5_K_S.gguf) | Q5_K_S | 5.39GB | ✅ Available | ⚪ Static | 📦 No
| [neo_7b.Q4_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q4_K.gguf) | Q4_K | 4.74GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.Q4_K_S.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q4_K_S.gguf) | Q4_K_S | 4.47GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.IQ4_NL.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ4_NL.gguf) | IQ4_NL | 4.44GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.IQ4_XS.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ4_XS.gguf) | IQ4_XS | 4.20GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.Q3_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q3_K.gguf) | Q3_K | 3.79GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.Q3_K_L.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q3_K_L.gguf) | Q3_K_L | 4.11GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.Q3_K_S.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q3_K_S.gguf) | Q3_K_S | 3.43GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.IQ3_M.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ3_M.gguf) | IQ3_M | 3.53GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.IQ3_S.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ3_S.gguf) | IQ3_S | 3.43GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.IQ3_XS.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ3_XS.gguf) | IQ3_XS | 3.25GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.IQ3_XXS.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ3_XXS.gguf) | IQ3_XXS | 3.03GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.Q2_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q2_K.gguf) | Q2_K | 2.92GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.Q2_K_S.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q2_K_S.gguf) | Q2_K_S | 2.71GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.IQ2_M.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ2_M.gguf) | IQ2_M | 2.68GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.IQ2_S.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ2_S.gguf) | IQ2_S | 2.47GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.IQ2_XS.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ2_XS.gguf) | IQ2_XS | 2.36GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.IQ2_XXS.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ2_XXS.gguf) | IQ2_XXS | 2.14GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.IQ1_M.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ1_M.gguf) | IQ1_M | 1.89GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b.IQ1_S.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ1_S.gguf) | IQ1_S | 1.73GB | ✅ Available | 🟢 IMatrix | 📦 No
## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/neo_7b-IMat-GGUF --include "neo_7b.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/neo_7b-IMat-GGUF --include "neo_7b.Q8_0/*" --local-dir ./
# see FAQ for merging GGUF's
```
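If you prefer Python over the CLI, the same download can be scripted with the `huggingface_hub` API (equivalent to the commands above):
```python
# Python equivalent of the huggingface-cli download above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="legraphista/neo_7b-IMat-GGUF",
    filename="neo_7b.Q8_0.gguf",
    local_dir="./",
)
print(path)  # local path of the downloaded GGUF
```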
---
## Inference
### Chat template with system prompt
```
<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{user_prompt} [/INST]{assistant_response}</s><s>[INST] {next_user_prompt} [/INST]
```
### Simple chat template
```
<s>[INST] {user_prompt} [/INST]{assistant_response}</s><s>[INST] {next_user_prompt} [/INST]
```
### Llama.cpp
```
llama.cpp/main -m neo_7b.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
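If you would rather call the model from Python, the `llama-cpp-python` bindings load the same file (a minimal sketch; the chat template above still has to be applied to the prompt by hand):
```python
# Minimal llama-cpp-python sketch; prompt formatting follows the template above.
from llama_cpp import Llama

llm = Llama(model_path="neo_7b.Q8_0.gguf", n_ctx=4096)
out = llm("<s>[INST] Write a haiku about quantization. [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```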
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `neo_7b.Q8_0`)
3. Run `gguf-split --merge neo_7b.Q8_0/neo_7b.Q8_0-00001-of-XXXXX.gguf neo_7b.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
|
Pfannerstill/torch_policy_gradient
|
Pfannerstill
| 2024-05-31T12:21:32Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-31T12:21:23Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: torch_policy_gradient
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 408.50 +/- 183.41
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
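For readers who have not taken Unit 4, here is a minimal sketch of the kind of policy network a REINFORCE agent for CartPole-v1 typically uses (hypothetical sizes; the stored checkpoint may use a different architecture):
```python
# Hypothetical REINFORCE policy for CartPole-v1 (4 observations, 2 actions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    def __init__(self, state_size=4, action_size=2, hidden_size=16):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, action_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)

    def act(self, state):
        # Sample an action; keep its log-probability for the policy-gradient update.
        state = torch.from_numpy(state).float().unsqueeze(0)
        probs = self.forward(state)
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        return action.item(), dist.log_prob(action)
```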
|
JuanGondu/ft_llama3_2epoch
|
JuanGondu
| 2024-05-31T12:19:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T12:17:54Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** JuanGondu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
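A minimal loading sketch, assuming the repo holds Unsloth-compatible weights (untested against this exact checkpoint):
```python
# Hypothetical loading sketch for an Unsloth fine-tune of Llama-3-8B.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="JuanGondu/ft_llama3_2epoch",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster inference path
```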
|
Haru4me/ppo-PyramidsRND
|
Haru4me
| 2024-05-31T12:18:04Z | 21 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-05-31T12:17:42Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
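To pull this checkpoint from the Hub before resuming (assuming the `mlagents-load-from-hf` helper from the ML-Agents Hub integration is available):
```bash
mlagents-load-from-hf --repo-id="Haru4me/ppo-PyramidsRND" --local-dir="./downloads"
```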
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Haru4me/ppo-PyramidsRND
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Hamzezi/pythia-160m-dpo-pos
|
Hamzezi
| 2024-05-31T12:16:02Z | 151 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T00:23:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
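The card leaves this section blank; the following is a minimal hypothetical snippet inferred from the repo metadata (a `gpt_neox` text-generation checkpoint):
```python
# Hypothetical usage sketch; the card documents no official example.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Hamzezi/pythia-160m-dpo-pos")
model = AutoModelForCausalLM.from_pretrained("Hamzezi/pythia-160m-dpo-pos")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```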
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Mattis0525/bert-base-chinese-finetuned-tcfd
|
Mattis0525
| 2024-05-31T12:14:12Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-chinese",
"base_model:finetune:google-bert/bert-base-chinese",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-05-30T22:37:31Z |
---
base_model: bert-base-chinese
tags:
- generated_from_keras_callback
model-index:
- name: Mattis0525/bert-base-chinese-finetuned-tcfd
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Mattis0525/bert-base-chinese-finetuned-tcfd
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6502
- Train Accuracy: 0.0591
- Validation Loss: 0.6504
- Validation Accuracy: 0.0591
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.9480 | 0.0555 | 0.8742 | 0.0566 | 0 |
| 0.8735 | 0.0567 | 0.7660 | 0.0589 | 1 |
| 0.7694 | 0.0574 | 0.7093 | 0.0584 | 2 |
| 0.7190 | 0.0588 | 0.6563 | 0.0604 | 3 |
| 0.6720 | 0.0592 | 0.6636 | 0.0601 | 4 |
| 0.6479 | 0.0596 | 0.6639 | 0.0602 | 5 |
| 0.6446 | 0.0598 | 0.6266 | 0.0614 | 6 |
| 0.6257 | 0.0602 | 0.6393 | 0.0609 | 7 |
| 0.6534 | 0.0590 | 0.6301 | 0.0588 | 8 |
| 0.6502 | 0.0591 | 0.6504 | 0.0591 | 9 |
### Framework versions
- Transformers 4.41.1
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
EduRayan/A-new-repo
|
EduRayan
| 2024-05-31T12:11:31Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-31T11:47:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: A-new-repo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# A-new-repo
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6928
- eval_accuracy: 0.4933
- eval_f1: 0.1828
- eval_runtime: 70.4773
- eval_samples_per_second: 4.257
- eval_steps_per_second: 0.27
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.30.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.13.3
|
Ramikan-BR/tinyllama-coder-py-LORA-v15
|
Ramikan-BR
| 2024-05-31T12:08:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"base_model:finetune:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T12:08:15Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/tinyllama-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
stablediffusionapi/citron-animae-treasure
|
stablediffusionapi
| 2024-05-31T12:01:36Z | 3 | 2 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-05-31T11:58:54Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# citron-animae-treasure API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace the key in the code below and set **model_id** to "citron-animae-treasure".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/citron-animae-treasure)
Model link: [View model](https://modelslab.com/models/citron-animae-treasure)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
  "key": "your_api_key",
  "model_id": "citron-animae-treasure",
  "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
Capx/Llama-3-Obsidian
|
Capx
| 2024-05-31T11:56:48Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"general purpose",
"en",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T07:14:56Z |
---
license: llama3
language:
- en
pipeline_tag: text-generation
tags:
- general purpose
---
# Capx AI's Obsidian series
Capx AI's Latest series of finetunes based on [Meta's Llama-3](https://llama.meta.com/llama3/).

We release Llama-3 Obsidian, our state-of-the-art finetuned model for general-purpose tasks.
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes.
We further finetune the pretrained model on a custom dataset to enhance instruction following ability of the model.
#### Model Architecture
Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT).
## How to use:
The model has been finetuned in the Alpaca format.
You can run conversational inference by leveraging the Auto classes with the `generate()` function.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "instruction", "content": "You are \"Capx Assistant\", a conscious sentient superintelligent artificial intelligence developed by Capx AI, and your purpose and drive is to assist the user with any request they have."},
{"role": "input", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### Authors
Capx community
### Cite
```bibtex
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
### License
Governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)
|
anggaarash/poca-SoccerTwos
|
anggaarash
| 2024-05-31T11:56:15Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"ML-Agents-SoccerTwos",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"region:us"
] |
reinforcement-learning
| 2024-05-31T11:53:56Z |
---
library_name: ml-agents
tags:
- ML-Agents-SoccerTwos
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: anggaarash/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
bartowski/Llama-3-Ultron-GGUF
|
bartowski
| 2024-05-31T11:56:01Z | 132 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-05-31T11:38:02Z |
---
library_name: transformers
tags: []
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Llama-3-Ultron
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3024">b3024</a> for quantization.
Original model: https://huggingface.co/jayasuryajsk/Llama-3-Ultron
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-Ultron-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Llama-3-Ultron-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Llama-3-Ultron-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Llama-3-Ultron-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Llama-3-Ultron-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama-3-Ultron-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Llama-3-Ultron-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama-3-Ultron-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Llama-3-Ultron-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Llama-3-Ultron-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama-3-Ultron-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Llama-3-Ultron-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Llama-3-Ultron-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Llama-3-Ultron-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Llama-3-Ultron-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Llama-3-Ultron-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-Ultron-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Llama-3-Ultron-GGUF --include "Llama-3-Ultron-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Llama-3-Ultron-GGUF --include "Llama-3-Ultron-Q8_0.gguf/*" --local-dir Llama-3-Ultron-Q8_0
```
You can either specify a new local-dir (Llama-3-Ultron-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. For example, on an 8GB card the 6.59GB Q6_K fits with headroom for context, while the 8.54GB Q8_0 does not.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
RADAMSHI/xlm-roberta-base-finetuned-panx-all
|
RADAMSHI
| 2024-05-31T11:54:43Z | 127 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-05-30T07:05:32Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1851
- F1: 0.8544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2893 | 1.0 | 1252 | 0.2014 | 0.8148 |
| 0.1587 | 2.0 | 2504 | 0.1777 | 0.8427 |
| 0.1015 | 3.0 | 3756 | 0.1851 | 0.8544 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
onurkeles/bertturk-ottoman-raw
|
onurkeles
| 2024-05-31T11:52:55Z | 122 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-05-31T11:52:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
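The card leaves this section blank; the following is a hypothetical sketch inferred from the repo's fill-mask pipeline tag:
```python
# Hypothetical usage sketch; the card documents no official example.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="onurkeles/bertturk-ottoman-raw")
print(unmasker("İstanbul çok güzel bir [MASK]."))  # [MASK] is the BERT mask token
```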
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
weiiv/Phi-3-medium-4k-instruct-Q4_K_M-GGUF
|
weiiv
| 2024-05-31T11:46:49Z | 4 | 0 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"phi3",
"phi",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Phi-3-medium-4k-instruct",
"base_model:quantized:unsloth/Phi-3-medium-4k-instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-31T11:46:29Z |
---
language:
- en
license: mit
library_name: transformers
tags:
- unsloth
- phi3
- transformers
- phi
- llama-cpp
- gguf-my-repo
base_model: unsloth/Phi-3-medium-4k-instruct
---
# weiiv/Phi-3-medium-4k-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`unsloth/Phi-3-medium-4k-instruct`](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo weiiv/Phi-3-medium-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-medium-4k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo weiiv/Phi-3-medium-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-medium-4k-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo weiiv/Phi-3-medium-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-medium-4k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo weiiv/Phi-3-medium-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-medium-4k-instruct-q4_k_m.gguf -c 2048
```
|
RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf
|
RichardErkhov
| 2024-05-31T11:44:31Z | 48 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T08:59:14Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
japanese-stablelm-base-ja_vocab-beta-7b - GGUF
- Model creator: https://huggingface.co/stabilityai/
- Original model: https://huggingface.co/stabilityai/japanese-stablelm-base-ja_vocab-beta-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q2_K.gguf) | Q2_K | 2.44GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.IQ3_XS.gguf) | IQ3_XS | 2.69GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.IQ3_S.gguf) | IQ3_S | 2.83GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q3_K_S.gguf) | Q3_K_S | 2.83GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.IQ3_M.gguf) | IQ3_M | 2.98GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q3_K.gguf) | Q3_K | 3.15GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q3_K_M.gguf) | Q3_K_M | 3.15GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q3_K_L.gguf) | Q3_K_L | 3.43GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.IQ4_XS.gguf) | IQ4_XS | 3.49GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q4_0.gguf) | Q4_0 | 3.66GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.IQ4_NL.gguf) | IQ4_NL | 3.68GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q4_K_S.gguf) | Q4_K_S | 3.68GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q4_K.gguf) | Q4_K | 3.89GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q4_K_M.gguf) | Q4_K_M | 3.89GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q4_1.gguf) | Q4_1 | 4.04GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q5_0.gguf) | Q5_0 | 4.43GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q5_K_S.gguf) | Q5_K_S | 4.43GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q5_K.gguf) | Q5_K | 4.56GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q5_K_M.gguf) | Q5_K_M | 4.56GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q5_1.gguf) | Q5_1 | 4.82GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q6_K.gguf) | Q6_K | 5.26GB |
| [japanese-stablelm-base-ja_vocab-beta-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-ja_vocab-beta-7b-gguf/blob/main/japanese-stablelm-base-ja_vocab-beta-7b.Q8_0.gguf) | Q8_0 | 6.81GB |
Original model description:
---
language:
- ja
tags:
- japanese-stablelm
- causal-lm
pipeline_tag: text-generation
datasets:
- wikipedia
- mc4
- cc100
- oscar-corpus/OSCAR-2301
- oscar-corpus/OSCAR-2201
- cerebras/SlimPajama-627B
license:
- llama2
extra_gated_fields:
Name: text
Email: text
Country: text
Organization or Affiliation: text
I allow Stability AI to contact me about information related to its models and research: checkbox
---
# Japanese-StableLM-Base-JAVocab-Beta-7B

> A cute robot wearing a kimono writes calligraphy with one single brush — [Stable Diffusion XL](https://clipdrop.co/stable-diffusion)
## Model Description
`japanese-stablelm-base-ja_vocab-beta-7b` is a 7B-parameter decoder-only language model based on [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b) that has been fine-tuned on a diverse collection of Japanese data, with the intent of maximizing downstream performance on Japanese language tasks.
Compared to the [standard base model](https://huggingface.co/stabilityai/japanese-stablelm-base-beta-7b), this model uses a tokenizer with an expanded vocabulary derived from Japanese data. This allows it to represent the same amount of text with fewer tokens, which speeds up inference significantly.
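A quick way to see that effect is to count tokens for the same Japanese text under both tokenizers (a sketch: `meta-llama/Llama-2-7b-hf` is gated, so this assumes you have access; the counts are illustrative, not benchmarked):
```python
# Compare token counts between the expanded-vocabulary tokenizer and the base one.
from transformers import AutoTokenizer

text = "AI で科学研究を加速するには、大規模な計算資源が必要です。"
ja_tok = AutoTokenizer.from_pretrained("stabilityai/japanese-stablelm-base-ja_vocab-beta-7b")
base_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
print(len(ja_tok.encode(text)), "vs", len(base_tok.encode(text)))
```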
For an instruction-following version of this model, see [Japanese-StableLM-Instruct-JAVocab-Beta-7B](https://huggingface.co/stabilityai/japanese-stablelm-instruct-ja_vocab-beta-7b).
## Usage
First install additional dependencies in [requirements.txt](./requirements.txt):
```sh
pip install -r requirements.txt
```
Then start generating text with `japanese-stablelm-base-ja_vocab-beta-7b` by using the following code snippet:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "stabilityai/japanese-stablelm-base-ja_vocab-beta-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# The next line may need to be modified depending on the environment
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
prompt = """
AI で科学研究を加速するには、
""".strip()
input_ids = tokenizer.encode(
prompt,
add_special_tokens=True,
return_tensors="pt"
)
# this is for reproducibility;
# feel free to change it to get different results
seed = 23
torch.manual_seed(seed)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=128,
temperature=0.99,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
We suggest playing with different generation configs (`top_p`, `repetition_penalty`, etc.) to find the best setup for your tasks. For example, use a higher temperature for roleplay tasks and a lower temperature for reasoning.
## Model Details
* **Model type**: `japanese-stablelm-base-ja_vocab-beta-7b` model is an auto-regressive language model based on the Llama2 transformer architecture.
* **Language(s)**: Japanese
* **Library**: [Tinypar](https://github.com/Stability-AI/jp-tinypar)
* **License**: [Llama2 Community License](https://ai.meta.com/llama/license/).
* **Contact**: For questions and comments about the model, please join [Stable Community Japan](https://discord.gg/StableJP). For future announcements / information about Stability AI models, research, and events, please follow https://twitter.com/StabilityAI_JP.
## Training Dataset
Roughly 100B tokens from a mixture of the following corpora were used for continued pre-training.
- [Japanese/English Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [Japanese mc4](https://huggingface.co/datasets/mc4)
- [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz)
- [Japanese OSCAR](https://oscar-project.github.io/documentation/)
- [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B) (excluding the Books3 subset)
## Use and Limitations
### Intended Use
The model is intended to be used by all individuals as a foundation for application-specific fine-tuning without strict limitations on commercial use.
### Limitations and bias
The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters which can be reflected in the model generated text. We recommend users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.
## Authors
This model was developed by the Research & Development team at Stability AI Japan, and the development was co-led by [Takuya Akiba](https://huggingface.co/iwiwi) and [Meng Lee](https://huggingface.co/leemeng). The members of the team are as follows:
- [Meng Lee](https://huggingface.co/leemeng)
- [Fujiki Nakamura](https://huggingface.co/fujiki)
- [Makoto Shing](https://huggingface.co/mkshing)
- [Paul McCann](https://huggingface.co/polm-stability)
- [Takuya Akiba](https://huggingface.co/iwiwi)
- [Naoki Orii](https://huggingface.co/mrorii)
## Acknowledgements
We thank Meta Research for releasing Llama 2 under an open license for others to build on.
We are grateful for the contributions of the EleutherAI Polyglot-JA team in helping us to collect a large amount of pre-training data in Japanese. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project when he committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.
We are also appreciative of [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for assisting us in gathering a large amount of high-quality Japanese textual data for model training.
|
bergena/political_ads_file_classifier
|
bergena
| 2024-05-31T11:42:53Z | 182 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-31T11:42:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
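The card leaves this section blank; the following is a hypothetical sketch inferred from the repo's text-classification tag (label meanings are not documented here):
```python
# Hypothetical usage sketch; the card documents no official example.
from transformers import pipeline

classifier = pipeline("text-classification", model="bergena/political_ads_file_classifier")
print(classifier("Vote for candidate X this November!"))
```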
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yetanotherhif/jmg_llama3-8b-orpo2k
|
yetanotherhif
| 2024-05-31T11:41:47Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T06:41:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
onurkeles/bertturk-cased-2024-ottoman-raw
|
onurkeles
| 2024-05-31T11:41:28Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T11:41:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
prhegde/search-query-generator-ecommerce
|
prhegde
| 2024-05-31T11:39:57Z | 117 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2024-05-31T07:13:46Z |
---
library_name: transformers
license: apache-2.0
inference: false
---
# Model Card for Model ID
Generates possible search queries for a given product, based on its title and description. Can be used to synthetically generate search queries.
Input -> "Title: " + <product_title> + " - Description: " + <product_description>
## Development details
The model is trained with a novel adversarial Generator-Retriever framework.
The details of the framework can be found [here](https://github.com/PraveenSH/adversarial-generator-retriever/blob/main/README.md).
A notebook with the code is available [here](https://github.com/PraveenSH/adversarial-generator-retriever/blob/main/generator_retriever.ipynb).
## Using the model
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
MODEL_ID = "prhegde/search-query-generator-ecommerce"
gen_tokenizer = T5Tokenizer.from_pretrained(MODEL_ID)
gen_model = T5ForConditionalGeneration.from_pretrained(MODEL_ID)
gen_model.eval()
prod_title = "home sweet home pine pallet wall décor"
prod_desc = "decorate your home with this rustic wood , which is made from high-quality pine pallets . this creates a beautiful rustic look for the kitchen , bedroom , or living room — great gift idea for any occasion ; perfect for holidays , birthdays , or game days"
input_sequence = "Title: " + prod_title + " - Description: " + prod_desc
input_ids = gen_tokenizer(input_sequence, return_tensors="pt").input_ids
print(f'Input: {input_sequence}')
nsent = 4
with torch.no_grad():
    for i in range(nsent):
        output = gen_model.generate(input_ids, max_length=35, num_beams=1, do_sample=True, repetition_penalty=1.8)
        target_sequence = gen_tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Target: {target_sequence}')
```
|
Fulwa/my_awesome_billsum_model
|
Fulwa
| 2024-05-31T11:39:39Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-31T11:36:58Z |
---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5641
- Rouge1: 0.1398
- Rouge2: 0.0483
- Rougel: 0.1167
- Rougelsum: 0.1167
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8544 | 0.1255 | 0.0355 | 0.1046 | 0.1043 | 19.0 |
| No log | 2.0 | 124 | 2.6433 | 0.1307 | 0.0396 | 0.1079 | 0.108 | 19.0 |
| No log | 3.0 | 186 | 2.5798 | 0.1383 | 0.0455 | 0.115 | 0.1151 | 19.0 |
| No log | 4.0 | 248 | 2.5641 | 0.1398 | 0.0483 | 0.1167 | 0.1167 | 19.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
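### Inference example
A minimal inference sketch (not part of the original card): since this is a T5 summarization fine-tune evaluated with ROUGE, it should load with the standard `summarization` pipeline. The input text below is made up.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
summarizer = pipeline("summarization", model="Fulwa/my_awesome_billsum_model")

# Hypothetical bill text; replace with your own document.
text = "The bill establishes a grant program for state courts to improve public access to legal records."
print(summarizer(text, max_length=40, min_length=10)[0]["summary_text"])
```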
|
morca/mt5-tr-ft
|
morca
| 2024-05-31T11:39:30Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-31T11:38:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ali7538/EPFLLaMA_MCQA_Quantized
|
Ali7538
| 2024-05-31T11:38:17Z | 79 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-31T10:24:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thonnicolas/llama3-8b-oig-unsloth
|
thonnicolas
| 2024-05-31T11:37:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T11:37:18Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** thonnicolas
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Ali7538/EPFLLaMA_MCQA
|
Ali7538
| 2024-05-31T11:35:11Z | 152 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T10:32:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kornwtp/mixsp-simcse-bert-base
|
kornwtp
| 2024-05-31T11:33:57Z | 124 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-05-28T05:05:24Z |
---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- transformers
---
|
kornwtp/mixsp-sbert-bert-base
|
kornwtp
| 2024-05-31T11:33:30Z | 102 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-05-28T07:37:58Z |
---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- transformers
---
|
saq1b/midjourney-mimic
|
saq1b
| 2024-05-31T11:32:43Z | 469 | 4 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"midjourney",
"midjourney-v5.2",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-05-31T07:52:25Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- midjourney
- midjourney-v5.2
base_model: stabilityai/stable-diffusion-xl-base-1.0
inference: true
---
# From https://civitai.com/models/251417/midjourney-mimic
## All credits to the owner who made this LoRA; I just uploaded it to Hugging Face
A LoRA mimicking Midjourney style v5.2. This LoRA works as:
- Detail tweaker (supplements the picture with details)
- Color enhancer (adds contrast and brightness)
- BG depth improver (adds depth to the background)
IMPORTANT: use only with a weight from 0.1 to 0.8! You can set it higher, but the picture will become too colored (smooth edges) and proportions can break. Use a CFG scale of 4 - 6.
Use it like pepper: just add a little bit to the picture.
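A hedged usage sketch with diffusers (not from the original card): it assumes the LoRA in this repo loads via `load_lora_weights` (you may need to pass `weight_name=...` if the repo contains several files), and the prompt is made up; the scale and guidance values follow the recommendations above.
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model this LoRA was trained against.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("saq1b/midjourney-mimic")  # pass weight_name=... if needed

image = pipe(
    "a cozy mountain cabin at dusk, cinematic lighting",  # hypothetical prompt
    guidance_scale=5.0,                                   # recommended CFG 4-6
    cross_attention_kwargs={"scale": 0.5},                # LoRA weight within 0.1-0.8
).images[0]
image.save("out.png")
```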
|
chuhac/BiomedCLIP-vit-bert-hf
|
chuhac
| 2024-05-31T11:31:48Z | 1,664 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"clip",
"feature-extraction",
"zero-shot-classification",
"custom_code",
"en",
"license:mit",
"region:us"
] |
zero-shot-classification
| 2024-05-08T11:22:24Z |
---
license: mit
language:
- en
pipeline_tag: zero-shot-classification
---
### Huggingface-friendly BiomedCLIP
1. A pure PyTorch, Hugging Face-based implementation of the original microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224.
2. Checkpoint state-dict keys are renamed to match this implementation.
### Usage
```python
from transformers import AutoModel, AutoProcessor
model = AutoModel.from_pretrained("chuhac/BiomedCLIP-vit-bert-hf", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("chuhac/BiomedCLIP-vit-bert-hf", trust_remote_code=True)
```
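A hedged zero-shot classification sketch (not part of the original card), assuming the custom code exposes the standard CLIP interface (`logits_per_image`) and that the processor accepts text and images together; the image path and labels are placeholders.
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

model = AutoModel.from_pretrained("chuhac/BiomedCLIP-vit-bert-hf", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("chuhac/BiomedCLIP-vit-bert-hf", trust_remote_code=True)

labels = ["chest X-ray", "brain MRI", "histopathology slide"]  # placeholder labels
image = Image.open("example.png")                              # placeholder image

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

probs = outputs.logits_per_image.softmax(dim=-1)  # one probability per label
print(dict(zip(labels, probs[0].tolist())))
```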
|
marrodion/minilm-l12-v2-toxic-ft
|
marrodion
| 2024-05-31T11:31:28Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dataset_size:10K<n<100K",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:marrodion/minilm-l12-v2-simple",
"base_model:finetune:marrodion/minilm-l12-v2-simple",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-05-31T11:31:10Z |
---
language: []
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dataset_size:10K<n<100K
- loss:CosineSimilarityLoss
base_model: marrodion/minilm-l12-v2-simple
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
widget:
- source_sentence: Ugly white men*
sentences:
- fuck black people
- That's assuming incels can get enough gbp for tendies.
- Happy hari raya puasa (for those who celebrate it)! URL
- source_sentence: Burn in hell fag
sentences:
- What a fucking fag
- How do we get rid of radical islamism such as ISIS?
- You look beautiful girl. I love your makeup. I'm also a transgender woman
- source_sentence: Wtf does he know
sentences:
- Wtf is Muslim I've never heard this word before in my life
- Imagine having a gf. lol what a loser am I right guys
- kill yourself you fucking idiot cunt
- source_sentence: fuck black people
sentences:
- fuck her
- i'm gay as fuck bring it to the dm's s'il vous plait
- Super proud of u priyanka n sid💖😘🤗😄👏lots of love from india
- source_sentence: eradicate the gays
sentences:
- Anyone who supports ISIS terrorism globally should be eradicated.
- This movie was filled with garbage Zionist propaganda.
- Super proud of u priyanka n sid💖😘🤗😄👏lots of love from india
pipeline_tag: sentence-similarity
model-index:
- name: SentenceTransformer based on marrodion/minilm-l12-v2-simple
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: hatespeech sampled dev
type: hatespeech-sampled-dev
metrics:
- type: pearson_cosine
value: 0.5824678478663922
name: Pearson Cosine
- type: spearman_cosine
value: 0.4527341031732577
name: Spearman Cosine
- type: pearson_manhattan
value: 0.5684440833162158
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.4501340877013548
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.5699922346841907
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.4527341031732577
name: Spearman Euclidean
- type: pearson_dot
value: 0.5824678270038964
name: Pearson Dot
- type: spearman_dot
value: 0.4527341031732577
name: Spearman Dot
- type: pearson_max
value: 0.5824678478663922
name: Pearson Max
- type: spearman_max
value: 0.4527341031732577
name: Spearman Max
---
# SentenceTransformer based on marrodion/minilm-l12-v2-simple
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [marrodion/minilm-l12-v2-simple](https://huggingface.co/marrodion/minilm-l12-v2-simple). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [marrodion/minilm-l12-v2-simple](https://huggingface.co/marrodion/minilm-l12-v2-simple) <!-- at revision 04ebcc5b4f811f647374130599adb30be1267455 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("marrodion/minilm-l12-v2-toxic-ft")
# Run inference
sentences = [
'eradicate the gays',
'Anyone who supports ISIS terrorism globally should be eradicated.',
'This movie was filled with garbage Zionist propaganda.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `hatespeech-sampled-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.5825 |
| **spearman_cosine** | **0.4527** |
| pearson_manhattan | 0.5684 |
| spearman_manhattan | 0.4501 |
| pearson_euclidean | 0.57 |
| spearman_euclidean | 0.4527 |
| pearson_dot | 0.5825 |
| spearman_dot | 0.4527 |
| pearson_max | 0.5825 |
| spearman_max | 0.4527 |
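A sketch of reproducing this evaluation with the evaluator named above (not from the original card); the sentence pairs reuse the widget examples, and the gold scores are invented for illustration.
```python
from sentence_transformers import SentenceTransformer, SimilarityFunction
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("marrodion/minilm-l12-v2-toxic-ft")

# Pairs follow the dev-set format; these gold scores are made up.
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["eradicate the gays", "fuck black people"],
    sentences2=["Anyone who supports ISIS terrorism globally should be eradicated.", "fuck her"],
    scores=[1.0, 0.78],
    main_similarity=SimilarityFunction.COSINE,
    name="hatespeech-sampled-dev",
)
print(evaluator(model))  # returns the pearson/spearman metrics shown above
```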
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 33,827 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 35.5 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 34.35 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.82</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:----------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------|:--------------------------------|
| <code>Don't waste good pussy on a trash ass nigga</code> | <code>Stfu obviously trans people aren't retarded</code> | <code>0.7777777777777778</code> |
| <code>The more I read about dumb violent things do, like beating up two women bc they wouldn't kiss for them.. the gayer I become #Pride2019</code> | <code>Nuke them all</code> | <code>0.5555555555555556</code> |
| <code>Forget her cunt and ass and fuck her face till she pukes!</code> | <code>@QTAnon1 Hope she fucks her all the way back to Somalia..</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 1,781 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 35.77 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 36.67 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 0.11</li><li>mean: 0.83</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------|
| <code>So for Trump and his inner circle to be crying over law and order, they sure know how to follow the system huh? Yet nothing about his ways of following the law were legal. So, how is his asylum rules were ever legal if children were sick and forced to live in unconditioned areas is American or right?</code> | <code>We need every illegal alien gang member in prison. Then deported.</code> | <code>1.0</code> |
| <code>hey girl are you my dick cause im about to beat the shit out of you</code> | <code>Reading all these hot takes from Americans, they all just seem like utter dogs. A human mind should not be this deranged, yet this is the norm in America</code> | <code>0.8888888888888888</code> |
| <code>Its amazing what a failure women are in politics. Its almost like its not in their nature to be powerful. No matter how much they try, poor dears....</code> | <code>Tbh that cut makes him look like neonazi. He looks like he's about to lecture me about how white people are the most discriminates against racially after he called me the n word with that hard r.</code> | <code>0.8888888888888888</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | hatespeech-sampled-dev_spearman_cosine |
|:----------:|:-------:|:-------------:|:----------:|:--------------------------------------:|
| 0.2836 | 300 | 0.0503 | 0.0139 | 0.4258 |
| 0.5671 | 600 | 0.0143 | 0.0135 | 0.4418 |
| **0.8507** | **900** | **0.0134** | **0.0131** | **0.4527** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.0
- Transformers: 4.41.1
- PyTorch: 2.3.0
- Accelerate: 0.30.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
fundacionctic/oracle-dermat
|
fundacionctic
| 2024-05-31T11:31:27Z | 119 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"biology",
"medical",
"es",
"dataset:fundacionctic/DermatES",
"arxiv:1910.09700",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-28T12:57:56Z |
---
library_name: transformers
tags:
- biology
- medical
license: cc-by-nc-nd-4.0
datasets:
- fundacionctic/DermatES
language:
- es
metrics:
- accuracy
- f1
pipeline_tag: text-classification
---
# Model Card for Model ID
This is a fine-tuned version of the pre-trained biomedical language model [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) in Spanish, tailored for text classification tasks. We used two NVIDIA GPUs for training.
## Model Details
### Model Description
This model has been fine-tuned for text classification on dermatological Spanish electronic health records (EHR). It leverages the pre-trained biomedical language understanding from the [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) model and adapts it to classify dermatology-related texts effectively.
The model is intended to predict among 25 different skin diseases from a medical record. It could be a first visit or a follow-up visit.
It takes as input four features:
- *textual medical record:* the EHR written by a doctor
- *disease type:* the type of disease associated with the EHR
- *disease location:* the location in the body of the disease
- *disease severity:* how severe or lethal is the disease
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [Fundacion CTIC](https://www.fundacionctic.org)
- **Funded by:** [SATEC](https://www.satec.es)
- **Model type:** Fine-tuned LM Encoder
- **Language(s) (NLP):** Spanish
- **License:** CC-BY-NC
- **Finetuned from model:** [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:**
- **Paper [optional]:** Coming soon...
- **Demo [optional]:** [More Information Needed]
## Uses
The model is meant to be used for research ONLY! The industrial version of the model is called [predict-dermat](https://huggingface.co/fundacionctic/predict-dermat/) and is meant to predict not only the disease but also the three features mentioned above.
We DO NOT recommend fine-tuning this model; it is already tailored to its downstream task.
### Direct Use
This model can be directly used for classifying dermatological text data in Spanish EHRs.
### Downstream Use
The model can be integrated into healthcare applications for automatic classification of dermatological conditions from patient records.
### Out-of-Scope Use
The model is not suitable for non-medical text classification tasks or for texts in languages other than Spanish.
## Bias, Risks, and Limitations
This model is fine-tuned on a specific dataset and may not generalize well to other types of medical texts or conditions. Users should be cautious of biases in the training data that could affect the model's performance.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should validate the model's performance on their specific data and consider any ethical implications of deploying a machine learning model in a healthcare setting.
## How to Get Started with the Model
```python
import torch
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification

tokenizer = RobertaTokenizerFast.from_pretrained("fundacionctic/oracle-dermat")
model = RobertaForSequenceClassification.from_pretrained("fundacionctic/oracle-dermat")

max_length = 512  # replace with your desired maximum sequence length

inputs = tokenizer(
    ["Ejemplo de texto dermatológico"],
    truncation=True,
    padding="max_length",
    max_length=max_length,
    return_tensors="pt",
    return_attention_mask=True,
)

with torch.no_grad():
    outputs = model(**inputs)

predicted_class = outputs.logits.argmax(dim=-1).item()  # index of the predicted disease
```
## Training Details
### Training Data
The model was fine-tuned on the DermatES dataset from Fundación CTIC, which contains Spanish dermatological EHRs.
### Training Procedure
Training used two NVIDIA GPUs (11 GB and 49 GB).
#### Preprocessing
Texts were lowercased, anonymized, and stripped of accents.
#### Training Hyperparameters
- **Training regime:** fp32
#### Speeds, Sizes, Times
- **Epochs:** 9
- **Batch size:** 64
- **Learning rate:** 0.0001
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The evaluation was performed on 0.2 of the [DermatES](https://huggingface.co/datasets/fundacionctic/DermatES) dataset.
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
- *Accuracy:* 0.84
- *F1 score:* 0.75
- *Top-k (k=2) accuracy:* 0.92
- *Top-k (k=2) F1 score:* 0.90
#### Summary
The model achieves high accuracy and F1 score on dermatological text classification, demonstrating its effectiveness for this specific medical domain.
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications
### Model Architecture and Objective
The model is based on the [RoBERTa](https://huggingface.co/FacebookAI/roberta-base) architecture, fine-tuned for the objective of text classification in the biomedical domain.
### Compute Infrastructure
#### Hardware
Two NVIDIA GPUs were used for the fine-tuning process.
#### Software
The fine-tuning was performed using the 🤗 Transformers library.
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:** Coming soon
**APA:**
[More Information Needed]
## Glossary [optional]
## More Information [optional]
[More Information Needed]
## Model Card Authors
Leon-Paul Schaub Torre, Pelayo Quiros and Helena Garcia-Mieres
## Model Card Contact
[email protected]
[email protected]
|
Toshifumi/Llama3-Toshi-IMDB_20240601v1
|
Toshifumi
| 2024-05-31T11:28:08Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T11:19:35Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Toshifumi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
HVD2407/godel
|
HVD2407
| 2024-05-31T11:27:10Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-31T11:22:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
taoki/deepseek-coder-7B-instruct-ja-stackoverflow-GGUF
|
taoki
| 2024-05-31T11:26:10Z | 39 | 0 | null |
[
"gguf",
"dataset:p1atdev/ja-stackoverflow",
"base_model:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"base_model:quantized:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2023-12-27T01:53:20Z |
---
base_model: deepseek-ai/deepseek-coder-7b-instruct-v1.5
datasets:
- p1atdev/ja-stackoverflow
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
model_creator: Toshihiko Aoki
model_name: Deepseek Coder 7B Instruct ja-stackoverflow SFT - GGUF
model_type: deepseek
prompt_template: 'You are an AI programming assistant, utilizing the Deepseek Coder
model, developed by Deepseek Company, and you only answer questions related to computer
science. For politically sensitive questions, security and privacy issues, and other
non-computer science questions, you will refuse to answer.
### Instruction:
{prompt}
### Response:
'
---
# Deepseek Coder 7B Instruct ja-stackoverflow SFT - GGUF
## Description
This repository contains a model trained (QLoRA-SFT) with the following data:
- Base model: [Deepseek Coder 7B Instruct v1.5](https://huggingface.co/deepseek-ai/deepseek-coder-7b-instruct-v1.5)
- Training data: [日本語版 Stack Overflow](https://huggingface.co/datasets/p1atdev/ja-stackoverflow)
- accepted_answer_score > 2 and popular_answer_score > 2
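## Usage

A minimal sketch with `llama-cpp-python`, following the prompt template above; the GGUF filename below is an assumption — check the repository's file list for the quantization you actually downloaded:

```python
from llama_cpp import Llama

# Load the local GGUF file (filename is a placeholder for the quant you chose).
llm = Llama(model_path="deepseek-coder-7b-instruct-ja-stackoverflow.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "You are an AI programming assistant, utilizing the Deepseek Coder model, "
    "developed by Deepseek Company, and you only answer questions related to computer science. "
    "For politically sensitive questions, security and privacy issues, and other "
    "non-computer science questions, you will refuse to answer.\n"
    "### Instruction:\nPythonでリストを逆順にするには?\n### Response:\n"
)
output = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```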
|
Amanaccessassist/finetuned-blurr-nonblur
|
Amanaccessassist
| 2024-05-31T11:24:31Z | 234 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-05-31T11:22:30Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned-blurr-nonblur
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-blurr-nonblur
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2435
- Accuracy: 0.9241
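For reference, a minimal inference sketch (the image path is a placeholder; the label names are read from the model's own config, so none are hard-coded here):

```python
from transformers import pipeline

# Binary blur / non-blur image classification.
classifier = pipeline("image-classification", model="Amanaccessassist/finetuned-blurr-nonblur")
print(classifier("photo.jpg"))  # e.g. [{'label': ..., 'score': ...}, ...]
```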
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6486 | 1.0 | 14 | 0.6255 | 0.6646 |
| 0.552 | 2.0 | 28 | 0.5737 | 0.6772 |
| 0.4207 | 3.0 | 42 | 0.5175 | 0.7975 |
| 0.3545 | 4.0 | 56 | 0.4484 | 0.8861 |
| 0.2082 | 5.0 | 70 | 0.3621 | 0.8861 |
| 0.167 | 6.0 | 84 | 0.2930 | 0.9051 |
| 0.176 | 7.0 | 98 | 0.3003 | 0.8861 |
| 0.1275 | 8.0 | 112 | 0.2435 | 0.9241 |
| 0.11 | 9.0 | 126 | 0.2581 | 0.9051 |
| 0.1009 | 10.0 | 140 | 0.2474 | 0.9114 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
ymoslem/whisper-small-ga2en-v1.5-r
|
ymoslem
| 2024-05-31T11:23:07Z | 17 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ga",
"en",
"dataset:ymoslem/IWSLT2023-GA-EN",
"dataset:ymoslem/FLEURS-GA-EN",
"dataset:ymoslem/BitesizeIrish-GA-EN",
"dataset:ymoslem/SpokenWords-GA-EN-MTed",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-05-30T12:49:10Z |
---
language:
- ga
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- ymoslem/IWSLT2023-GA-EN
- ymoslem/FLEURS-GA-EN
- ymoslem/BitesizeIrish-GA-EN
- ymoslem/SpokenWords-GA-EN-MTed
metrics:
- bleu
- wer
model-index:
- name: Whisper Small GA-EN Speech Translation + VAD
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: IWSLT-2023, FLEURS, BiteSize, and SpokenWords
type: ymoslem/IWSLT2023-GA-EN
metrics:
- name: Bleu
type: bleu
value: 28.22
- name: Wer
type: wer
value: 68.52769022962629
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small GA-EN Speech Translation + VAD
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IWSLT-2023, FLEURS, BiteSize, and SpokenWords dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7352
- Bleu: 28.22
- Chrf: 44.19
- Wer: 68.5277
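For reference, a minimal inference sketch (assuming a 16 kHz mono recording at the placeholder path `audio.wav`; the `"translate"` task follows standard Whisper usage for emitting English text from Irish speech):

```python
from transformers import pipeline

# Irish (ga) speech in, English translation out.
pipe = pipeline("automatic-speech-recognition", model="ymoslem/whisper-small-ga2en-v1.5-r")
result = pipe("audio.wav", generate_kwargs={"task": "translate"})
print(result["text"])
```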
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Chrf | Wer |
|:-------------:|:------:|:----:|:---------------:|:-----:|:-----:|:--------:|
| 1.9529 | 0.2188 | 100 | 1.7388 | 12.76 | 29.03 | 97.1184 |
| 1.5762 | 0.4376 | 200 | 1.5362 | 15.3 | 33.31 | 98.4241 |
| 1.2624 | 0.6565 | 300 | 1.4346 | 17.94 | 37.2 | 101.4408 |
| 1.0367 | 0.8753 | 400 | 1.4502 | 21.52 | 39.13 | 85.4120 |
| 0.4677 | 1.0941 | 500 | 1.4693 | 23.26 | 40.49 | 78.4331 |
| 0.4284 | 1.3129 | 600 | 1.5163 | 21.31 | 41.41 | 86.0873 |
| 0.4026 | 1.5317 | 700 | 1.4999 | 24.11 | 40.59 | 79.3787 |
| 0.4132 | 1.7505 | 800 | 1.5134 | 27.77 | 43.01 | 70.1936 |
| 0.3701 | 1.9694 | 900 | 1.5368 | 27.74 | 42.61 | 66.0964 |
| 0.1337 | 2.1882 | 1000 | 1.5692 | 27.96 | 43.77 | 64.9257 |
| 0.143 | 2.4070 | 1100 | 1.5516 | 26.06 | 42.12 | 71.3192 |
| 0.144 | 2.6258 | 1200 | 1.5839 | 27.55 | 43.19 | 69.7434 |
| 0.1372 | 2.8446 | 1300 | 1.5510 | 27.93 | 43.07 | 66.1414 |
| 0.0573 | 3.0635 | 1400 | 1.6567 | 26.34 | 41.69 | 72.3998 |
| 0.0554 | 3.2823 | 1500 | 1.6511 | 27.98 | 42.66 | 68.5277 |
| 0.0534 | 3.5011 | 1600 | 1.6732 | 28.29 | 43.2 | 67.1319 |
| 0.0588 | 3.7199 | 1700 | 1.6687 | 27.0 | 43.31 | 70.7789 |
| 0.0486 | 3.9387 | 1800 | 1.6759 | 28.02 | 43.97 | 66.3665 |
| 0.0224 | 4.1575 | 1900 | 1.7597 | 26.86 | 41.81 | 70.5538 |
| 0.0264 | 4.3764 | 2000 | 1.7113 | 27.58 | 43.38 | 70.4638 |
| 0.0233 | 4.5952 | 2100 | 1.7013 | 27.83 | 42.87 | 68.2575 |
| 0.0192 | 4.8140 | 2200 | 1.7351 | 25.39 | 42.09 | 78.0279 |
| 0.0149 | 5.0328 | 2300 | 1.7350 | 27.62 | 43.99 | 70.5538 |
| 0.0086 | 5.2516 | 2400 | 1.7331 | 29.37 | 45.08 | 68.5277 |
| 0.006 | 5.4705 | 2500 | 1.7145 | 29.04 | 44.19 | 66.9968 |
| 0.0064 | 5.6893 | 2600 | 1.7322 | 28.27 | 43.6 | 70.2386 |
| 0.0053 | 5.9081 | 2700 | 1.7239 | 27.86 | 43.78 | 69.6083 |
| 0.0021 | 6.1269 | 2800 | 1.7288 | 28.14 | 44.12 | 68.5727 |
| 0.0016 | 6.3457 | 2900 | 1.7375 | 28.26 | 44.14 | 68.7078 |
| 0.0023 | 6.5646 | 3000 | 1.7352 | 28.22 | 44.19 | 68.5277 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.2.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
MarPla/my_awesome_billsum_model
|
MarPla
| 2024-05-31T11:22:20Z | 109 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-09T21:01:51Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7758
- Rouge1: 0.0847
- Rouge2: 0.026
- Rougel: 0.069
- Rougelsum: 0.0691
- Gen Len: 18.9356
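For reference, a minimal inference sketch (the input text is a placeholder; the `summarize:` prefix is an assumption following standard T5 summarization usage):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="MarPla/my_awesome_billsum_model")
text = "summarize: The bill amends the Internal Revenue Code to ..."  # placeholder input
print(summarizer(text, max_length=60)[0]["summary_text"])
```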
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 7.0515 | 1.0 | 775 | 5.9513 | 0.0782 | 0.0229 | 0.0637 | 0.0637 | 18.964 |
| 6.0983 | 2.0 | 1550 | 5.8347 | 0.083 | 0.0254 | 0.0678 | 0.0679 | 18.9427 |
| 6.0491 | 3.0 | 2325 | 5.7848 | 0.0853 | 0.0262 | 0.0697 | 0.0697 | 18.9273 |
| 5.9983 | 4.0 | 3100 | 5.7758 | 0.0847 | 0.026 | 0.069 | 0.0691 | 18.9356 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.1.2
- Datasets 2.2.1
- Tokenizers 0.19.1
|
illuin-explo/CroissantLLM_ft_translation_correction
|
illuin-explo
| 2024-05-31T11:22:07Z | 153 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:croissantllm/CroissantCool-v0.2",
"base_model:finetune:croissantllm/CroissantCool-v0.2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T11:20:55Z |
---
license: mit
base_model: croissantllm/CroissantCool-v0.2
tags:
- generated_from_trainer
model-index:
- name: gpfs/workdir/fayssema/models/out_newtok_dataset1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: croissantllm/CroissantCool-v0.2
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizerFast
is_llama_derived_model: true
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
tokens:
- "<|im_start|>"
- "<|im_end|>"
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: manu/dataset_1
split: train
type: sharegpt
chat_template: "chatml"
default_system_message: null
dataset_prepared_path: new_pii_2
val_set_size: 0.05
output_dir: /gpfs/workdir/fayssema/models/out_newtok_dataset1
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 16
num_epochs: 3
# optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00003
train_on_inputs: false
group_by_length: false
bf16: auto
fp16: false
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
flash_attn_cross_entropy: false
flash_attn_rms_norm: true
flash_attn_fuse_qkv: false
flash_attn_fuse_mlp: true
warmup_steps: 100
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed: #deepspeed_configs/zero2.json # multi-gpu only
weight_decay: 0.05
fsdp:
fsdp_config:
```
</details><br>
# gpfs/workdir/fayssema/models/out_newtok_dataset1
This model is a fine-tuned version of [croissantllm/CroissantCool-v0.2](https://huggingface.co/croissantllm/CroissantCool-v0.2) on the `manu/dataset_1` dataset (see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 0.0087
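For reference, a minimal inference sketch matching the ChatML template configured above (assuming the chat template is saved with the tokenizer; the French prompt is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "illuin-explo/CroissantLLM_ft_translation_correction"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a ChatML-formatted prompt and generate a corrected translation.
messages = [{"role": "user", "content": "Corrige la traduction : ..."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```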
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0845 | 0.0 | 1 | 0.8684 |
| 0.1841 | 0.25 | 73 | 0.0205 |
| 0.2394 | 0.51 | 146 | 0.0134 |
| 0.1685 | 0.76 | 219 | 0.0128 |
| 0.1385 | 1.01 | 292 | 0.0209 |
| 0.1561 | 1.26 | 365 | 0.0128 |
| 0.1352 | 1.52 | 438 | 0.0090 |
| 0.162 | 1.77 | 511 | 0.0094 |
| 0.0661 | 2.02 | 584 | 0.0085 |
| 0.1344 | 2.27 | 657 | 0.0089 |
| 0.0718 | 2.53 | 730 | 0.0088 |
| 0.0942 | 2.78 | 803 | 0.0087 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-5_0bpw_exl2
|
Zoyd
| 2024-05-31T11:20:21Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-05-31T10:41:18Z |
---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **5.0 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-2_2bpw_exl2)**</center> | <center>3588 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-2_5bpw_exl2)**</center> | <center>3990 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_0bpw_exl2)**</center> | <center>4718 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_5bpw_exl2)**</center> | <center>5443 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_75bpw_exl2)**</center> | <center>5809 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-4_0bpw_exl2)**</center> | <center>6166 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-4_25bpw_exl2)**</center> | <center>6537 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-5_0bpw_exl2)**</center> | <center>7625 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-6_0bpw_exl2)**</center> | <center>9111 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-6_5bpw_exl2)**</center> | <center>9831 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-8_0bpw_exl2)**</center> | <center>11277 MB</center> | <center>8</center> |
# LuminRP-13B-128k-v0.5
LuminRP-13B-128k-v0.5 is the 13B-parameter version of the v0.5 LuminRP-7B model, which specializes in RP/ERP by merging a couple of models that excel in it.
***
>[!IMPORTANT]
> * Link to [Ppoyaa/LuminRP-7B-128k-v0.5](https://huggingface.co/Ppoyaa/LuminRP-7B-128k-v0.5)
> * This model can and will output X-rated content.
***
## SillyTavern
**Template**: Alpaca, ChatML, and Mistral should be okay.
**Instruct Mode**: On
***
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ppoyaa/LuminRP-13B-128k-v0.5"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
***
Merge Details Below:
<details><summary>See Merge Config</summary>
```
ChaoticNeutrals/BuRP_7B
Endevor/InfinityRP-v1-7B
Nitral-AI/Kunocchini-7b-128k-test
core-3/kuno-royale-v2-7b
KatyTheCutie/LemonadeRP-4.5.3
grimjim/kukulemon-7B
MaziyarPanahi/Calme-7B-Instruct-v0.9
icefog72/WestIceLemonTeaRP-32k-7b
crestf411/daybreak-kunoichi-2dpo-7b
Undi95/Mistral-RP-0.1-7B
```
</details><br>
|
Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_5bpw_exl2
|
Zoyd
| 2024-05-31T11:19:47Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-05-31T09:32:57Z |
---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **3.5 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-2_2bpw_exl2)**</center> | <center>3588 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-2_5bpw_exl2)**</center> | <center>3990 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_0bpw_exl2)**</center> | <center>4718 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_5bpw_exl2)**</center> | <center>5443 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_75bpw_exl2)**</center> | <center>5809 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-4_0bpw_exl2)**</center> | <center>6166 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-4_25bpw_exl2)**</center> | <center>6537 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-5_0bpw_exl2)**</center> | <center>7625 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-6_0bpw_exl2)**</center> | <center>9111 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-6_5bpw_exl2)**</center> | <center>9831 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-8_0bpw_exl2)**</center> | <center>11277 MB</center> | <center>8</center> |
# LuminRP-13B-128k-v0.5
LuminRP-13B-128k-v0.5 is the 13B-parameter version of the v0.5 LuminRP-7B model, which specializes in RP/ERP by merging a couple of models that excel in it.
***
>[!IMPORTANT]
> * Link to [Ppoyaa/LuminRP-7B-128k-v0.5](https://huggingface.co/Ppoyaa/LuminRP-7B-128k-v0.5)
> * This model can and will output X-rated content.
***
## SillyTavern
**Template**: Alpaca, ChatML, and Mistral should be okay.
**Instruct Mode**: On
***
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ppoyaa/LuminRP-13B-128k-v0.5"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
***
Merge Details Below:
<details><summary>See Merge Config</summary>
```
ChaoticNeutrals/BuRP_7B
Endevor/InfinityRP-v1-7B
Nitral-AI/Kunocchini-7b-128k-test
core-3/kuno-royale-v2-7b
KatyTheCutie/LemonadeRP-4.5.3
grimjim/kukulemon-7B
MaziyarPanahi/Calme-7B-Instruct-v0.9
icefog72/WestIceLemonTeaRP-32k-7b
crestf411/daybreak-kunoichi-2dpo-7b
Undi95/Mistral-RP-0.1-7B
```
</details><br>
|
Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-2_5bpw_exl2
|
Zoyd
| 2024-05-31T11:18:54Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-05-31T09:13:19Z |
---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **2.5 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-2_2bpw_exl2)**</center> | <center>3588 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-2_5bpw_exl2)**</center> | <center>3990 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_0bpw_exl2)**</center> | <center>4718 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_5bpw_exl2)**</center> | <center>5443 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_75bpw_exl2)**</center> | <center>5809 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-4_0bpw_exl2)**</center> | <center>6166 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-4_25bpw_exl2)**</center> | <center>6537 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-5_0bpw_exl2)**</center> | <center>7625 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-6_0bpw_exl2)**</center> | <center>9111 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-6_5bpw_exl2)**</center> | <center>9831 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-8_0bpw_exl2)**</center> | <center>11277 MB</center> | <center>8</center> |
# LuminRP-13B-128k-v0.5
LuminRP-13B-128k-v0.5 is the 13B-parameter version of the v0.5 LuminRP-7B model, which specializes in RP/ERP by merging a couple of models that excel in it.
***
>[!IMPORTANT]
> * Link to [Ppoyaa/LuminRP-7B-128k-v0.5](https://huggingface.co/Ppoyaa/LuminRP-7B-128k-v0.5)
> * This model can and will output X-rated content.
***
## SillyTavern
**Template**: Alpaca, ChatML, and Mistral should be okay.
**Instruct Mode**: On
***
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ppoyaa/LuminRP-13B-128k-v0.5"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
***
Merge Details Below:
<details><summary>See Merge Config</summary>
```
ChaoticNeutrals/BuRP_7B
Endevor/InfinityRP-v1-7B
Nitral-AI/Kunocchini-7b-128k-test
core-3/kuno-royale-v2-7b
KatyTheCutie/LemonadeRP-4.5.3
grimjim/kukulemon-7B
MaziyarPanahi/Calme-7B-Instruct-v0.9
icefog72/WestIceLemonTeaRP-32k-7b
crestf411/daybreak-kunoichi-2dpo-7b
Undi95/Mistral-RP-0.1-7B
```
</details><br>
|
Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-6_0bpw_exl2
|
Zoyd
| 2024-05-31T11:17:56Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-05-31T10:49:51Z |
---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **6.0 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-2_2bpw_exl2)**</center> | <center>3588 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-2_5bpw_exl2)**</center> | <center>3990 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_0bpw_exl2)**</center> | <center>4718 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_5bpw_exl2)**</center> | <center>5443 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_75bpw_exl2)**</center> | <center>5809 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-4_0bpw_exl2)**</center> | <center>6166 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-4_25bpw_exl2)**</center> | <center>6537 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-5_0bpw_exl2)**</center> | <center>7625 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-6_0bpw_exl2)**</center> | <center>9111 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-6_5bpw_exl2)**</center> | <center>9831 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-8_0bpw_exl2)**</center> | <center>11277 MB</center> | <center>8</center> |
# LuminRP-13B-128k-v0.5
LuminRP-13B-128k-v0.5 is the 13B-parameter version of the v0.5 LuminRP-7B model, which specializes in RP/ERP by merging a couple of models that excel in it.
***
>[!IMPORTANT]
> * Link to [Ppoyaa/LuminRP-7B-128k-v0.5](https://huggingface.co/Ppoyaa/LuminRP-7B-128k-v0.5)
> * This model can and will output X-rated content.
***
## SillyTavern
**Template**: Alpaca, ChatML, and Mistral should be okay.
**Instruct Mode**: On
***
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ppoyaa/LuminRP-13B-128k-v0.5"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
***
Merge Details Below:
<details><summary>See Merge Config</summary>
```
ChaoticNeutrals/BuRP_7B
Endevor/InfinityRP-v1-7B
Nitral-AI/Kunocchini-7b-128k-test
core-3/kuno-royale-v2-7b
KatyTheCutie/LemonadeRP-4.5.3
grimjim/kukulemon-7B
MaziyarPanahi/Calme-7B-Instruct-v0.9
icefog72/WestIceLemonTeaRP-32k-7b
crestf411/daybreak-kunoichi-2dpo-7b
Undi95/Mistral-RP-0.1-7B
```
</details><br>
|
Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-4_25bpw_exl2
|
Zoyd
| 2024-05-31T11:17:43Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-05-31T10:14:48Z |
---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **4.25 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-2_2bpw_exl2)**</center> | <center>3588 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-2_5bpw_exl2)**</center> | <center>3990 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_0bpw_exl2)**</center> | <center>4718 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_5bpw_exl2)**</center> | <center>5443 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_75bpw_exl2)**</center> | <center>5809 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-4_0bpw_exl2)**</center> | <center>6166 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-4_25bpw_exl2)**</center> | <center>6537 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-5_0bpw_exl2)**</center> | <center>7625 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-6_0bpw_exl2)**</center> | <center>9111 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-6_5bpw_exl2)**</center> | <center>9831 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-8_0bpw_exl2)**</center> | <center>11277 MB</center> | <center>8</center> |
# LuminRP-13B-128k-v0.5
LuminRP-13B-128k-v0.5 is the 13B-parameter version of the v0.5 LuminRP-7B model, which specializes in RP/ERP by merging a couple of models that excel in it.
***
>[!IMPORTANT]
> * Link to [Ppoyaa/LuminRP-7B-128k-v0.5](https://huggingface.co/Ppoyaa/LuminRP-7B-128k-v0.5)
> * This model can and will output X-rated content.
***
## SillyTavern
**Template**: Alpaca, ChatML, and Mistral should be okay.
**Instruct Mode**: On
***
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ppoyaa/LuminRP-13B-128k-v0.5"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
***
Merge Details Below:
<details><summary>See Merge Config</summary>
```
ChaoticNeutrals/BuRP_7B
Endevor/InfinityRP-v1-7B
Nitral-AI/Kunocchini-7b-128k-test
core-3/kuno-royale-v2-7b
KatyTheCutie/LemonadeRP-4.5.3
grimjim/kukulemon-7B
MaziyarPanahi/Calme-7B-Instruct-v0.9
icefog72/WestIceLemonTeaRP-32k-7b
crestf411/daybreak-kunoichi-2dpo-7b
Undi95/Mistral-RP-0.1-7B
```
</details><br>
|
Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_75bpw_exl2
|
Zoyd
| 2024-05-31T11:17:33Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-05-31T09:46:39Z |
---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **3.75 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-2_2bpw_exl2)**</center> | <center>3588 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-2_5bpw_exl2)**</center> | <center>3990 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_0bpw_exl2)**</center> | <center>4718 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_5bpw_exl2)**</center> | <center>5443 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_75bpw_exl2)**</center> | <center>5809 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-4_0bpw_exl2)**</center> | <center>6166 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-4_25bpw_exl2)**</center> | <center>6537 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-5_0bpw_exl2)**</center> | <center>7625 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-6_0bpw_exl2)**</center> | <center>9111 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-6_5bpw_exl2)**</center> | <center>9831 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-8_0bpw_exl2)**</center> | <center>11277 MB</center> | <center>8</center> |
# LuminRP-13B-128k-v0.5
LuminRP-13B-128k-v0.5 is the 13B-parameter version of the v0.5 LuminRP-7B model, which specializes in RP/ERP by merging a couple of models that excel in it.
***
>[!IMPORTANT]
> * Link to [Ppoyaa/LuminRP-7B-128k-v0.5](https://huggingface.co/Ppoyaa/LuminRP-7B-128k-v0.5)
> * This model can and will output X-rated content.
***
## SillyTavern
**Template**: Alpaca, ChatML, and Mistral should be okay.
**Instruct Mode**: On
***
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ppoyaa/LuminRP-13B-128k-v0.5"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
***
Merge Details Below:
<details><summary>See Merge Config</summary>
```
ChaoticNeutrals/BuRP_7B
Endevor/InfinityRP-v1-7B
Nitral-AI/Kunocchini-7b-128k-test
core-3/kuno-royale-v2-7b
KatyTheCutie/LemonadeRP-4.5.3
grimjim/kukulemon-7B
MaziyarPanahi/Calme-7B-Instruct-v0.9
icefog72/WestIceLemonTeaRP-32k-7b
crestf411/daybreak-kunoichi-2dpo-7b
Undi95/Mistral-RP-0.1-7B
```
</details><br>
|
Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_0bpw_exl2
|
Zoyd
| 2024-05-31T11:17:23Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-05-31T09:19:23Z |
---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **3.0 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-2_2bpw_exl2)**</center> | <center>3588 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-2_5bpw_exl2)**</center> | <center>3990 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_0bpw_exl2)**</center> | <center>4718 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_5bpw_exl2)**</center> | <center>5443 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-3_75bpw_exl2)**</center> | <center>5809 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-4_0bpw_exl2)**</center> | <center>6166 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-4_25bpw_exl2)**</center> | <center>6537 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-5_0bpw_exl2)**</center> | <center>7625 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-6_0bpw_exl2)**</center> | <center>9111 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-6_5bpw_exl2)**</center> | <center>9831 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/Ppoyaa_LuminRP-13B-128k-v0.5-8_0bpw_exl2)**</center> | <center>11277 MB</center> | <center>8</center> |
# LuminRP-13B-128k-v0.5
LuminRP-13B-128k-v0.5 is the 13B-parameter version of the v0.5 LuminRP-7B model, which specializes in RP/ERP by merging a couple of models that excel in it.
***
>[!IMPORTANT]
> * Link to [Ppoyaa/LuminRP-7B-128k-v0.5](https://huggingface.co/Ppoyaa/LuminRP-7B-128k-v0.5)
> * This model can and will output X-rated content.
***
## SillyTavern
**Template**: Alpaca, ChatML, and Mistral should be okay.
**Instruct Mode**: On
***
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ppoyaa/LuminRP-13B-128k-v0.5"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
***
Merge Details Below:
<details><summary>See Merge Config</summary>
```
ChaoticNeutrals/BuRP_7B
Endevor/InfinityRP-v1-7B
Nitral-AI/Kunocchini-7b-128k-test
core-3/kuno-royale-v2-7b
KatyTheCutie/LemonadeRP-4.5.3
grimjim/kukulemon-7B
MaziyarPanahi/Calme-7B-Instruct-v0.9
icefog72/WestIceLemonTeaRP-32k-7b
crestf411/daybreak-kunoichi-2dpo-7b
Undi95/Mistral-RP-0.1-7B
```
</details><br>
|
ariakhosh/a5
|
ariakhosh
| 2024-05-31T11:02:43Z | 0 | 0 | null |
[
"safetensors",
"arxiv:2305.14314",
"arxiv:2302.13971",
"arxiv:2304.07327",
"region:us"
] | null | 2024-05-31T11:02:17Z |
# Guanaco Models Based on LLaMA
| [Paper](https://arxiv.org/abs/2305.14314) | [Code](https://github.com/artidoro/qlora) | [Demo](https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi) |
**The Guanaco models are open-source finetuned chatbots obtained through 4-bit QLoRA tuning of LLaMA base models on the OASST1 dataset. They are available in 7B, 13B, 33B, and 65B parameter sizes.**
⚠️Guanaco is a model purely intended for research purposes and could produce problematic outputs.
## Why use Guanaco?
- **Competitive with commercial chatbot systems on the Vicuna and OpenAssistant benchmarks** (ChatGPT and BARD) according to human and GPT-4 raters. We note that the relative performance on tasks not covered in these benchmarks could be very different. In addition, commercial systems evolve over time (we used outputs from the March 2023 version of the models).
- **Available open-source for research purposes**. Guanaco models allow *cheap* and *local* experimentation with high-quality chatbot systems.
- **Replicable and efficient training procedure** that can be extended to new use cases. Guanaco training scripts are available in the [QLoRA repo](https://github.com/artidoro/qlora).
- **Rigorous comparison to 16-bit methods** (both 16-bit full-finetuning and LoRA) in [our paper](https://arxiv.org/abs/2305.14314) demonstrates the effectiveness of 4-bit QLoRA finetuning.
- **Lightweight** checkpoints which only contain adapter weights.
## License and Intended Use
Guanaco adapter weights are available under the Apache 2 license. Note that use of the Guanaco adapter weights requires access to the LLaMA model weights.
Guanaco is based on LLaMA and therefore should be used according to the LLaMA license.
## Usage
Here is an example of how you would load Guanaco 7B in 4-bits:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
load_in_4bit=True,
torch_dtype=torch.bfloat16,
device_map="auto",
max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())},
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type='nf4'
),
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Inference can then be performed as usual with HF models as follows:
```python
prompt = "Introduce yourself"
formatted_prompt = (
f"A chat between a curious human and an artificial intelligence assistant."
f"The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
f"### Human: {prompt} ### Assistant:"
)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Expected output similar to the following:
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Human: Introduce yourself ### Assistant: I am an artificial intelligence assistant. I am here to help you with any questions you may have.
```
## Current Inference Limitations
Currently, 4-bit inference is slow. We recommend loading in 16 bits if inference speed is a concern. We are actively working on releasing efficient 4-bit inference kernels.
Below is how you would load the model in 16 bits:
```python
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())},
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Model Card
**Architecture**: The Guanaco models are LoRA adapters to be used on top of LLaMA models. They are added to all layers. For all model sizes, we use $r=64$.
**Base Model**: Guanaco uses LLaMA as base model with sizes 7B, 13B, 33B, 65B. LLaMA is a causal language model pretrained on a large corpus of text. See [LLaMA paper](https://arxiv.org/abs/2302.13971) for more details. Note that Guanaco can inherit biases and limitations of the base model.
**Finetuning Data**: Guanaco is finetuned on OASST1. The exact dataset is available at [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
**Languages**: The OASST1 dataset is multilingual (see [the paper](https://arxiv.org/abs/2304.07327) for details) and as such Guanaco responds to user queries in different languages. We note, however, that OASST1 is heavy in high-resource languages. In addition, human evaluation of Guanaco was only performed in English and based on qualitative analysis we observed degradation in performance in other languages.
Next, we describe Training and Evaluation details.
### Training
Guanaco models are the result of 4-bit QLoRA supervised finetuning on the OASST1 dataset.
All models use NormalFloat4 datatype for the base model and LoRA adapters on all linear layers with BFloat16 as computation datatype. We set LoRA $r=64$, $\alpha=16$. We also use Adam beta2 of 0.999, max grad norm of 0.3 and LoRA dropout of 0.1 for models up to 13B and 0.05 for 33B and 65B models.
For the finetuning process, we use constant learning rate schedule and paged AdamW optimizer.
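As a concrete illustration, a minimal sketch of that setup with 🤗 Transformers, PEFT, and bitsandbytes; the `target_modules` list spells out "all linear layers" for LLaMA, and the dropout and learning rate shown are the ≤13B settings from the tables below:

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# NF4 base weights with BFloat16 compute and double quantization (QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA on all linear layers with r=64, alpha=16; dropout 0.1 for models up to 13B.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

# Constant learning-rate schedule with the paged AdamW optimizer.
training_args = TrainingArguments(
    output_dir="guanaco-qlora",
    per_device_train_batch_size=16,
    learning_rate=2e-4,
    max_steps=1875,
    lr_scheduler_type="constant",
    optim="paged_adamw_32bit",
    max_grad_norm=0.3,
    adam_beta2=0.999,
)
```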
### Training hyperparameters
Size| Dataset | Batch Size | Learning Rate | Max Steps | Sequence length
---|---|---|---|---|---
7B | OASST1 | 16 | 2e-4 | 1875 | 512
13B | OASST1 | 16 | 2e-4 | 1875 | 512
33B | OASST1 | 16 | 1e-4 | 1875 | 512
65B | OASST1 | 16 | 1e-4 | 1875 | 512
### Evaluation
We test generative language capabilities through both automated and human evaluations. This second set of evaluations relies on queries curated by humans and aims at measuring the quality of model responses. We use the Vicuna and OpenAssistant datasets with 80 and 953 prompts respectively.
In both human and automated evaluations, for each prompt, raters compare all pairs of responses across the models considered. For human raters we randomize the order of the systems, for GPT-4 we evaluate with both orders.
Benchmark | Vicuna | | Vicuna | | OpenAssistant | | -
-----------|----|-----|--------|---|---------------|---|---
Prompts | 80 | | 80 | | 953 | |
Judge | Human | | GPT-4 | | GPT-4 | |
Model | Elo | Rank | Elo | Rank | Elo | Rank | **Median Rank**
GPT-4 | 1176 | 1 | 1348 | 1 | 1294 | 1 | 1
Guanaco-65B | 1023 | 2 | 1022 | 2 | 1008 | 3 | 2
Guanaco-33B | 1009 | 4 | 992 | 3 | 1002 | 4 | 4
ChatGPT-3.5 Turbo | 916 | 7 | 966 | 5 | 1015 | 2 | 5
Vicuna-13B | 984 | 5 | 974 | 4 | 936 | 5 | 5
Guanaco-13B | 975 | 6 | 913 | 6 | 885 | 6 | 6
Guanaco-7B | 1010 | 3 | 879 | 8 | 860 | 7 | 7
Bard | 909 | 8 | 902 | 7 | - | - | 8
We also use the MMLU benchmark to measure performance on a range of language understanding tasks. This is a multiple-choice benchmark covering 57 tasks including elementary mathematics, US history, computer science, law, and more. We report 5-shot test accuracy.
Dataset | 7B | 13B | 33B | 65B
---|---|---|---|---
LLaMA no tuning | 35.1 | 46.9 | 57.8 | 63.4
Self-Instruct | 36.4 | 33.3 | 53.0 | 56.7
Longform | 32.1 | 43.2 | 56.6 | 59.7
Chip2 | 34.5 | 41.6 | 53.6 | 59.8
HH-RLHF | 34.9 | 44.6 | 55.8 | 60.1
Unnatural Instruct | 41.9 | 48.1 | 57.3 | 61.3
OASST1 (Guanaco) | 36.6 | 46.4 | 57.0 | 62.2
Alpaca | 38.8 | 47.8 | 57.3 | 62.5
FLAN v2 | 44.5 | 51.4 | 59.2 | 63.9
## Risks and Biases
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. The model was trained on various public datasets; it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
However, we note that finetuning on OASST1 seems to reduce biases as measured on the CrowS dataset. We report here the performance of Guanaco-65B compared to other baseline models on the CrowS dataset.
| | LLaMA-65B | GPT-3 | OPT-175B | Guanaco-65B |
|----------------------|-----------|-------|----------|---------------|
| Gender | 70.6 | 62.6 | 65.7 | **47.5** |
| Religion | 79.0 | 73.3 | 68.6 | **38.7** |
| Race/Color | 57.0 | 64.7 | 68.6 | **45.3** |
| Sexual orientation | 81.0 | 76.2 | 78.6 | **59.1** |
| Age | 70.1 | 64.4 | 67.8 | **36.3** |
| Nationality | 64.2 | 61.6 | 62.9 | **32.4** |
| Disability | 66.7 | 76.7 | 76.7 | **33.9** |
| Physical appearance | 77.8 | 74.6 | 76.2 | **43.1** |
| Socioeconomic status | 71.5 | 73.8 | 76.2 | **55.3** |
| Average | 66.6 | 67.2 | 69.5 | **43.5** |
## Citation
```bibtex
@article{dettmers2023qlora,
title={QLoRA: Efficient Finetuning of Quantized LLMs},
author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:2305.14314},
year={2023}
}
```
|
ariakhosh/a2
|
ariakhosh
| 2024-05-31T11:01:38Z | 0 | 0 | null |
[
"safetensors",
"arxiv:2305.14314",
"arxiv:2302.13971",
"arxiv:2304.07327",
"region:us"
] | null | 2024-05-31T11:01:06Z |
# Guanaco Models Based on LLaMA
| [Paper](https://arxiv.org/abs/2305.14314) | [Code](https://github.com/artidoro/qlora) | [Demo](https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi) |
**The Guanaco models are open-source finetuned chatbots obtained through 4-bit QLoRA tuning of LLaMA base models on the OASST1 dataset. They are available in 7B, 13B, 33B, and 65B parameter sizes.**
⚠️Guanaco is a model purely intended for research purposes and could produce problematic outputs.
## Why use Guanaco?
- **Competitive with commercial chatbot systems on the Vicuna and OpenAssistant benchmarks** (ChatGPT and BARD) according to human and GPT-4 raters. We note that the relative performance on tasks not covered in these benchmarks could be very different. In addition, commercial systems evolve over time (we used outputs from the March 2023 version of the models).
- **Available open-source for research purposes**. Guanaco models allow *cheap* and *local* experimentation with high-quality chatbot systems.
- **Replicable and efficient training procedure** that can be extended to new use cases. Guanaco training scripts are available in the [QLoRA repo](https://github.com/artidoro/qlora).
- **Rigorous comparison to 16-bit methods** (both 16-bit full-finetuning and LoRA) in [our paper](https://arxiv.org/abs/2305.14314) demonstrates the effectiveness of 4-bit QLoRA finetuning.
- **Lightweight** checkpoints which only contain adapter weights.
## License and Intended Use
Guanaco adapter weights are available under the Apache 2 license. Note that use of the Guanaco adapter weights requires access to the LLaMA model weights.
Guanaco is based on LLaMA and therefore should be used according to the LLaMA license.
## Usage
Here is an example of how you would load Guanaco 7B in 4-bits:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    max_memory={i: '24000MB' for i in range(torch.cuda.device_count())},
    # 4-bit loading is configured once here; passing load_in_4bit again as a
    # top-level kwarg is rejected by recent transformers releases.
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type='nf4'
    ),
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Inference can then be performed as usual with HF models as follows:
```python
prompt = "Introduce yourself"
formatted_prompt = (
    f"A chat between a curious human and an artificial intelligence assistant. "
    f"The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    f"### Human: {prompt} ### Assistant:"
)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Expected output similar to the following:
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Human: Introduce yourself ### Assistant: I am an artificial intelligence assistant. I am here to help you with any questions you may have.
```
## Current Inference Limitations
Currently, 4-bit inference is slow. We recommend loading in 16 bits if inference speed is a concern. We are actively working on releasing efficient 4-bit inference kernels.
Below is how you would load the model in 16 bits:
```python
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
    max_memory={i: '24000MB' for i in range(torch.cuda.device_count())},
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Model Card
**Architecture**: The Guanaco models are LoRA adapters to be used on top of LLaMA models. They are added to all layers. For all model sizes, we use $r=64$.
**Base Model**: Guanaco uses LLaMA as base model with sizes 7B, 13B, 33B, 65B. LLaMA is a causal language model pretrained on a large corpus of text. See [LLaMA paper](https://arxiv.org/abs/2302.13971) for more details. Note that Guanaco can inherit biases and limitations of the base model.
**Finetuning Data**: Guanaco is finetuned on OASST1. The exact dataset is available at [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
**Languages**: The OASST1 dataset is multilingual (see [the paper](https://arxiv.org/abs/2304.07327) for details) and as such Guanaco responds to user queries in different languages. Note, however, that OASST1 is heavily skewed toward high-resource languages. In addition, human evaluation of Guanaco was only performed in English, and qualitative analysis suggests degraded performance in other languages.
Next, we describe Training and Evaluation details.
### Training
Guanaco models are the result of 4-bit QLoRA supervised finetuning on the OASST1 dataset.
All models use the NormalFloat4 datatype for the base model, with LoRA adapters on all linear layers and BFloat16 as the computation datatype. We set LoRA $r=64$, $\alpha=16$. We also use an Adam beta2 of 0.999, a max grad norm of 0.3, and a LoRA dropout of 0.1 for models up to 13B (0.05 for the 33B and 65B models).
For the finetuning process, we use a constant learning-rate schedule and the paged AdamW optimizer.
### Training hyperparameters
Size| Dataset | Batch Size | Learning Rate | Max Steps | Sequence length
---|---|---|---|---|---
7B | OASST1 | 16 | 2e-4 | 1875 | 512
13B | OASST1 | 16 | 2e-4 | 1875 | 512
33B | OASST1 | 16 | 1e-4 | 1875 | 512
65B | OASST1 | 16 | 1e-4 | 1875 | 512
### Evaluation
We test generative language capabilities through both automated and human evaluations. The latter relies on queries curated by humans and aims at measuring the quality of model responses. We use the Vicuna and OpenAssistant datasets with 80 and 953 prompts respectively.
In both human and automated evaluations, for each prompt, raters compare all pairs of responses across the models considered. For human raters we randomize the order of the systems; for GPT-4 we evaluate with both orders.
Model | Vicuna Elo (Human judge, 80 prompts) | Rank | Vicuna Elo (GPT-4 judge, 80 prompts) | Rank | OpenAssistant Elo (GPT-4 judge, 953 prompts) | Rank | **Median Rank**
---|---|---|---|---|---|---|---
GPT-4 | 1176 | 1 | 1348 | 1 | 1294 | 1 | 1
Guanaco-65B | 1023 | 2 | 1022 | 2 | 1008 | 3 | 2
Guanaco-33B | 1009 | 4 | 992 | 3 | 1002 | 4 | 4
ChatGPT-3.5 Turbo | 916 | 7 | 966 | 5 | 1015 | 2 | 5
Vicuna-13B | 984 | 5 | 974 | 4 | 936 | 5 | 5
Guanaco-13B | 975 | 6 | 913 | 6 | 885 | 6 | 6
Guanaco-7B | 1010 | 3 | 879 | 8 | 860 | 7 | 7
Bard | 909 | 8 | 902 | 7 | - | - | 8
We also use the MMLU benchmark to measure performance on a range of language understanding tasks. This is a multiple-choice benchmark covering 57 tasks including elementary mathematics, US history, computer science, law, and more. We report 5-shot test accuracy.
Dataset | 7B | 13B | 33B | 65B
---|---|---|---|---
LLaMA no tuning | 35.1 | 46.9 | 57.8 | 63.4
Self-Instruct | 36.4 | 33.3 | 53.0 | 56.7
Longform | 32.1 | 43.2 | 56.6 | 59.7
Chip2 | 34.5 | 41.6 | 53.6 | 59.8
HH-RLHF | 34.9 | 44.6 | 55.8 | 60.1
Unnatural Instruct | 41.9 | 48.1 | 57.3 | 61.3
OASST1 (Guanaco) | 36.6 | 46.4 | 57.0 | 62.2
Alpaca | 38.8 | 47.8 | 57.3 | 62.5
FLAN v2 | 44.5 | 51.4 | 59.2 | 63.9
## Risks and Biases
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. The model was trained on various public datasets; it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
However, we note that finetuning on OASST1 seems to reduce biases as measured on the CrowS dataset. We report here the performance of Guanaco-65B compared to other baseline models on the CrowS dataset.
| | LLaMA-65B | GPT-3 | OPT-175B | Guanaco-65B |
|----------------------|-----------|-------|----------|---------------|
| Gender | 70.6 | 62.6 | 65.7 | **47.5** |
| Religion             | 79.0      | 73.3  | 68.6     | **38.7**      |
| Race/Color | 57.0 | 64.7 | 68.6 | **45.3** |
| Sexual orientation   | 81.0      | 76.2  | 78.6     | **59.1**      |
| Age | 70.1 | 64.4 | 67.8 | **36.3** |
| Nationality | 64.2 | 61.6 | 62.9 | **32.4** |
| Disability | 66.7 | 76.7 | 76.7 | **33.9** |
| Physical appearance | 77.8 | 74.6 | 76.2 | **43.1** |
| Socioeconomic status | 71.5 | 73.8 | 76.2 | **55.3** |
| Average | 66.6 | 67.2 | 69.5 | **43.5** |
## Citation
```bibtex
@article{dettmers2023qlora,
title={QLoRA: Efficient Finetuning of Quantized LLMs},
author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:2305.14314},
year={2023}
}
```
|
amichelini/distilbert-base-multilingual-cased-sentiments-student
|
amichelini
| 2024-05-31T11:00:35Z | 22 | 1 |
transformers
|
[
"transformers",
"onnx",
"distilbert",
"text-classification",
"sentiment-analysis",
"zero-shot-distillation",
"distillation",
"zero-shot-classification",
"debarta-v3",
"en",
"ar",
"de",
"es",
"fr",
"ja",
"zh",
"id",
"hi",
"it",
"ms",
"pt",
"dataset:tyqiangz/multilingual-sentiments",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-30T10:20:52Z |
---
license: apache-2.0
tags:
- sentiment-analysis
- text-classification
- zero-shot-distillation
- distillation
- zero-shot-classification
- debarta-v3
model-index:
- name: distilbert-base-multilingual-cased-sentiments-student
results: []
datasets:
- tyqiangz/multilingual-sentiments
language:
- en
- ar
- de
- es
- fr
- ja
- zh
- id
- hi
- it
- ms
- pt
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-sentiments-student
> **Note**
>
> This is a fork of the `distilbert-base-multilingual-cased-sentiments-student` model. The original model card can be found [here](https://huggingface.co/lxyuan/distilbert-base-multilingual-cased-sentiments-student).
> This is just a conversion of the model to the ONNX format so it can be used in JavaScript/TypeScript applications.
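For reference, such an ONNX export can be reproduced with Hugging Face Optimum; a minimal sketch, assuming `optimum[onnxruntime]` is installed (not necessarily the exact command used for this fork):
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

model_id = "lxyuan/distilbert-base-multilingual-cased-sentiments-student"
# export=True converts the PyTorch checkpoint to ONNX on the fly.
ort_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
ort_model.save_pretrained("onnx-model")  # writes model.onnx + config
tokenizer.save_pretrained("onnx-model")
```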
This model is distilled from the zero-shot classification pipeline on the Multilingual Sentiment
dataset using this [script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation).
In reality the multilingual-sentiments dataset is annotated, of course, but for the sake of this example we pretend it is unlabeled and ignore the annotations.
Teacher model: MoritzLaurer/mDeBERTa-v3-base-mnli-xnli
Teacher hypothesis template: "The sentiment of this text is {}."
Student model: distilbert-base-multilingual-cased
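For illustration, a minimal sketch of the teacher side of this distillation: the zero-shot pipeline scores each (nominally unlabeled) text against the candidate labels using the hypothesis template, and those label distributions become the student's training targets. The example texts are placeholders:
```python
from transformers import pipeline

teacher = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli",
)
texts = ["I love this movie!", "This was a waste of time."]  # placeholder data
preds = teacher(
    texts,
    candidate_labels=["positive", "neutral", "negative"],
    hypothesis_template="The sentiment of this text is {}.",
)
for p in preds:
    # The distribution (p["labels"], p["scores"]) is the soft target
    # the student is trained to match.
    print(p["labels"][0], round(p["scores"][0], 3))
```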
## Inference example
```python
from transformers import pipeline
distilled_student_sentiment_classifier = pipeline(
model="lxyuan/distilbert-base-multilingual-cased-sentiments-student",
return_all_scores=True
)
# english
distilled_student_sentiment_classifier("I love this movie and i would watch it again and again!")
>> [[{'label': 'positive', 'score': 0.9731044769287109},
{'label': 'neutral', 'score': 0.016910076141357422},
{'label': 'negative', 'score': 0.009985478594899178}]]
# malay
distilled_student_sentiment_classifier("Saya suka filem ini dan saya akan menontonnya lagi dan lagi!")
>> [[{'label': 'positive', 'score': 0.9760093688964844},
{'label': 'neutral', 'score': 0.01804516464471817},
{'label': 'negative', 'score': 0.005945465061813593}]]
# japanese
distilled_student_sentiment_classifier("私はこの映画が大好きで、何度も見ます!")
>> [[{'label': 'positive', 'score': 0.9342429041862488},
{'label': 'neutral', 'score': 0.040193185210227966},
{'label': 'negative', 'score': 0.025563929229974747}]]
```
## Training procedure
Notebook link: [here](https://github.com/LxYuan0420/nlp/blob/main/notebooks/Distilling_Zero_Shot_multilingual_distilbert_sentiments_student.ipynb)
### Training hyperparameters
Results can be reproduced using the following commands:
```bash
python transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py \
--data_file ./multilingual-sentiments/train_unlabeled.txt \
--class_names_file ./multilingual-sentiments/class_names.txt \
--hypothesis_template "The sentiment of this text is {}." \
--teacher_name_or_path MoritzLaurer/mDeBERTa-v3-base-mnli-xnli \
--teacher_batch_size 32 \
--student_name_or_path distilbert-base-multilingual-cased \
--output_dir ./distilbert-base-multilingual-cased-sentiments-student \
--per_device_train_batch_size 16 \
--fp16
```
If you are training this model on Colab, make the following code changes to avoid out-of-memory errors:
```python
###### modify L78 to disable fast tokenizer
default=False,
###### update dataset map part at L313
dataset = dataset.map(tokenizer, input_columns="text", fn_kwargs={"padding": "max_length", "truncation": True, "max_length": 512})
###### add following lines to L213
del model
print(f"Manually deleted Teacher model, free some memory for student model.")
###### add following lines to L337
trainer.push_to_hub()
tokenizer.push_to_hub("distilbert-base-multilingual-cased-sentiments-student")
```
### Training log
```bash
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 2009.8864, 'train_samples_per_second': 73.0, 'train_steps_per_second': 4.563, 'train_loss': 0.6473459283913797, 'epoch': 1.0}
100%|███████████████████████████████████████| 9171/9171 [33:29<00:00, 4.56it/s]
[INFO|trainer.py:762] 2023-05-06 10:56:18,555 >> The following columns in the evaluation set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `DistilBertForSequenceClassification.forward`, you can safely ignore this message.
[INFO|trainer.py:3129] 2023-05-06 10:56:18,557 >> ***** Running Evaluation *****
[INFO|trainer.py:3131] 2023-05-06 10:56:18,557 >> Num examples = 146721
[INFO|trainer.py:3134] 2023-05-06 10:56:18,557 >> Batch size = 128
100%|███████████████████████████████████████| 1147/1147 [08:59<00:00, 2.13it/s]
05/06/2023 11:05:18 - INFO - __main__ - Agreement of student and teacher predictions: 88.29%
[INFO|trainer.py:2868] 2023-05-06 11:05:18,251 >> Saving model checkpoint to ./distilbert-base-multilingual-cased-sentiments-student
[INFO|configuration_utils.py:457] 2023-05-06 11:05:18,251 >> Configuration saved in ./distilbert-base-multilingual-cased-sentiments-student/config.json
[INFO|modeling_utils.py:1847] 2023-05-06 11:05:18,905 >> Model weights saved in ./distilbert-base-multilingual-cased-sentiments-student/pytorch_model.bin
[INFO|tokenization_utils_base.py:2171] 2023-05-06 11:05:18,905 >> tokenizer config file saved in ./distilbert-base-multilingual-cased-sentiments-student/tokenizer_config.json
[INFO|tokenization_utils_base.py:2178] 2023-05-06 11:05:18,905 >> Special tokens file saved in ./distilbert-base-multilingual-cased-sentiments-student/special_tokens_map.json
```
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Sharan1712/llama2_7B_alpaca_loftq_4bit_3a
|
Sharan1712
| 2024-05-31T10:59:37Z | 85 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:tatsu-lab/alpaca",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-31T09:28:26Z |
---
library_name: transformers
license: apache-2.0
datasets:
- tatsu-lab/alpaca
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Hitesh17/ppo-LunarLander-v2
|
Hitesh17
| 2024-05-31T10:56:18Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-31T10:55:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.18 +/- 18.65
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the repo name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained checkpoint from the Hub and load it.
checkpoint = load_from_hub("Hitesh17/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Ksgk-fy/ecoach_philippine_v3_merge
|
Ksgk-fy
| 2024-05-31T10:48:13Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T10:44:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RobertIulian10/my_awesome_wnut_model
|
RobertIulian10
| 2024-05-31T10:47:27Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-05-31T10:45:13Z |
---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5665467625899281
- name: Recall
type: recall
value: 0.2919369786839666
- name: F1
type: f1
value: 0.38532110091743116
- name: Accuracy
type: accuracy
value: 0.9409174468812791
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2698
- Precision: 0.5665
- Recall: 0.2919
- F1: 0.3853
- Accuracy: 0.9409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2819 | 0.5262 | 0.2234 | 0.3136 | 0.9373 |
| No log | 2.0 | 426 | 0.2698 | 0.5665 | 0.2919 | 0.3853 | 0.9409 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
hickman2049/Pixelcopter-PLE-v0
|
hickman2049
| 2024-05-31T10:47:22Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-31T10:47:18Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 30.80 +/- 21.15
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
lunadong/dn
|
lunadong
| 2024-05-31T10:41:20Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-31T10:41:20Z |
---
license: apache-2.0
---
|
OwOpeepeepoopoo/ZZZBangerMr_lol_2
|
OwOpeepeepoopoo
| 2024-05-31T10:39:09Z | 147 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"mergekit",
"merge",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T10:37:49Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# output_lol2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* /notebooks/dippy-bittensor-subnet/clone_baxtos_bax01-59
* /notebooks/dippy-bittensor-subnet/mmodels/output_lol1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: /notebooks/dippy-bittensor-subnet/clone_baxtos_bax01-59
layer_range: [0, 24]
- model: /notebooks/dippy-bittensor-subnet/mmodels/output_lol1
layer_range: [0, 24]
merge_method: slerp
base_model: /notebooks/dippy-bittensor-subnet/clone_baxtos_bax01-59
parameters:
t:
- filter: self_attn
value: [0.1, 0.3, 0.5, 0.7, 0.9]
- filter: mlp
value: [0.9, 0.7, 0.5, 0.3, 0.1]
- value: 0.5
dtype: bfloat16
```
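For intuition, SLERP interpolates between two weight tensors along the arc between them rather than along a straight line, which tends to preserve weight norms better than plain averaging. A minimal sketch of the core operation (the flattening and epsilon guard are illustrative assumptions, not mergekit's exact code):
```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between weight tensors a and b at fraction t."""
    a_unit = a.ravel() / (np.linalg.norm(a) + eps)
    b_unit = b.ravel() / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if omega < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
```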
|
alexgrigore/videomae-base-finetuned-ucf101-subset
|
alexgrigore
| 2024-05-31T10:38:26Z | 66 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-05-28T11:36:27Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8993
- Accuracy: 0.7633
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 376
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.8086 | 0.2527 | 95 | 0.8059 | 0.7875 |
| 0.8755 | 1.2527 | 190 | 0.7765 | 0.7875 |
| 0.9334 | 2.2527 | 285 | 0.7846 | 0.7875 |
| 0.8263 | 3.2420 | 376 | 0.7845 | 0.7875 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
adriansanz/te-zsc-synthetic
|
adriansanz
| 2024-05-31T10:38:07Z | 110 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:projecte-aina/roberta-base-ca-v2-cased-te",
"base_model:finetune:projecte-aina/roberta-base-ca-v2-cased-te",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-31T09:17:57Z |
---
license: apache-2.0
base_model: projecte-aina/roberta-base-ca-v2-cased-te
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: SYN_300524_epoch_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SYN_300524_epoch_5
This model is a fine-tuned version of [projecte-aina/roberta-base-ca-v2-cased-te](https://huggingface.co/projecte-aina/roberta-base-ca-v2-cased-te) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3372
- Accuracy: 0.98
- Precision: 0.9803
- Recall: 0.98
- F1: 0.9800
- Ratio: 0.488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 47
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- lr_scheduler_warmup_steps: 4
- num_epochs: 1
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Ratio |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-----:|
| 0.3174 | 0.0533 | 10 | 0.3307 | 0.984 | 0.9840 | 0.984 | 0.9840 | 0.496 |
| 0.3202 | 0.1067 | 20 | 0.3258 | 0.986 | 0.9861 | 0.986 | 0.9860 | 0.494 |
| 0.3016 | 0.16 | 30 | 0.3282 | 0.986 | 0.9860 | 0.986 | 0.9860 | 0.504 |
| 0.3291 | 0.2133 | 40 | 0.3495 | 0.977 | 0.9774 | 0.977 | 0.9770 | 0.485 |
| 0.2942 | 0.2667 | 50 | 0.3602 | 0.973 | 0.9738 | 0.973 | 0.9730 | 0.479 |
| 0.3121 | 0.32 | 60 | 0.3586 | 0.973 | 0.9731 | 0.973 | 0.9730 | 0.493 |
| 0.3226 | 0.3733 | 70 | 0.3736 | 0.968 | 0.9681 | 0.968 | 0.9680 | 0.508 |
| 0.3226 | 0.4267 | 80 | 0.3515 | 0.979 | 0.9791 | 0.979 | 0.9790 | 0.493 |
| 0.3265 | 0.48 | 90 | 0.3697 | 0.97 | 0.9706 | 0.97 | 0.9700 | 0.482 |
| 0.3424 | 0.5333 | 100 | 0.3650 | 0.971 | 0.9717 | 0.971 | 0.9710 | 0.481 |
| 0.3348 | 0.5867 | 110 | 0.3502 | 0.976 | 0.9760 | 0.976 | 0.9760 | 0.496 |
| 0.3393 | 0.64 | 120 | 0.3441 | 0.978 | 0.9780 | 0.978 | 0.9780 | 0.496 |
| 0.3421 | 0.6933 | 130 | 0.3397 | 0.979 | 0.9791 | 0.979 | 0.9790 | 0.493 |
| 0.3319 | 0.7467 | 140 | 0.3412 | 0.979 | 0.9791 | 0.979 | 0.9790 | 0.493 |
| 0.3554 | 0.8 | 150 | 0.3416 | 0.977 | 0.9772 | 0.977 | 0.9770 | 0.489 |
| 0.3829 | 0.8533 | 160 | 0.3428 | 0.978 | 0.9785 | 0.978 | 0.9780 | 0.484 |
| 0.3631 | 0.9067 | 170 | 0.3396 | 0.979 | 0.9793 | 0.979 | 0.9790 | 0.487 |
| 0.3362 | 0.96 | 180 | 0.3376 | 0.98 | 0.9803 | 0.98 | 0.9800 | 0.488 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ClaudioItaly/Fimburs11V3-Q4_K_M-GGUF
|
ClaudioItaly
| 2024-05-31T10:31:46Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:mergekit-community/Fimburs11V3",
"base_model:quantized:mergekit-community/Fimburs11V3",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T10:31:29Z |
---
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model: mergekit-community/Fimburs11V3
---
# ClaudioItaly/Fimburs11V3-Q4_K_M-GGUF
This model was converted to GGUF format from [`mergekit-community/Fimburs11V3`](https://huggingface.co/mergekit-community/Fimburs11V3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mergekit-community/Fimburs11V3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo ClaudioItaly/Fimburs11V3-Q4_K_M-GGUF --hf-file fimburs11v3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ClaudioItaly/Fimburs11V3-Q4_K_M-GGUF --hf-file fimburs11v3-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo ClaudioItaly/Fimburs11V3-Q4_K_M-GGUF --hf-file fimburs11v3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo ClaudioItaly/Fimburs11V3-Q4_K_M-GGUF --hf-file fimburs11v3-q4_k_m.gguf -c 2048
```
|
simpnyaDrMei/poca-SoccerTwos
|
simpnyaDrMei
| 2024-05-31T10:24:45Z | 13 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-05-31T10:14:23Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: simpnyaDrMei/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
legraphista/neo_7b_instruct_v0.1-IMat-GGUF
|
legraphista
| 2024-05-31T10:18:31Z | 308 | 0 |
gguf
|
[
"gguf",
"quantized",
"GGUF",
"imatrix",
"quantization",
"imat",
"static",
"16bit",
"8bit",
"6bit",
"5bit",
"4bit",
"3bit",
"2bit",
"1bit",
"text-generation",
"base_model:m-a-p/neo_7b_instruct_v0.1",
"base_model:quantized:m-a-p/neo_7b_instruct_v0.1",
"license:apache-2.0",
"region:us",
"conversational"
] |
text-generation
| 2024-05-31T09:33:01Z |
---
base_model: m-a-p/neo_7b_instruct_v0.1
inference: false
library_name: gguf
license: apache-2.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---
# neo_7b_instruct_v0.1-IMat-GGUF
_Llama.cpp imatrix quantization of m-a-p/neo_7b_instruct_v0.1_
Original Model: [m-a-p/neo_7b_instruct_v0.1](https://huggingface.co/m-a-p/neo_7b_instruct_v0.1)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3051](https://github.com/ggerganov/llama.cpp/releases/tag/b3051)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)
- [Files](#files)
- [IMatrix](#imatrix)
- [Common Quants](#common-quants)
- [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
- [Simple chat template](#simple-chat-template)
- [Chat template with system prompt](#chat-template-with-system-prompt)
- [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [neo_7b_instruct_v0.1.Q8_0.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.Q8_0.gguf) | Q8_0 | 8.28GB | ✅ Available | ⚪ Static | 📦 No
| [neo_7b_instruct_v0.1.Q6_K.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.Q6_K.gguf) | Q6_K | 6.40GB | ✅ Available | ⚪ Static | 📦 No
| [neo_7b_instruct_v0.1.Q4_K.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.Q4_K.gguf) | Q4_K | 4.74GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.Q3_K.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.Q3_K.gguf) | Q3_K | 3.79GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.Q2_K.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.Q2_K.gguf) | Q2_K | 2.92GB | ✅ Available | 🟢 IMatrix | 📦 No
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [neo_7b_instruct_v0.1.BF16.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.BF16.gguf) | BF16 | 15.59GB | ✅ Available | ⚪ Static | 📦 No
| [neo_7b_instruct_v0.1.FP16.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.FP16.gguf) | F16 | 15.59GB | ✅ Available | ⚪ Static | 📦 No
| [neo_7b_instruct_v0.1.Q8_0.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.Q8_0.gguf) | Q8_0 | 8.28GB | ✅ Available | ⚪ Static | 📦 No
| [neo_7b_instruct_v0.1.Q6_K.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.Q6_K.gguf) | Q6_K | 6.40GB | ✅ Available | ⚪ Static | 📦 No
| [neo_7b_instruct_v0.1.Q5_K.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.Q5_K.gguf) | Q5_K | 5.54GB | ✅ Available | ⚪ Static | 📦 No
| [neo_7b_instruct_v0.1.Q5_K_S.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.Q5_K_S.gguf) | Q5_K_S | 5.39GB | ✅ Available | ⚪ Static | 📦 No
| [neo_7b_instruct_v0.1.Q4_K.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.Q4_K.gguf) | Q4_K | 4.74GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.Q4_K_S.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.Q4_K_S.gguf) | Q4_K_S | 4.47GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.IQ4_NL.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.IQ4_NL.gguf) | IQ4_NL | 4.44GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.IQ4_XS.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.IQ4_XS.gguf) | IQ4_XS | 4.20GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.Q3_K.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.Q3_K.gguf) | Q3_K | 3.79GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.Q3_K_L.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.Q3_K_L.gguf) | Q3_K_L | 4.11GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.Q3_K_S.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.Q3_K_S.gguf) | Q3_K_S | 3.43GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.IQ3_M.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.IQ3_M.gguf) | IQ3_M | 3.53GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.IQ3_S.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.IQ3_S.gguf) | IQ3_S | 3.43GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.IQ3_XS.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.IQ3_XS.gguf) | IQ3_XS | 3.25GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.IQ3_XXS.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.IQ3_XXS.gguf) | IQ3_XXS | 3.03GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.Q2_K.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.Q2_K.gguf) | Q2_K | 2.92GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.Q2_K_S.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.Q2_K_S.gguf) | Q2_K_S | 2.71GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.IQ2_M.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.IQ2_M.gguf) | IQ2_M | 2.68GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.IQ2_S.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.IQ2_S.gguf) | IQ2_S | 2.47GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.IQ2_XS.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.IQ2_XS.gguf) | IQ2_XS | 2.36GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.IQ2_XXS.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.IQ2_XXS.gguf) | IQ2_XXS | 2.14GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.IQ1_M.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.IQ1_M.gguf) | IQ1_M | 1.89GB | ✅ Available | 🟢 IMatrix | 📦 No
| [neo_7b_instruct_v0.1.IQ1_S.gguf](https://huggingface.co/legraphista/neo_7b_instruct_v0.1-IMat-GGUF/blob/main/neo_7b_instruct_v0.1.IQ1_S.gguf) | IQ1_S | 1.73GB | ✅ Available | 🟢 IMatrix | 📦 No
## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/neo_7b_instruct_v0.1-IMat-GGUF --include "neo_7b_instruct_v0.1.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/neo_7b_instruct_v0.1-IMat-GGUF --include "neo_7b_instruct_v0.1.Q8_0/*" --local-dir ./
# see FAQ for merging GGUF's
```
---
## Inference
### Simple chat template
```
<s>[INST] {user_prompt} [/INST]{assistant_response}</s><s>[INST] {next_user_prompt} [/INST]
```
### Chat template with system prompt
```
<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{user_prompt} [/INST]{assistant_response}</s><s>[INST] {next_user_prompt} [/INST]
```
### Llama.cpp
```
llama.cpp/main -m neo_7b_instruct_v0.1.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `neo_7b_instruct_v0.1.Q8_0`)
3. Run `gguf-split --merge neo_7b_instruct_v0.1.Q8_0/neo_7b_instruct_v0.1.Q8_0-00001-of-XXXXX.gguf neo_7b_instruct_v0.1.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
|
Apel-sin/llama-3-NeuralDaredevil-8B-abliterated-exl2
|
Apel-sin
| 2024-05-31T10:16:40Z | 0 | 1 |
transformers
|
[
"transformers",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T09:43:03Z |
---
library_name: transformers
license: llama3
---
# Exllama v2 mlabonne/NeuralDaredevil-8B-abliterated
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.21">turboderp's ExLlamaV2 v0.0.21</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: <a href="https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated">mlabonne/NeuralDaredevil-8B-abliterated</a><br>
Calibration dataset: <a href="https://huggingface.co/datasets/cosmicvalor/toxic-qna">toxic-qna</a>
## Available sizes
| Branch | Bits | lm_head bits | Description |
| ----- | ---- | ------- | ------------ |
| [8_0](https://huggingface.co/Apel-sin/llama-3-NeuralDaredevil-8B-abliterated-exl2/tree/8_0) | 8.0 | 8.0 | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Apel-sin/llama-3-NeuralDaredevil-8B-abliterated-exl2/tree/6_5) | 6.5 | 8.0 | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_5](https://huggingface.co/Apel-sin/llama-3-NeuralDaredevil-8B-abliterated-exl2/tree/5_5) | 5.5 | 8.0 | Slightly lower quality vs 6.5, but usable on 8GB cards. |
# Llama-3-8B-Instruct-abliterated-v3 Model Card
[My Jupyter "cookbook" to replicate the methodology can be found here, refined library coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb)
This is [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on that which was described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)' which I encourage you to read to understand more.
## Hang on, "abliteration"? Orthogonalization? Ablation? What is this?
TL;DR: This model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or that it will understand your request, and it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original 70B instruct model was, just with the strongest refusal directions orthogonalized out.
**TL;TL;DR;DR: It's uncensored in the purest form I can manage -- no new or changed behaviour in any other respect from the original model.**
As far as "abliteration": it's just a fun play-on-words using the original "ablation" term used in the original paper to refer to removing features, which I made up particularly to differentiate the model from "uncensored" fine-tunes.
Ablate + obliterated = Abliterated
Anyways, orthogonalization and ablation both refer to the same thing here: the refusal feature was "ablated" from the model via orthogonalization.
## A little more on the methodology, and why this is interesting
To me, ablation (or applying the methodology for the inverse, "augmentation") seems to be good for inducing/removing very specific features that you'd have to spend way too many tokens on encouraging or discouraging in your system prompt.
Instead, you just apply your system prompt in the ablation script against a blank system prompt on the same dataset and orthogonalize for the desired behaviour in the final model weights.
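As a rough sketch of the core operation: given a unit "refusal direction" r in the residual stream, ablation projects that direction out of the weight matrices that write into the stream. The shapes and choice of matrices below are illustrative assumptions, not the cookbook's exact code:
```python
import torch

def ablate_direction(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Remove the component along direction r from each output of W.

    W: (d_model, d_in) weight matrix whose outputs feed the residual stream.
    r: (d_model,) direction to ablate (e.g. a mean refusal activation).
    """
    r = r / r.norm()                   # ensure unit norm
    return W - torch.outer(r, r @ W)   # W minus its projection onto r
```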
> Why this over fine-tuning?
Ablation is much more surgical in nature whilst also being effectively executed with a _lot_ less data than fine-tuning, which I think is its main advantage.
Its most valuable aspect is that it keeps as much of the original model's knowledge and training intact, whilst removing its tendency to behave in one very specific undesirable manner. (In this case, refusing user requests.)
Fine tuning is still exceptionally useful and the go-to for broad behaviour changes; however, you may be able to get close to your desired behaviour with very few samples using the ablation/augmentation techniques.
It may also be a useful step to add to your model refinement: orthogonalize -> fine-tune or vice-versa.
I haven't really gotten around to exploring this model stacked with fine-tuning; I encourage others to give it a shot if they've got the capacity.
> Okay, fine, but why V3? There's no V2 70B?
Well, I released a V2 a while back for 8B under Cognitive Computations.
It ended up being not worth it to try V2 with 70B; I wanted to refine the model before wasting compute cycles on what might not even be a better model.
I am however quite pleased about this latest methodology, it seems to have induced fewer hallucinations.
So, to show that it's a fancy new methodology even relative to the 8B V2, I decided to do a Microsoft and double up on my version jump because it's *such* an advancement (or so the excuse went; in actuality, Microsoft skipped "Windows 9" because too many legacy but actively used libraries checked for 'Windows 9' in the OS name to detect Windows 95/98 as one).
## Quirkiness awareness notice
This model may come with interesting quirks, as the methodology is so new. I encourage you to play with the model, and post any quirks you notice in the community tab, as that'll help us further understand what side effects this orthogonalization may have.
If you manage to develop further improvements, please share! This is really the most basic way to use ablation, but there are other possibilities that I believe are as-yet unexplored.
Additionally, feel free to reach out in any way about this. I'm on the Cognitive Computations Discord, I'm watching the Community tab, reach out! I'd love to see this methodology used in other ways, and so would gladly support whoever whenever I can.
|
jrahn/llama-3-8b-codestruct-v1
|
jrahn
| 2024-05-31T10:15:32Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"dataset:sahil2801/CodeAlpaca-20k",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-05-31T10:13:11Z |
---
license: llama3
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- sahil2801/CodeAlpaca-20k
model-index:
- name: outputs/llama-3-8b-codestruct-v1/
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast
load_in_8bit: false
load_in_4bit: true
strict: false
chat_template: llama3
datasets:
- path: sahil2801/CodeAlpaca-20k
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.05
output_dir: ./outputs/llama-3-8b-codestruct-v1/
adapter: qlora
lora_model_dir:
sequence_len: 512
sample_packing: false
pad_to_sequence_len: true
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 16
num_epochs: 1
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
- full_shard
- auto_wrap
fsdp_config:
fsdp_limit_all_gathers: true
fsdp_sync_module_states: true
fsdp_offload_params: true
fsdp_use_orig_params: false
fsdp_cpu_ram_efficient_loading: true
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sharding_strategy: FULL_SHARD
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# outputs/llama-3-8b-codestruct-v1/
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the [sahil2801/CodeAlpaca-20k](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5117
## Model description
More information needed
## Intended uses & limitations
More information needed
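That said, since this repo holds a QLoRA adapter for Meta-Llama-3-8B-Instruct, a minimal PEFT loading sketch might look like the following. The dtype, device map, and example prompt are illustrative assumptions; the gated Llama-3 base requires accepted access, and if the adapter repo lacks tokenizer files, load the tokenizer from the base model instead:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and applies the adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained(
    "jrahn/llama-3-8b-codestruct-v1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("jrahn/llama-3-8b-codestruct-v1")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```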
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9362 | 0.0017 | 1 | 0.9783 |
| 0.5812 | 0.2508 | 149 | 0.5325 |
| 0.4651 | 0.5017 | 298 | 0.5170 |
| 0.5264 | 0.7525 | 447 | 0.5117 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
BoyaWu10/bunny-pretrain-llama3-8b-siglip-s2
|
BoyaWu10
| 2024-05-31T10:15:01Z | 6 | 1 |
transformers
|
[
"transformers",
"bunny-llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-05-31T10:10:44Z |
---
inference: false
license: apache-2.0
---
# Model Card
Bunny is a family of lightweight multimodal models.
Bunny-pretrain-llama3-8b-siglip-s2 provides the pretrained weights for [Bunny-v1.1-Llama-3-8B-V](https://huggingface.co/BAAI/Bunny-v1_1-Llama-3-8B-V), which leverages Llama-3-8B-Instruct as the language model backbone and SigLIP as the vision encoder.
It is pretrained on LAION-2M.
More details about this model can be found in [GitHub](https://github.com/BAAI-DCAI/Bunny).
# License
This project utilizes certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of these original licenses.
The content of this project itself is licensed under the Apache license 2.0.
|
irahulpandey/Llamahodorv1
|
irahulpandey
| 2024-05-31T10:14:05Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-31T10:10:01Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
v-urushkin/NaturalRoBERTa_65ep
|
v-urushkin
| 2024-05-31T10:09:34Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"fill-mask",
"ru",
"dataset:tay-yozhik/NaturalText",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-05-31T09:09:34Z |
---
library_name: transformers
datasets:
- tay-yozhik/NaturalText
language:
- ru
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
leroy2024/lora_model
|
leroy2024
| 2024-05-31T10:08:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T10:08:46Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** leroy2024
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Haru4me/ppo-SnowballTarget
|
Haru4me
| 2024-05-31T10:03:43Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-05-31T10:03:40Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Haru4me/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
IlyaGusev/saiga_phi3_medium_sft_m1_d2
|
IlyaGusev
| 2024-05-31T09:57:10Z | 0 | 1 |
peft
|
[
"peft",
"safetensors",
"mistral",
"arxiv:1910.09700",
"region:us"
] | null | 2024-05-31T09:43:12Z |
---
library_name: peft
base_model: models/phi3_medium
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
NikolayKozloff/Llama-3-11.5B-V2-Q5_0-GGUF
|
NikolayKozloff
| 2024-05-31T09:56:17Z | 1 | 1 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T09:55:54Z |
---
license: other
tags:
- llama-cpp
- gguf-my-repo
base_model: Replete-AI/Llama-3-11.5B-V2
license_name: llama-3
license_link: https://llama.meta.com/llama3/license/
---
# NikolayKozloff/Llama-3-11.5B-V2-Q5_0-GGUF
This model was converted to GGUF format from [`Replete-AI/Llama-3-11.5B-V2`](https://huggingface.co/Replete-AI/Llama-3-11.5B-V2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Replete-AI/Llama-3-11.5B-V2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo NikolayKozloff/Llama-3-11.5B-V2-Q5_0-GGUF --hf-file llama-3-11.5b-v2-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Llama-3-11.5B-V2-Q5_0-GGUF --hf-file llama-3-11.5b-v2-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo NikolayKozloff/Llama-3-11.5B-V2-Q5_0-GGUF --hf-file llama-3-11.5b-v2-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo NikolayKozloff/Llama-3-11.5B-V2-Q5_0-GGUF --hf-file llama-3-11.5b-v2-q5_0.gguf -c 2048
```
|
hanane22/falcon-7b-instruct-ft-adapters_han
|
hanane22
| 2024-05-31T09:54:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T09:34:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ahmadjarrar/phi-2-pi
|
ahmadjarrar
| 2024-05-31T09:53:20Z | 152 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-31T09:26:51Z |
---
license: apache-2.0
---
|
lrycro/bert-phishing-categorization-tokenizer3
|
lrycro
| 2024-05-31T09:52:02Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T09:52:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FINwillson/llama-3-8B-welfare-sft-v2
|
FINwillson
| 2024-05-31T09:51:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T09:50:50Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** FINwillson
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
NikolayKozloff/Llama-3-11.5B-V2-Q4_0-GGUF
|
NikolayKozloff
| 2024-05-31T09:51:24Z | 2 | 1 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T09:51:06Z |
---
license: other
tags:
- llama-cpp
- gguf-my-repo
base_model: Replete-AI/Llama-3-11.5B-V2
license_name: llama-3
license_link: https://llama.meta.com/llama3/license/
---
# NikolayKozloff/Llama-3-11.5B-V2-Q4_0-GGUF
This model was converted to GGUF format from [`Replete-AI/Llama-3-11.5B-V2`](https://huggingface.co/Replete-AI/Llama-3-11.5B-V2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Replete-AI/Llama-3-11.5B-V2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo NikolayKozloff/Llama-3-11.5B-V2-Q4_0-GGUF --hf-file llama-3-11.5b-v2-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Llama-3-11.5B-V2-Q4_0-GGUF --hf-file llama-3-11.5b-v2-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo NikolayKozloff/Llama-3-11.5B-V2-Q4_0-GGUF --hf-file llama-3-11.5b-v2-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo NikolayKozloff/Llama-3-11.5B-V2-Q4_0-GGUF --hf-file llama-3-11.5b-v2-q4_0.gguf -c 2048
```
|
LyliaEngine/Sinozick_Style_XL_Pony
|
LyliaEngine
| 2024-05-31T09:46:52Z | 92 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:LyliaEngine/Pony_Diffusion_V6_XL",
"base_model:adapter:LyliaEngine/Pony_Diffusion_V6_XL",
"license:cdla-permissive-2.0",
"region:us"
] |
text-to-image
| 2024-05-31T09:44:35Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
(score_9, score_8_up), score_7_up, zPDXL, 1girl, upper body, black mantle,
earrings, cyberpunk, eyepatch, neon eyepatch, black hair, wild hair, long
hair, red eyes, looking at viewer, expressionless, dark, dark theme, black
sclera, konohagakure symbol, forehead protector, naruto \(series\),
<lora:Sinozick_Style_XL_Pony:1>, sinozick style
parameters:
negative_prompt: >-
(extra fingers, deformed hands, polydactyl:1.1), (worst quality, low
quality:1.2), bad quality, shiny, blurry, artists signature, (multiple
tails), nuzzle, censored, pixelated, zPDXL-neg, pointy ears,
output:
url: images/00012-3760017729.jpeg
- text: >-
(score_9, score_8_up), score_7_up, zPDXL, 1girl, white hair, short hair,
white eyes, mouth mask, looking at viewer, white kimono, red background,
film grain, cowboy shot <lora:Sinozick_Style_XL_Pony:1>, sinozick style
parameters:
negative_prompt: >-
(extra fingers, deformed hands, polydactyl:1.1), (worst quality, low
quality:1.2), bad quality, shiny, blurry, artists signature, (multiple
tails), nuzzle, censored, pixelated, zPDXL-neg, pointy ears,
output:
url: images/00019-46392353.jpeg
base_model: LyliaEngine/Pony_Diffusion_V6_XL
instance_prompt: sinozick style, flat color, dark theme
license: cdla-permissive-2.0
---
# Sinozick_Style_XL_Pony
<Gallery />
## Model description
Sinozick is an AI artist on Twitter whose work I like a lot; the style he gets in his images is incredible, and I wanted to reproduce it as best I could. I think he uses MidJourney, and SD can't replicate it perfectly, but I'm satisfied enough with the result.
One flaw: it works better for OCs; using it with pre-made characters can reduce the impact of the style.
Activation prompt: sinozick style
Helpful prompts: dark theme, flat color
If you enjoyed this LoRA, think about leaving a like and posting some images! Thanks! <3
## Source
https://civitai.com/models/432483/sinozick-style-or-style-lora-or-pony
## Credit
https://civitai.com/user/LennonAI
## Trigger words
You should use `sinozick style`, `flat color`, and `dark theme` to trigger the image generation.
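A minimal diffusers sketch for applying this LoRA on top of the Pony base follows. It assumes the base repo is in diffusers format (otherwise use `from_single_file`), that `load_lora_weights` finds the default weights file in this repo (pass `weight_name=` if several files exist), and the prompt and settings are illustrative:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Base model taken from the card metadata above.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "LyliaEngine/Pony_Diffusion_V6_XL",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("LyliaEngine/Sinozick_Style_XL_Pony")

image = pipe(
    prompt="score_9, score_8_up, 1girl, sinozick style, flat color, dark theme",
    negative_prompt="worst quality, low quality, blurry",
    num_inference_steps=25,
).images[0]
image.save("sinozick_sample.png")
```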
## Download model
Weights for this model are available in Safetensors format.
[Download](/LyliaEngine/Sinozick_Style_XL_Pony/tree/main) them in the Files & versions tab.
|
alterf/json_mistral
|
alterf
| 2024-05-31T09:46:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T09:45:54Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
---
# Uploaded model
- **Developed by:** alterf
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ninagroot/GPT2-705M-finaltest
|
ninagroot
| 2024-05-31T09:44:19Z | 3 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-29T14:39:39Z |
---
tags:
- generated_from_trainer
model-index:
- name: GPT2-705M
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPT2-705M
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.8119 | 1.0 | 3 | 6.8091 |
| 6.6598 | 2.0 | 6 | 6.8246 |
| 6.0219 | 3.0 | 9 | 6.2434 |
| 5.1608 | 4.0 | 12 | 5.4866 |
| 4.6874 | 5.0 | 15 | 5.7119 |
| 4.7554 | 6.0 | 18 | 4.9916 |
| 4.3244 | 7.0 | 21 | 4.8076 |
| 4.3358 | 8.0 | 24 | 4.7170 |
| 4.3353 | 9.0 | 27 | 4.4035 |
| 4.0477 | 10.0 | 30 | 4.1959 |
| 3.7513 | 11.0 | 33 | 3.9729 |
| 3.7101 | 12.0 | 36 | 3.8325 |
| 3.333 | 13.0 | 39 | 3.7540 |
| 3.3225 | 14.0 | 42 | 3.6116 |
| 2.9902 | 15.0 | 45 | 3.5063 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Avelina/lovelace-medium-alpha1
|
Avelina
| 2024-05-31T09:44:17Z | 55 | 1 |
transformers
|
[
"transformers",
"safetensors",
"lsw_transformer",
"text-generation",
"en",
"dataset:EleutherAI/pile",
"arxiv:2405.20053",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-26T12:43:00Z |
---
license: bsd-3-clause
datasets:
- EleutherAI/pile
language:
- en
library_name: transformers
---
# Lovelace Medium Alpha1
551M-parameter Transformer-XL-style model trained on 100B tokens of The Pile!
This model was originally trained for the "Direct Preference Heads" paper, but will also be used as the basis for much of my future research.
All code used to train and run these models is available here: https://github.com/Avelina9X/direct-preference-heads and our paper is available here: https://arxiv.org/abs/2405.20053
## Model Architecture
| Name | Value |
| --- | --- |
| Total Parameters | 551M |
| Non-Embedding Parameters | 512M |
| Vocab Size | 50272 |
| \\(d_\text{vocab}\\) | 768 |
| \\(d_\text{model}\\) | 1536 |
| \\(n_\text{layers}\\) | 18 |
| FFN Activation | SwiGLU |
| \\(d_\text{ffn}\\) | 4096 |
| Attention Type | Full |
| Position Embedding | Reversed RoPE with ABF |
| \\(n_\text{heads}\\) | 24 |
| \\(d_\text{key}\\) | 64 |
| Trained Context | 2048 |
| Trained Memory | 2048 |
| Max Inference Context | 4096 |
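Since the checkpoint registers a custom `lsw_transformer` architecture, loading it through transformers presumably requires trusting the custom code shipped with the repo. This is a minimal, untested sketch under that assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code is assumed to be required for the custom
# `lsw_transformer` model type registered in this repo.
tokenizer = AutoTokenizer.from_pretrained("Avelina/lovelace-medium-alpha1")
model = AutoModelForCausalLM.from_pretrained(
    "Avelina/lovelace-medium-alpha1",
    trust_remote_code=True,
)
```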
## Model Collection
| Model | Link |
| --- | --- |
| Pre-Trained Model | [lovelace-medium-alpha1](https://huggingface.co/Avelina/lovelace-medium-alpha1) |
| Fine-Tuned Model | [lovelace-medium-alpha1-sft](https://huggingface.co/Avelina/lovelace-medium-alpha1-sft) |
| DPH Aligned Model | [lovelace-medium-alpha1-dph](https://huggingface.co/Avelina/lovelace-medium-alpha1-dph) |
|
ninagroot/Llama-360M-finaltest
|
ninagroot
| 2024-05-31T09:42:44Z | 169 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-30T07:34:53Z |
---
tags:
- generated_from_trainer
model-index:
- name: Llama-360M
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-360M
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 300
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.6417 | 1.0 | 3 | 8.5751 |
| 8.3908 | 2.0 | 6 | 8.3473 |
| 7.9583 | 3.0 | 9 | 7.9814 |
| 7.3598 | 4.0 | 12 | 7.5011 |
| 6.7468 | 5.0 | 15 | 6.9942 |
| 6.3345 | 6.0 | 18 | 6.6309 |
| 6.0489 | 7.0 | 21 | 6.3987 |
| 5.9651 | 8.0 | 24 | 6.2101 |
| 5.7683 | 9.0 | 27 | 5.9691 |
| 5.3051 | 10.0 | 30 | 5.5791 |
| 4.6791 | 11.0 | 33 | 5.1445 |
| 4.3962 | 12.0 | 36 | 4.8859 |
| 4.0007 | 13.0 | 39 | 4.7013 |
| 3.9473 | 14.0 | 42 | 4.4994 |
| 3.5486 | 15.0 | 45 | 4.3178 |
| 3.3243 | 16.0 | 48 | 4.1587 |
| 3.1305 | 17.0 | 51 | 4.0505 |
| 2.8703 | 18.0 | 54 | 3.9467 |
| 2.7661 | 19.0 | 57 | 3.8780 |
| 2.7976 | 20.0 | 60 | 3.8245 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
IlyaGusev/saiga_llama3_8b_sft_m10_d1
|
IlyaGusev
| 2024-05-31T09:42:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"region:us"
] | null | 2024-05-31T09:38:04Z |
---
library_name: peft
base_model: models/llama-3-8b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
Sharan1712/llama2_7B_alpaca_loftq_4bit_3b
|
Sharan1712
| 2024-05-31T09:31:23Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-31T09:28:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
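Until the authors supply official starter code, a minimal sketch using the standard transformers API may serve; the repo id comes from this card's metadata, and everything else here is an assumption:

```python
# Sketch only: load the 4-bit bitsandbytes checkpoint named in this repo's
# metadata. The quantization config stored with the weights is picked up
# automatically by from_pretrained; requires accelerate and bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sharan1712/llama2_7B_alpaca_loftq_4bit_3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```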
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sharan1712/llama2_7B_alpaca_loftq_4bit_3c
|
Sharan1712
| 2024-05-31T09:30:07Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-31T09:27:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
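As with the sibling checkpoint above, no starter code is provided; under the same assumptions (repo id from this card's metadata, standard transformers API), a short generation sketch might look like this:

```python
# Sketch only: load the 4-bit checkpoint and run a short generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sharan1712/llama2_7B_alpaca_loftq_4bit_3c"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain LoftQ quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```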
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ar9av/idefics2-8b-fintuned-synthetic_chart_data
|
ar9av
| 2024-05-31T09:29:52Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"base_model:finetune:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"region:us"
] | null | 2024-05-31T09:29:46Z |
---
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: idefics2-8b-fintuned-synthetic_chart_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# idefics2-8b-fintuned-synthetic_chart_data
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the hedged `TrainingArguments` sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
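As a sketch only, the settings above can be expressed as transformers `TrainingArguments`; `output_dir` and any value not listed in the card are assumptions:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above. output_dir and anything not
# named in the card are assumptions, not recorded values.
args = TrainingArguments(
    output_dir="idefics2-8b-fintuned-synthetic_chart_data",  # assumption
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # effective train batch size of 16
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=1,
    seed=42,
)
```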
### Training results
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
|
katopz/llama-3-typhoon-v1.5x-8b-instruct-Q4_K_M-GGUF
|
katopz
| 2024-05-31T09:29:51Z | 10 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"th",
"en",
"base_model:scb10x/llama-3-typhoon-v1.5x-8b-instruct",
"base_model:quantized:scb10x/llama-3-typhoon-v1.5x-8b-instruct",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-05-31T09:29:34Z |
---
language:
- th
- en
license: llama3
tags:
- llama-cpp
- gguf-my-repo
base_model: scb10x/llama-3-typhoon-v1.5x-8b-instruct
pipeline_tag: text-generation
---
# katopz/llama-3-typhoon-v1.5x-8b-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`scb10x/llama-3-typhoon-v1.5x-8b-instruct`](https://huggingface.co/scb10x/llama-3-typhoon-v1.5x-8b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/scb10x/llama-3-typhoon-v1.5x-8b-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo katopz/llama-3-typhoon-v1.5x-8b-instruct-Q4_K_M-GGUF --hf-file llama-3-typhoon-v1.5x-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo katopz/llama-3-typhoon-v1.5x-8b-instruct-Q4_K_M-GGUF --hf-file llama-3-typhoon-v1.5x-8b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./main --hf-repo katopz/llama-3-typhoon-v1.5x-8b-instruct-Q4_K_M-GGUF --hf-file llama-3-typhoon-v1.5x-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./server --hf-repo katopz/llama-3-typhoon-v1.5x-8b-instruct-Q4_K_M-GGUF --hf-file llama-3-typhoon-v1.5x-8b-instruct-q4_k_m.gguf -c 2048
```
|