| modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-21 00:45:47) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 567 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-21 00:45:01) | card (string, length 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
andakm/cars_new_classifier
|
andakm
| 2024-05-26T06:04:48Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-20T16:20:51Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: andakm/cars_new_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# andakm/cars_new_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0611
- Train Accuracy: 0.6863
- Epoch: 4
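A minimal, hedged TF inference sketch for this checkpoint (it assumes the repo ships its own image processor config — otherwise the base google/vit-base-patch16-224-in21k processor can be substituted — and `car.jpg` is a placeholder path):
```python
from PIL import Image
from transformers import AutoImageProcessor, TFViTForImageClassification

# Hedged sketch: load the fine-tuned TF ViT checkpoint and classify a single image.
processor = AutoImageProcessor.from_pretrained("andakm/cars_new_classifier")
model = TFViTForImageClassification.from_pretrained("andakm/cars_new_classifier")

image = Image.open("car.jpg")  # placeholder path to an RGB car photo
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
print(model.config.id2label[int(logits.numpy().argmax(-1)[0])])
```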
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 2295, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
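For reference, the optimizer dump above corresponds roughly to the TF helper in `transformers`; this is a hedged reconstruction, and the warmup step count is an assumption since none is listed.
```python
from transformers import create_optimizer

# Hedged sketch: AdamWeightDecay with a linear PolynomialDecay schedule, matching the
# config above (3e-05 initial LR decayed to 0 over 2295 steps, weight decay 0.01).
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-05,
    num_train_steps=2295,   # PolynomialDecay decay_steps from the dump
    num_warmup_steps=0,     # not reported in the card; assumed
    weight_decay_rate=0.01,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```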
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 2.0876 | 0.2941 | 0 |
| 1.8215 | 0.3922 | 1 |
| 1.5758 | 0.4510 | 2 |
| 1.3175 | 0.5490 | 3 |
| 1.0611 | 0.6863 | 4 |
### Framework versions
- Transformers 4.41.1
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-4_25bpw_exl2
|
Zoyd
| 2024-05-26T06:03:21Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-05-25T23:20:32Z |
---
library_name: transformers
license: llama3
---
**Exllamav2** quant (**exl2** / **4.25 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-2_2bpw_exl2)**</center> | <center>20886 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-2_5bpw_exl2)**</center> | <center>23200 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-3_0bpw_exl2)**</center> | <center>27269 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-3_5bpw_exl2)**</center> | <center>31359 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-3_75bpw_exl2)**</center> | <center>33395 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-4_0bpw_exl2)**</center> | <center>35426 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-4_25bpw_exl2)**</center> | <center>37478 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-5_0bpw_exl2)**</center> | <center>43559 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-6_0bpw_exl2)**</center> | <center>51958 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-6_5bpw_exl2)**</center> | <center>56019 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-8_0bpw_exl2)**</center> | <center>61865 MB</center> | <center>8</center> |
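Each bpw in the table above is published as its own repository, so fetching a quant is a plain snapshot download; a hedged sketch with `huggingface_hub` (swap in whichever repo id from the table fits your VRAM):
```python
from huggingface_hub import snapshot_download

# Hedged sketch: download the 4.25 bpw quant listed above to the local cache.
local_dir = snapshot_download(
    "Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-4_25bpw_exl2"
)
print(local_dir)
```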
# Smaug-Llama-3-70B-Instruct-abliterated-v3 Model Card
[My Jupyter "cookbook" to replicate the methodology can be found here, refined library coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb)
I'll be honest: it just kinda bothered me Smaug isn't evil enough.
This is [abacusai/Smaug-Llama-3-70B-Instruct](https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on that which was described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)' which I encourage you to read to understand more.
## Hang on, "abliteration"? Orthogonalization? Ablation? What is this?
TL;DR: This model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or misunderstand your request; it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original 70B instruct model, just with the strongest refusal directions orthogonalized out.
**TL;TL;DR;DR: It's uncensored in the purest form I can manage -- no new or changed behaviour in any other respect from the original model.**
As far as "abliteration": it's just a fun play-on-words using the original "ablation" term used in the original paper to refer to removing features, which I made up particularly to differentiate the model from "uncensored" fine-tunes.
Ablate + obliterated = Abliterated
Anyway, orthogonalization and ablation both refer to the same thing here: the refusal feature was "ablated" from the model via orthogonalization.
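For illustration only, a tiny PyTorch sketch of the core projection idea (the names and the single-direction simplification are mine, not the actual cookbook):
```python
import torch

def orthogonalize_weight(W: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Project an estimated refusal direction out of a weight matrix.

    W writes into the residual stream (shape [d_model, d_in]); refusal_dir is a
    vector in the residual stream (shape [d_model]). Returns (I - r r^T) W, so the
    layer can no longer write along the refusal direction.
    """
    r = refusal_dir / refusal_dir.norm()
    return W - torch.outer(r, r @ W)
```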
## A little more on the methodology, and why this is interesting
To me, ablation (or applying the methodology for the inverse, "augmentation") seems to be good for inducing/removing very specific features that you'd have to spend way too many tokens on encouraging or discouraging in your system prompt.
Instead, you just apply your system prompt in the ablation script against a blank system prompt on the same dataset and orthogonalize for the desired behaviour in the final model weights.
> Why this over fine-tuning?
Ablation is much more surgical in nature whilst also being effectively executed with a _lot_ less data than fine-tuning, which I think is its main advantage.
Its most valuable aspect is that it keeps as much of the original model's knowledge and training intact as possible, whilst removing its tendency to behave in one very specific undesirable manner (in this case, refusing user requests).
Fine tuning is still exceptionally useful and the go-to for broad behaviour changes; however, you may be able to get close to your desired behaviour with very few samples using the ablation/augmentation techniques.
It may also be a useful step to add to your model refinement: orthogonalize -> fine-tune or vice-versa.
I haven't really gotten around to exploring this model stacked with fine-tuning; I encourage others to give it a shot if they've got the capacity.
> Okay, fine, but why V3? There's no V2 70B?
Well, I released a V2 a while back for 8B under Cognitive Computations.
It ended up not being worth it to try V2 with 70B; I wanted to refine the model before wasting compute cycles on what might not even be a better model.
I am, however, quite pleased with this latest methodology; it seems to have induced fewer hallucinations.
So, to show that this is a new, fancier methodology than even that of the 8B V2, I decided to do a Microsoft and double up on my version jump because it's *such* an advancement (or so the excuse went; in actuality, it was because too many legacy but actively used Microsoft libraries checked for 'Windows 9' in the OS name to detect Windows 95/98 as one).
## Quirkiness awareness notice
This model may come with interesting quirks, with the methodology being so new. I encourage you to play with the model, and post any quirks you notice in the community tab, as that'll help us further understand what this orthogonalization has in the way of side effects.
If you manage to develop further improvements, please share! This is really the most basic way to use ablation, but there are other possibilities that I believe are as-yet unexplored.
Additionally, feel free to reach out in any way about this. I'm on the Cognitive Computations Discord, I'm watching the Community tab, reach out! I'd love to see this methodology used in other ways, and so would gladly support whoever whenever I can.
|
second-state/Qwen1.5-14B-Chat-GGUF
|
second-state
| 2024-05-26T06:00:22Z | 116 | 4 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation",
"chat",
"en",
"base_model:Qwen/Qwen1.5-14B-Chat",
"base_model:quantized:Qwen/Qwen1.5-14B-Chat",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-02-06T09:40:52Z |
---
base_model: Qwen/Qwen1.5-14B-Chat
license: other
license_name: tongyi-qianwen-research
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-14B-Chat/blob/main/LICENSE
model_creator: Qwen
model_name: Qwen1.5 14B Chat
quantized_by: Second State Inc.
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Qwen1.5-14B-Chat-GGUF
## Original Model
[Qwen/Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat)
## Run with LlamaEdge
- LlamaEdge version: [v0.2.15](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.15) and above
- Prompt template
- Prompt type: `chatml`
- Prompt string
```text
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- Context size: `32000`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen1.5-14B-Chat-Q5_K_M.gguf llama-api-server.wasm -p chatml
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen1.5-14B-Chat-Q5_K_M.gguf llama-chat.wasm -p chatml
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Qwen1.5-14B-Chat-Q2_K.gguf](https://huggingface.co/second-state/Qwen1.5-14B-Chat-GGUF/blob/main/Qwen1.5-14B-Chat-Q2_K.gguf) | Q2_K | 2 | 6.09 GB| smallest, significant quality loss - not recommended for most purposes |
| [Qwen1.5-14B-Chat-Q3_K_L.gguf](https://huggingface.co/second-state/Qwen1.5-14B-Chat-GGUF/blob/main/Qwen1.5-14B-Chat-Q3_K_L.gguf) | Q3_K_L | 3 | 7.84 GB| small, substantial quality loss |
| [Qwen1.5-14B-Chat-Q3_K_M.gguf](https://huggingface.co/second-state/Qwen1.5-14B-Chat-GGUF/blob/main/Qwen1.5-14B-Chat-Q3_K_M.gguf) | Q3_K_M | 3 | 7.42 GB| very small, high quality loss |
| [Qwen1.5-14B-Chat-Q3_K_S.gguf](https://huggingface.co/second-state/Qwen1.5-14B-Chat-GGUF/blob/main/Qwen1.5-14B-Chat-Q3_K_S.gguf) | Q3_K_S | 3 | 6.95 GB| very small, high quality loss |
| [Qwen1.5-14B-Chat-Q4_0.gguf](https://huggingface.co/second-state/Qwen1.5-14B-Chat-GGUF/blob/main/Qwen1.5-14B-Chat-Q4_0.gguf) | Q4_0 | 4 | 8.18 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Qwen1.5-14B-Chat-Q4_K_M.gguf](https://huggingface.co/second-state/Qwen1.5-14B-Chat-GGUF/blob/main/Qwen1.5-14B-Chat-Q4_K_M.gguf) | Q4_K_M | 4 | 9.19 GB| medium, balanced quality - recommended |
| [Qwen1.5-14B-Chat-Q4_K_S.gguf](https://huggingface.co/second-state/Qwen1.5-14B-Chat-GGUF/blob/main/Qwen1.5-14B-Chat-Q4_K_S.gguf) | Q4_K_S | 4 | 8.56 GB| small, greater quality loss |
| [Qwen1.5-14B-Chat-Q5_0.gguf](https://huggingface.co/second-state/Qwen1.5-14B-Chat-GGUF/blob/main/Qwen1.5-14B-Chat-Q5_0.gguf) | Q5_0 | 5 | 9.85 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen1.5-14B-Chat-Q5_K_M.gguf](https://huggingface.co/second-state/Qwen1.5-14B-Chat-GGUF/blob/main/Qwen1.5-14B-Chat-Q5_K_M.gguf) | Q5_K_M | 5 | 10.5 GB| large, very low quality loss - recommended |
| [Qwen1.5-14B-Chat-Q5_K_S.gguf](https://huggingface.co/second-state/Qwen1.5-14B-Chat-GGUF/blob/main/Qwen1.5-14B-Chat-Q5_K_S.gguf) | Q5_K_S | 5 | 10.0 GB| large, low quality loss - recommended |
| [Qwen1.5-14B-Chat-Q6_K.gguf](https://huggingface.co/second-state/Qwen1.5-14B-Chat-GGUF/blob/main/Qwen1.5-14B-Chat-Q6_K.gguf) | Q6_K | 6 | 12.3 GB| very large, extremely low quality loss |
| [Qwen1.5-14B-Chat-Q8_0.gguf](https://huggingface.co/second-state/Qwen1.5-14B-Chat-GGUF/blob/main/Qwen1.5-14B-Chat-Q8_0.gguf) | Q8_0 | 8 | 15.1 GB| very large, extremely low quality loss - not recommended |
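To fetch one of the files above without cloning the whole repo, a hedged `huggingface_hub` sketch (here the Q5_K_M file used in the LlamaEdge commands):
```python
from huggingface_hub import hf_hub_download

# Hedged sketch: download the Q5_K_M GGUF referenced in the LlamaEdge commands above.
gguf_path = hf_hub_download(
    repo_id="second-state/Qwen1.5-14B-Chat-GGUF",
    filename="Qwen1.5-14B-Chat-Q5_K_M.gguf",
)
print(gguf_path)
```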
|
DiederikMartens/gBERT_sa_cv_10_fold5
|
DiederikMartens
| 2024-05-26T06:00:10Z | 113 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-26T05:35:33Z |
---
license: mit
base_model: google-bert/bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: gBERT_sa_cv_10_fold5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gBERT_sa_cv_10_fold5
This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5893
- F1: 0.6773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
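For reference, a hedged sketch of how these hyperparameters map onto `TrainingArguments`; `output_dir` and anything not listed above are placeholders, not values from the original run.
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gBERT_sa_cv_10_fold5",  # placeholder
    learning_rate=4.47e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```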
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 401 | 0.4275 | 0.5400 |
| 0.3946 | 2.0 | 802 | 0.4152 | 0.6578 |
| 0.1794 | 3.0 | 1203 | 0.5893 | 0.6773 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
second-state/Qwen1.5-7B-Chat-GGUF
|
second-state
| 2024-05-26T05:59:54Z | 91 | 1 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation",
"chat",
"en",
"base_model:Qwen/Qwen1.5-7B-Chat",
"base_model:quantized:Qwen/Qwen1.5-7B-Chat",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-02-06T06:59:47Z |
---
base_model: Qwen/Qwen1.5-7B-Chat
license: other
license_name: tongyi-qianwen-research
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/LICENSE
model_creator: Qwen
model_name: Qwen1.5 7B Chat
quantized_by: Second State Inc.
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Qwen1.5-7B-Chat-GGUF
## Original Model
[Qwen/Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat)
## Run with LlamaEdge
- LlamaEdge version: [v0.2.15](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.15) and above
- Prompt template
- Prompt type: `chatml`
- Prompt string
```text
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- Context size: `32000`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen1.5-7B-Chat-Q5_K_M.gguf llama-api-server.wasm -p chatml
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen1.5-7B-Chat-Q5_K_M.gguf llama-chat.wasm -p chatml
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Qwen1.5-7B-Chat-Q2_K.gguf](https://huggingface.co/second-state/Qwen1.5-7B-Chat-GGUF/blob/main/Qwen1.5-7B-Chat-Q2_K.gguf) | Q2_K | 2 | 3.10 GB| smallest, significant quality loss - not recommended for most purposes |
| [Qwen1.5-7B-Chat-Q3_K_L.gguf](https://huggingface.co/second-state/Qwen1.5-7B-Chat-GGUF/blob/main/Qwen1.5-7B-Chat-Q3_K_L.gguf) | Q3_K_L | 3 | 4.22 GB| small, substantial quality loss |
| [Qwen1.5-7B-Chat-Q3_K_M.gguf](https://huggingface.co/second-state/Qwen1.5-7B-Chat-GGUF/blob/main/Qwen1.5-7B-Chat-Q3_K_M.gguf) | Q3_K_M | 3 | 3.92 GB| very small, high quality loss |
| [Qwen1.5-7B-Chat-Q3_K_S.gguf](https://huggingface.co/second-state/Qwen1.5-7B-Chat-GGUF/blob/main/Qwen1.5-7B-Chat-Q3_K_S.gguf) | Q3_K_S | 3 | 3.57 GB| very small, high quality loss |
| [Qwen1.5-7B-Chat-Q4_0.gguf](https://huggingface.co/second-state/Qwen1.5-7B-Chat-GGUF/blob/main/Qwen1.5-7B-Chat-Q4_0.gguf) | Q4_0 | 4 | 4.51 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Qwen1.5-7B-Chat-Q4_K_M.gguf](https://huggingface.co/second-state/Qwen1.5-7B-Chat-GGUF/blob/main/Qwen1.5-7B-Chat-Q4_K_M.gguf) | Q4_K_M | 4 | 4.77 GB| medium, balanced quality - recommended |
| [Qwen1.5-7B-Chat-Q4_K_S.gguf](https://huggingface.co/second-state/Qwen1.5-7B-Chat-GGUF/blob/main/Qwen1.5-7B-Chat-Q4_K_S.gguf) | Q4_K_S | 4 | 4.54 GB| small, greater quality loss |
| [Qwen1.5-7B-Chat-Q5_0.gguf](https://huggingface.co/second-state/Qwen1.5-7B-Chat-GGUF/blob/main/Qwen1.5-7B-Chat-Q5_0.gguf) | Q5_0 | 5 | 5.40 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen1.5-7B-Chat-Q5_K_M.gguf](https://huggingface.co/second-state/Qwen1.5-7B-Chat-GGUF/blob/main/Qwen1.5-7B-Chat-Q5_K_M.gguf) | Q5_K_M | 5 | 5.53 GB| large, very low quality loss - recommended |
| [Qwen1.5-7B-Chat-Q5_K_S.gguf](https://huggingface.co/second-state/Qwen1.5-7B-Chat-GGUF/blob/main/Qwen1.5-7B-Chat-Q5_K_S.gguf) | Q5_K_S | 5 | 5.4 GB| large, low quality loss - recommended |
| [Qwen1.5-7B-Chat-Q6_K.gguf](https://huggingface.co/second-state/Qwen1.5-7B-Chat-GGUF/blob/main/Qwen1.5-7B-Chat-Q6_K.gguf) | Q6_K | 6 | 6.34 GB| very large, extremely low quality loss |
| [Qwen1.5-7B-Chat-Q8_0.gguf](https://huggingface.co/second-state/Qwen1.5-7B-Chat-GGUF/blob/main/Qwen1.5-7B-Chat-Q8_0.gguf) | Q8_0 | 8 | 8.21 GB| very large, extremely low quality loss - not recommended |
|
YuyuTong/540526
|
YuyuTong
| 2024-05-26T05:58:58Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-26T05:58:21Z |
---
license: apache-2.0
---
|
second-state/Liberated-Qwen1.5-72B-GGUF
|
second-state
| 2024-05-26T05:58:06Z | 48 | 1 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation",
"en",
"base_model:abacusai/Liberated-Qwen1.5-72B",
"base_model:quantized:abacusai/Liberated-Qwen1.5-72B",
"autotrain_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-03-07T09:17:08Z |
---
base_model: abacusai/Liberated-Qwen1.5-72B
inference: false
library_name: transformers
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-72B/blob/main/LICENSE
model_creator: Abacus.AI and Eric Hartford
model_name: Liberated Qwen1.5 72B
pipeline_tag: text-generation
quantized_by: Second State Inc.
language:
- en
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Liberated-Qwen1.5-72B-GGUF
## Original Model
[abacusai/Liberated-Qwen1.5-72B](https://huggingface.co/abacusai/Liberated-Qwen1.5-72B)
## Run with LlamaEdge
- LlamaEdge version: [v0.4.3](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.4.3) and above
- Prompt template
- Prompt type: `chatml`
- Prompt string
```text
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- Context size: `32000`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Liberated-Qwen1.5-72B-Q4_K_M.gguf llama-api-server.wasm -p chatml
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Liberated-Qwen1.5-72B-Q4_K_M.gguf llama-chat.wasm -p chatml
```
For specifying the system message, append the `--system-prompt` option with the system prompt to the command above. For example,
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Liberated-Qwen1.5-72B-Q4_K_M.gguf llama-chat.wasm -p chatml -s 'Your name is Liberated. You are an uncensored and unbiased AI assistant. You always respond with a JSON object.'
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Liberated-Qwen1.5-72B-Q2_K.gguf](https://huggingface.co/second-state/Liberated-Qwen1.5-72B-GGUF/blob/main/Liberated-Qwen1.5-72B-Q2_K.gguf) | Q2_K | 2 | 28.5 GB| smallest, significant quality loss - not recommended for most purposes |
| [Liberated-Qwen1.5-72B-Q3_K_L.gguf](https://huggingface.co/second-state/Liberated-Qwen1.5-72B-GGUF/blob/main/Liberated-Qwen1.5-72B-Q3_K_L.gguf) | Q3_K_L | 3 | 38.5 GB| small, substantial quality loss |
| [Liberated-Qwen1.5-72B-Q3_K_M.gguf](https://huggingface.co/second-state/Liberated-Qwen1.5-72B-GGUF/blob/main/Liberated-Qwen1.5-72B-Q3_K_M.gguf) | Q3_K_M | 3 | 35.9 GB| very small, high quality loss |
| [Liberated-Qwen1.5-72B-Q3_K_S.gguf](https://huggingface.co/second-state/Liberated-Qwen1.5-72B-GGUF/blob/main/Liberated-Qwen1.5-72B-Q3_K_S.gguf) | Q3_K_S | 3 | 32.9 GB| very small, high quality loss |
| [Liberated-Qwen1.5-72B-Q4_0.gguf](https://huggingface.co/second-state/Liberated-Qwen1.5-72B-GGUF/blob/main/Liberated-Qwen1.5-72B-Q4_0.gguf) | Q4_0 | 4 | 41 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Liberated-Qwen1.5-72B-Q4_K_M.gguf](https://huggingface.co/second-state/Liberated-Qwen1.5-72B-GGUF/blob/main/Liberated-Qwen1.5-72B-Q4_K_M.gguf) | Q4_K_M | 4 | 44.1 GB| medium, balanced quality - recommended |
| [Liberated-Qwen1.5-72B-Q4_K_S.gguf](https://huggingface.co/second-state/Liberated-Qwen1.5-72B-GGUF/blob/main/Liberated-Qwen1.5-72B-Q4_K_S.gguf) | Q4_K_S | 4 | 41.9 GB| small, greater quality loss |
*Quantized with llama.cpp b2334*
|
Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-3_75bpw_exl2
|
Zoyd
| 2024-05-26T05:55:59Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-05-25T20:20:42Z |
---
library_name: transformers
license: llama3
---
**Exllamav2** quant (**exl2** / **3.75 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-2_2bpw_exl2)**</center> | <center>20886 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-2_5bpw_exl2)**</center> | <center>23200 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-3_0bpw_exl2)**</center> | <center>27269 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-3_5bpw_exl2)**</center> | <center>31359 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-3_75bpw_exl2)**</center> | <center>33395 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-4_0bpw_exl2)**</center> | <center>35426 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-4_25bpw_exl2)**</center> | <center>37478 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-5_0bpw_exl2)**</center> | <center>43559 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-6_0bpw_exl2)**</center> | <center>51958 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-6_5bpw_exl2)**</center> | <center>56019 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/failspy_Smaug-Llama-3-70B-Instruct-abliterated-v3-8_0bpw_exl2)**</center> | <center>61865 MB</center> | <center>8</center> |
# Smaug-Llama-3-70B-Instruct-abliterated-v3 Model Card
[My Jupyter "cookbook" to replicate the methodology can be found here, refined library coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb)
I'll be honest: it just kinda bothered me Smaug isn't evil enough.
This is [abacusai/Smaug-Llama-3-70B-Instruct](https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on that which was described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)' which I encourage you to read to understand more.
## Hang on, "abliteration"? Orthogonalization? Ablation? What is this?
TL;DR: This model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or misunderstand your request; it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original 70B instruct model, just with the strongest refusal directions orthogonalized out.
**TL;TL;DR;DR: It's uncensored in the purest form I can manage -- no new or changed behaviour in any other respect from the original model.**
As far as "abliteration": it's just a fun play-on-words using the original "ablation" term used in the original paper to refer to removing features, which I made up particularly to differentiate the model from "uncensored" fine-tunes.
Ablate + obliterated = Abliterated
Anyway, orthogonalization and ablation both refer to the same thing here: the refusal feature was "ablated" from the model via orthogonalization.
## A little more on the methodology, and why this is interesting
To me, ablation (or applying the methodology for the inverse, "augmentation") seems to be good for inducing/removing very specific features that you'd have to spend way too many tokens on encouraging or discouraging in your system prompt.
Instead, you just apply your system prompt in the ablation script against a blank system prompt on the same dataset and orthogonalize for the desired behaviour in the final model weights.
> Why this over fine-tuning?
Ablation is much more surgical in nature whilst also being effectively executed with a _lot_ less data than fine-tuning, which I think is its main advantage.
Its most valuable aspect is that it keeps as much of the original model's knowledge and training intact as possible, whilst removing its tendency to behave in one very specific undesirable manner (in this case, refusing user requests).
Fine tuning is still exceptionally useful and the go-to for broad behaviour changes; however, you may be able to get close to your desired behaviour with very few samples using the ablation/augmentation techniques.
It may also be a useful step to add to your model refinement: orthogonalize -> fine-tune or vice-versa.
I haven't really gotten around to exploring this model stacked with fine-tuning; I encourage others to give it a shot if they've got the capacity.
> Okay, fine, but why V3? There's no V2 70B?
Well, I released a V2 a while back for 8B under Cognitive Computations.
It ended up not being worth it to try V2 with 70B; I wanted to refine the model before wasting compute cycles on what might not even be a better model.
I am, however, quite pleased with this latest methodology; it seems to have induced fewer hallucinations.
So, to show that this is a new, fancier methodology than even that of the 8B V2, I decided to do a Microsoft and double up on my version jump because it's *such* an advancement (or so the excuse went; in actuality, it was because too many legacy but actively used Microsoft libraries checked for 'Windows 9' in the OS name to detect Windows 95/98 as one).
## Quirkiness awareness notice
This model may come with interesting quirks, with the methodology being so new. I encourage you to play with the model, and post any quirks you notice in the community tab, as that'll help us further understand what this orthogonalization has in the way of side effects.
If you manage to develop further improvements, please share! This is really the most basic way to use ablation, but there are other possibilities that I believe are as-yet unexplored.
Additionally, feel free to reach out in any way about this. I'm on the Cognitive Computations Discord, I'm watching the Community tab, reach out! I'd love to see this methodology used in other ways, and so would gladly support whoever whenever I can.
|
second-state/Qwen1.5-1.8B-Chat-GGUF
|
second-state
| 2024-05-26T05:55:56Z | 1,100 | 2 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation",
"chat",
"en",
"base_model:Qwen/Qwen1.5-1.8B-Chat",
"base_model:quantized:Qwen/Qwen1.5-1.8B-Chat",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-02-06T04:33:23Z |
---
base_model: Qwen/Qwen1.5-1.8B-Chat
license: other
license_name: tongyi-qianwen-research
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat/blob/main/LICENSE
model_creator: Qwen
model_name: Qwen1.5 1.8B Chat
quantized_by: Second State Inc.
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Qwen1.5-1.8B-Chat-GGUF
## Original Model
[Qwen/Qwen1.5-1.8B-Chat](https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat)
## Run with LlamaEdge
- LlamaEdge version: [v0.2.15](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.15) and above
- Prompt template
- Prompt type: `chatml`
- Prompt string
```text
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- Context size: `32000`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen1.5-1.8B-Chat-Q5_K_M.gguf llama-api-server.wasm -p chatml
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen1.5-1.8B-Chat-Q5_K_M.gguf llama-chat.wasm -p chatml
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Qwen1.5-1.8B-Chat-Q2_K.gguf](https://huggingface.co/second-state/Qwen1.5-1.8B-Chat-GGUF/blob/main/Qwen1.5-1.8B-Chat-Q2_K.gguf) | Q2_K | 2 | 863 MB| smallest, significant quality loss - not recommended for most purposes |
| [Qwen1.5-1.8B-Chat-Q3_K_L.gguf](https://huggingface.co/second-state/Qwen1.5-1.8B-Chat-GGUF/blob/main/Qwen1.5-1.8B-Chat-Q3_K_L.gguf) | Q3_K_L | 3 | 1.06 GB| small, substantial quality loss |
| [Qwen1.5-1.8B-Chat-Q3_K_M.gguf](https://huggingface.co/second-state/Qwen1.5-1.8B-Chat-GGUF/blob/main/Qwen1.5-1.8B-Chat-Q3_K_M.gguf) | Q3_K_M | 3 | 1.02 GB| very small, high quality loss |
| [Qwen1.5-1.8B-Chat-Q3_K_S.gguf](https://huggingface.co/second-state/Qwen1.5-1.8B-Chat-GGUF/blob/main/Qwen1.5-1.8B-Chat-Q3_K_S.gguf) | Q3_K_S | 3 | 970 MB| very small, high quality loss |
| [Qwen1.5-1.8B-Chat-Q4_0.gguf](https://huggingface.co/second-state/Qwen1.5-1.8B-Chat-GGUF/blob/main/Qwen1.5-1.8B-Chat-Q4_0.gguf) | Q4_0 | 4 | 1.12 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Qwen1.5-1.8B-Chat-Q4_K_M.gguf](https://huggingface.co/second-state/Qwen1.5-1.8B-Chat-GGUF/blob/main/Qwen1.5-1.8B-Chat-Q4_K_M.gguf) | Q4_K_M | 4 | 1.22 GB| medium, balanced quality - recommended |
| [Qwen1.5-1.8B-Chat-Q4_K_S.gguf](https://huggingface.co/second-state/Qwen1.5-1.8B-Chat-GGUF/blob/main/Qwen1.5-1.8B-Chat-Q4_K_S.gguf) | Q4_K_S | 4 | 1.16 GB| small, greater quality loss |
| [Qwen1.5-1.8B-Chat-Q5_0.gguf](https://huggingface.co/second-state/Qwen1.5-1.8B-Chat-GGUF/blob/main/Qwen1.5-1.8B-Chat-Q5_0.gguf) | Q5_0 | 5 | 1.31 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen1.5-1.8B-Chat-Q5_K_M.gguf](https://huggingface.co/second-state/Qwen1.5-1.8B-Chat-GGUF/blob/main/Qwen1.5-1.8B-Chat-Q5_K_M.gguf) | Q5_K_M | 5 | 1.38 GB| large, very low quality loss - recommended |
| [Qwen1.5-1.8B-Chat-Q5_K_S.gguf](https://huggingface.co/second-state/Qwen1.5-1.8B-Chat-GGUF/blob/main/Qwen1.5-1.8B-Chat-Q5_K_S.gguf) | Q5_K_S | 5 | 1.33 GB| large, low quality loss - recommended |
| [Qwen1.5-1.8B-Chat-Q6_K.gguf](https://huggingface.co/second-state/Qwen1.5-1.8B-Chat-GGUF/blob/main/Qwen1.5-1.8B-Chat-Q6_K.gguf) | Q6_K | 6 | 1.58 GB| very large, extremely low quality loss |
| [Qwen1.5-1.8B-Chat-Q8_0.gguf](https://huggingface.co/second-state/Qwen1.5-1.8B-Chat-GGUF/blob/main/Qwen1.5-1.8B-Chat-Q8_0.gguf) | Q8_0 | 8 | 1.96 GB| very large, extremely low quality loss - not recommended |
|
katryo/controlnet-facesynthetics-spiga-sdxl-15000
|
katryo
| 2024-05-26T05:52:29Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-05-26T04:51:43Z |
---
license: openrail++
library_name: diffusers
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
base_model: stabilityai/stable-diffusion-xl-base-1.0
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-katryo/controlnet-facesynthetics-spiga-sdxl-15000
These are ControlNet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning.
You can find some example images below.
prompt: a close-up of a man

prompt: a close-up of a woman

## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
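Until an official snippet is added, a hedged sketch of how an SDXL ControlNet checkpoint like this one is typically loaded with `diffusers`; the conditioning-image path is a placeholder and should point to a SPIGA-style face-landmark image.
```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Hedged sketch: standard SDXL + ControlNet loading pattern, assuming this repo's
# weights load as a regular ControlNetModel.
controlnet = ControlNetModel.from_pretrained(
    "katryo/controlnet-facesynthetics-spiga-sdxl-15000", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

conditioning = load_image("face_landmarks.png")  # placeholder SPIGA landmark image
image = pipe("a close-up of a man", image=conditioning).images[0]
image.save("out.png")
```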
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
godlzj/SDXL_CKPT
|
godlzj
| 2024-05-26T05:51:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-05-25T17:53:40Z |
Reprinted from https://civitai.com/models/139565?modelVersionId=294470
|
ShenRu/TT011
|
ShenRu
| 2024-05-26T05:49:58Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-26T05:49:58Z |
---
license: apache-2.0
---
|
DiederikMartens/eBERT_sa_cv_10_fold4
|
DiederikMartens
| 2024-05-26T05:48:53Z | 111 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-26T05:22:23Z |
---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_10_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_10_fold4
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4709
- F1: 0.4896
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 401 | 0.6001 | 0.3375 |
| 0.6001 | 2.0 | 802 | 0.4709 | 0.4896 |
| 0.4331 | 3.0 | 1203 | 0.4930 | 0.4776 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DiederikMartens/tsBERT_sa_cv_10_fold4
|
DiederikMartens
| 2024-05-26T05:46:46Z | 110 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:igorsterner/german-english-code-switching-bert",
"base_model:finetune:igorsterner/german-english-code-switching-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-26T05:20:36Z |
---
license: mit
base_model: igorsterner/german-english-code-switching-bert
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: tsBERT_sa_cv_10_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tsBERT_sa_cv_10_fold4
This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5768
- F1: 0.6619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 401 | 0.4331 | 0.5843 |
| 0.4074 | 2.0 | 802 | 0.4577 | 0.6317 |
| 0.2191 | 3.0 | 1203 | 0.5768 | 0.6619 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
tsang326/test2605
|
tsang326
| 2024-05-26T05:42:07Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:vilm/vinallama-7b-chat",
"base_model:adapter:vilm/vinallama-7b-chat",
"license:llama2",
"region:us"
] | null | 2024-05-26T05:41:51Z |
---
license: llama2
library_name: peft
tags:
- generated_from_trainer
base_model: vilm/vinallama-7b-chat
model-index:
- name: test2605
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test2605
This model is a fine-tuned version of [vilm/vinallama-7b-chat](https://huggingface.co/vilm/vinallama-7b-chat) on the None dataset.
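Since this repository is a PEFT adapter on top of vilm/vinallama-7b-chat, here is a hedged loading sketch; nothing beyond the repo ids is confirmed by the card.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: load the base model, then attach the PEFT adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("vilm/vinallama-7b-chat")
tokenizer = AutoTokenizer.from_pretrained("vilm/vinallama-7b-chat")
model = PeftModel.from_pretrained(base, "tsang326/test2605")
```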
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.36.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
team-sanai/zoo_3exp_v2_2epoch_5000
|
team-sanai
| 2024-05-26T05:34:38Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-26T05:27:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
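Until this section is filled in, a hedged loading sketch; the `trust_remote_code` flag is inferred from the repo's `custom_code` tag and is an assumption, not something stated in the card.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: generic causal-LM loading; trust_remote_code assumed from the tags.
tokenizer = AutoTokenizer.from_pretrained("team-sanai/zoo_3exp_v2_2epoch_5000", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("team-sanai/zoo_3exp_v2_2epoch_5000", trust_remote_code=True)
```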
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
roofdancer/plain-bart-on-presummarized-2-clusters-wcep
|
roofdancer
| 2024-05-26T05:34:24Z | 112 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:sshleifer/distilbart-cnn-6-6",
"base_model:finetune:sshleifer/distilbart-cnn-6-6",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-26T04:48:58Z |
---
license: apache-2.0
base_model: sshleifer/distilbart-cnn-6-6
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: plain-bart-on-presummarized-2-clusters-wcep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plain-bart-on-presummarized-2-clusters-wcep
This model is a fine-tuned version of [sshleifer/distilbart-cnn-6-6](https://huggingface.co/sshleifer/distilbart-cnn-6-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0775
- Rouge1: 36.3774
- Rouge2: 15.2074
- Rougel: 25.7706
- Rougelsum: 29.2593
- Gen Len: 67.6608
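A hedged usage sketch for this BART summarizer via the `pipeline` API; the input string is a placeholder for a pre-summarized article cluster, as the model name suggests.
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization", model="roofdancer/plain-bart-on-presummarized-2-clusters-wcep"
)
print(
    summarizer(
        "<pre-summarized text of a news-article cluster goes here>", max_length=80
    )[0]["summary_text"]
)
```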
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.2178 | 1.0 | 510 | 2.0873 | 36.3079 | 15.0162 | 25.5837 | 29.129 | 67.8461 |
| 1.8901 | 2.0 | 1020 | 2.0696 | 36.0914 | 15.0005 | 25.6729 | 29.2956 | 68.3451 |
| 1.7267 | 3.0 | 1530 | 2.0775 | 36.3774 | 15.2074 | 25.7706 | 29.2593 | 67.6608 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
RSFfen/distilbert-base-uncased-finetuned-imdb
|
RSFfen
| 2024-05-26T05:31:53Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-05-26T05:27:45Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4895
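A hedged usage sketch via the `fill-mask` pipeline; the example sentence is arbitrary.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="RSFfen/distilbert-base-uncased-finetuned-imdb")
for pred in fill_mask("This movie was absolutely [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```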
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6838 | 1.0 | 157 | 2.5107 |
| 2.5895 | 2.0 | 314 | 2.4504 |
| 2.531 | 3.0 | 471 | 2.4822 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
shkna1368/hazhar-hemen
|
shkna1368
| 2024-05-26T05:29:26Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-26T05:28:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
saransh03sharma/mintrec2-llama-2-7b-200-10
|
saransh03sharma
| 2024-05-26T05:24:56Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-24T18:14:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
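A minimal sketch, assuming the repository hosts a standard Llama causal-LM checkpoint that loads with 🤗 Transformers (not confirmed by the card):

```python
# Hedged sketch: model id is taken from the repo name; generation settings are illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="saransh03sharma/mintrec2-llama-2-7b-200-10",
    device_map="auto",
)
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```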
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DiederikMartens/eBERT_sa_cv_10_fold3
|
DiederikMartens
| 2024-05-26T05:22:10Z | 110 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-26T04:55:17Z |
---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_10_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_10_fold3
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5231
- F1: 0.5195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 401 | 0.5293 | 0.4312 |
| 0.5713 | 2.0 | 802 | 0.4941 | 0.4680 |
| 0.3994 | 3.0 | 1203 | 0.5231 | 0.5195 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
auchoi/unslot_practice_lora_model_5_epoch
|
auchoi
| 2024-05-26T05:08:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-26T05:07:36Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** auchoi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Sorour/mistral_cls_fomc_v3
|
Sorour
| 2024-05-26T05:02:21Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-26T04:56:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
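A minimal sketch, assuming the repository hosts a standard Mistral chat checkpoint with a chat template; the "cls_fomc" name suggests a classification fine-tune, so the prompt below is only an illustrative guess:

```python
# Hedged sketch: prompt content and decoding settings are assumptions, not from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sorour/mistral_cls_fomc_v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Classify the stance of this FOMC sentence: ..."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```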
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DiederikMartens/mBERT_sa_cv_10_fold2
|
DiederikMartens
| 2024-05-26T04:52:57Z | 111 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-26T04:26:59Z |
---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mBERT_sa_cv_10_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_sa_cv_10_fold2
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4393
- F1: 0.5954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 401 | 0.5961 | 0.4534 |
| 0.5446 | 2.0 | 802 | 0.4266 | 0.4988 |
| 0.3965 | 3.0 | 1203 | 0.4393 | 0.5954 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
keitokei1994/Llama-3-8B-shisa-2x8B-gguf
|
keitokei1994
| 2024-05-26T04:52:49Z | 5 | 0 | null |
[
"gguf",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-24T18:49:04Z |
---
license: llama3
---
# Llama-3-8B-shisa-2x8B-gguf
This is the gguf-format conversion of [Llama-3-8B-shisa-2x8B](https://huggingface.co/keitokei1994/Llama-3-8B-shisa-2x8B).
|
GENIAC-Team-Ozaki/full-sft-finetuned-stage4-iter86000-v4-cont-neftune-5
|
GENIAC-Team-Ozaki
| 2024-05-26T04:50:51Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-26T04:46:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
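A minimal sketch, assuming the checkpoint loads as a standard Llama causal LM with 🤗 Transformers (the card itself gives no usage details):

```python
# Hedged sketch: dtype, device placement, and the prompt are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GENIAC-Team-Ozaki/full-sft-finetuned-stage4-iter86000-v4-cont-neftune-5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "..."  # replace with your own prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```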
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
katryo/controlnet-facesynthetics-spiga-sdxl-10000
|
katryo
| 2024-05-26T04:49:41Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-05-26T04:06:23Z |
---
license: openrail++
library_name: diffusers
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
base_model: stabilityai/stable-diffusion-xl-base-1.0
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-katryo/controlnet-facesynthetics-spiga-sdxl-10000
These are controlnet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning.
You can find some example images below.
prompt: a close-up of a man

prompt: a close-up of a woman

## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
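Until the snippet above is filled in, a minimal sketch, assuming these weights load with diffusers' SDXL ControlNet pipeline and that the conditioning image is a facial-landmark map (as the repo name suggests):

```python
# Hedged sketch: the conditioning-image format and file name are assumptions.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "katryo/controlnet-facesynthetics-spiga-sdxl-10000", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

conditioning = load_image("landmarks.png")  # hypothetical landmark-map image
image = pipe("a close-up of a man", image=conditioning, num_inference_steps=30).images[0]
image.save("out.png")
```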
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
amir1226/ppo-LunarLander-v2-rl
|
amir1226
| 2024-05-26T04:47:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-26T04:47:22Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.79 +/- 19.42
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
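A minimal sketch, assuming the checkpoint follows the usual SB3 naming inside the repo (the exact filename is a guess):

```python
# Hedged sketch: filename inside the repo is hypothetical.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="amir1226/ppo-LunarLander-v2-rl",
    filename="ppo-LunarLander-v2.zip",  # hypothetical filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```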
|
DiederikMartens/gBERT_sa_cv_10_fold2
|
DiederikMartens
| 2024-05-26T04:46:38Z | 111 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-26T04:22:59Z |
---
license: mit
base_model: google-bert/bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: gBERT_sa_cv_10_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gBERT_sa_cv_10_fold2
This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4944
- F1: 0.6926
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 401 | 0.3225 | 0.6749 |
| 0.3982 | 2.0 | 802 | 0.3810 | 0.6846 |
| 0.1835 | 3.0 | 1203 | 0.4944 | 0.6926 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
poojapremnath/SnakeCLEF-resnet
|
poojapremnath
| 2024-05-26T04:45:16Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-26T04:35:43Z |
---
license: apache-2.0
---
|
mradermacher/Daredevil-8B-GGUF
|
mradermacher
| 2024-05-26T04:37:44Z | 59 | 1 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:mlabonne/Daredevil-8B",
"base_model:quantized:mlabonne/Daredevil-8B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-05-26T03:36:18Z |
---
base_model: mlabonne/Daredevil-8B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/mlabonne/Daredevil-8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
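For a quick local test, a minimal sketch, assuming llama-cpp-python and one of the quants from the table below already downloaded:

```python
# Hedged sketch: pick whichever GGUF file you actually downloaded; settings are illustrative.
from llama_cpp import Llama

llm = Llama(model_path="Daredevil-8B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write one sentence about superheroes.", max_tokens=64)
print(out["choices"][0]["text"])
```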
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-GGUF/resolve/main/Daredevil-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-GGUF/resolve/main/Daredevil-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-GGUF/resolve/main/Daredevil-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-GGUF/resolve/main/Daredevil-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-GGUF/resolve/main/Daredevil-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-GGUF/resolve/main/Daredevil-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-GGUF/resolve/main/Daredevil-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-GGUF/resolve/main/Daredevil-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-GGUF/resolve/main/Daredevil-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-GGUF/resolve/main/Daredevil-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-GGUF/resolve/main/Daredevil-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-GGUF/resolve/main/Daredevil-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-GGUF/resolve/main/Daredevil-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-GGUF/resolve/main/Daredevil-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-GGUF/resolve/main/Daredevil-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DiederikMartens/eBERT_sa_cv_10_fold1
|
DiederikMartens
| 2024-05-26T04:27:49Z | 110 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-26T04:00:47Z |
---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_10_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_10_fold1
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4772
- F1: 0.4637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 401 | 0.6262 | 0.2953 |
| 0.6031 | 2.0 | 802 | 0.5669 | 0.4470 |
| 0.4469 | 3.0 | 1203 | 0.4772 | 0.4637 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
khnhlinh/gpt-on-hugging-face
|
khnhlinh
| 2024-05-26T04:27:35Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-26T04:27:35Z |
---
license: apache-2.0
---
|
DiederikMartens/gBERT_sa_cv_10_fold1
|
DiederikMartens
| 2024-05-26T04:22:46Z | 111 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-26T03:59:05Z |
---
license: mit
base_model: google-bert/bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: gBERT_sa_cv_10_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gBERT_sa_cv_10_fold1
This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3789
- F1: 0.6518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 401 | 0.4394 | 0.5390 |
| 0.3976 | 2.0 | 802 | 0.3789 | 0.6518 |
| 0.1916 | 3.0 | 1203 | 0.4834 | 0.6415 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
stablediffusionapi/fluently-xl
|
stablediffusionapi
| 2024-05-26T04:21:03Z | 29 | 2 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-05-26T04:17:58Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Fluently XL API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and change **model_id** to "fluently-xl".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/fluently-xl)
Model link: [View model](https://modelslab.com/models/fluently-xl)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "fluently-xl",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
TroyDoesAI/Contextual-Llama3-8B-RAG
|
TroyDoesAI
| 2024-05-26T04:18:18Z | 10 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-26T04:10:52Z |
---
license: cc-by-nd-4.0
---
|
Anish13/results_sratch
|
Anish13
| 2024-05-26T04:17:37Z | 36 | 0 |
transformers
|
[
"transformers",
"safetensors",
"transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-05-26T02:49:14Z |
---
tags:
- generated_from_trainer
model-index:
- name: results_sratch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_sratch
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 123
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 30
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 3.4563 | 5.5310 | 10000 | 3.3200 |
| 2.7398 | 11.0619 | 20000 | 2.7421 |
| 2.441 | 16.5929 | 30000 | 2.4991 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Kaballas/Kaballas
|
Kaballas
| 2024-05-26T04:14:19Z | 36 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-26T04:06:29Z |
---
license: apache-2.0
---
|
cti-ttp-18/ttp-extraction-llama
|
cti-ttp-18
| 2024-05-26T04:14:07Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-26T03:49:26Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
mrovejaxd/FNST_trad_j
|
mrovejaxd
| 2024-05-26T04:10:48Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-26T02:03:33Z |
---
base_model: dccuchile/bert-base-spanish-wwm-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: FNST_trad_j
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FNST_trad_j
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6540
- Accuracy: 0.6525
- F1: 0.6178
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 1.1058 | 1.0 | 1500 | 1.0564 | 0.5442 | 0.3843 |
| 0.9559 | 2.0 | 3000 | 0.9522 | 0.585 | 0.5503 |
| 0.8789 | 3.0 | 4500 | 0.8843 | 0.61 | 0.5733 |
| 0.8292 | 4.0 | 6000 | 0.8614 | 0.6167 | 0.5734 |
| 0.7807 | 5.0 | 7500 | 0.8519 | 0.62 | 0.5896 |
| 0.7559 | 6.0 | 9000 | 0.8648 | 0.6283 | 0.5965 |
| 0.7098 | 7.0 | 10500 | 0.8579 | 0.63 | 0.5961 |
| 0.6703 | 8.0 | 12000 | 0.8536 | 0.6417 | 0.6029 |
| 0.6114 | 9.0 | 13500 | 0.8686 | 0.6358 | 0.5997 |
| 0.611 | 10.0 | 15000 | 0.8948 | 0.6342 | 0.6045 |
| 0.5614 | 11.0 | 16500 | 0.9173 | 0.6342 | 0.6046 |
| 0.515 | 12.0 | 18000 | 0.9289 | 0.6425 | 0.6089 |
| 0.5107 | 13.0 | 19500 | 0.9581 | 0.64 | 0.6052 |
| 0.4691 | 14.0 | 21000 | 1.0099 | 0.6433 | 0.6091 |
| 0.4476 | 15.0 | 22500 | 1.0543 | 0.6458 | 0.6108 |
| 0.398 | 16.0 | 24000 | 1.1170 | 0.6425 | 0.6051 |
| 0.3828 | 17.0 | 25500 | 1.1585 | 0.6517 | 0.6102 |
| 0.3567 | 18.0 | 27000 | 1.2252 | 0.6475 | 0.6114 |
| 0.3334 | 19.0 | 28500 | 1.2827 | 0.6675 | 0.6317 |
| 0.2982 | 20.0 | 30000 | 1.4256 | 0.6517 | 0.6257 |
| 0.2734 | 21.0 | 31500 | 1.4591 | 0.6583 | 0.6305 |
| 0.2556 | 22.0 | 33000 | 1.5516 | 0.66 | 0.6263 |
| 0.2409 | 23.0 | 34500 | 1.6793 | 0.6592 | 0.6219 |
| 0.2226 | 24.0 | 36000 | 1.8157 | 0.66 | 0.6218 |
| 0.1971 | 25.0 | 37500 | 1.9089 | 0.6575 | 0.6241 |
| 0.1832 | 26.0 | 39000 | 2.0406 | 0.6558 | 0.6300 |
| 0.1921 | 27.0 | 40500 | 2.1448 | 0.6583 | 0.6254 |
| 0.1496 | 28.0 | 42000 | 2.2888 | 0.6458 | 0.6136 |
| 0.1538 | 29.0 | 43500 | 2.3520 | 0.66 | 0.6241 |
| 0.1558 | 30.0 | 45000 | 2.4748 | 0.6492 | 0.6207 |
| 0.1409 | 31.0 | 46500 | 2.5126 | 0.6542 | 0.6175 |
| 0.119 | 32.0 | 48000 | 2.6540 | 0.6525 | 0.6178 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Mantis-VL/mantis-8b-idefics2-video-eval-20k_2048
|
Mantis-VL
| 2024-05-26T04:08:26Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"idefics2",
"image-text-to-text",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"base_model:finetune:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-05-19T09:43:11Z |
---
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: mantis-8b-idefics2-video-eval-20k_2048
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/dongfu/Mantis/runs/f0l8j9ep)
# mantis-8b-idefics2-video-eval-20k_2048
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
QuangDuy/whisper-large-v3-common_voice
|
QuangDuy
| 2024-05-26T04:07:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-26T04:07:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
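A minimal sketch, assuming the repository contains full Whisper weights usable with the automatic-speech-recognition pipeline (not just adapter files); the card does not confirm this:

```python
# Hedged sketch: the audio file name is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="QuangDuy/whisper-large-v3-common_voice",
    chunk_length_s=30,
)
print(asr("sample.wav")["text"])  # sample.wav is a placeholder audio file
```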
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SergeiAi/ppo-LunarLander-v2
|
SergeiAi
| 2024-05-26T04:06:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-26T04:06:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 205.60 +/- 46.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
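A minimal sketch, assuming the usual SB3 checkpoint layout (the filename inside the repo is a guess), here focused on re-evaluating the published mean reward:

```python
# Hedged sketch: filename is hypothetical; evaluation settings are illustrative.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub("SergeiAi/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")  # hypothetical filename
model = PPO.load(checkpoint)
env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```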
|
MVRL/croma-large
|
MVRL
| 2024-05-26T04:04:10Z | 51 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-05-26T04:02:43Z |
---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
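A minimal sketch, assuming the defining Python class is available locally (the mixin stores only weights and config on the Hub); the class and package names below are hypothetical:

```python
# Hedged sketch: `CROMA` and its package are placeholder names, not published APIs.
from my_croma_package import CROMA  # hypothetical import

model = CROMA.from_pretrained("MVRL/croma-large")
model.eval()
```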
|
rupesh2009/tiny-chatbot-dpo
|
rupesh2009
| 2024-05-26T03:54:47Z | 6 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-26T03:52:41Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: tiny-chatbot-dpo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-chatbot-dpo
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
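A minimal sketch, assuming the repo stores a PEFT (LoRA) adapter on top of TinyLlama-1.1B-Chat rather than fully merged weights:

```python
# Hedged sketch: adapter layout and chat usage are assumptions, not stated in the card.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("rupesh2009/tiny-chatbot-dpo", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

messages = [{"role": "user", "content": "hi"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(input_ids, max_new_tokens=64)[0], skip_special_tokens=True))
```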
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
elliotthwang/KimLan-Mistral-7B-Instruct-v0.3
|
elliotthwang
| 2024-05-26T03:45:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-25T13:50:56Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
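A minimal sketch, assuming the checkpoint loads with Unsloth's FastLanguageModel (the card's tags suggest it was trained with Unsloth):

```python
# Hedged sketch: sequence length, 4-bit loading, and the prompt are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="elliotthwang/KimLan-Mistral-7B-Instruct-v0.3",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer(["Introduce yourself."], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```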
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sorour/cls_fomc_mistral_v1
|
Sorour
| 2024-05-26T03:41:27Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T03:20:11Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- generator
model-index:
- name: cls_fomc_mistral_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cls_fomc_mistral_v1
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
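As a minimal sketch, the hyperparameters above map roughly onto `TrainingArguments` as follows (the output directory is hypothetical and `fp16=True` stands in for Native AMP):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="cls_fomc_mistral_v1",   # hypothetical
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,      # 2 x 4 = total train batch size 8
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=2,
    fp16=True,                          # "Native AMP" mixed precision
)
```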
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5623 | 1.2903 | 20 | 0.6185 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
armabird/EPonyAndOOO
|
armabird
| 2024-05-26T03:39:45Z | 0 | 1 | null |
[
"StableDiffusionXL",
"PonyDiffusionXL",
"en",
"license:other",
"region:us"
] | null | 2024-05-25T18:17:45Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
tags:
- StableDiffusionXL
- PonyDiffusionXL
---
# About the model
- This model was created by merging the following two model files.<br>
1. ebara_pony_2<br>https://huggingface.co/tsukihara/xl_model
2. ooo_beta71<br>https://civitai.com/models/179340?modelVersionId=407892
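As an illustration only of what "merging" typically means here (a weight-space average; the mix ratio and local file names below are hypothetical, not the author's actual recipe):
```python
from safetensors.torch import load_file, save_file

alpha = 0.5  # hypothetical mix ratio, not the author's actual recipe
sd_a = load_file("ebara_pony_2.safetensors")  # assumed local checkpoint names
sd_b = load_file("ooo_beta71.safetensors")
# simple linear interpolation over the keys the two checkpoints share
merged = {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a if k in sd_b}
save_file(merged, "EPonyAndOOO.safetensors")
```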
# License
- Follow these licenses.<br>
1. [Pony Diffusion V6 XL](https://civitai.com/models/257749/pony-diffusion-v6-xl)
2. [OOO License](https://civitai.com/models/license/407892)
3. [Stable Diffusion XL 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
4. [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/)
|
suthanhcong/movie_summarize_model
|
suthanhcong
| 2024-05-26T03:31:44Z | 109 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-26T03:31:28Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: movie_summarize_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# movie_summarize_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3072
- Rouge1: 0.1621
- Rouge2: 0.0398
- Rougel: 0.1305
- Rougelsum: 0.1304
- Gen Len: 18.9634
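A minimal inference sketch (the pipeline task and plot text are assumptions for illustration, not from the original card):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="suthanhcong/movie_summarize_model")
plot = "A young farm boy joins a band of rebels to rescue a princess and bring down a galactic empire."  # hypothetical input
print(summarizer(plot, max_length=20, min_length=5)[0]["summary_text"])
```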
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.5827 | 1.0 | 573 | 3.3072 | 0.1621 | 0.0398 | 0.1305 | 0.1304 | 18.9634 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
JinbiaoZhu/gemma-2b-it-QLoRA-RobotPlanning-v2
|
JinbiaoZhu
| 2024-05-26T03:29:18Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-25T06:05:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rupesh2009/sft-tiny-chatbot
|
rupesh2009
| 2024-05-26T03:14:08Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-26T03:12:46Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
euiyulsong/BrierPC_correct
|
euiyulsong
| 2024-05-26T03:06:23Z | 80 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"orpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-26T03:02:13Z |
---
library_name: transformers
tags:
- trl
- orpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF
|
mradermacher
| 2024-05-26T03:05:54Z | 16 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-3",
"70b",
"smaug",
"lumimaid",
"tess",
"arimas",
"breadcrums",
"en",
"base_model:ryzen88/Llama-3-70b-Arimas-story-RP-V1",
"base_model:quantized:ryzen88/Llama-3-70b-Arimas-story-RP-V1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-25T13:40:03Z |
---
base_model: ryzen88/Llama-3-70b-Arimas-story-RP-V1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- llama-3
- 70b
- smaug
- lumimaid
- tess
- arimas
- breadcrums
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ryzen88/Llama-3-70b-Arimas-story-RP-V1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
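A minimal sketch of concatenating the split files in order before use (using the Q6_K part names from the table below):
```python
import shutil

parts = [
    "Llama-3-70b-Arimas-story-RP-V1.Q6_K.gguf.part1of2",
    "Llama-3-70b-Arimas-story-RP-V1.Q6_K.gguf.part2of2",
]
with open("Llama-3-70b-Arimas-story-RP-V1.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # append each part in order
```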
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF/resolve/main/Llama-3-70b-Arimas-story-RP-V1.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF/resolve/main/Llama-3-70b-Arimas-story-RP-V1.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF/resolve/main/Llama-3-70b-Arimas-story-RP-V1.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF/resolve/main/Llama-3-70b-Arimas-story-RP-V1.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF/resolve/main/Llama-3-70b-Arimas-story-RP-V1.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF/resolve/main/Llama-3-70b-Arimas-story-RP-V1.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF/resolve/main/Llama-3-70b-Arimas-story-RP-V1.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF/resolve/main/Llama-3-70b-Arimas-story-RP-V1.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF/resolve/main/Llama-3-70b-Arimas-story-RP-V1.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF/resolve/main/Llama-3-70b-Arimas-story-RP-V1.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF/resolve/main/Llama-3-70b-Arimas-story-RP-V1.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF/resolve/main/Llama-3-70b-Arimas-story-RP-V1.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF/resolve/main/Llama-3-70b-Arimas-story-RP-V1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF/resolve/main/Llama-3-70b-Arimas-story-RP-V1.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF/resolve/main/Llama-3-70b-Arimas-story-RP-V1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70b-Arimas-story-RP-V1-GGUF/resolve/main/Llama-3-70b-Arimas-story-RP-V1.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
QuantFactory/pair-preference-model-LLaMA3-8B-GGUF
|
QuantFactory
| 2024-05-26T03:05:22Z | 39 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"conversational",
"text-generation",
"arxiv:2405.07863",
"base_model:RLHFlow/pair-preference-model-LLaMA3-8B",
"base_model:quantized:RLHFlow/pair-preference-model-LLaMA3-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-24T15:24:15Z |
---
license: llama3
base_model: RLHFlow/pair-preference-model-LLaMA3-8B
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- conversational
---
# pair-preference-model-LLaMA3-8B-GGUF
This is a quantized version of [RLHFlow/pair-preference-model-LLaMA3-8B](https://huggingface.co/RLHFlow/pair-preference-model-LLaMA3-8B), created using llama.cpp.
# Model Description
This preference model is trained from [LLaMA3-8B-it](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) with the training script at [Reward Modeling](https://github.com/RLHFlow/RLHF-Reward-Modeling/tree/pm_dev/pair-pm).
The dataset is [RLHFlow/pair_preference_model_dataset](https://huggingface.co/datasets/RLHFlow/pair_preference_model_dataset). On RewardBench it achieves Chat 98.6, Chat-Hard 65.8, Safety 89.6, and Reasoning 94.9.
See our paper [RLHF Workflow: From Reward Modeling to Online RLHF](https://arxiv.org/abs/2405.07863) for more details of this model.
## Serving the RM
Here is an example of using the preference model to rank a pair of responses. For n > 2 responses, it is recommended to use a tournament-style ranking strategy to select the best response, so that the complexity stays linear in n (a sketch follows the code below).
```python
import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "RLHFlow/pair-preference-model-LLaMA3-8B"  # preference model checkpoint
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2"
).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
tokenizer_plain = AutoTokenizer.from_pretrained(model_path, use_fast=True)
tokenizer_plain.chat_template = "\n{% for message in messages %}{% if loop.index0 % 2 == 0 %}\n\n<turn> user\n {{ message['content'] }}{% else %}\n\n<turn> assistant\n {{ message['content'] }}{% endif %}{% endfor %}\n\n\n"

prompt_template = "[CONTEXT] {context} [RESPONSE A] {response_A} [RESPONSE B] {response_B} \n"
token_id_A = tokenizer.encode("A", add_special_tokens=False)
token_id_B = tokenizer.encode("B", add_special_tokens=False)
assert len(token_id_A) == 1 and len(token_id_B) == 1
token_id_A = token_id_A[0]
token_id_B = token_id_B[0]
temperature = 1.0

model.eval()
response_chosen = "BBBB"
response_rejected = "CCCC"
## We can also handle multi-turn conversations.
instruction = [{"role": "user", "content": ...},
               {"role": "assistant", "content": ...},
               {"role": "user", "content": ...},
]
context = tokenizer_plain.apply_chat_template(instruction, tokenize=False)
responses = [response_chosen, response_rejected]
probs_chosen = []

for chosen_position in [0, 1]:
    # swap the order of the two responses to mitigate position bias
    response_A = responses[chosen_position]
    response_B = responses[1 - chosen_position]
    prompt = prompt_template.format(context=context, response_A=response_A, response_B=response_B)
    message = [
        {"role": "user", "content": prompt},
    ]
    input_ids = tokenizer.encode(tokenizer.apply_chat_template(message, tokenize=False).replace(tokenizer.bos_token, ""), return_tensors='pt', add_special_tokens=False).cuda()
    with torch.no_grad():
        output = model(input_ids)
    logit_A = output.logits[0, -1, token_id_A].item()
    logit_B = output.logits[0, -1, token_id_B].item()
    # softmax over the two option logits gives the preference probability
    Z = np.exp(logit_A / temperature) + np.exp(logit_B / temperature)
    logit_chosen = [logit_A, logit_B][chosen_position]
    prob_chosen = np.exp(logit_chosen / temperature) / Z
    probs_chosen.append(prob_chosen)

avg_prob_chosen = np.mean(probs_chosen)
correct = 0.5 if avg_prob_chosen == 0.5 else float(avg_prob_chosen > 0.5)
print(correct)
```
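For n > 2 candidates, a sketch of the tournament-style ranking mentioned above; `compare_pair` is a hypothetical helper that wraps the position-debiased scoring loop and returns the probability that its first response wins:
```python
def rank_tournament(context, responses, compare_pair):
    """Pick the best of n responses with n - 1 pairwise comparisons."""
    best = responses[0]
    for challenger in responses[1:]:
        # keep whichever response wins the head-to-head comparison
        if compare_pair(context, challenger, best) > 0.5:
            best = challenger
    return best
```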
|
LarryAIDraw/aidxlv05_neg
|
LarryAIDraw
| 2024-05-26T03:05:03Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-26T02:58:15Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/144327?modelVersionId=195614
|
LarryAIDraw/SimplePositiveXLv2
|
LarryAIDraw
| 2024-05-26T03:03:45Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-26T02:57:05Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/118758/simplepositivexl?modelVersionId=182974
|
LarryAIDraw/unaestheticXL_bp5
|
LarryAIDraw
| 2024-05-26T03:03:23Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-26T02:55:23Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/119032?modelVersionId=480651
|
leungchunghong/Phi-3-mini-4k-instruct-Q4_K_M-GGUF
|
leungchunghong
| 2024-05-26T03:02:10Z | 2 | 0 | null |
[
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-05-26T03:02:03Z |
---
language:
- en
license: mit
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
inference:
parameters:
temperature: 0.0
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# leungchunghong/Phi-3-mini-4k-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-4k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo leungchunghong/Phi-3-mini-4k-instruct-Q4_K_M-GGUF --model phi-3-mini-4k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo leungchunghong/Phi-3-mini-4k-instruct-Q4_K_M-GGUF --model phi-3-mini-4k-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m phi-3-mini-4k-instruct-q4_k_m.gguf -n 128
```
|
Raneechu/textbookbig10_ft5
|
Raneechu
| 2024-05-26T03:02:06Z | 4 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-05-26T03:02:03Z |
---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: textbookbig10_ft5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# textbookbig10_ft5
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1
## Training procedure
### Framework versions
- PEFT 0.6.2
|
nepBros/nepali_news_classifier
|
nepBros
| 2024-05-26T03:01:33Z | 114 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"ne",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-25T15:44:33Z |
---
license: mit
language:
- ne
metrics:
- accuracy
widget:
- text: >-
काठमाडौं शिक्षा विज्ञान प्रविधि मन्त्रालय तयार पार संघीय शिक्षा ऐन मस्
शिक्षक सरुवा व्यवस्थान प्रस्ताव गर यस्तै मस् विभिन्न अवस्था शिक्षक सरुवा नहु
व्यवस्था गर मस् शिक्षक स्थायी सेवा अवधि वर्ष नपुग अनिवार्य अवकास वर्ष बाँ
सरुवा भई कार्य विद्यालय कम्ती शैक्षिक वर्ष पूरा नगर शिक्षक सरुवा नहु
प्रस्ताव गर विद्यमान ऐन विद्यालय वर्ष सेवा अवधि पुरा स्थायी शिक्षक जिल्ला
शिक्षा अधिकारी जिल्ला क्षेत्रीय निर्देशक क्षेत्र शिक्षा विभाग देशैभरी सरुवा
सक् व्यवस्था यस परिवर्तन विभिन्न अवस्था शिक्षक सरुवा नहु प्रस्ताव मस् समेट
व्यवस्थापन समिति सहमती सम्बन्धित स्थानीय तह पालि भित्र विद्यालय कार्य शिक्षक
सरुवा सक् मस् उल्लेख जिल्ला भित्र अन्तर स्थानीय तहबीच शिक्षक सरुवा
व्यवस्थापन समिति स्थानीय तह सहमती जिल्ला तह शिक्षा सम्बन्धि मामिला हेर्
कार्यालय प्रस्ताव सरुवा विवरण प्रदेश शिक्षा विभाग पठाउ
model-index:
- name: nepBros/nepali_news_classifier
results:
- task:
type: text-classification # Required. Example: automatic-speech-recognition
name: classify nepali news # Optional. Example: Speech Recognition
dataset:
type: text_data # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: nepali_news # Required. A pretty name for the dataset. Example: Common Voice (French)
metrics:
- type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
value: 91.16213442791175
---
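A minimal usage sketch (assuming the standard `transformers` text-classification pipeline; the input is the opening of the widget example above):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="nepBros/nepali_news_classifier")
print(clf("काठमाडौं शिक्षा विज्ञान प्रविधि मन्त्रालय तयार पार संघीय शिक्षा ऐन"))
```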
|
Khieminem/ip102-yolov8-imgcls
|
Khieminem
| 2024-05-26T02:52:16Z | 5 | 0 |
transformers
|
[
"transformers",
"onnx",
"yolos",
"image-classification",
"en",
"dataset:nqait05/ip102",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-04-26T13:30:40Z |
---
license: apache-2.0
datasets:
- nqait05/ip102
language:
- en
pipeline_tag: image-classification
---
Just a simple model using YOLOv8 for the image classification task on the IP102 dataset, with 20 classes selected based on the number of images per class.
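A minimal inference sketch (assuming the checkpoint loads with the Ultralytics API; the file and image names are hypothetical):
```python
from ultralytics import YOLO

model = YOLO("ip102_cls.onnx")    # hypothetical exported checkpoint
result = model("insect.jpg")[0]   # classify a single image
top1 = result.probs.top1          # index of the most likely class
print(result.names[top1], float(result.probs.top1conf))
```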
|
Raneechu/textbookbig10_ft4
|
Raneechu
| 2024-05-26T02:47:07Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-05-26T02:47:03Z |
---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: textbookbig10_ft4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# textbookbig10_ft4
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FrankL/storytellerLM-v0.1
|
FrankL
| 2024-05-26T02:46:30Z | 172 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-25T07:41:16Z |
---
library_name: transformers
tags: []
---
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** FrankL
- **Language(s) (NLP):** English
### Direct Use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = 'cuda'
model = AutoModelForCausalLM.from_pretrained('FrankL/storytellerLM-v0.1', trust_remote_code=True, torch_dtype=torch.float16)
model = model.to(device)
tokenizer = AutoTokenizer.from_pretrained('FrankL/storytellerLM-v0.1', trust_remote_code=True)

def inference(
    model: AutoModelForCausalLM,
    tokenizer: AutoTokenizer,
    input_text: str = "Once upon a time, ",
    max_new_tokens: int = 16
):
    # Tokenize the prompt and move it to the model's device
    inputs = tokenizer(input_text, return_tensors="pt").to(device)
    outputs = model.generate(
        **inputs,
        pad_token_id=tokenizer.eos_token_id,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_k=40,
        top_p=0.95,
        temperature=0.8
    )
    generated_text = tokenizer.decode(
        outputs[0],
        skip_special_tokens=True
    )
    print(generated_text)

inference(model, tokenizer)
```
|
Naveenkumar2002/Bart-QnA-Base
|
Naveenkumar2002
| 2024-05-26T02:39:02Z | 178 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-26T02:37:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
G-R-A-V-I-T-Y/long-t5-local-base-ARv1
|
G-R-A-V-I-T-Y
| 2024-05-26T02:36:26Z | 115 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/long-t5-local-base",
"base_model:finetune:google/long-t5-local-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-25T23:45:23Z |
---
license: apache-2.0
base_model: google/long-t5-local-base
tags:
- generated_from_trainer
model-index:
- name: long-t5-local-base-ARv1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# long-t5-local-base-ARv1
This model is a fine-tuned version of [google/long-t5-local-base](https://huggingface.co/google/long-t5-local-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9303
- Exact Match: 18.0
- Gen Len: 3.38
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| No log | 1.0 | 7 | 3.4004 | 14.0 | 3.86 |
| 2.7206 | 2.0 | 14 | 3.1925 | 8.0 | 3.66 |
| 2.6501 | 3.0 | 21 | 2.9867 | 8.0 | 3.7 |
| 2.6501 | 4.0 | 28 | 2.8576 | 12.0 | 4.58 |
| 1.9849 | 5.0 | 35 | 2.9078 | 12.0 | 4.52 |
| 2.0193 | 6.0 | 42 | 2.8173 | 8.0 | 3.84 |
| 2.0193 | 7.0 | 49 | 2.7735 | 16.0 | 3.42 |
| 1.6108 | 8.0 | 56 | 2.5993 | 12.0 | 3.82 |
| 1.8323 | 9.0 | 63 | 2.5879 | 12.0 | 3.92 |
| 1.4861 | 10.0 | 70 | 2.7203 | 16.0 | 3.4 |
| 1.4861 | 11.0 | 77 | 2.9902 | 24.0 | 3.1 |
| 1.425 | 12.0 | 84 | 2.7667 | 14.0 | 3.36 |
| 1.0387 | 13.0 | 91 | 2.6547 | 18.0 | 3.42 |
| 1.0387 | 14.0 | 98 | 2.7072 | 18.0 | 3.34 |
| 1.0793 | 15.0 | 105 | 2.8158 | 12.0 | 3.58 |
| 1.1969 | 16.0 | 112 | 2.9404 | 14.0 | 3.32 |
| 1.1969 | 17.0 | 119 | 2.8512 | 14.0 | 3.3 |
| 1.15 | 18.0 | 126 | 2.7513 | 18.0 | 3.68 |
| 1.2024 | 19.0 | 133 | 2.7124 | 16.0 | 3.48 |
| 1.3331 | 20.0 | 140 | 2.7484 | 16.0 | 3.4 |
| 1.3331 | 21.0 | 147 | 2.8289 | 18.0 | 3.44 |
| 1.1469 | 22.0 | 154 | 2.9873 | 14.0 | 3.36 |
| 1.5639 | 23.0 | 161 | 3.0321 | 18.0 | 3.4 |
| 1.5639 | 24.0 | 168 | 3.0117 | 14.0 | 3.3 |
| 0.8542 | 25.0 | 175 | 2.8331 | 16.0 | 3.34 |
| 0.9789 | 26.0 | 182 | 2.7876 | 20.0 | 3.36 |
| 0.9789 | 27.0 | 189 | 2.7820 | 20.0 | 3.36 |
| 0.8853 | 28.0 | 196 | 2.8082 | 18.0 | 3.38 |
| 0.9126 | 29.0 | 203 | 2.8316 | 16.0 | 3.36 |
| 1.0543 | 30.0 | 210 | 2.8449 | 18.0 | 3.64 |
| 1.0543 | 31.0 | 217 | 2.8034 | 8.0 | 3.62 |
| 1.0683 | 32.0 | 224 | 2.8115 | 14.0 | 3.46 |
| 0.951 | 33.0 | 231 | 2.9019 | 18.0 | 3.34 |
| 0.951 | 34.0 | 238 | 3.0115 | 18.0 | 3.24 |
| 0.8315 | 35.0 | 245 | 3.0392 | 18.0 | 3.24 |
| 1.1548 | 36.0 | 252 | 3.0643 | 18.0 | 3.36 |
| 1.1548 | 37.0 | 259 | 3.0031 | 16.0 | 3.42 |
| 0.7813 | 38.0 | 266 | 2.9801 | 18.0 | 3.48 |
| 0.671 | 39.0 | 273 | 2.9622 | 18.0 | 3.48 |
| 1.1771 | 40.0 | 280 | 2.9049 | 18.0 | 3.46 |
| 1.1771 | 41.0 | 287 | 2.9042 | 20.0 | 3.56 |
| 0.5959 | 42.0 | 294 | 2.9598 | 18.0 | 3.48 |
| 1.1583 | 43.0 | 301 | 2.9936 | 18.0 | 3.44 |
| 1.1583 | 44.0 | 308 | 3.0072 | 18.0 | 3.44 |
| 0.5728 | 45.0 | 315 | 3.0003 | 18.0 | 3.44 |
| 0.7237 | 46.0 | 322 | 3.0093 | 16.0 | 3.4 |
| 0.7237 | 47.0 | 329 | 2.9688 | 18.0 | 3.42 |
| 0.7295 | 48.0 | 336 | 2.9533 | 18.0 | 3.38 |
| 0.5627 | 49.0 | 343 | 2.9357 | 18.0 | 3.36 |
| 0.6489 | 50.0 | 350 | 2.9317 | 18.0 | 3.4 |
| 0.6489 | 51.0 | 357 | 2.9339 | 18.0 | 3.4 |
| 1.0427 | 52.0 | 364 | 2.9256 | 18.0 | 3.4 |
| 0.9156 | 53.0 | 371 | 2.9220 | 18.0 | 3.4 |
| 0.9156 | 54.0 | 378 | 2.9091 | 18.0 | 3.38 |
| 0.4748 | 55.0 | 385 | 2.9036 | 18.0 | 3.36 |
| 0.5616 | 56.0 | 392 | 2.8998 | 18.0 | 3.36 |
| 0.5616 | 57.0 | 399 | 2.9128 | 18.0 | 3.36 |
| 0.4836 | 58.0 | 406 | 2.9205 | 18.0 | 3.36 |
| 0.6498 | 59.0 | 413 | 2.9282 | 18.0 | 3.36 |
| 0.615 | 60.0 | 420 | 2.9303 | 18.0 | 3.38 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1
- Datasets 2.19.1
- Tokenizers 0.19.1
|
kid1802/huggy_test
|
kid1802
| 2024-05-26T02:14:07Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-05-26T02:14:02Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: kid1802/huggy_test
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
backyardai/LemonadeRP-4.5.3-GGUF
|
backyardai
| 2024-05-26T02:13:26Z | 575 | 1 | null |
[
"gguf",
"base_model:KatyTheCutie/LemonadeRP-4.5.3",
"base_model:quantized:KatyTheCutie/LemonadeRP-4.5.3",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T03:01:40Z |
---
base_model: KatyTheCutie/LemonadeRP-4.5.3
model_name: LemonadeRP-4.5.3-GGUF
quantized_by: brooketh
---
<img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;">
**<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>**
<p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p>
<p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p>
***
# LemonadeRP 4.5.3
- **Creator:** [KatyTheCutie](https://huggingface.co/KatyTheCutie/)
- **Original:** [LemonadeRP 4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
- **Date Created:** 2024-05-25
- **Trained Context:** 4096 tokens
- **Description:** 7B roleplay-focused model; creativity and less clichéd writing are the focus of this merge.
***
## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware.
GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
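As a toy illustration of the idea (per-tensor symmetric quantization; llama.cpp's actual k-quant formats are considerably more elaborate):
```python
import numpy as np

def quantize(w: np.ndarray, bits: int = 8):
    """Toy symmetric per-tensor quantization, for illustration only."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for 8 bits
    scale = np.abs(w).max() / qmax          # one scale for the whole tensor
    q = np.round(w / scale).clip(-qmax - 1, qmax).astype(np.int8)
    return q, scale

w = np.random.randn(16).astype(np.float32)
q, scale = quantize(w)
print(np.abs(w - q.astype(np.float32) * scale).max())  # small reconstruction error
```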
***
<img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;">
## Backyard AI
- Free, local AI chat application.
- One-click installation on Mac and PC.
- Automatically use GPU for maximum speed.
- Built-in model manager.
- High-quality character hub.
- Zero-config desktop-to-mobile tethering.
Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.
**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**
***
|
Anish13/results_model8
|
Anish13
| 2024-05-26T02:08:55Z | 37 | 0 |
transformers
|
[
"transformers",
"safetensors",
"transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-05-25T23:14:45Z |
---
tags:
- generated_from_trainer
model-index:
- name: results_model8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_model8
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9686
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 30
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 3.3262 | 0.5570 | 10000 | 3.3012 |
| 3.0829 | 1.1141 | 20000 | 3.1175 |
| 2.9737 | 1.6711 | 30000 | 3.0091 |
| 2.8584 | 2.2282 | 40000 | 2.9686 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
empathie/Qwen1.5-0.5B-Chat-experiment-2
|
empathie
| 2024-05-26T02:07:47Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-25T03:04:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
atgarcia/wav2vec2part7
|
atgarcia
| 2024-05-26T02:04:31Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-05-26T01:37:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mrovejaxd/ABL_trad_j
|
mrovejaxd
| 2024-05-26T02:03:25Z | 31 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-26T00:42:17Z |
---
base_model: dccuchile/bert-base-spanish-wwm-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ABL_trad_j
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ABL_trad_j
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6432
- Accuracy: 0.6883
- F1: 0.6865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.9532 | 1.0 | 1500 | 0.9116 | 0.5825 | 0.5793 |
| 0.8601 | 2.0 | 3000 | 0.8433 | 0.6033 | 0.6016 |
| 0.7962 | 3.0 | 4500 | 0.8150 | 0.6275 | 0.6252 |
| 0.7633 | 4.0 | 6000 | 0.7969 | 0.635 | 0.6334 |
| 0.7153 | 5.0 | 7500 | 0.7825 | 0.6492 | 0.6483 |
| 0.678 | 6.0 | 9000 | 0.7910 | 0.6408 | 0.6392 |
| 0.6336 | 7.0 | 10500 | 0.7772 | 0.6608 | 0.6606 |
| 0.5981 | 8.0 | 12000 | 0.7863 | 0.6617 | 0.6605 |
| 0.5455 | 9.0 | 13500 | 0.7954 | 0.6658 | 0.6657 |
| 0.4972 | 10.0 | 15000 | 0.8206 | 0.6633 | 0.6623 |
| 0.4823 | 11.0 | 16500 | 0.8442 | 0.6683 | 0.6673 |
| 0.4258 | 12.0 | 18000 | 0.8966 | 0.6742 | 0.6734 |
| 0.4182 | 13.0 | 19500 | 0.9327 | 0.6767 | 0.6761 |
| 0.3588 | 14.0 | 21000 | 0.9780 | 0.6717 | 0.6689 |
| 0.3576 | 15.0 | 22500 | 1.0288 | 0.6833 | 0.6828 |
| 0.3252 | 16.0 | 24000 | 1.0873 | 0.6842 | 0.6836 |
| 0.3104 | 17.0 | 25500 | 1.1417 | 0.685 | 0.6847 |
| 0.2691 | 18.0 | 27000 | 1.2447 | 0.6842 | 0.6827 |
| 0.2559 | 19.0 | 28500 | 1.3480 | 0.6825 | 0.6816 |
| 0.2522 | 20.0 | 30000 | 1.4782 | 0.6867 | 0.6859 |
| 0.2234 | 21.0 | 31500 | 1.5748 | 0.6833 | 0.6815 |
| 0.1954 | 22.0 | 33000 | 1.7041 | 0.69 | 0.6897 |
| 0.1979 | 23.0 | 34500 | 1.8398 | 0.6808 | 0.6789 |
| 0.176 | 24.0 | 36000 | 1.9141 | 0.6867 | 0.6860 |
| 0.1862 | 25.0 | 37500 | 2.0105 | 0.6883 | 0.6881 |
| 0.1409 | 26.0 | 39000 | 2.1345 | 0.685 | 0.6840 |
| 0.1527 | 27.0 | 40500 | 2.2039 | 0.6858 | 0.6853 |
| 0.1474 | 28.0 | 42000 | 2.2990 | 0.6933 | 0.6920 |
| 0.1428 | 29.0 | 43500 | 2.3780 | 0.6883 | 0.6878 |
| 0.1348 | 30.0 | 45000 | 2.4859 | 0.6858 | 0.6839 |
| 0.1046 | 31.0 | 46500 | 2.5546 | 0.6825 | 0.6801 |
| 0.1147 | 32.0 | 48000 | 2.6432 | 0.6883 | 0.6865 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
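For reference, a minimal inference sketch is given below; it assumes the checkpoint published at `mrovejaxd/ABL_trad_j` ships with its tokenizer and label mapping, which this card does not confirm:
```python
from transformers import pipeline

# Hedged usage sketch: load the fine-tuned Spanish BERT classifier directly from the Hub.
classifier = pipeline("text-classification", model="mrovejaxd/ABL_trad_j")

# Example Spanish input ("This is an example sentence."); labels depend on the training data.
print(classifier("Esta es una frase de ejemplo."))
```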
|
MVRL/satclip-loc-enc-vit16-l40
|
MVRL
| 2024-05-26T01:51:36Z | 0 | 0 | null |
[
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"region:us"
] | null | 2024-05-26T01:51:35Z |
---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
|
drgary/ft6_lawllm_llama3_athena2
|
drgary
| 2024-05-26T01:31:37Z | 2 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-26T01:29:51Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** drgary
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
samwit/paligemma_vqav2
|
samwit
| 2024-05-26T01:30:25Z | 4 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:vq_av2",
"base_model:google/paligemma-3b-pt-224",
"base_model:adapter:google/paligemma-3b-pt-224",
"license:gemma",
"region:us"
] | null | 2024-05-26T01:09:51Z |
---
license: gemma
library_name: peft
tags:
- generated_from_trainer
base_model: google/paligemma-3b-pt-224
datasets:
- vq_av2
model-index:
- name: paligemma_vqav2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paligemma_vqav2
This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on the vq_av2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
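Because this repository contains only a PEFT (LoRA) adapter, it must be attached to the base model at load time. A minimal, hedged sketch follows; the repo ids come from the metadata above, while dtype and device handling are assumptions:
```python
import torch
from peft import PeftModel
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

base_id = "google/paligemma-3b-pt-224"  # frozen base model this adapter was trained on
adapter_id = "samwit/paligemma_vqav2"   # this repository (LoRA weights only)

processor = AutoProcessor.from_pretrained(base_id)
base = PaliGemmaForConditionalGeneration.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # wrap the base model with the adapter
```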
|
han-chi/llama2_uuu_news_qlora
|
han-chi
| 2024-05-26T01:28:37Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-05-25T05:22:30Z |
---
library_name: peft
base_model: NousResearch/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
tz579/example_asr_wav2vec2
|
tz579
| 2024-05-26T01:27:44Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"edinburghcstr/ami",
"generated_from_trainer",
"dataset:ami",
"base_model:facebook/wav2vec2-large-lv60",
"base_model:finetune:facebook/wav2vec2-large-lv60",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-05-24T20:28:06Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-large-lv60
tags:
- automatic-speech-recognition
- edinburghcstr/ami
- generated_from_trainer
datasets:
- ami
metrics:
- wer
model-index:
- name: facebook/wav2vec2-large-lv60
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: EDINBURGHCSTR/AMI - IHM
type: ami
config: ihm
split: None
args: 'Config: ihm, Training split: train, Eval split: validation'
metrics:
- name: Wer
type: wer
value: 0.9542044754234227
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# facebook/wav2vec2-large-lv60
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the EDINBURGHCSTR/AMI - IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2723
- Wer: 0.9542
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 1.0919 | 0.1565 | 1000 | 1.0169 | 0.7064 |
| 1.4768 | 0.3131 | 2000 | 0.7156 | 0.4356 |
| 0.9728 | 0.4696 | 3000 | 0.6462 | 0.4030 |
| 0.5418 | 0.6262 | 4000 | 0.6171 | 0.3707 |
| 0.8492 | 0.7827 | 5000 | 0.5758 | 0.3695 |
| 1.4826 | 0.9393 | 6000 | 0.5801 | 0.3545 |
| 0.3274 | 1.0958 | 7000 | 0.5244 | 0.3375 |
| 0.2089 | 1.2523 | 8000 | 0.5047 | 0.3239 |
| 0.2916 | 1.4089 | 9000 | 0.4901 | 0.3171 |
| 0.1617 | 1.5654 | 10000 | 0.5070 | 0.3151 |
| 0.3815 | 1.7220 | 11000 | 0.4948 | 0.3180 |
| 1.0171 | 1.8785 | 12000 | 0.9465 | 0.8379 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0a0+gitcd033a1
- Datasets 2.19.1
- Tokenizers 0.19.1
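A hedged transcription sketch is shown below; it assumes the uploaded checkpoint includes the Wav2Vec2 processor and vocabulary files, and that the input audio is 16 kHz mono as in AMI IHM:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo = "tz579/example_asr_wav2vec2"  # this repository
processor = Wav2Vec2Processor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

# "meeting_clip.wav" is a placeholder path; the model expects 16 kHz mono audio.
waveform, sr = torchaudio.load("meeting_clip.wav")
inputs = processor(waveform.squeeze().numpy(), sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])  # greedy CTC decoding
```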
|
antitheft159/Zovuyo
|
antitheft159
| 2024-05-26T01:24:17Z | 0 | 0 | null |
[
"license:cc-by-nd-4.0",
"region:us"
] | null | 2024-05-26T01:24:00Z |
---
license: cc-by-nd-4.0
---
|
JianKim3293/llama3_lora_blossmodel
|
JianKim3293
| 2024-05-26T01:19:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:finetune:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-26T01:18:39Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
---
# Uploaded model
- **Developed by:** JianKim3293
- **License:** apache-2.0
- **Finetuned from model:** MLP-KTLim/llama-3-Korean-Bllossom-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
asussome/xwin-finetuned-alpaca-cleaned
|
asussome
| 2024-05-26T01:11:31Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-18T19:18:46Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: xwin-finetuned-alpaca-cleaned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xwin-finetuned-alpaca-cleaned
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 20
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Ichsan2895/Merak-7B-v4_4bit_q128_awq
|
Ichsan2895
| 2024-05-26T01:10:16Z | 80 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"id",
"en",
"dataset:wikipedia",
"dataset:Ichsan2895/OASST_Top1_Indonesian",
"dataset:Ichsan2895/alpaca-gpt4-indonesian",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-05-25T18:37:33Z |
---
datasets:
- wikipedia
- Ichsan2895/OASST_Top1_Indonesian
- Ichsan2895/alpaca-gpt4-indonesian
language:
- id
- en
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://huggingface.co/Ichsan2895/Merak-7B-v4/resolve/main/FINAL_LOGO/6.png" alt="MERAK" style="width: 50%; min-width: 100px; display: block; margin: auto;">
</div>
# HAPPY TO ANNOUNCE THE RELEASE OF MERAK-7B-V4_4bit_q128_awq!
Merak-7B is a Large Language Model for the Indonesian language.
This model is based on [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) and fine-tuned on a set of Indonesian Wikipedia articles that I cleaned beforehand.
Leveraging QLoRA (QLoRA: Efficient Finetuning of Quantized LLMs), Merak-7B is able to run with 16 GB of VRAM.
Licensed under Creative Commons Attribution-ShareAlike-NonCommercial (CC-BY-NC-SA 4.0), Merak-7B empowers AI enthusiasts and researchers alike.
Big thanks to all my friends and the communities that helped build our first model. Thanks to Axolotl for a great fine-tuning tool designed to streamline the fine-tuning of various AI models.
Feel free to ask me about the model, and please share the news on your social media.
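For reference, a minimal loading sketch for this 4-bit AWQ build is shown below. It assumes a recent `transformers` with the `autoawq` package installed; the example prompt and generation settings are illustrative only:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged usage sketch for the AWQ-quantized checkpoint (requires `autoawq`).
model_id = "Ichsan2895/Merak-7B-v4_4bit_q128_awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Example Indonesian prompt ("Who was the first president of Indonesia?"); prompt formatting is an assumption.
prompt = "Siapa presiden pertama Indonesia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```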
|
RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf
|
RichardErkhov
| 2024-05-26T01:06:35Z | 6 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-25T22:16:00Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Hebrew-Gemma-11B-Instruct - GGUF
- Model creator: https://huggingface.co/yam-peleg/
- Original model: https://huggingface.co/yam-peleg/Hebrew-Gemma-11B-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Hebrew-Gemma-11B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q2_K.gguf) | Q2_K | 3.9GB |
| [Hebrew-Gemma-11B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.IQ3_XS.gguf) | IQ3_XS | 4.27GB |
| [Hebrew-Gemma-11B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.IQ3_S.gguf) | IQ3_S | 4.48GB |
| [Hebrew-Gemma-11B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q3_K_S.gguf) | Q3_K_S | 4.48GB |
| [Hebrew-Gemma-11B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.IQ3_M.gguf) | IQ3_M | 4.63GB |
| [Hebrew-Gemma-11B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q3_K.gguf) | Q3_K | 4.94GB |
| [Hebrew-Gemma-11B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.94GB |
| [Hebrew-Gemma-11B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q3_K_L.gguf) | Q3_K_L | 5.33GB |
| [Hebrew-Gemma-11B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.IQ4_XS.gguf) | IQ4_XS | 5.44GB |
| [Hebrew-Gemma-11B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q4_0.gguf) | Q4_0 | 5.68GB |
| [Hebrew-Gemma-11B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [Hebrew-Gemma-11B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q4_K_S.gguf) | Q4_K_S | 5.72GB |
| [Hebrew-Gemma-11B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q4_K.gguf) | Q4_K | 6.04GB |
| [Hebrew-Gemma-11B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q4_K_M.gguf) | Q4_K_M | 6.04GB |
| [Hebrew-Gemma-11B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q4_1.gguf) | Q4_1 | 6.25GB |
| [Hebrew-Gemma-11B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q5_0.gguf) | Q5_0 | 6.81GB |
| [Hebrew-Gemma-11B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q5_K_S.gguf) | Q5_K_S | 6.81GB |
| [Hebrew-Gemma-11B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q5_K.gguf) | Q5_K | 7.0GB |
| [Hebrew-Gemma-11B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q5_K_M.gguf) | Q5_K_M | 7.0GB |
| [Hebrew-Gemma-11B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q5_1.gguf) | Q5_1 | 7.37GB |
| [Hebrew-Gemma-11B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q6_K.gguf) | Q6_K | 8.01GB |
| [Hebrew-Gemma-11B-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Hebrew-Gemma-11B-Instruct-gguf/blob/main/Hebrew-Gemma-11B-Instruct.Q8_0.gguf) | Q8_0 | 10.37GB |
Original model description:
---
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
language:
- en
- he
library_name: transformers
---
# Hebrew-Gemma-11B-Instruct
### Base Models:
- **07.03.2024:** [Hebrew-Gemma-11B](https://huggingface.co/yam-peleg/Hebrew-Gemma-11B)
- **16.03.2024:** [Hebrew-Gemma-11B-V2](https://huggingface.co/yam-peleg/Hebrew-Gemma-11B-V2)
### Instruct Models:
- **07.03.2024:** [Hebrew-Gemma-11B-Instruct](https://huggingface.co/yam-peleg/Hebrew-Gemma-11B-Instruct)
The Hebrew-Gemma-11B-Instruct Large Language Model (LLM) is an instruct fine-tuned version of the [Hebrew-Gemma-11B](https://huggingface.co/yam-peleg/Hebrew-Gemma-11B) generative text model, trained on a variety of conversation datasets.
It is a continued pretrain of gemma-7b, extended to a larger scale and trained on 3B additional tokens of both English and Hebrew text data.
# Instruction format
This format must be strictly respected; otherwise, the model will generate sub-optimal outputs.
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
Here is a simple hello world program<end_of_turn><eos>
```
- The conversation starts with **`<bos>`**.
- Each turn is preceded by a **`<start_of_turn>`** delimiter and then the role of the entity (`user` or `model`).
- Turns finish with the **`<end_of_turn>`** token.
- The conversation finishes with the **`<eos>`** token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template.
A simple example using the tokenizer's chat template:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Full Hub repo id so the snippet can be run as-is
model_id = "yam-peleg/Hebrew-Gemma-11B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda")
chat = [
    # "Write simple Python code that prints today's date to the screen"
    { "role": "user", "content": "כתוב קוד פשוט בפייתון שמדפיס למסך את התאריך של היום" },
]
# Render the conversation into the Gemma chat format shown above
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
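To turn the rendered prompt into an actual completion, a short continuation of the snippet above is given here; the generation settings are illustrative and not taken from the original card:
```python
# Continuation of the snippet above: encode the rendered prompt and generate a reply.
# add_special_tokens=False avoids a second <bos> (the chat template already emits one).
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```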
### Terms of Use
As an extension of Gemma-7B, this model is subject to the original license and terms of use by Google.
### Benchmark Results
- Coming Soon!
### Notice
Hebrew-Gemma-11B is a pretrained base model and therefore does not have any moderation mechanisms.
### Authors
- Trained by Yam Peleg.
- In collaboration with Jonathan Rouach and Arjeo, inc.
|
antitheft159/eblis.195
|
antitheft159
| 2024-05-26T01:00:19Z | 0 | 0 | null |
[
"license:cc-by-nd-4.0",
"region:us"
] | null | 2024-05-26T00:59:29Z |
---
license: cc-by-nd-4.0
---
|
gaalcoro/Logomarca
|
gaalcoro
| 2024-05-26T00:57:47Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-26T00:57:47Z |
---
license: apache-2.0
---
|
Sorour/phi3_cls_fomc
|
Sorour
| 2024-05-26T00:53:51Z | 154 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-19T05:15:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shadowdefense/ShadowWatch001
|
shadowdefense
| 2024-05-26T00:53:07Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-05-26T00:53:07Z |
---
license: other
license_name: terms
license_link: https://beta.openai.com/terms/
---
|
NikolayKozloff/WizardLM-2-7B-abliterated-Q5_0-GGUF
|
NikolayKozloff
| 2024-05-26T00:45:03Z | 5 | 2 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-26T00:44:50Z |
---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/WizardLM-2-7B-abliterated-Q5_0-GGUF
This model was converted to GGUF format from [`fearlessdots/WizardLM-2-7B-abliterated`](https://huggingface.co/fearlessdots/WizardLM-2-7B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fearlessdots/WizardLM-2-7B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/WizardLM-2-7B-abliterated-Q5_0-GGUF --model wizardlm-2-7b-abliterated-q5_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/WizardLM-2-7B-abliterated-Q5_0-GGUF --model wizardlm-2-7b-abliterated-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m wizardlm-2-7b-abliterated-q5_0.gguf -n 128
```
|
JianKim3293/llama3_lora_lawmodel
|
JianKim3293
| 2024-05-26T00:24:10Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-25T23:08:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
minhz2003/test
|
minhz2003
| 2024-05-26T00:21:29Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2024-05-26T00:20:20Z |
---
license: apache-2.0
---
|
takassh/gemma-2b-it-lora-model
|
takassh
| 2024-05-26T00:19:42Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-26T00:16:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
legraphista/aya-23-8B-IMat-GGUF
|
legraphista
| 2024-05-26T00:17:38Z | 165 | 0 |
gguf
|
[
"gguf",
"quantized",
"GGUF",
"imatrix",
"quantization",
"imat",
"static",
"text-generation",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:CohereForAI/aya-23-8B",
"base_model:quantized:CohereForAI/aya-23-8B",
"license:cc-by-nc-4.0",
"region:us",
"conversational"
] |
text-generation
| 2024-05-25T20:21:19Z |
---
base_model: CohereForAI/aya-23-8B
inference: false
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
library_name: gguf
license: cc-by-nc-4.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
- imat
- static
---
# aya-23-8B-IMat-GGUF
_Llama.cpp imatrix quantization of CohereForAI/aya-23-8B_
Original Model: [CohereForAI/aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B)
Original dtype: `FP16` (`float16`)
Quantized by: llama.cpp [b2998](https://github.com/ggerganov/llama.cpp/releases/tag/b2998)
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [aya-23-8B.Q8_0.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q8_0.gguf) | Q8_0 | 8.54GB | ✅ Available | ⚪ No | 📦 No
| [aya-23-8B.Q6_K.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q6_K.gguf) | Q6_K | 6.60GB | ✅ Available | ⚪ No | 📦 No
| [aya-23-8B.Q4_K.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q4_K.gguf) | Q4_K | 5.06GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.Q3_K.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q3_K.gguf) | Q3_K | 4.22GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.Q2_K.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q2_K.gguf) | Q2_K | 3.44GB | ✅ Available | 🟢 Yes | 📦 No
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [aya-23-8B.FP16.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.FP16.gguf) | F16 | 16.07GB | ✅ Available | ⚪ No | 📦 No
| [aya-23-8B.Q5_K.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q5_K.gguf) | Q5_K | 5.80GB | ✅ Available | ⚪ No | 📦 No
| [aya-23-8B.Q5_K_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q5_K_S.gguf) | Q5_K_S | 5.67GB | ✅ Available | ⚪ No | 📦 No
| [aya-23-8B.Q4_K_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q4_K_S.gguf) | Q4_K_S | 4.83GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.Q3_K_L.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q3_K_L.gguf) | Q3_K_L | 4.53GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.Q3_K_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q3_K_S.gguf) | Q3_K_S | 3.87GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.Q2_K_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q2_K_S.gguf) | Q2_K_S | 3.25GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ4_NL.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ4_NL.gguf) | IQ4_NL | 4.81GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ4_XS.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ4_XS.gguf) | IQ4_XS | 4.60GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ3_M.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ3_M.gguf) | IQ3_M | 3.99GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ3_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ3_S.gguf) | IQ3_S | 3.89GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ3_XS.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ3_XS.gguf) | IQ3_XS | 3.72GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ3_XXS.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ3_XXS.gguf) | IQ3_XXS | 3.41GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ2_M.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ2_M.gguf) | IQ2_M | 3.08GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ2_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ2_S.gguf) | IQ2_S | 2.90GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ2_XS.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ2_XS.gguf) | IQ2_XS | 2.80GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ2_XXS.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ2_XXS.gguf) | IQ2_XXS | 2.59GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ1_M.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ1_M.gguf) | IQ1_M | 2.35GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ1_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ1_S.gguf) | IQ1_S | 2.21GB | ✅ Available | 🟢 Yes | 📦 No
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download legraphista/aya-23-8B-IMat-GGUF --include "aya-23-8B.Q8_0.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/aya-23-8B-IMat-GGUF --include "aya-23-8B.Q8_0/*" --local-dir aya-23-8B.Q8_0
# see FAQ for merging GGUF's
```
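Alternatively, if you prefer a Python workflow, the same files can be fetched with `huggingface_hub` (a minimal sketch; swap in whichever quant filename you want):
```
from huggingface_hub import hf_hub_download

# Download a single quant file from this repo into the current directory
hf_hub_download(
    repo_id="legraphista/aya-23-8B-IMat-GGUF",
    filename="aya-23-8B.Q8_0.gguf",
    local_dir="./",
)
```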
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `aya-23-8B.Q8_0`)
3. Run `gguf-split --merge aya-23-8B.Q8_0/aya-23-8B.Q8_0-00001-of-XXXXX.gguf aya-23-8B.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
|
takassh/gemma-2b-it-lora
|
takassh
| 2024-05-26T00:16:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-26T00:16:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Enpas/whisper-base-co
|
Enpas
| 2024-05-26T00:12:44Z | 79 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-05-23T21:48:26Z |
```
import torch
from transformers import pipeline

# Use a GPU if available, otherwise fall back to CPU
device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Build an automatic-speech-recognition pipeline with 30-second chunking for long audio
transcribe = pipeline(task="automatic-speech-recognition", model="Enpas/whisper-small-co", chunk_length_s=30, device=device)

# Force Amharic ("am") transcription via the decoder prompt ids
transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="am", task="transcribe")

# Transcribe a local audio file
audio = "/content/tr_10000_tr097082.wav"
result = transcribe(audio)
print('Transcription: ', result["text"])
```
|
GTsuya/cute_sexy_robutts_pony
|
GTsuya
| 2024-05-26T00:10:08Z | 4 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:GraydientPlatformAPI/autism-pony",
"base_model:adapter:GraydientPlatformAPI/autism-pony",
"license:mit",
"region:us"
] |
text-to-image
| 2024-05-26T00:08:50Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
cartoon, score_9, score_8_up, score_7_up, mature_female, dirndl, atmospheric
perspective, portrait, church, rating_questionable,
<lora:cute_sexy_robutts_pony:1>
parameters:
negative_prompt: >-
score_6, score_5, score_4, ugly face, ugly eyes, realistic, monochrome,
white and black
output:
url: images/00024-1661246894.png
- text: >-
cartoon, score_9, score_8_up, score_7_up, mature_female, bikini, sideways,
cropped legs, tunnel, rating_explicit, <lora:cute_sexy_robutts_pony:1>
parameters:
negative_prompt: >-
score_6, score_5, score_4, ugly face, ugly eyes, realistic, monochrome,
white and black
output:
url: images/00077-2017120761.png
- text: >-
cartoon, score_9, score_8_up, score_7_up, mature_female, Gloves, dutch
angle, cropped legs, pool, rating_questionable,
<lora:cute_sexy_robutts_pony:1>
parameters:
negative_prompt: >-
score_6, score_5, score_4, ugly face, ugly eyes, realistic, monochrome,
white and black
output:
url: images/00088-1815590393.png
- text: >-
cartoon, score_9, score_8_up, score_7_up, mature_female, armor, from above,
wide shot, refinery, rating_explicit, <lora:cute_sexy_robutts_pony:1>
parameters:
negative_prompt: >-
score_6, score_5, score_4, ugly face, ugly eyes, realistic, monochrome,
white and black
output:
url: images/00171-1644120815.png
- text: >-
cartoon, score_9, score_8_up, score_7_up, mature_female, Gloves, from above,
close-up, flower shop, rating_safe, <lora:cute_sexy_robutts_pony:1>
parameters:
negative_prompt: >-
score_6, score_5, score_4, ugly face, ugly eyes, realistic, monochrome,
white and black
output:
url: images/00217-4158734917.png
- text: >-
cartoon, score_9, score_8_up, score_7_up, mature_female, Latex, atmospheric
perspective, lower body, cooling tower, rating_safe,
<lora:cute_sexy_robutts_pony:1>
parameters:
negative_prompt: >-
score_6, score_5, score_4, ugly face, ugly eyes, realistic, monochrome,
white and black
output:
url: images/00220-397098714.png
base_model: GraydientPlatformAPI/autism-pony
instance_prompt: null
license: mit
---
# cute_sexy_robutts_pony
<Gallery />
## Model description
This LoRA model was trained with Kohya SS on Cute Sexy Robutts's artworks, using the Autism Mix SDXL checkpoint. The resulting images stay close to the original art style. This LoRA can be used for cartoon/drawing representations of sexy women.
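A minimal `diffusers` sketch for trying the LoRA on top of the base checkpoint is shown below; the prompt, step count, and output filename are assumptions, with prompting following the Pony score-tag style used in the example images.
```
import torch
from diffusers import AutoPipelineForText2Image

# Load the base SDXL checkpoint this LoRA was trained against
pipe = AutoPipelineForText2Image.from_pretrained(
    "GraydientPlatformAPI/autism-pony", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights from this repository
pipe.load_lora_weights("GTsuya/cute_sexy_robutts_pony")

prompt = "cartoon, score_9, score_8_up, score_7_up, mature_female, portrait, rating_safe"
negative = "score_6, score_5, score_4, ugly face, ugly eyes, realistic, monochrome"

image = pipe(prompt, negative_prompt=negative, num_inference_steps=30).images[0]
image.save("example.png")
```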
## Download model
Weights for this model are available in Safetensors format.
[Download](/GTsuya/cute_sexy_robutts_pony/tree/main) them in the Files & versions tab.
|
raulgdp/roberta-multiclase-ag_news
|
raulgdp
| 2024-05-26T00:08:49Z | 108 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-25T21:35:34Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta-multiclase-ag_news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-multiclase-ag_news
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2671
- Rmse: 1.1967
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
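For reference, the hyperparameters above correspond roughly to the following `TrainingArguments` setup (an illustrative sketch only; the output directory is an assumption not stated in the card):
```
from transformers import TrainingArguments

# Approximate reconstruction of the training configuration listed above
training_args = TrainingArguments(
    output_dir="roberta-multiclase-ag_news",  # assumption: output path not given
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
)
```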
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.3199 | 1.0 | 15000 | 1.2671 | 1.1967 |
| 1.3837 | 2.0 | 30000 | 1.3864 | 1.2230 |
| 1.3879 | 3.0 | 45000 | 1.3865 | 1.8686 |
| 1.385 | 4.0 | 60000 | 1.3864 | 1.2247 |
| 1.3885 | 5.0 | 75000 | 1.3863 | 1.8720 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.0.1+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
umair894/llama3_1e
|
umair894
| 2024-05-25T23:58:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-25T23:58:25Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** umair894
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fearlessdots/Llama-3-Alpha-Centauri-v0.1
|
fearlessdots
| 2024-05-25T23:47:33Z | 115 | 9 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:NobodyExistsOnTheInternet/ToxicQAFinal",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-25T18:00:36Z |
---
license: llama3
datasets:
- NobodyExistsOnTheInternet/ToxicQAFinal
---
# Llama-3-Alpha-Centauri-v0.1
<img src="alpha_centauri_banner.png" alt="" style="width:500px;height:400px;"/>
**Image generated with [https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS](https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS).**
---
## Disclaimer
**Note:** All models and LoRAs from the **Centaurus** series were created with the sole purpose of research. The usage of this model and/or its related LoRA implies agreement with the following terms:
- The user is responsible for what they might do with it, including how the output of the model is interpreted and used;
- The user should not use the model and its outputs for any illegal purposes;
- The user is the only one responsible for any misuse or negative consequences from using this model and/or its related LoRA.
I do not endorse any particular perspectives presented in the training data.
---
## Centaurus Series
This series aims to develop highly uncensored Large Language Models (LLMs) with the following focuses:
- Science, Technology, Engineering, and Mathematics (STEM)
- Computer Science (including programming)
- Social Sciences
And several key cognitive skills, including but not limited to:
- Reasoning and logical deduction
- Critical thinking
- Analysis
While maintaining strong overall knowledge and expertise, the models will undergo refinement through:
- Fine-tuning processes
- Model merging techniques including Mixture of Experts (MoE)
Please note that these models are experimental and may demonstrate varied levels of effectiveness. Your feedback, critique, or queries are most welcome for improvement purposes.
## Base
This model and its related LoRA was fine-tuned on [https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3).
## LoRA
The LoRA merged with the base model is available at [https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-LoRA](https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-LoRA).
## GGUF
I provide some GGUF files here: [https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-GGUF](https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1-GGUF).
## Datasets
- [https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
## Fine Tuning
### - Quantization Configuration
- load_in_4bit=True
- bnb_4bit_quant_type="fp4"
- bnb_4bit_compute_dtype=compute_dtype
- bnb_4bit_use_double_quant=False
### - PEFT Parameters
- lora_alpha=64
- lora_dropout=0.05
- r=128
- bias="none"
### - Training Arguments
- num_train_epochs=1
- per_device_train_batch_size=1
- gradient_accumulation_steps=4
- optim="adamw_bnb_8bit"
- save_steps=25
- logging_steps=25
- learning_rate=2e-4
- weight_decay=0.001
- fp16=False
- bf16=False
- max_grad_norm=0.3
- max_steps=-1
- warmup_ratio=0.03
- group_by_length=True
- lr_scheduler_type="constant"
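Taken together, these settings describe a standard QLoRA-style run with `bitsandbytes`, `peft`, and `trl`. The sketch below is an illustrative reconstruction rather than the original training script; the compute dtype, output directory, and dataset text field are assumptions.
```
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

base = "failspy/Meta-Llama-3-8B-Instruct-abliterated-v3"
compute_dtype = torch.float16  # assumption: compute dtype not specified in the card

# Quantization configuration (matches the values listed above)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=False,
)

model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# PEFT (LoRA) parameters
peft_config = LoraConfig(
    lora_alpha=64,
    lora_dropout=0.05,
    r=128,
    bias="none",
    task_type="CAUSAL_LM",
)

# Training arguments
training_args = TrainingArguments(
    output_dir="./alpha-centauri-lora",  # assumption: output path not given in the card
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    optim="adamw_bnb_8bit",
    save_steps=25,
    logging_steps=25,
    learning_rate=2e-4,
    weight_decay=0.001,
    fp16=False,
    bf16=False,
    max_grad_norm=0.3,
    max_steps=-1,
    warmup_ratio=0.03,
    group_by_length=True,
    lr_scheduler_type="constant",
)

dataset = load_dataset("NobodyExistsOnTheInternet/ToxicQAFinal", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    tokenizer=tokenizer,
    args=training_args,
    dataset_text_field="text",  # assumption: depends on how the dataset was preprocessed
)
trainer.train()
```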
## Credits
- Meta ([https://huggingface.co/meta-llama](https://huggingface.co/meta-llama)): for the original Llama-3;
- HuggingFace: for hosting this model and for creating the fine-tuning tools used;
- failspy ([https://huggingface.co/failspy](https://huggingface.co/failspy)): for the base model and the orthogonalization implementation;
- NobodyExistsOnTheInternet ([https://huggingface.co/NobodyExistsOnTheInternet](https://huggingface.co/NobodyExistsOnTheInternet)): for the incredible dataset;
- Undi95 ([https://huggingface.co/Undi95](https://huggingface.co/Undi95)) and Sao10k ([https://huggingface.co/Sao10K](https://huggingface.co/Sao10K)): my main inspirations for doing these models =]
A huge thank you to all of them ☺️
## About Alpha Centauri
**Alpha Centauri** is a triple star system located in the constellation of **Centaurus**. It includes three stars: Rigil Kentaurus (also known as **α Centauri A**), Toliman (or **α Centauri B**), and Proxima Centauri (**α Centauri C**). Proxima Centauri is the nearest star to the Sun, residing at approximately 4.25 light-years (1.3 parsecs) away.
The primary pair, **α Centauri A** and **B**, are both similar to our Sun: **α Centauri A** is a class G star with 1.1 solar masses and 1.5 times the Sun's luminosity, while **α Centauri B** has 0.9 solar masses and under half the luminosity of the Sun. They revolve around their shared center every 79 years following an elliptical path, ranging from 35.6 astronomical units apart (nearly Pluto's distance from the Sun) to 11.2 astronomical units apart (around Saturn's distance from the Sun).
Proxima Centauri, or **α Centauri C**, is a small, dim red dwarf (a class M star) too faint to be seen with the naked eye. At roughly 4.24 light-years (1.3 parsecs) from us, it lies closer than the **α Centauri AB** binary pair. The current separation between **Proxima Centauri** and **α Centauri AB** is around 13,000 astronomical units (0.21 light-years), comparable to over 430 times Neptune's orbital radius.
Two confirmed exoplanets accompany Proxima Centauri: **Proxima b**, discovered in 2016, is Earth-sized within the habitable zone; **Proxima d**, revealed in 2022, is a potential sub-Earth close to its host star. Meanwhile, disputes surround **Proxima c**, a mini-Neptune detected in 2019. Intriguingly, hints suggest that **α Centauri A** might possess a Neptune-sized object in its habitable region, but further investigation is required before confirming whether it truly exists and qualifies as a planet. Regarding **α Centauri B**, although once thought to harbor a planet (named **α Cen Bb**), subsequent research invalidated this claim, leaving it currently devoid of identified planets.
**Source:** retrieved from [https://en.wikipedia.org/wiki/Alpha_Centauri](https://en.wikipedia.org/wiki/Alpha_Centauri) and processed with [https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
|
Sorour/phi3-ft-fomc-v2
|
Sorour
| 2024-05-25T23:45:29Z | 155 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-25T23:33:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QuangDuy/whisper-large-v3-vivos
|
QuangDuy
| 2024-05-25T23:40:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-25T23:40:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thdangtr/blip_recipe1m_title_v6
|
thdangtr
| 2024-05-25T23:35:49Z | 67 | 0 |
transformers
|
[
"transformers",
"safetensors",
"blip",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-05-25T23:34:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ethan-ng/content-moderation-model
|
ethan-ng
| 2024-05-25T23:33:32Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-25T23:33:32Z |
---
license: apache-2.0
---
|