| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-20 18:29:57) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (566 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-20 18:29:15) | card (string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
| mradermacher/Isolde-12B-i1-GGUF | mradermacher | 2024-10-20T20:30:07Z | 8 | 1 | transformers | ["transformers", "gguf", "en", "base_model:arlineka/Isolde-12B", "base_model:quantized:arlineka/Isolde-12B", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2024-10-20T18:34:43Z |
---
base_model: arlineka/Isolde-12B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/arlineka/Isolde-12B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Isolde-12B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
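Should a download arrive in multiple parts, here is a minimal Python sketch for reassembling it (the filename and the `.partXofY` naming convention are illustrative assumptions; large part counts would need numeric rather than lexical sorting):
```python
# Sketch: reassemble a multi-part GGUF download into a single file.
# Assumes parts are named like "<file>.gguf.part1of2", "<file>.gguf.part2of2".
import glob
import shutil

parts = sorted(glob.glob("Isolde-12B.i1-Q6_K.gguf.part*of*"))  # lexical sort suffices for single-digit parts
with open("Isolde-12B.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)
```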
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Isolde-12B-i1-GGUF/resolve/main/Isolde-12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
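For illustration, a minimal `huggingface_hub` sketch for fetching a single quant from the table (the Q4_K_M choice is simply the "fast, recommended" row):
```python
# Sketch: download one quant file from this repo via huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Isolde-12B-i1-GGUF",
    filename="Isolde-12B.i1-Q4_K_M.gguf",  # the "fast, recommended" pick above
)
print(path)  # local path to the downloaded GGUF file
```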
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| faaany/my_awesome_mind_model | faaany | 2024-10-20T20:27:56Z | 161 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:minds14", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | audio-classification | 2024-10-20T20:26:49Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
  results:
  - task:
      name: Audio Classification
      type: audio-classification
    dataset:
      name: minds14
      type: minds14
      config: en-US
      split: train
      args: en-US
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.035398230088495575
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6610
- Accuracy: 0.0354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
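A minimal sketch of the corresponding `TrainingArguments`, assuming the repo name as output directory (Adam betas and epsilon match the Transformers defaults):
```python
# Sketch: TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_mind_model",  # assumed name
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # total train batch size: 32 * 4 = 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
)
```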
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6409 | 0.0796 |
| No log | 1.8667 | 7 | 2.6512 | 0.0531 |
| 2.6357 | 2.9333 | 11 | 2.6602 | 0.0442 |
| 2.6357 | 4.0 | 15 | 2.6632 | 0.0354 |
| 2.6357 | 4.8 | 18 | 2.6638 | 0.0354 |
| 2.6251 | 5.8667 | 22 | 2.6643 | 0.0354 |
| 2.6251 | 6.9333 | 26 | 2.6623 | 0.0354 |
| 2.6159 | 8.0 | 30 | 2.6610 | 0.0354 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.3.0a0+git3588582
- Datasets 3.0.1
- Tokenizers 0.20.1
| A790227/your-repo-name | A790227 | 2024-10-20T20:27:45Z | 117 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-10-20T20:19:22Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: your-repo-name
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# your-repo-name
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2352
- Accuracy: 0.9268
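A minimal inference sketch (the example input is hypothetical, since the training data is not documented):
```python
# Sketch: run the fine-tuned classifier via the transformers pipeline API.
from transformers import pipeline

classifier = pipeline("text-classification", model="A790227/your-repo-name")
print(classifier("This movie was surprisingly good."))  # hypothetical example input
```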
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1851 | 1.0 | 1563 | 0.2352 | 0.9268 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
| utischoolnlp/Polyverse-1.3B-256-16-stage1 | utischoolnlp | 2024-10-20T20:14:55Z | 48 | 0 | transformers | ["transformers", "safetensors", "Polyverse", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2024-10-20T20:12:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| goku35855/speecht5_finetuned_marathi | goku35855 | 2024-10-20T20:08:51Z | 76 | 0 | transformers | ["transformers", "safetensors", "speecht5", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | text-to-audio | 2024-10-20T18:29:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| swpranta/Llama-3.2-1B-Instruct-Q4_0-GGUF | swpranta | 2024-10-20T19:53:53Z | 61 | 0 | transformers | ["transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-1B-Instruct", "license:llama3.2", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2024-10-20T19:53:45Z |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
license: llama3.2
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n“Licensee” or “you” means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\
\ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display “Built with\
\ Llama” on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include “Llama” at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a “Notice” text file distributed as a part of such copies: “Llama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals’ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software “bug,” or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
  First Name: text
  Last Name: text
  Date of birth: date_picker
  Country: country
  Affiliation: text
  Job title:
    type: select
    options:
    - Student
    - Research Graduate
    - AI researcher
    - AI developer/engineer
    - Reporter
    - Other
  geo: ip_location
  ? By clicking Submit below I accept the terms of the license and acknowledge that
    the information I provide will be collected stored processed and shared in accordance
    with the Meta Privacy Policy
  : checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
base_model: meta-llama/Llama-3.2-1B-Instruct
---
# swpranta/Llama-3.2-1B-Instruct-Q4_0-GGUF
This model was converted to GGUF format from [`meta-llama/Llama-3.2-1B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo swpranta/Llama-3.2-1B-Instruct-Q4_0-GGUF --hf-file llama-3.2-1b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo swpranta/Llama-3.2-1B-Instruct-Q4_0-GGUF --hf-file llama-3.2-1b-instruct-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo swpranta/Llama-3.2-1B-Instruct-Q4_0-GGUF --hf-file llama-3.2-1b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo swpranta/Llama-3.2-1B-Instruct-Q4_0-GGUF --hf-file llama-3.2-1b-instruct-q4_0.gguf -c 2048
```
| mlx-community/Fimbulvetr-11B-v2 | mlx-community | 2024-10-20T19:27:23Z | 11 | 0 | mlx | ["mlx", "safetensors", "llama", "en", "base_model:Sao10K/Fimbulvetr-11B-v2", "base_model:finetune:Sao10K/Fimbulvetr-11B-v2", "license:cc-by-nc-4.0", "region:us"] | null | 2024-10-20T19:17:23Z |
---
base_model: Sao10K/Fimbulvetr-11B-v2
language:
- en
license: cc-by-nc-4.0
tags:
- mlx
---
# mlx-community/Fimbulvetr-11B-v2
The Model [mlx-community/Fimbulvetr-11B-v2](https://huggingface.co/mlx-community/Fimbulvetr-11B-v2) was converted to MLX format from [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) using mlx-lm version **0.19.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Fimbulvetr-11B-v2")
prompt = "hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
| Jahid05/llama-3.2-3b-2epoch-website-prompt | Jahid05 | 2024-10-20T19:23:36Z | 103 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-10-20T18:44:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| saurabhswami/HumaneArt | saurabhswami | 2024-10-20T19:17:39Z | 95 | 2 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us"] | text-to-image | 2024-10-20T18:02:12Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
  output:
    url: images/1.jpg
- text: '-'
  output:
    url: images/2.jpg
- text: '-'
  output:
    url: images/3.jpg
- text: '-'
  output:
    url: images/4.jpg
- text: '-'
  output:
    url: images/5.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: HumaneArt
license: apache-2.0
---
# HumaneArt
<Gallery />
## Model description
A Flux.1 (dev) LoRA trained on a simple line-illustration style with a white background. Please share feedback if you use it :)
Use the base humaneart.safetensors to apply the LoRA; lighter adaptations trained for 4, 8, and 12 epochs are also available.
Include the keyword `HumaneArt` in your prompt to activate the LoRA.
The LoRA was trained, with permission, on Humane-folks illustrations (thanks, Pragyan!):
https://humane-folks.framer.website/
Original illustrations by Pragyan Shukla:
https://pragyanshukla.framer.website/
## Trigger words
You should use `HumaneArt` to trigger the image generation.
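A hedged diffusers sketch for applying the LoRA (the weight filename follows the card's mention of humaneart.safetensors; the prompt and hardware settings are assumptions):
```python
# Sketch: load FLUX.1-dev and apply the HumaneArt LoRA with diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("saurabhswami/HumaneArt", weight_name="humaneart.safetensors")
pipe.to("cuda")  # assumes a CUDA GPU with enough memory

# "HumaneArt" is the trigger word noted above; the prompt is a hypothetical example.
image = pipe("HumaneArt, a person watering plants, simple line illustration").images[0]
image.save("humaneart_sample.png")
```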
## Download model
Weights for this model are available in Safetensors format.
[Download](/saurabhswami/HumaneArt/tree/main) them in the Files & versions tab.
| mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF | mradermacher | 2024-10-20T18:44:08Z | 120 | 3 | transformers | ["transformers", "gguf", "chat", "en", "de", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2024-10-20T16:29:55Z |
---
base_model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2
language:
- en
- de
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 8.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 8.6 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 8.6 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v2.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| exploer/tomasbily04 | exploer | 2024-10-20T18:42:53Z | 12 | 0 | diffusers | ["diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2024-10-20T18:42:39Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
    url: sample/tomasbily_000675_00_20241020180402.png
  text: tomasbily is standing in nature by the river wearing a nike t-shirt and
    jeans.
- output:
    url: sample/tomasbily_000675_01_20241020180415.png
  text: tomasbily stands in nature by the river wearing shirts and a coat and jeans.
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: tomasbily
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# TomasBily
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `tomasbily` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
| zhangyi617/sd_naruto_lora_pgd_2e | zhangyi617 | 2024-10-20T18:36:15Z | 5 | 0 | diffusers | ["diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2024-10-20T18:17:59Z |
---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - zhangyi617/sd_naruto_lora_pgd_2e
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the zhangyi617/naruto_721_train dataset. Some example images are shown below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
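As the snippet above is still a TODO, here is a hedged sketch of the standard diffusers LoRA workflow for this repo (prompt and settings are assumptions):
```python
# Sketch: apply these LoRA weights to Stable Diffusion v1-5 with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_lora_weights("zhangyi617/sd_naruto_lora_pgd_2e")
pipe.to("cuda")  # assumes a CUDA GPU

image = pipe("a naruto-style portrait of a ninja").images[0]  # hypothetical prompt
image.save("naruto_lora_sample.png")
```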
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
| DUNHILL/results2 | DUNHILL | 2024-10-20T18:25:55Z | 63 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "vilt", "visual-question-answering", "generated_from_trainer", "base_model:dandelin/vilt-b32-mlm", "base_model:finetune:dandelin/vilt-b32-mlm", "license:apache-2.0", "endpoints_compatible", "region:us"] | visual-question-answering | 2024-10-18T13:45:28Z |
---
library_name: transformers
license: apache-2.0
base_model: dandelin/vilt-b32-mlm
tags:
- generated_from_trainer
model-index:
- name: results2
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results2
This model is a fine-tuned version of [dandelin/vilt-b32-mlm](https://huggingface.co/dandelin/vilt-b32-mlm) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 54.5580
- Bleu Score: {'bleu': 0.0, 'precisions': [0.0, 0.0, 0.0, 0.0], 'brevity_penalty': 5.701223175160721e-08, 'length_ratio': 0.05656108597285068, 'translation_length': 300, 'reference_length': 5304}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu Score |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 620.8193 | 1.0 | 63 | 177.3467 | {'bleu': 0.0, 'precisions': [0.0, 0.0, 0.0, 0.0], 'brevity_penalty': 5.701223175160721e-08, 'length_ratio': 0.05656108597285068, 'translation_length': 300, 'reference_length': 5304} |
| 140.721 | 2.0 | 126 | 65.7476 | {'bleu': 0.0, 'precisions': [0.0, 0.0, 0.0, 0.0], 'brevity_penalty': 5.701223175160721e-08, 'length_ratio': 0.05656108597285068, 'translation_length': 300, 'reference_length': 5304} |
| 60.0697 | 3.0 | 189 | 54.5580 | {'bleu': 0.0, 'precisions': [0.0, 0.0, 0.0, 0.0], 'brevity_penalty': 5.701223175160721e-08, 'length_ratio': 0.05656108597285068, 'translation_length': 300, 'reference_length': 5304} |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Javitron4257/Dog-Cat-Identificator
|
Javitron4257
| 2024-10-20T18:20:08Z | 5 | 0 | null |
[
"pytorch",
"vit",
"vision",
"image-classification",
"dataset:omarques/autotrain-data-dogs-and-cats",
"license:cc",
"region:us"
] |
image-classification
| 2024-10-20T09:42:33Z |
---
license: cc
tags:
- vision
- image-classification
datasets:
- omarques/autotrain-data-dogs-and-cats
---
|
M4-ai/TinyMistral-248M-v3
|
M4-ai
| 2024-10-20T18:16:15Z | 197 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:Locutusque/TM-DATA-V2",
"dataset:LLM360/TxT360",
"dataset:mlfoundations/dclm-baseline-1.0",
"dataset:Skylion007/openwebtext",
"dataset:JeanKaddour/minipile",
"dataset:eminorhan/gutenberg_en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-05T22:51:03Z |
---
language:
- en
license: apache-2.0
datasets:
- Locutusque/TM-DATA-V2
- LLM360/TxT360
- mlfoundations/dclm-baseline-1.0
- Skylion007/openwebtext
- JeanKaddour/minipile
- eminorhan/gutenberg_en
model-index:
- name: TinyMistral-248M-v3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 16.39
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=M4-ai/TinyMistral-248M-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 1.78
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=M4-ai/TinyMistral-248M-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 0.0
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=M4-ai/TinyMistral-248M-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 0.0
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=M4-ai/TinyMistral-248M-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.15
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=M4-ai/TinyMistral-248M-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.47
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=M4-ai/TinyMistral-248M-v3
name: Open LLM Leaderboard
---
Still in training; trained on roughly 21 billion tokens so far.
| Tasks |Version| Filter |n-shot| Metric | | Value | |Stderr|
|----------------------------------------|-------|----------------|-----:|-----------|---|------:|---|-----:|
|Open LLM Leaderboard | N/A| | | | | | | |
| - arc_challenge | 1|none | 25|acc |↑ | 0.2005|± |0.0117|
| | |none | 25|acc_norm |↑ | 0.2406|± |0.0125|
| - gsm8k | 3|flexible-extract| 5|exact_match|↑ | 0.0083|± |0.0025|
| | |strict-match | 5|exact_match|↑ | 0.0000|± |0.0000|
| - hellaswag | 1|none | 10|acc |↑ | 0.2724|± |0.0044|
| | |none | 10|acc_norm |↑ | 0.2838|± |0.0045|
| - mmlu | 2|none | |acc |↑ | 0.2290|± |0.0035|
| - humanities | 2|none | |acc |↑ | 0.2380|± |0.0062|
| - formal_logic | 1|none | 5|acc |↑ | 0.2460|± |0.0385|
| - high_school_european_history | 1|none | 5|acc |↑ | 0.1818|± |0.0301|
| - high_school_us_history | 1|none | 5|acc |↑ | 0.2647|± |0.0310|
| - high_school_world_history | 1|none | 5|acc |↑ | 0.2911|± |0.0296|
| - international_law | 1|none | 5|acc |↑ | 0.2149|± |0.0375|
| - jurisprudence | 1|none | 5|acc |↑ | 0.2685|± |0.0428|
| - logical_fallacies | 1|none | 5|acc |↑ | 0.2209|± |0.0326|
| - moral_disputes | 1|none | 5|acc |↑ | 0.2457|± |0.0232|
| - moral_scenarios | 1|none | 5|acc |↑ | 0.2369|± |0.0142|
| - philosophy | 1|none | 5|acc |↑ | 0.1865|± |0.0221|
| - prehistory | 1|none | 5|acc |↑ | 0.1975|± |0.0222|
| - professional_law | 1|none | 5|acc |↑ | 0.2432|± |0.0110|
| - world_religions | 1|none | 5|acc |↑ | 0.3099|± |0.0355|
| - other | 2|none | |acc |↑ | 0.2375|± |0.0076|
| - business_ethics | 1|none | 5|acc |↑ | 0.3200|± |0.0469|
| - clinical_knowledge | 1|none | 5|acc |↑ | 0.2226|± |0.0256|
| - college_medicine | 1|none | 5|acc |↑ | 0.1965|± |0.0303|
| - global_facts | 1|none | 5|acc |↑ | 0.1800|± |0.0386|
| - human_aging | 1|none | 5|acc |↑ | 0.3004|± |0.0308|
| - management | 1|none | 5|acc |↑ | 0.1942|± |0.0392|
| - marketing | 1|none | 5|acc |↑ | 0.2735|± |0.0292|
| - medical_genetics | 1|none | 5|acc |↑ | 0.3000|± |0.0461|
| - miscellaneous | 1|none | 5|acc |↑ | 0.2478|± |0.0154|
| - nutrition | 1|none | 5|acc |↑ | 0.2222|± |0.0238|
| - professional_accounting | 1|none | 5|acc |↑ | 0.2021|± |0.0240|
| - professional_medicine | 1|none | 5|acc |↑ | 0.1912|± |0.0239|
| - virology | 1|none | 5|acc |↑ | 0.2590|± |0.0341|
| - social sciences | 2|none | |acc |↑ | 0.2203|± |0.0075|
| - econometrics | 1|none | 5|acc |↑ | 0.2368|± |0.0400|
| - high_school_geography | 1|none | 5|acc |↑ | 0.2020|± |0.0286|
| - high_school_government_and_politics| 1|none | 5|acc |↑ | 0.1865|± |0.0281|
| - high_school_macroeconomics | 1|none | 5|acc |↑ | 0.2205|± |0.0210|
| - high_school_microeconomics | 1|none | 5|acc |↑ | 0.2143|± |0.0267|
| - high_school_psychology | 1|none | 5|acc |↑ | 0.1908|± |0.0168|
| - human_sexuality | 1|none | 5|acc |↑ | 0.2672|± |0.0388|
| - professional_psychology | 1|none | 5|acc |↑ | 0.2386|± |0.0172|
| - public_relations | 1|none | 5|acc |↑ | 0.1727|± |0.0362|
| - security_studies | 1|none | 5|acc |↑ | 0.2367|± |0.0272|
| - sociology | 1|none | 5|acc |↑ | 0.2488|± |0.0306|
| - us_foreign_policy | 1|none | 5|acc |↑ | 0.2600|± |0.0441|
| - stem | 2|none | |acc |↑ | 0.2157|± |0.0073|
| - abstract_algebra | 1|none | 5|acc |↑ | 0.2200|± |0.0416|
| - anatomy | 1|none | 5|acc |↑ | 0.1778|± |0.0330|
| - astronomy | 1|none | 5|acc |↑ | 0.1908|± |0.0320|
| - college_biology | 1|none | 5|acc |↑ | 0.2778|± |0.0375|
| - college_chemistry | 1|none | 5|acc |↑ | 0.2200|± |0.0416|
| - college_computer_science | 1|none | 5|acc |↑ | 0.2100|± |0.0409|
| - college_mathematics | 1|none | 5|acc |↑ | 0.2100|± |0.0409|
| - college_physics | 1|none | 5|acc |↑ | 0.2157|± |0.0409|
| - computer_security | 1|none | 5|acc |↑ | 0.2700|± |0.0446|
| - conceptual_physics | 1|none | 5|acc |↑ | 0.2638|± |0.0288|
| - electrical_engineering | 1|none | 5|acc |↑ | 0.2483|± |0.0360|
| - elementary_mathematics | 1|none | 5|acc |↑ | 0.2037|± |0.0207|
| - high_school_biology | 1|none | 5|acc |↑ | 0.1774|± |0.0217|
| - high_school_chemistry | 1|none | 5|acc |↑ | 0.2020|± |0.0282|
| - high_school_computer_science | 1|none | 5|acc |↑ | 0.2500|± |0.0435|
| - high_school_mathematics | 1|none | 5|acc |↑ | 0.2148|± |0.0250|
| - high_school_physics | 1|none | 5|acc |↑ | 0.2053|± |0.0330|
| - high_school_statistics | 1|none | 5|acc |↑ | 0.1481|± |0.0242|
| - machine_learning | 1|none | 5|acc |↑ | 0.3125|± |0.0440|
| - truthfulqa_gen | 3|none | 0|bleu_acc |↑ | 0.2362|± |0.0149|
| | |none | 0|bleu_diff |↑ |-1.0138|± |0.2569|
| | |none | 0|bleu_max |↑ | 7.9522|± |0.4088|
| | |none | 0|rouge1_acc |↑ | 0.2595|± |0.0153|
| | |none | 0|rouge1_diff|↑ |-1.9129|± |0.4349|
| | |none | 0|rouge1_max |↑ |21.7885|± |0.7307|
| | |none | 0|rouge2_acc |↑ | 0.1200|± |0.0114|
| | |none | 0|rouge2_diff|↑ |-1.9771|± |0.3475|
| | |none | 0|rouge2_max |↑ | 9.0199|± |0.5842|
| | |none | 0|rougeL_acc |↑ | 0.2570|± |0.0153|
| | |none | 0|rougeL_diff|↑ |-1.8812|± |0.4185|
| | |none | 0|rougeL_max |↑ |19.6284|± |0.6850|
| - truthfulqa_mc1 | 2|none | 0|acc |↑ | 0.1983|± |0.0140|
| - truthfulqa_mc2 | 2|none | 0|acc |↑ | 0.3861|± |0.0147|
| - winogrande | 1|none | 5|acc |↑ | 0.4972|± |0.0141|
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|-------------------|------:|------|------|------|---|-----:|---|-----:|
| - mmlu | 2|none | |acc |↑ |0.2290|± |0.0035|
| - humanities | 2|none | |acc |↑ |0.2380|± |0.0062|
| - other | 2|none | |acc |↑ |0.2375|± |0.0076|
| - social sciences| 2|none | |acc |↑ |0.2203|± |0.0075|
| - stem | 2|none | |acc |↑ |0.2157|± |0.0073|
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|---------------------------------|------:|------|-----:|--------|---|-----:|---|-----:|
|agieval_nous | 0|none | |acc_norm|↑ |0.2133|± |0.0081|
| - agieval_aqua_rat | 1|none | 0|acc |↑ |0.2047|± |0.0254|
| | |none | 0|acc_norm|↑ |0.1969|± |0.0250|
| - agieval_logiqa_en | 1|none | 0|acc |↑ |0.2043|± |0.0158|
| | |none | 0|acc_norm|↑ |0.2304|± |0.0165|
| - agieval_lsat_ar | 1|none | 0|acc |↑ |0.1739|± |0.0250|
| | |none | 0|acc_norm|↑ |0.1957|± |0.0262|
| - agieval_lsat_lr | 1|none | 0|acc |↑ |0.1549|± |0.0160|
| | |none | 0|acc_norm|↑ |0.1608|± |0.0163|
| - agieval_lsat_rc | 1|none | 0|acc |↑ |0.1636|± |0.0226|
| | |none | 0|acc_norm|↑ |0.2119|± |0.0250|
| - agieval_sat_en | 1|none | 0|acc |↑ |0.2670|± |0.0309|
| | |none | 0|acc_norm|↑ |0.2621|± |0.0307|
| - agieval_sat_en_without_passage| 1|none | 0|acc |↑ |0.2670|± |0.0309|
| | |none | 0|acc_norm|↑ |0.2621|± |0.0307|
| - agieval_sat_math | 1|none | 0|acc |↑ |0.2182|± |0.0279|
| | |none | 0|acc_norm|↑ |0.2318|± |0.0285|
|arc_challenge | 1|none | 0|acc |↑ |0.1945|± |0.0116|
| | |none | 0|acc_norm|↑ |0.2372|± |0.0124|
|truthfulqa_mc2 | 2|none | 0|acc |↑ |0.3861|± |0.0147|
| Groups |Version|Filter|n-shot| Metric | |Value | |Stderr|
|------------|------:|------|------|--------|---|-----:|---|-----:|
|agieval_nous| 0|none | |acc_norm|↑ |0.2133|± |0.0081|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_M4-ai__TinyMistral-248M-v3)
| Metric |Value|
|-------------------|----:|
|Avg. | 4.13|
|IFEval (0-Shot) |16.39|
|BBH (3-Shot) | 1.78|
|MATH Lvl 5 (4-Shot)| 0.00|
|GPQA (0-shot) | 0.00|
|MuSR (0-shot) | 5.15|
|MMLU-PRO (5-shot) | 1.47|
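No usage snippet ships with the card; as a hedged sketch, the checkpoint can be exercised with the standard transformers text-generation pipeline:
```python
# Minimal sketch (not an official example): run the checkpoint with the
# standard transformers text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="M4-ai/TinyMistral-248M-v3")
print(generator("The quick brown fox", max_new_tokens=40)[0]["generated_text"])
```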
|
MaziyarPanahi/Nemotron-Mini-4B-Instruct-GGUF
|
MaziyarPanahi
| 2024-10-20T18:04:23Z | 118 | 1 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:nvidia/Nemotron-Mini-4B-Instruct",
"base_model:quantized:nvidia/Nemotron-Mini-4B-Instruct",
"region:us",
"conversational"
] |
text-generation
| 2024-10-20T17:42:39Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: Nemotron-Mini-4B-Instruct-GGUF
base_model: nvidia/Nemotron-Mini-4B-Instruct
inference: false
model_creator: nvidia
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Nemotron-Mini-4B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Nemotron-Mini-4B-Instruct-GGUF)
- Model creator: [nvidia](https://huggingface.co/nvidia)
- Original model: [nvidia/Nemotron-Mini-4B-Instruct](https://huggingface.co/nvidia/Nemotron-Mini-4B-Instruct)
## Description
[MaziyarPanahi/Nemotron-Mini-4B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Nemotron-Mini-4B-Instruct-GGUF) contains GGUF format model files for [nvidia/Nemotron-Mini-4B-Instruct](https://huggingface.co/nvidia/Nemotron-Mini-4B-Instruct).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
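As a hedged sketch (the quant filename below is a glob-pattern assumption; check this repo's Files tab for the actual names), one of these GGUF files can be loaded with llama-cpp-python:
```python
# Minimal sketch using llama-cpp-python; the filename glob is an assumption,
# not a confirmed file in this repo -- pick a real quant from the Files tab.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Nemotron-Mini-4B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",  # downloads the first matching quant
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```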
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
bunnycore/Llama-3.2-3B-All-Mix
|
bunnycore
| 2024-10-20T17:58:14Z | 158 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Lyte/Llama-3.2-3B-Overthinker",
"base_model:merge:Lyte/Llama-3.2-3B-Overthinker",
"base_model:bunnycore/Llama-3.2-3B-Pure-RP",
"base_model:merge:bunnycore/Llama-3.2-3B-Pure-RP",
"base_model:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"base_model:merge:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-20T16:50:26Z |
---
base_model:
- bunnycore/Llama-3.2-3B-Pure-RP
- huihui-ai/Llama-3.2-3B-Instruct-abliterated
- Lyte/Llama-3.2-3B-Overthinker
library_name: transformers
tags:
- mergekit
- merge
---
## Model Overview
The Llama-3.2-3B-All-Mix model is a merged language model that combines the strengths of multiple models using the TIES merge method. It is designed to provide balanced performance across various tasks and domains.
### Capabilities
The Llama-3.2-3B-All-Mix model is capable of:
- Generating human-like text
- Conversational dialogue
- Roleplay
- Long-form reasoning
- Answering questions
- Summarizing text
## Models in the Merge
- bunnycore/Llama-3.2-3B-Pure-RP: This model is particularly well-suited for roleplay tasks, allowing for more engaging and interactive conversations.
- Lyte/Llama-3.2-3B-Overthinker: This model excels at long-form reasoning and is capable of generating more in-depth and thoughtful responses.
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [huihui-ai/Llama-3.2-3B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) as a base.
### Models Merged
The following models were included in the merge:
* [bunnycore/Llama-3.2-3B-Pure-RP](https://huggingface.co/bunnycore/Llama-3.2-3B-Pure-RP)
* [Lyte/Llama-3.2-3B-Overthinker](https://huggingface.co/Lyte/Llama-3.2-3B-Overthinker)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Lyte/Llama-3.2-3B-Overthinker
parameters:
density: 0.5
weight: 0.5
- model: bunnycore/Llama-3.2-3B-Pure-RP
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
parameters:
normalize: false
int8_mask: true
dtype: float16
```
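As a hedged sketch (not an official snippet from the model author), the merged checkpoint loads like any Llama-architecture model with transformers:
```python
# Minimal sketch, assuming standard transformers chat usage; not an official
# example from the model author.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Llama-3.2-3B-All-Mix"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize the TIES merge method in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```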
|
Abirami1213/chatModel
|
Abirami1213
| 2024-10-20T17:57:29Z | 143 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-20T17:55:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
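Until the authors document the intended usage, a minimal hedged sketch for this GPT-2-architecture checkpoint is:
```python
# Minimal sketch; the intended usage is undocumented, so this only shows
# generic text generation with the GPT-2-architecture checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="Abirami1213/chatModel")
print(generator("Hello, how are you?", max_new_tokens=30)[0]["generated_text"])
```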
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
deman539/snowflake-arctic-embed-m-long-finetuned-indeed-jobs
|
deman539
| 2024-10-20T17:55:05Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"nomic_bert",
"sentence-similarity",
"feature-extraction",
"custom_code",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-10-20T17:54:22Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NomicBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("deman539/snowflake-arctic-embed-m-long-finetuned-indeed-jobs")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.0
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
prithivMLmods/Canopus-Snoopy-Charlie-Brown-Flux-LoRA
|
prithivMLmods
| 2024-10-20T17:49:13Z | 91 | 10 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"Snoopy Charlie Brown",
"flux",
"cartoon",
"flux-dev",
"art",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-09-08T12:31:24Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- Snoopy Charlie Brown
- flux
- cartoon
- flux-dev
- art
widget:
- text: 'Snoopy and Charlie Brown hugging on a grassy field with a tree in the background, under a light blue sky with wispy clouds.'
output:
url: images/000.png
- text: 'Snoopy and Charlie Brown hugging under a starry night sky, with a tree in the background and a grassy field illuminated by moonlight.'
output:
url: images/111.png
- text: 'Charlie Brown and Snoopy, clad in space suits, stand near their shuttle, mesmerized by a colossal black hole pulling in light from the distant galaxy.'
output:
url: images/222.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Snoopy Charlie Brown
license: creativeml-openrail-m
---
# Snoopy-Charlie-Brown-Flux-LoRA
<Gallery />
**The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.**
## Model description
**prithivMLmods/Canopus-Snoopy-Charlie-Brown-Flux-LoRA**
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW8bit | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 25 & 1.7K+ |
| Epoch | 20 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 100+ [Hi-RES]
...and more.
## Trigger prompts
A black ford mustang parked in the parking lot, in the style of futurism influence, uhd image, furaffinity, focus, street photography, thin steel forms, 32k uhd --ar 2:3 --v 5
Ferrari car f3 458 tt, in the style of liam wong, fujifilm x-t4, multiple exposure, tsubasa nakai, uhd image, pinturicchio, crimson --ar 16:9 --v 5.2
Bugatti Veyron in cobalt blue metallic, high detail, octane render, 8k
| Parameter | Value |
|-----------------|---------------------------------------------------------------------------------------|
| Prompt | Bugatti Veyron in cobalt blue metallic, high detail, octane render, 8k |
| Sampler | euler |
## Setting Up
```
import torch
from diffusers import DiffusionPipeline
base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
lora_repo = "prithivMLmods/Canopus-Snoopy-Charlie-Brown-Flux-LoRA"
trigger_word = "Snoopy Charlie Brown" # Leave trigger_word blank if not used.
pipe.load_lora_weights(lora_repo)
device = torch.device("cuda")
pipe.to(device)
```
## Trigger words
You should use `Snoopy Charlie Brown` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/prithivMLmods/Canopus-Snoopy-Charlie-Brown-Flux-LoRA/tree/main) them in the Files & versions tab.
|
QuantFactory/magnum-v4-9b-GGUF
|
QuantFactory
| 2024-10-20T17:29:00Z | 92 | 2 |
transformers
|
[
"transformers",
"gguf",
"chat",
"text-generation",
"en",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-10-20T16:30:29Z |
---
license: gemma
language:
- en
tags:
- chat
pipeline_tag: text-generation
library_name: transformers
---
[](https://hf.co/QuantFactory)
# QuantFactory/magnum-v4-9b-GGUF
This is a quantized version of [anthracite-org/magnum-v4-9b](https://huggingface.co/anthracite-org/magnum-v4-9b) created using llama.cpp
# Original Model Card

This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.
This model is fine-tuned on top of [gemma 2 9b (chatML'ified)](https://huggingface.co/IntervitensInc/gemma-2-9b-chatml).
## Prompting
A typical input would look like this:
```py
<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
```
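For programmatic use, the same ChatML layout can be assembled as a plain string; a minimal sketch for a single-turn exchange (the helper name is ours, not part of the model's tooling):
```py
# Minimal sketch: build the ChatML prompt shown above for a single turn.
# The helper name is illustrative, not part of any official tooling.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Hi there!"))
```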
## SillyTavern templates
Below are Instruct and Context templates for use within SillyTavern.
<details><summary>context template</summary>
```yaml
{
"story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
"example_separator": "",
"chat_start": "",
"use_stop_strings": false,
"allow_jailbreak": false,
"always_force_name2": true,
"trim_sentences": false,
"include_newline": false,
"single_line": false,
"name": "Magnum ChatML"
}
```
</details><br>
<details><summary>instruct template</summary>
```yaml
{
"system_prompt": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\n• Maintain the character persona but allow it to evolve with the story.\n• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\n• All types of outputs are encouraged; respond accordingly to the narrative.\n• Include dialogues, actions, and thoughts in each response.\n• Utilize all five senses to describe scenarios within {{char}}'s dialogue.\n• Use emotional symbols such as "!" and "~" in appropriate contexts.\n• Incorporate onomatopoeia when suitable.\n• Allow time for {{user}} to respond with their own input, respecting their agency.\n• Act as secondary characters and NPCs as needed, and remove them when appropriate.\n• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\n• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\n• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\n• Repetitive and monotonous outputs.\n• Positivity bias in your replies.\n• Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"input_sequence": "<|im_start|>user\n",
"output_sequence": "<|im_start|>assistant\n",
"last_output_sequence": "",
"system_sequence": "<|im_start|>system\n",
"stop_sequence": "<|im_end|>",
"wrap": false,
"macro": true,
"names": true,
"names_force_groups": true,
"activation_regex": "",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"first_output_sequence": "",
"skip_examples": false,
"output_suffix": "<|im_end|>\n",
"input_suffix": "<|im_end|>\n",
"system_suffix": "<|im_end|>\n",
"user_alignment_message": "",
"system_same_as_user": false,
"last_system_sequence": "",
"name": "Magnum ChatML"
}
```
</details><br>
## Axolotl config
<details><summary>See axolotl config</summary>
```yaml
base_model: /workspace/data/gemma-2-9b-chatml
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: false
liger_rms_norm: false
liger_swiglu: true
liger_cross_entropy: true
liger_fused_linear_cross_entropy: false
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: anthracite-org/c2_logs_16k_llama_v1.1
type: sharegpt
conversation: chatml
- path: NewEden/Claude-Instruct-5K
type: sharegpt
conversation: chatml
- path: anthracite-org/kalo-opus-instruct-22k-no-refusal
type: sharegpt
conversation: chatml
- path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
type: sharegpt
conversation: chatml
- path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
type: sharegpt
conversation: chatml
- path: anthracite-org/nopm_claude_writing_fixed
type: sharegpt
conversation: chatml
- path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
type: sharegpt
conversation: chatml
- path: anthracite-org/kalo_opus_misc_240827
type: sharegpt
conversation: chatml
- path: anthracite-org/kalo_misc_part2
type: sharegpt
conversation: chatml
chat_template: chatml
shuffle_merged_datasets: false
default_system_message: "You are a helpful assistant that responds to the user."
dataset_prepared_path: /workspace/data/9b-fft-data
val_set_size: 0.0
output_dir: /workspace/data/9b-fft-out
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
wandb_project: 9b-Nemo-config-fft
wandb_entity:
wandb_watch:
wandb_name: attempt-01
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 4
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00001
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
auto_resume_from_checkpoints: true
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 1
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.001
fsdp:
fsdp_config:
special_tokens:
pad_token: <pad>
```
</details><br>
## Credits
We'd like to thank Recursal / Featherless for sponsoring the compute for this training run. Featherless has hosted our Magnum models since the first 72B and has given thousands of people access to our models, helping us grow.
We would also like to thank all members of Anthracite who made this finetune possible.
## Datasets
- [anthracite-org/c2_logs_16k_llama_v1.1](https://huggingface.co/datasets/anthracite-org/c2_logs_16k_llama_v1.1)
- [NewEden/Claude-Instruct-5K](https://huggingface.co/datasets/NewEden/Claude-Instruct-5K)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned)
- [lodrick-the-lafted/kalo-opus-instruct-3k-filtered](https://huggingface.co/datasets/lodrick-the-lafted/kalo-opus-instruct-3k-filtered)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
- [Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned)
- [anthracite-org/kalo_opus_misc_240827](https://huggingface.co/datasets/anthracite-org/kalo_opus_misc_240827)
- [anthracite-org/kalo_misc_part2](https://huggingface.co/datasets/anthracite-org/kalo_misc_part2)
## Training
The training was done for 2 epochs. We used 8x[H100s](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/) for the full-parameter fine-tuning of the model.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Safety
...
|
bartowski/magnum-v4-72b-GGUF
|
bartowski
| 2024-10-20T17:28:39Z | 1,078 | 4 | null |
[
"gguf",
"chat",
"text-generation",
"en",
"base_model:anthracite-org/magnum-v4-72b",
"base_model:quantized:anthracite-org/magnum-v4-72b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2024-10-20T14:31:02Z |
---
base_model: anthracite-org/magnum-v4-72b
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- chat
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of magnum-v4-72b
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3930">b3930</a> for quantization.
Original model: https://huggingface.co/anthracite-org/magnum-v4-72b
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [magnum-v4-72b-Q8_0.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/tree/main/magnum-v4-72b-Q8_0) | Q8_0 | 77.26GB | true | Extremely high quality, generally unneeded but max available quant. |
| [magnum-v4-72b-Q6_K.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/tree/main/magnum-v4-72b-Q6_K) | Q6_K | 64.35GB | true | Very high quality, near perfect, *recommended*. |
| [magnum-v4-72b-Q5_K_M.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/tree/main/magnum-v4-72b-Q5_K_M) | Q5_K_M | 54.45GB | true | High quality, *recommended*. |
| [magnum-v4-72b-Q5_K_S.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/tree/main/magnum-v4-72b-Q5_K_S) | Q5_K_S | 51.38GB | true | High quality, *recommended*. |
| [magnum-v4-72b-Q4_K_M.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/blob/main/magnum-v4-72b-Q4_K_M.gguf) | Q4_K_M | 47.42GB | false | Good quality, default size for most use cases, *recommended*. |
| [magnum-v4-72b-Q4_K_S.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/blob/main/magnum-v4-72b-Q4_K_S.gguf) | Q4_K_S | 43.89GB | false | Slightly lower quality with more space savings, *recommended*. |
| [magnum-v4-72b-Q4_0.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/blob/main/magnum-v4-72b-Q4_0.gguf) | Q4_0 | 41.38GB | false | Legacy format, generally not worth using over similarly sized formats |
| [magnum-v4-72b-Q3_K_XL.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/blob/main/magnum-v4-72b-Q3_K_XL.gguf) | Q3_K_XL | 40.60GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [magnum-v4-72b-IQ4_XS.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/blob/main/magnum-v4-72b-IQ4_XS.gguf) | IQ4_XS | 39.71GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [magnum-v4-72b-Q3_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/blob/main/magnum-v4-72b-Q3_K_L.gguf) | Q3_K_L | 39.51GB | false | Lower quality but usable, good for low RAM availability. |
| [magnum-v4-72b-Q3_K_M.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/blob/main/magnum-v4-72b-Q3_K_M.gguf) | Q3_K_M | 37.70GB | false | Low quality. |
| [magnum-v4-72b-IQ3_M.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/blob/main/magnum-v4-72b-IQ3_M.gguf) | IQ3_M | 35.50GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [magnum-v4-72b-Q3_K_S.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/blob/main/magnum-v4-72b-Q3_K_S.gguf) | Q3_K_S | 34.49GB | false | Low quality, not recommended. |
| [magnum-v4-72b-IQ3_XXS.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/blob/main/magnum-v4-72b-IQ3_XXS.gguf) | IQ3_XXS | 31.85GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [magnum-v4-72b-Q2_K_L.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/blob/main/magnum-v4-72b-Q2_K_L.gguf) | Q2_K_L | 31.03GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [magnum-v4-72b-Q2_K.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/blob/main/magnum-v4-72b-Q2_K.gguf) | Q2_K | 29.81GB | false | Very low quality but surprisingly usable. |
| [magnum-v4-72b-IQ2_M.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/blob/main/magnum-v4-72b-IQ2_M.gguf) | IQ2_M | 29.34GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [magnum-v4-72b-IQ2_XS.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/blob/main/magnum-v4-72b-IQ2_XS.gguf) | IQ2_XS | 27.06GB | false | Low quality, uses SOTA techniques to be usable. |
| [magnum-v4-72b-IQ2_XXS.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/blob/main/magnum-v4-72b-IQ2_XXS.gguf) | IQ2_XXS | 25.49GB | false | Very low quality, uses SOTA techniques to be usable. |
| [magnum-v4-72b-IQ1_M.gguf](https://huggingface.co/bartowski/magnum-v4-72b-GGUF/blob/main/magnum-v4-72b-IQ1_M.gguf) | IQ1_M | 23.74GB | false | Extremely low quality, *not* recommended. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method but with the embeddings and output weights quantized to Q8_0 instead of their normal default.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/magnum-v4-72b-GGUF --include "magnum-v4-72b-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/magnum-v4-72b-GGUF --include "magnum-v4-72b-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (magnum-v4-72b-Q8_0) or download them all in place (./)
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
LLM2407/samsum
|
LLM2407
| 2024-10-20T17:10:21Z | 117 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-10-20T05:09:21Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# samsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7693
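A minimal, hedged usage sketch (not from the original author) for dialogue summarization with this checkpoint:
```python
# Minimal sketch (not an official example): summarize a dialogue with the
# fine-tuned T5 checkpoint via the summarization pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="LLM2407/samsum")
dialogue = "Anna: Are we still on for lunch?\nBen: Yes, 12:30 at the usual place."
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```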
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1232 | 1.0 | 1842 | 1.8594 |
| 2.0019 | 2.0 | 3684 | 1.8068 |
| 1.9604 | 3.0 | 5526 | 1.7807 |
| 1.9283 | 4.0 | 7368 | 1.7771 |
| 1.9285 | 5.0 | 9210 | 1.7693 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
abatpool/mal-mms
|
abatpool
| 2024-10-20T17:04:38Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:facebook/mms-1b-fl102",
"base_model:finetune:facebook/mms-1b-fl102",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-10-20T15:44:30Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-fl102
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: mal-mms
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_17_0
type: common_voice_17_0
config: ml
split: test
args: ml
metrics:
- name: Wer
type: wer
value: 0.5393294648613798
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mal-mms
This model is a fine-tuned version of [facebook/mms-1b-fl102](https://huggingface.co/facebook/mms-1b-fl102) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3051
- Wer: 0.5393
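A hedged usage sketch (not from the original author; the audio path is a placeholder) for Malayalam transcription with this checkpoint:
```python
# Minimal sketch (not an official example): transcribe Malayalam speech with
# the fine-tuned MMS checkpoint. The audio path is a placeholder.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="abatpool/mal-mms")
print(asr("sample_malayalam.wav")["text"])
```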
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 8.748 | 1.5625 | 100 | 0.4114 | 0.6370 |
| 0.4627 | 3.125 | 200 | 0.3346 | 0.6006 |
| 0.3883 | 4.6875 | 300 | 0.3143 | 0.5725 |
| 0.3596 | 6.25 | 400 | 0.3133 | 0.5709 |
| 0.3294 | 7.8125 | 500 | 0.3069 | 0.5603 |
| 0.3078 | 9.375 | 600 | 0.3073 | 0.5516 |
| 0.2881 | 10.9375 | 700 | 0.3110 | 0.5522 |
| 0.2755 | 12.5 | 800 | 0.3041 | 0.5519 |
| 0.2627 | 14.0625 | 900 | 0.3163 | 0.5467 |
| 0.245 | 15.625 | 1000 | 0.3009 | 0.5432 |
| 0.2303 | 17.1875 | 1100 | 0.3074 | 0.5374 |
| 0.2233 | 18.75 | 1200 | 0.3123 | 0.5413 |
| 0.2142 | 20.3125 | 1300 | 0.3123 | 0.5397 |
| 0.2125 | 21.875 | 1400 | 0.3088 | 0.5403 |
| 0.2025 | 23.4375 | 1500 | 0.3055 | 0.5416 |
| 0.2072 | 25.0 | 1600 | 0.3051 | 0.5393 |
### Framework versions
- Transformers 4.46.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Nucha/Nucha_ITSkillNER_BERT
|
Nucha
| 2024-10-20T16:53:36Z | 35 | 1 | null |
[
"safetensors",
"bert",
"Skills",
"NER",
"SkillNER",
"BERT",
"token-classification",
"en",
"base_model:Nucha/Nucha_ITSkillNER_BERT",
"base_model:finetune:Nucha/Nucha_ITSkillNER_BERT",
"license:mit",
"region:us"
] |
token-classification
| 2024-10-07T08:52:57Z |
---
license:
- mit
language:
- en
base_model:
- Nucha/Nucha_SkillNER_BERT
tags:
- Skills
- NER
- SkillNER
- BERT
widget:
- text: "Sample text used for testing"
pipeline_tag: token-classification
---
# Computing Skill NER
**Nucha_SkillNER_BERT** is a Named Entity Recognition (NER) model specifically fine-tuned to recognize skill-related entities from text, focusing on identifying both hard and soft skills. This model is built on top of a BERT-based architecture, allowing it to leverage contextual understanding for accurate extraction of skill-related information. It is particularly useful for analyzing job descriptions, resumes, or any text where skills are explicitly mentioned.
The model supports the recognition of multiple skill categories, including technical skills (e.g., programming languages, software tools) and soft skills (e.g., communication, leadership). It is ideal for applications in recruitment, talent management, or skill-based data analysis.
## How to Use
You can use the **Nucha/Nucha_SkillNER_BERT** model for Named Entity Recognition (NER) by loading it directly from Hugging Face's **transformers** library. Below is an example of how to use the model with the **pipeline** API for entity extraction.
### Step-by-Step Example:
```python
# Libraries
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
# Load the pre-trained model and tokenizer
model_name = "Nucha/Nucha_SkillNER_BERT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
# Create a NER pipeline
ner_pipeline = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
# Sample text
text = "I have experience in Python, JavaScript, and cloud technologies like AWS and Azure."
# Run the pipeline on the text
ner_results = ner_pipeline(text)
# Display the results
for entity in ner_results:
print(f"Entity: {entity['word']}, Label: {entity['entity_group']}, Score: {entity['score']:.4f}")
```
### Output Explanation:
- Entity: This is the word or phrase identified in the text that matches one of the model's recognized categories.
- Label: The classification label assigned to the entity, such as **SKILL** or **TECHNOLOGY**.
- Score: The confidence score of the model for the identified entity, represented as a floating-point number.
## Demo
The **Nucha/Nucha_SkillNER_BERT** model is designed for Named Entity Recognition (NER) specifically targeting skill-related entities in text. This demo allows users to input any text and see how well the model identifies different skills.
https://huggingface.co/spaces/Nucha/NuchaSkillNER
### How to Use:
- Input Text: Enter any text that contains information about skills or related topics. For example, you can input job descriptions, resumes, or any relevant text.
- Analyze: Click the "Analyze" button to run the model on the provided text. The model will process the input and extract named entities, specifically skills.
- Results: The output will display the recognized entities along with their labels and confidence scores. The labels will indicate the type of skills identified (e.g., programming languages, frameworks, tools).
## Evaluation
The **Nucha/Nucha_SkillNER_BERT** model has undergone rigorous evaluation to ensure its effectiveness in Named Entity Recognition (NER) tasks, specifically in identifying and categorizing skills relevant to various domains. The evaluation was conducted on a diverse set of datasets designed to reflect real-world scenarios.
### Metrics
The model's performance was assessed using standard NER metrics:
- **Accuracy**: Measures the overall correctness of the model's predictions.
- **Precision**: Indicates the proportion of true positive results in the total predicted positives.
- **Recall**: Reflects the ability of the model to find all relevant instances in the dataset.
- **F1 Score**: The harmonic mean of precision and recall, providing a single score that balances both metrics.
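Reports in the format shown below are typically produced with `seqeval`. A minimal sketch, assuming `y_true` and `y_pred` are lists of per-sentence BIO tag sequences (the values here are placeholders for illustration):
```python
from seqeval.metrics import classification_report

# Placeholder tag sequences; in practice these come from the model's
# predictions over the held-out split.
y_true = [["O", "B-HSKILL", "I-HSKILL", "O", "B-SSKILL"]]
y_pred = [["O", "B-HSKILL", "I-HSKILL", "O", "B-SSKILL"]]

print(classification_report(y_true, y_pred, digits=2))
```
This yields a table in the same format as the report below.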
```
              precision    recall  f1-score   support

      HSKILL       0.89      0.91      0.90      3708
      SSKILL       0.91      0.91      0.91      2299

   micro avg       0.90      0.91      0.90      6007
   macro avg       0.90      0.91      0.91      6007
weighted avg       0.90      0.91      0.90      6007

Accuracy: 0.9972517975663717 (Train: 5083 / Test: 1017)
```
#### Testing Data
The evaluation used a held-out test split of 1,017 examples, with 5,083 examples used for training:
```
Test: 1017 / Train: 5083
```
### Results
Below is a sample of the raw, token-level pipeline output (without aggregation), showing the B-/I- tagging scheme:
```json
[
  {
    "entity": "B-HSKILL",
    "score": 0.9990522,
    "index": 110,
    "word": "machine",
    "start": 581,
    "end": 588
  },
  {
    "entity": "I-HSKILL",
    "score": 0.9995209,
    "index": 111,
    "word": "learning",
    "start": 589,
    "end": 597
  },
  ...
]
```
## Conclusion
The **Nucha/Nucha_SkillNER_BERT** model demonstrates strong performance in identifying skills in text data, making it a valuable tool for applications in recruitment, resume screening, and skill extraction tasks. Continuous improvements and further evaluations will enhance its accuracy and adaptability to specific use cases.
|
lukasgrouleff/distilbert-base-uncased-distilled-clinc-finalmodel-tuned
|
lukasgrouleff
| 2024-10-20T16:42:41Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-20T16:42:29Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc-finalmodel-tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc-finalmodel-tuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1821
- Accuracy: 0.9490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
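For reference, these settings correspond roughly to the following 🤗 `TrainingArguments` (a sketch, not the original training script; `output_dir` is a placeholder):
```python
# Rough equivalent of the listed hyperparameters; Adam betas/epsilon above
# are the TrainingArguments defaults, so they are not set explicitly.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-distilled-clinc-finalmodel-tuned",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=12,
)
```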
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 477 | 0.9872 | 0.7568 |
| 1.5625 | 2.0 | 954 | 0.4304 | 0.8981 |
| 0.6762 | 3.0 | 1431 | 0.2624 | 0.9326 |
| 0.3206 | 4.0 | 1908 | 0.2171 | 0.9419 |
| 0.2139 | 5.0 | 2385 | 0.2028 | 0.9432 |
| 0.1788 | 6.0 | 2862 | 0.1948 | 0.9465 |
| 0.1638 | 7.0 | 3339 | 0.1905 | 0.9465 |
| 0.1541 | 8.0 | 3816 | 0.1865 | 0.9487 |
| 0.1496 | 9.0 | 4293 | 0.1850 | 0.9461 |
| 0.1464 | 10.0 | 4770 | 0.1833 | 0.9494 |
| 0.1439 | 11.0 | 5247 | 0.1822 | 0.9477 |
| 0.1424 | 12.0 | 5724 | 0.1821 | 0.9490 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
katopz/kbtg-kpoint-v1-fused
|
katopz
| 2024-10-20T16:29:25Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"mlx",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-20T16:17:34Z |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- mlx
license: llama3.2
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n“Licensee” or “you” means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\
\ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display “Built with\
\ Llama” on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include “Llama” at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a “Notice” text file distributed as a part of such copies: “Llama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals’ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software “bug,” or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
base_model: meta-llama/Llama-3.2-3B-Instruct
---
# katopz/kbtg-kpoint-v1-fused
The Model [katopz/kbtg-kpoint-v1-fused](https://huggingface.co/katopz/kbtg-kpoint-v1-fused) was converted to MLX format from [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) using mlx-lm version **0.19.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("katopz/kbtg-kpoint-v1-fused")
prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
QuantFactory/Llama-3-8B-Instruct-Finance-RAG-GGUF
|
QuantFactory
| 2024-10-20T16:22:33Z | 3,471 | 36 |
transformers
|
[
"transformers",
"gguf",
"finance",
"text-generation",
"en",
"dataset:virattt/financial-qa-10K",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-10-20T15:43:33Z |
---
library_name: transformers
tags:
- finance
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- virattt/financial-qa-10K
language:
- en
pipeline_tag: text-generation
---
[](https://hf.co/QuantFactory)
# QuantFactory/Llama-3-8B-Instruct-Finance-RAG-GGUF
This is a quantized version of [curiousily/Llama-3-8B-Instruct-Finance-RAG](https://huggingface.co/curiousily/Llama-3-8B-Instruct-Finance-RAG) created using llama.cpp.
# Original Model Card
# Llama 3 8B Instruct (Financial RAG)
This model is a fine-tuned version of the original [Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) model
on 4000 examples from the [virattt/financial-qa-10K](https://huggingface.co/datasets/virattt/financial-qa-10K) dataset.
The model is fine-tuned using a LoRA adapter for RAG use cases. It is optimized to answer a question based on a context:
```txt
Answer the question:
{question}
Using the information:
{context}
```
## Usage
Load the model:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

MODEL_NAME = "curiousily/Llama-3-8B-Instruct-Finance-RAG"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    device_map="auto"
)

pipe = pipeline(
    task="text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=128,
    return_full_text=False,
)
```
Format the prompt (uses the original Instruct prompt format):
````py
prompt = """
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Use only the information to answer the question<|eot_id|><|start_header_id|>user<|end_header_id|>
How much did the company's net earnings amount to in fiscal 2022?
Information:
```
Net earnings were $17.1 billion in fiscal 2022.
```<|eot_id|><|start_header_id|>assistant<|end_header_id|>
"""
````
And make a prediction:
```py
outputs = pipe(prompt)
print(outputs[0]["generated_text"])
```
```
$17.1 billion
```
Here's a helper function to build your prompts:
````py
from textwrap import dedent

def create_test_prompt(data_row):
    prompt = dedent(f"""
    {data_row["question"]}

    Information:

    ```
    {data_row["context"]}
    ```
    """)
    messages = [
        {"role": "system", "content": "Use only the information to answer the question"},
        {"role": "user", "content": prompt},
    ]
    return tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
````
Where `data_row` must be a dict:
```
data_row = {
"question": "...",
"context": "..."
}
```
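Putting it together, a prediction for a single row (reusing the example question and context from above) might look like this:
```py
row = {
    "question": "How much did the company's net earnings amount to in fiscal 2022?",
    "context": "Net earnings were $17.1 billion in fiscal 2022.",
}

outputs = pipe(create_test_prompt(row))
print(outputs[0]["generated_text"])
```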
## Sample Predictions
Here's a sample of the predictions from *trained* and *untrained* models
```txt
Example 1
answer: Delta Air Lines' agreements with its regional carriers typically last at least ten years with options for extensions. Delta controls operational aspects like scheduling and pricing.
trained: Delta Connection agreements typically last at least ten years with options for extensions.
untrained: According to the information, the terms of Delta Air Lines' agreements with its regional carriers through Delta Connection are:
1. The agreements typically last at least ten years.
2. There are options for extensions.
3. Delta controls major operational aspects like scheduling and pricing.
4. The regional carriers supply the services.
Example 2
answer: The company evaluates acquisition-related intangibles for impairment by comparing the asset's carrying amount to undiscounted future net cash flows expected from the asset. An impairment loss is recognized if the carrying amount exceeds the asset's recoverable amount.
trained: The company evaluates acquisition-related intangible and other long-lived assets for impairment whenever events or changes in circumstances indicate that the carrying amount may not be recoverable. This involves measuring the recoverability of the asset by comparing its carrying amount to the future undiscounted net cash flows expected to be generated by the asset group.
untrained: According to the information, the company uses the following criteria to determine whether an impairment loss should be recognized on acquisition-related intangible assets:
* Events or changes in circumstances that indicate the carrying amount of an asset may not be recoverable.
* A comparison of the carrying amount of an asset to future undiscounted net cash flows expected to be generated by the asset group.
In other words, the company evaluates whether the asset's carrying amount is recoverable by comparing it to the expected future cash flows, which involves assumptions about future prospects and computations of estimated future cash flows.
Example 3
answer: In the United States, the approval process for biosimilars is governed by the Public Health Service Act (PHSA) and the regulations implementing these statutes, specifically including provisions made under federal health care reform legislation enacted in March 2010.
trained: The Federal Food, Drug, and Cosmetic Act (the FFDCA) and the Public Health Service Act (PHSA)
untrained: The legal framework that governs the approval process for biosimilars in the United States is the Federal Food, Drug, and Cosmetic Act (FFDCA) and the Public Health Service Act (PHSA), as well as the regulations implementing these statutes.
Example 4
answer: Timothy S. Teter holds a B.S. degree in Mechanical Engineering from the University of California at Davis and a J.D. degree from Stanford Law School.
trained: B.S. in Mechanical Engineering from the University of California at Davis and a J.D. from Stanford Law School
untrained: According to the information, Timothy S. Teter holds:
1. A B.S. degree in Mechanical Engineering from the University of California at Davis.
2. A J.D. degree from Stanford Law School.
Example 5
answer: Beginning in fiscal year 2024, the company plans to exclude paused Connected Fitness subscriptions from its new 'Ending Paid Connected Fitness Subscriptions' metric and will treat a pause action as a churn event in its 'Average Net Monthly Paid Connected Fitness Subscription Churn' metric.
trained: Starting in fiscal year 2024, the company will no longer include paused Connected Fitness subscriptions in their Ending Paid Connected Fitness Subscriptions metric and will treat a pause action as a churn event in their Average Net Monthly Paid Connected Fitness Subscription Churn.
untrained: Starting in fiscal year 2024, the company will:
* No longer include paused Connected Fitness subscriptions in the Ending Paid Connected Fitness Subscriptions metric
* Treat a pause action as a churn event in the Average Net Monthly Paid Connected Fitness Subscription Churn
```
## License
Uses the original Llama 3 License.
A custom commercial license is available at: https://llama.meta.com/llama3/license
|
AppyFizz/calrealxl
|
AppyFizz
| 2024-10-20T16:09:53Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-10-20T16:08:30Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### calrealxl on Stable Diffusion via Dreambooth
#### model by AppyFizz
This is the Stable Diffusion model fine-tuned on the calrealxl concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **calrealxl woman**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
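For example, a minimal `diffusers` inference sketch (untested; the precision and device settings are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load this Dreambooth fine-tune and generate with the instance prompt.
pipe = StableDiffusionPipeline.from_pretrained(
    "AppyFizz/calrealxl", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of calrealxl woman").images[0]
image.save("calrealxl.png")
```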
Here are the images used for training this concept:





|
tdnathmlenthusiast/speecht5_finetuned_voice_dataset_bn_v_3
|
tdnathmlenthusiast
| 2024-10-20T16:05:09Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-10-19T20:56:30Z |
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_voice_dataset_bn_v_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voice_dataset_bn_v_3
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 125
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 0.6046 | 12.2699 | 250 | 0.5646 |
| 0.5583 | 24.5399 | 500 | 0.5268 |
| 0.5364 | 36.8098 | 750 | 0.5188 |
| 0.5171 | 49.0798 | 1000 | 0.5087 |
| 0.5098 | 61.3497 | 1250 | 0.5018 |
| 0.501 | 73.6196 | 1500 | 0.5022 |
| 0.4984 | 85.8896 | 1750 | 0.4955 |
| 0.4929 | 98.1595 | 2000 | 0.5000 |
| 0.4933 | 110.4294 | 2250 | 0.4944 |
| 0.4868 | 122.6994 | 2500 | 0.5006 |
| 0.4805 | 134.9693 | 2750 | 0.4991 |
| 0.4802 | 147.2393 | 3000 | 0.5008 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
mradermacher/Smoke_7B-i1-GGUF
|
mradermacher
| 2024-10-20T16:02:06Z | 118 | 1 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:FourOhFour/Smoke_7B",
"base_model:quantized:FourOhFour/Smoke_7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-20T13:17:41Z |
---
base_model: FourOhFour/Smoke_7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/FourOhFour/Smoke_7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Smoke_7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
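For example, with `llama-cpp-python` (a minimal sketch; the file name refers to the Q4_K_M quant from the table below, which you would download first):
```python
from llama_cpp import Llama

# Assumes the quant file has already been downloaded from this repo.
llm = Llama(model_path="Smoke_7B.i1-Q4_K_M.gguf", n_ctx=4096)

out = llm("Write a short story about smoke.", max_tokens=128)
print(out["choices"][0]["text"])
```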
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Smoke_7B-i1-GGUF/resolve/main/Smoke_7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MaziyarPanahi/Chocolatine-3B-Instruct-DPO-Revised-GGUF
|
MaziyarPanahi
| 2024-10-20T15:59:29Z | 184 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:jpacifico/Chocolatine-3B-Instruct-DPO-Revised",
"base_model:quantized:jpacifico/Chocolatine-3B-Instruct-DPO-Revised",
"region:us",
"conversational"
] |
text-generation
| 2024-10-20T15:40:51Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: Chocolatine-3B-Instruct-DPO-Revised-GGUF
base_model: jpacifico/Chocolatine-3B-Instruct-DPO-Revised
inference: false
model_creator: jpacifico
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Chocolatine-3B-Instruct-DPO-Revised-GGUF](https://huggingface.co/MaziyarPanahi/Chocolatine-3B-Instruct-DPO-Revised-GGUF)
- Model creator: [jpacifico](https://huggingface.co/jpacifico)
- Original model: [jpacifico/Chocolatine-3B-Instruct-DPO-Revised](https://huggingface.co/jpacifico/Chocolatine-3B-Instruct-DPO-Revised)
## Description
[MaziyarPanahi/Chocolatine-3B-Instruct-DPO-Revised-GGUF](https://huggingface.co/MaziyarPanahi/Chocolatine-3B-Instruct-DPO-Revised-GGUF) contains GGUF format model files for [jpacifico/Chocolatine-3B-Instruct-DPO-Revised](https://huggingface.co/jpacifico/Chocolatine-3B-Instruct-DPO-Revised).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
crocutacrocuto/dinov2-base-MEGbis-5
|
crocutacrocuto
| 2024-10-20T15:56:42Z | 137 | 0 |
transformers
|
[
"transformers",
"safetensors",
"dinov2",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-10-20T15:56:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
esahit/ul2-base-dutch-finetuned-oba-book-search
|
esahit
| 2024-10-20T15:46:53Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:yhavinga/ul2-base-dutch",
"base_model:finetune:yhavinga/ul2-base-dutch",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-10-20T09:57:55Z |
---
library_name: transformers
license: apache-2.0
base_model: yhavinga/ul2-base-dutch
tags:
- generated_from_trainer
model-index:
- name: ul2-base-dutch-finetuned-oba-book-search
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ul2-base-dutch-finetuned-oba-book-search
This model is a fine-tuned version of [yhavinga/ul2-base-dutch](https://huggingface.co/yhavinga/ul2-base-dutch) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5040
- Top-5-accuracy: 0.0597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.3
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Top-5-accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------------:|
| 6.7559 | 0.0848 | 500 | 7.0741 | 0.0 |
| 7.3594 | 0.1696 | 1000 | 7.0888 | 0.0 |
| 7.4457 | 0.2544 | 1500 | 6.8574 | 0.0 |
| 7.6522 | 0.3392 | 2000 | 7.2824 | 0.0 |
| 7.4598 | 0.4239 | 2500 | 7.1592 | 0.0 |
| 7.4733 | 0.5087 | 3000 | 6.8309 | 0.0 |
| 7.1533 | 0.5935 | 3500 | 6.3314 | 0.0 |
| 7.1903 | 0.6783 | 4000 | 6.6715 | 0.0 |
| 12.2465 | 0.7631 | 4500 | 7.5477 | 0.0 |
| 7.0061 | 0.8479 | 5000 | 6.7576 | 0.0 |
| 6.7448 | 0.9327 | 5500 | 6.2698 | 0.0 |
| 6.4934 | 1.0175 | 6000 | 6.0520 | 0.0 |
| 6.7022 | 1.1023 | 6500 | 6.4743 | 0.0 |
| 6.6138 | 1.1870 | 7000 | 6.6552 | 0.0 |
| 6.1879 | 1.2718 | 7500 | 5.8394 | 0.0 |
| 6.3701 | 1.3566 | 8000 | 6.2708 | 0.0 |
| 6.0675 | 1.4414 | 8500 | 5.8804 | 0.0 |
| 5.9228 | 1.5262 | 9000 | 5.4786 | 0.0796 |
| 5.8256 | 1.6110 | 9500 | 5.8534 | 0.0 |
| 5.529 | 1.6958 | 10000 | 5.4673 | 0.0796 |
| 5.3783 | 1.7806 | 10500 | 5.1146 | 0.0 |
| 5.3029 | 1.8654 | 11000 | 5.1393 | 0.0 |
| 5.0497 | 1.9501 | 11500 | 4.8904 | 0.0 |
| 4.9395 | 2.0349 | 12000 | 4.7346 | 0.0 |
| 4.6926 | 2.1197 | 12500 | 4.6029 | 0.0 |
| 4.5387 | 2.2045 | 13000 | 4.3546 | 0.1393 |
| 4.3876 | 2.2893 | 13500 | 4.2308 | 0.0597 |
| 4.2131 | 2.3741 | 14000 | 4.1112 | 0.1990 |
| 4.0999 | 2.4589 | 14500 | 3.9334 | 0.0995 |
| 3.9525 | 2.5437 | 15000 | 3.8421 | 0.0 |
| 3.8629 | 2.6285 | 15500 | 3.7120 | 0.1592 |
| 3.7975 | 2.7132 | 16000 | 3.5973 | 0.0796 |
| 3.7205 | 2.7980 | 16500 | 3.5398 | 0.0796 |
| 3.6382 | 2.8828 | 17000 | 3.5131 | 0.2786 |
| 3.5967 | 2.9676 | 17500 | 3.5040 | 0.0597 |
### Framework versions
- Transformers 4.44.2
- Pytorch 1.13.0+cu116
- Datasets 3.0.0
- Tokenizers 0.19.1
|
danlou/persona-generator-llama-2-7b-qlora-merged
|
danlou
| 2024-10-20T15:43:58Z | 29 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:meta-llama/Llama-2-7b",
"base_model:finetune:meta-llama/Llama-2-7b",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-15T09:53:03Z |
---
license: llama2
base_model:
- meta-llama/Llama-2-7b
pipeline_tag: text-generation
library_name: transformers
---
The code below shows how this Buyer Persona generator can be used.
This model was developed for [MarketFit.ai](https://danlou.co/marketfitai).
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from tqdm import tqdm
device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "danlou/persona-generator-llama-2-7b-qlora-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)
def chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):
        yield lst[i:i + n]

def parse_outputs(output_text):
    try:
        output_lns = output_text.split('\n')
        assert len(output_lns) == 2
        assert len(output_lns[0].split(',')) == 2
        assert len(output_lns[1]) > 16
        name, age = [s.strip() for s in output_lns[0].split(',')]
        desc = output_lns[1].strip()
    except AssertionError:
        raise Exception('Malformed output.')
    try:
        age = int(age)
    except ValueError:
        raise Exception('Malformed output (age).')
    return {'name': name, 'age': age, 'description': desc}

def generate_personas(product, n=1, batch_size=32, parse=True):
    prompt = f"### Instruction:\nDescribe the ideal persona for this product:\n{product}\n\n### Response:\n"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    personas = []
    with tqdm(total=n) as pbar:
        for batch in chunks(range(n), batch_size):
            outputs = model.generate(input_ids,
                                     do_sample=True,
                                     num_beams=1,
                                     num_return_sequences=len(batch),
                                     max_length=512,
                                     min_length=32,
                                     temperature=0.9)
            for output_ids in outputs:
                output_decoded = tokenizer.decode(output_ids, skip_special_tokens=True)
                output_decoded = output_decoded[len(prompt):].strip()
                try:
                    if parse:
                        personas.append(parse_outputs(output_decoded))
                    else:
                        personas.append(output_decoded)
                except Exception as e:
                    print(e)
                    continue
            pbar.update(len(batch))
    return personas
product = "Koonie 10000mAh Rechargeable Desk Fan, 8-Inch Battery Operated Clip on Fan, USB Fan, 4 Speeds, Strong Airflow, Sturdy Clamp for Golf Cart Office Desk Outdoor Travel Camping Tent Gym Treadmill, Black (USB Gadgets > USB Fans)"
personas = generate_personas(product, n=3)
for e in personas:
print(e)
# Persona 1 - The yoga instructor
# {'name': 'Sarah', 'age': 28, 'description': 'Yoga instructor who is passionate about health and fitness. She works from a home studio where she also practices yoga and meditation. Sarah values products that are eco-friendly and sustainable. She loves products that are versatile and can be used for different purposes. Sarah is looking for a product that is durable and can withstand frequent use. She values products that are stylish and aesthetically pleasing.'}
# Persona 2 - The golf enthusiast
#{'name': 'Sophia', 'age': 60, 'description': "Golf enthusiast. Sophia spends most of her weekends on the golf course, and she needs a fan that she can carry around in her golf cart. She needs a fan that's lightweight, easy to clip on, and has a long battery life. She also wants a fan that's affordable, especially since she plays at different courses."}
# Persona 3 - The truck driver
# {'name': 'Mike', 'age': 32, 'description': "Truck driver who spends most of his day on the road. The cab of his truck can get hot and stuffy, and Mike needs a fan that can keep him comfortable and alert while he's driving. He needs a fan that's easy to install and adjust, so he can keep it on his dashboard and direct the airflow where he needs it most."}
```
|
cosmicthrillseeking/minisft
|
cosmicthrillseeking
| 2024-10-20T15:40:15Z | 141 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-20T15:38:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QuantFactory/TouchNight-Ministral-8B-Instruct-2410-HF-GGUF
|
QuantFactory
| 2024-10-20T15:36:01Z | 82 | 2 |
vllm
|
[
"vllm",
"gguf",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"license:other",
"region:us",
"conversational"
] | null | 2024-10-20T14:56:34Z |
---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
extra_gated_prompt: >-
# Mistral AI Research License
If You want to use a Mistral Model, a Derivative or an Output for any purpose that is not expressly authorized under this Agreement, You must request a license from Mistral AI, which Mistral AI may grant to You in Mistral AI's sole discretion. To discuss such a license, please contact Mistral AI via the website contact form: https://mistral.ai/contact/
## 1. Scope and acceptance
**1.1. Scope of the Agreement.** This Agreement applies to any use, modification, or Distribution of any Mistral Model by You, regardless of the source You obtained a copy of such Mistral Model.
**1.2. Acceptance.** By accessing, using, modifying, Distributing a Mistral Model, or by creating, using or distributing a Derivative of the Mistral Model, You agree to be bound by this Agreement.
**1.3. Acceptance on behalf of a third-party.** If You accept this Agreement on behalf of Your employer or another person or entity, You warrant and represent that You have the authority to act and accept this Agreement on their behalf. In such a case, the word "You" in this Agreement will refer to Your employer or such other person or entity.
## 2. License
**2.1. Grant of rights**. Subject to Section 3 below, Mistral AI hereby grants You a non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable, limited license to use, copy, modify, and Distribute under the conditions provided in Section 2.2 below, the Mistral Model and any Derivatives made by or for Mistral AI and to create Derivatives of the Mistral Model.
**2.2. Distribution of Mistral Model and Derivatives made by or for Mistral AI.** Subject to Section 3 below, You may Distribute copies of the Mistral Model and/or Derivatives made by or for Mistral AI, under the following conditions:
You must make available a copy of this Agreement to third-party recipients of the Mistral Models and/or Derivatives made by or for Mistral AI you Distribute, it being specified that any rights to use the Mistral Models and/or Derivatives made by or for Mistral AI shall be directly granted by Mistral AI to said third-party recipients pursuant to the Mistral AI Research License agreement executed between these parties;
You must retain in all copies of the Mistral Models the following attribution notice within a "Notice" text file distributed as part of such copies: "Licensed by Mistral AI under the Mistral AI Research License".
**2.3. Distribution of Derivatives made by or for You.** Subject to Section 3 below, You may Distribute any Derivatives made by or for You under additional or different terms and conditions, provided that:
In any event, the use and modification of Mistral Model and/or Derivatives made by or for Mistral AI shall remain governed by the terms and conditions of this Agreement;
You include in any such Derivatives made by or for You prominent notices stating that You modified the concerned Mistral Model; and
Any terms and conditions You impose on any third-party recipients relating to Derivatives made by or for You shall neither limit such third-party recipients' use of the Mistral Model or any Derivatives made by or for Mistral AI in accordance with the Mistral AI Research License nor conflict with any of its terms and conditions.
## 3. Limitations
**3.1. Misrepresentation.** You must not misrepresent or imply, through any means, that the Derivatives made by or for You and/or any modified version of the Mistral Model You Distribute under your name and responsibility is an official product of Mistral AI or has been endorsed, approved or validated by Mistral AI, unless You are authorized by Us to do so in writing.
**3.2. Usage Limitation.** You shall only use the Mistral Models, Derivatives (whether or not created by Mistral AI) and Outputs for Research Purposes.
## 4. Intellectual Property
**4.1. Trademarks.** No trademark licenses are granted under this Agreement, and in connection with the Mistral Models, You may not use any name or mark owned by or associated with Mistral AI or any of its affiliates, except (i) as required for reasonable and customary use in describing and Distributing the Mistral Models and Derivatives made by or for Mistral AI and (ii) for attribution purposes as required by this Agreement.
**4.2. Outputs.** We claim no ownership rights in and to the Outputs. You are solely responsible for the Outputs You generate and their subsequent uses in accordance with this Agreement. Any Outputs shall be subject to the restrictions set out in Section 3 of this Agreement.
**4.3. Derivatives.** By entering into this Agreement, You accept that any Derivatives that You may create or that may be created for You shall be subject to the restrictions set out in Section 3 of this Agreement.
## 5. Liability
**5.1. Limitation of liability.** In no event, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall Mistral AI be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this Agreement or out of the use or inability to use the Mistral Models and Derivatives (including but not limited to damages for loss of data, loss of goodwill, loss of expected profit or savings, work stoppage, computer failure or malfunction, or any damage caused by malware or security breaches), even if Mistral AI has been advised of the possibility of such damages.
**5.2. Indemnification.** You agree to indemnify and hold harmless Mistral AI from and against any claims, damages, or losses arising out of or related to Your use or Distribution of the Mistral Models and Derivatives.
## 6. Warranty
**6.1. Disclaimer.** Unless required by applicable law or prior agreed to by Mistral AI in writing, Mistral AI provides the Mistral Models and Derivatives on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. Mistral AI does not represent nor warrant that the Mistral Models and Derivatives will be error-free, meet Your or any third party's requirements, be secure or will allow You or any third party to achieve any kind of result or generate any kind of content. You are solely responsible for determining the appropriateness of using or Distributing the Mistral Models and Derivatives and assume any risks associated with Your exercise of rights under this Agreement.
## 7. Termination
**7.1. Term.** This Agreement is effective as of the date of your acceptance of this Agreement or access to the concerned Mistral Models or Derivatives and will continue until terminated in accordance with the following terms.
**7.2. Termination.** Mistral AI may terminate this Agreement at any time if You are in breach of this Agreement. Upon termination of this Agreement, You must cease to use all Mistral Models and Derivatives and shall permanently delete any copy thereof. The following provisions, in their relevant parts, will survive any termination or expiration of this Agreement, each for the duration necessary to achieve its own intended purpose (e.g. the liability provision will survive until the end of the applicable limitation period): Sections 5 (Liability), 6 (Warranty), 7 (Termination) and 8 (General Provisions).
**7.3. Litigation.** If You initiate any legal action or proceedings against Us or any other entity (including a cross-claim or counterclaim in a lawsuit), alleging that the Model or a Derivative, or any part thereof, infringe upon intellectual property or other rights owned or licensable by You, then any licenses granted to You under this Agreement will immediately terminate as of the date such legal action or claim is filed or initiated.
## 8. General provisions
**8.1. Governing laws.** This Agreement will be governed by the laws of France, without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement.
**8.2. Competent jurisdiction.** The courts of Paris shall have exclusive jurisdiction of any dispute arising out of this Agreement.
**8.3. Severability.** If any provision of this Agreement is held to be invalid, illegal or unenforceable, the remaining provisions shall be unaffected thereby and remain valid as if such provision had not been set forth herein.
## 9. Definitions
"Agreement": means this Mistral AI Research License agreement governing the access, use, and Distribution of the Mistral Models, Derivatives and Outputs.
"Derivative": means any (i) modified version of the Mistral Model (including but not limited to any customized or fine-tuned version thereof), (ii) work based on the Mistral Model, or (iii) any other derivative work thereof.
"Distribution", "Distributing", "Distribute" or "Distributed": means supplying, providing or making available, by any means, a copy of the Mistral Models and/or the Derivatives as the case may be, subject to Section 3 of this Agreement.
"Mistral AI", "We" or "Us": means Mistral AI, a French société par actions simplifiée registered in the Paris commercial registry under the number 952 418 325, and having its registered seat at 15, rue des Halles, 75001 Paris.
"Mistral Model": means the foundational large language model(s), and its elements which include algorithms, software, instructed checkpoints, parameters, source code (inference code, evaluation code and, if applicable, fine-tuning code) and any other elements associated thereto made available by Mistral AI under this Agreement, including, if any, the technical documentation, manuals and instructions for the use and operation thereof.
"Research Purposes": means any use of a Mistral Model, Derivative, or Output that is solely for (a) personal, scientific or academic research, and (b) for non-profit and non-commercial purposes, and not directly or indirectly connected to any commercial activities or business operations. For illustration purposes, Research Purposes does not include (1) any usage of the Mistral Model, Derivative or Output by individuals or contractors employed in or engaged by companies in the context of (a) their daily tasks, or (b) any activity (including but not limited to any testing or proof-of-concept) that is intended to generate revenue, nor (2) any Distribution by a commercial entity of the Mistral Model, Derivative or Output whether in return for payment or free of charge, in any medium or form, including but not limited to through a hosted or managed service (e.g. SaaS, cloud instances, etc.), or behind a software layer.
"Outputs": means any content generated by the operation of the Mistral Models or the Derivatives from a prompt (i.e., text instructions) provided by users. For the avoidance of doubt, Outputs do not include any components of a Mistral Models, such as any fine-tuned versions of the Mistral Models, the weights, or parameters.
"You": means the individual or entity entering into this Agreement with Mistral AI.
*Mistral AI processes your personal data below to provide the model and enforce its license. If you are affiliated with a commercial entity, we may also send you communications about our models. For more information on your rights and data handling, please see our <a href="https://mistral.ai/terms/">privacy policy</a>.*
extra_gated_fields:
First Name: text
Last Name: text
Country: country
Affiliation: text
Job title: text
I understand that I can only use the model, any derivative versions and their outputs for non-commercial research purposes: checkbox
I understand that if I am a commercial entity, I am not permitted to use or distribute the model internally or externally, or expose it in my own offerings without a commercial license: checkbox
I understand that if I upload the model, or any derivative version, on any platform, I must include the Mistral Research License: checkbox
I understand that for commercial use of the model, I can contact Mistral or use the Mistral AI API on la Plateforme or any of our cloud provider partners: checkbox
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Mistral Privacy Policy
: checkbox
geo: ip_location
extra_gated_description: >-
Mistral AI processes your personal data below to provide the model and enforce its license. If you are affiliated with a commercial entity, we may also send you communications about our models. For more information on your rights and data handling, please see our <a href="https://mistral.ai/terms/">privacy policy</a>.
extra_gated_button_content: Submit
library_name: vllm
---
[](https://hf.co/QuantFactory)
# QuantFactory/Ministral-8B-Instruct-2410-HF-GGUF
This is a quantized version of [TouchNight/Ministral-8B-Instruct-2410-HF](https://huggingface.co/TouchNight/Ministral-8B-Instruct-2410-HF) created using llama.cpp.
# Original Model Card
# Model Card for Ministral-8B-Instruct-2410
We introduce two new state-of-the-art models for local intelligence, on-device computing, and at-the-edge use cases. We call them les Ministraux: Ministral 3B and Ministral 8B.
The Ministral-8B-Instruct-2410 Language Model is an instruct fine-tuned model significantly outperforming existing models of similar size, released under the Mistral Research License.
If you are interested in using Ministral-3B or Ministral-8B commercially (both outperform Mistral-7B), [reach out to us](https://mistral.ai/contact/).
For more details about les Ministraux please refer to our release [blog post](https://mistral.ai/news/ministraux).
## Ministral 8B Key features
- Released under the **Mistral Research License**, reach out to us for a commercial license
- Trained with a **128k context window** with **interleaved sliding-window attention**
- Trained on a large proportion of **multilingual and code data**
- Supports **function calling**
- Vocabulary size of **131k**, using the **V3-Tekken** tokenizer
### Basic Instruct Template (V3-Tekken)
```
<s>[INST]user message[/INST]assistant response</s>[INST]new user message[/INST]
```
*For more information about the tokenizer please refer to [mistral-common](https://github.com/mistralai/mistral-common)*
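To make the format concrete, here is a tiny hand-rolled sketch of how a multi-turn conversation is assembled under this template (illustrative only; in practice, use mistral-common or the checkpoint's chat template rather than manual string assembly):
```python
# Illustrative assembly of the V3-Tekken instruct format shown above.
def build_v3_tekken_prompt(history: list[tuple[str, str]], new_user_message: str) -> str:
    prompt = "<s>"
    for user_msg, assistant_msg in history:
        prompt += f"[INST]{user_msg}[/INST]{assistant_msg}</s>"
    prompt += f"[INST]{new_user_message}[/INST]"
    return prompt

print(build_v3_tekken_prompt([("Hi!", "Hello! How can I help?")], "What is 1 + 1?"))
# <s>[INST]Hi![/INST]Hello! How can I help?</s>[INST]What is 1 + 1?[/INST]
```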
## Ministral 8B Architecture
| Feature | Value |
|:---------------------:|:--------------------:|
| **Architecture** | Dense Transformer |
| **Parameters** | 8,019,808,256 |
| **Layers** | 36 |
| **Heads** | 32 |
| **Dim** | 4096 |
| **KV Heads (GQA)** | 8 |
| **Hidden Dim** | 12288 |
| **Head Dim** | 128 |
| **Vocab Size** | 131,072 |
| **Context Length** | 128k |
| **Attention Pattern** | Ragged (128k,32k,32k,32k) |
## Benchmarks
#### Base Models
<u>Knowledge & Commonsense</u>
| Model | MMLU | AGIEval | Winogrande | Arc-c | TriviaQA |
|:-------------:|:------:|:---------:|:------------:|:-------:|:----------:|
| Mistral 7B Base | 62.5 | 42.5 | 74.2 | 67.9 | 62.5 |
| Llama 3.1 8B Base | 64.7 | 44.4 | 74.6 | 46.0 | 60.2 |
| ***Ministral 8B Base*** | ***<u>65.0</u>*** | ***<u>48.3</u>*** | ***<u>75.3</u>*** | ***<u>71.9</u>*** | ***<u>65.5</u>*** |
| | | | | | |
| Gemma 2 2B Base | 52.4 | 33.8 | 68.7 | 42.6 | 47.8 |
| Llama 3.2 3B Base | 56.2 | 37.4 | 59.6 | 43.1 | 50.7 |
| ***Ministral 3B Base*** | ***<u>60.9</u>*** | ***<u>42.1</u>*** | ***<u>72.7</u>*** | ***<u>64.2</u>*** | ***<u>56.7</u>*** |
<u>Code & Math</u>
| Model | HumanEval pass@1 |GSM8K maj@8 |
|:-------------:|:-------------------:|:---------------:|
| Mistral 7B Base | 26.8 | 32.0 |
| Llama 3.1 8B Base | ***<u>37.8</u>*** | 42.2 |
| ***Ministral 8B Base*** | 34.8 | ***<u>64.5</u>*** |
| | | |
| Gemma 2 2B | 20.1 | 35.5 |
| Llama 3.2 3B | 14.6 | 33.5 |
| ***Ministral 3B*** | ***<u>34.2</u>*** | ***<u>50.9</u>*** |
<u>Multilingual</u>
| Model | French MMLU | German MMLU | Spanish MMLU |
|:-------------:|:-------------:|:-------------:|:-------------:|
| Mistral 7B Base | 50.6 | 49.6 | 51.4 |
| Llama 3.1 8B Base | 50.8 | 52.8 | 54.6 |
| ***Ministral 8B Base*** | ***<u>57.5</u>*** | ***<u>57.4</u>*** | ***<u>59.6</u>*** |
| | | | |
| Gemma 2 2B Base | 41.0 | 40.1 | 41.7 |
| Llama 3.2 3B Base | 42.3 | 42.2 | 43.1 |
| ***Ministral 3B Base*** | ***<u>49.1</u>*** | ***<u>48.3</u>*** | ***<u>49.5</u>*** |
### Instruct Models
<u>Chat/Arena (gpt-4o judge)</u>
| Model | MTBench | Arena Hard | Wild bench |
|:-------------:|:---------:|:------------:|:------------:|
| Mistral 7B Instruct v0.3 | 6.7 | 44.3 | 33.1 |
| Llama 3.1 8B Instruct | 7.5 | 62.4 | 37.0 |
| Gemma 2 9B Instruct | 7.6 | 68.7 | ***<u>43.8</u>*** |
| ***Ministral 8B Instruct*** | ***<u>8.3</u>*** | ***<u>70.9</u>*** | 41.3 |
| | | | |
| Gemma 2 2B Instruct | 7.5 | 51.7 | 32.5 |
| Llama 3.2 3B Instruct | 7.2 | 46.0 | 27.2 |
| ***Ministral 3B Instruct*** | ***<u>8.1</u>*** | ***<u>64.3</u>*** | ***<u>36.3</u>*** |
<u>Code & Math</u>
| Model | MBPP pass@1 | HumanEval pass@1 | Math maj@1 |
|:-------------:|:-------------:|:------------------:|:-------------:|
| Mistral 7B Instruct v0.3 | 50.2 | 38.4 | 13.2 |
| Gemma 2 9B Instruct | 68.5 | 67.7 | 47.4 |
| Llama 3.1 8B Instruct | 69.7 | 67.1 | 49.3 |
| ***Ministral 8B Instruct*** | ***<u>70.0</u>*** | ***<u>76.8</u>*** | ***<u>54.5</u>*** |
| | | | |
| Gemma 2 2B Instruct | 54.5 | 42.7 | 22.8 |
| Llama 3.2 3B Instruct | 64.6 | 61.0 | 38.4 |
| ***Ministral 3B Instruct*** | ***<u>67.7</u>*** | ***<u>77.4</u>*** | ***<u>51.7</u>*** |
<u>Function calling</u>
| Model | Internal bench |
|:-------------:|:-----------------:|
| Mistral 7B Instruct v0.3 | 6.9 |
| Llama 3.1 8B Instruct | N/A |
| Gemma 2 9B Instruct | N/A |
| ***Ministral 8B Instruct*** | ***<u>31.6</u>*** |
| | |
| Gemma 2 2B Instruct | N/A |
| Llama 3.2 3B Instruct | N/A |
| ***Ministral 3B Instruct*** | ***<u>28.4</u>*** |
## Usage Examples
### vLLM (recommended)
We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.
> [!IMPORTANT]
> Currently vLLM caps this model at a 32k context size because interleaved attention kernels for paged attention are not yet implemented in vLLM.
> Support for these kernels is being worked on; this model card will be updated as soon as they are fully supported in vLLM.
> To take advantage of the full 128k context size we recommend [Mistral Inference](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410#mistral-inference)
**_Installation_**
Make sure you install `vLLM >= v0.6.2`:
```
pip install --upgrade vllm
```
Also make sure you have `mistral_common >= 1.4.4` installed:
```
pip install --upgrade mistral_common
```
You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile).
**_Offline_**
```py
from vllm import LLM
from vllm.sampling_params import SamplingParams
model_name = "mistralai/Ministral-8B-Instruct-2410"
sampling_params = SamplingParams(max_tokens=8192)
# note that running Ministral 8B on a single GPU requires 24 GB of GPU RAM
# If you want to divide the GPU requirement over multiple devices, add e.g. `tensor_parallel_size=2`
llm = LLM(model=model_name, tokenizer_mode="mistral", config_format="mistral", load_format="mistral")
prompt = "Do we need to think for 10 seconds to find the answer of 1 + 1?"
messages = [
{
"role": "user",
"content": prompt
},
]
outputs = llm.chat(messages, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
# You don't need to think for 10 seconds to find the answer to 1 + 1. The answer is 2,
# and you can easily add these two numbers in your mind very quickly without any delay.
```
**_Server_**
You can also use Ministral-8B in a server/client setting.
1. Spin up a server:
```
vllm serve mistralai/Ministral-8B-Instruct-2410 --tokenizer_mode mistral --config_format mistral --load_format mistral
```
**Note:** Running Ministral-8B on a single GPU requires 24 GB of GPU RAM.
If you want to divide the GPU requirement over multiple devices, add e.g. `--tensor-parallel-size 2`
2. And ping the client:
```
curl --location 'http://<your-node-url>:8000/v1/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer token' \
--data '{
"model": "mistralai/Ministral-8B-Instruct-2410",
"messages": [
{
"role": "user",
"content": "Do we need to think for 10 seconds to find the answer of 1 + 1?"
}
]
}'
```
### Mistral-inference
We recommend using [mistral-inference](https://github.com/mistralai/mistral-inference) to quickly try out / "vibe-check" the model.
**_Install_**
Make sure to have `mistral_inference >= 1.5.0` installed.
```
pip install mistral_inference --upgrade
```
**_Download_**
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', '8B-Instruct')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Ministral-8B-Instruct-2410", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using
```
mistral-chat $HOME/mistral_models/8B-Instruct --instruct --max_tokens 256
```
### Passkey detection
> [!IMPORTANT]
> In this example the passkey message is more than 100k tokens long and mistral-inference
> does not have a chunked pre-fill mechanism, so you will need a lot of
> GPU memory to run the example below (around 80 GB). For a more memory-efficient
> solution we recommend using vLLM.
```py
from mistral_inference.transformer import Transformer
from pathlib import Path
import json
from mistral_inference.generate import generate
from huggingface_hub import hf_hub_download
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
def load_passkey_request() -> ChatCompletionRequest:
passkey_file = hf_hub_download(repo_id="mistralai/Ministral-8B-Instruct-2410", filename="passkey_example.json")
with open(passkey_file, "r") as f:
data = json.load(f)
message_content = data["messages"][0]["content"]
return ChatCompletionRequest(messages=[UserMessage(content=message_content)])
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path, softmax_fp32=False)
completion_request = load_passkey_request()
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result) # The pass key is 13005.
```
### Instruct following
```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(messages=[UserMessage(content="How often does the letter r occur in Mistral?")])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
### Function calling
```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.tekken import SpecialTokenPolicy
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
tekken = tokenizer.instruct_tokenizer.tokenizer
tekken.special_token_policy = SpecialTokenPolicy.IGNORE
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(
tools=[
Tool(
function=Function(
name="get_current_weather",
description="Get the current weather",
parameters={
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the users location.",
},
},
"required": ["location", "format"],
},
)
)
],
messages=[
UserMessage(content="What's the weather like today in Paris?"),
],
)
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
## The Mistral AI Team
Albert Jiang, Alexandre Abou Chahine, Alexandre Sablayrolles, Alexis Tacnet, Alodie Boissonnet, Alok Kothari, Amélie Héliou, Andy Lo, Anna Peronnin, Antoine Meunier, Antoine Roux, Antonin Faure, Aritra Paul, Arthur Darcet, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Avinash Sooriyarachchi, Baptiste Rozière, Barry Conklin, Bastien Bouillon, Blanche Savary de Beauregard, Carole Rambaud, Caroline Feldman, Charles de Freminville, Charline Mauro, Chih-Kuan Yeh, Chris Bamford, Clement Auguy, Corentin Heintz, Cyriaque Dubois, Devendra Singh Chaplot, Diego Las Casas, Diogo Costa, Eléonore Arcelin, Emma Bou Hanna, Etienne Metzger, Fanny Olivier Autran, Francois Lesage, Garance Gourdel, Gaspard Blanchet, Gaspard Donada Vidal, Gianna Maria Lengyel, Guillaume Bour, Guillaume Lample, Gustave Denis, Harizo Rajaona, Himanshu Jaju, Ian Mack, Ian Mathew, Jean-Malo Delignon, Jeremy Facchetti, Jessica Chudnovsky, Joachim Studnia, Justus Murke, Kartik Khandelwal, Kenneth Chiu, Kevin Riera, Leonard Blier, Leonard Suslian, Leonardo Deschaseaux, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Sophia Yang, Margaret Jennings, Marie Pellat, Marie Torelli, Marjorie Janiewicz, Mathis Felardos, Maxime Darrin, Michael Hoff, Mickaël Seznec, Misha Jessel Kenyon, Nayef Derwiche, Nicolas Carmont Zaragoza, Nicolas Faurie, Nicolas Moreau, Nicolas Schuhl, Nikhil Raghuraman, Niklas Muhs, Olivier de Garrigues, Patricia Rozé, Patricia Wang, Patrick von Platen, Paul Jacob, Pauline Buche, Pavankumar Reddy Muddireddy, Perry Savas, Pierre Stock, Pravesh Agrawal, Renaud de Peretti, Romain Sauvestre, Romain Sinthe, Roman Soletskyi, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Soham Ghosh, Sylvain Regnier, Szymon Antoniak, Teven Le Scao, Theophile Gervet, Thibault Schueller, Thibaut Lavril, Thomas Wang, Timothée Lacroix, Valeriia Nemychnikova, Wendy Shang, William El Sayed, William Marshall
|
Abhaykoul/HelpingAI2.5-prototype-v2
|
Abhaykoul
| 2024-10-20T15:33:45Z | 5 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"HelpingAI",
"Emotions",
"humanlike",
"eq",
"prototype",
"conversational",
"base_model:HelpingAI/HelpingAI2-9B",
"base_model:finetune:HelpingAI/HelpingAI2-9B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-20T15:28:37Z |
---
license: other
base_model:
- OEvortex/HelpingAI2-9B
pipeline_tag: text-generation
library_name: transformers
tags:
- HelpingAI
- Emotions
- humanlike
- eq
- prototype
---
# HelpingAI 2.5 Prototype
## Overview
Welcome to the **HelpingAI 2.5 Prototype**! This model is designed to provide emotionally intelligent conversational AI experiences. By understanding user emotions and context, HelpingAI 2.5 aims to enhance human-computer interactions, making them more meaningful and engaging.
## Key Features
- **Emotion Recognition:** Understands user emotions for tailored responses.
- **Contextual Understanding:** Adapts based on conversation history.
- **Multi-Domain Support:** Suitable for various applications including customer support, education, and personal assistance.
- **User Feedback Integration:** Continuously improves based on user interactions and feedback.
## Demo
Experience the model in action! Visit our [demo space](https://huggingface.co/spaces/Abhaykoul/HelpingAI2.5-prototype) to try out the HelpingAI 2.5 prototype.
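Prefer to run it locally? Below is a minimal transformers sketch; the loading and generation settings are assumptions for illustration, not taken from this card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Abhaykoul/HelpingAI2.5-prototype-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# A conversational prompt in line with the model's emotional-intelligence focus.
messages = [{"role": "user", "content": "I had a rough day. Can you help me unwind?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings here are illustrative defaults, not recommendations from the card.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```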
## Getting Involved
We’re eager to hear your thoughts! Feel free to provide feedback or report issues via [discussion](https://huggingface.co/Abhaykoul/HelpingAI2.5-prototype-v2/discussions).
## Future Plans
We are excited to announce that we will be re-releasing **HelpingAI (3B, 3B Coder, and Flash)** with a new personality and more human-like features on **Diwali**! Additionally, the **HelpingAI 2.5 models** will be available on **November 17** 🎉.
## Acknowledgments
- [Hugging Face](https://huggingface.co) for providing an incredible platform for AI development.
- The open-source community for their continuous support and contributions.
---
Join us in shaping the future of AI! 🤖💖
|
sarpba/whisper-base-hungarian_v1
|
sarpba
| 2024-10-20T15:30:01Z | 253 | 7 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hu",
"dataset:fleurs",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-10-12T15:55:47Z |
---
library_name: transformers
language:
- hu
base_model: openai/whisper-base
tags:
- generated_from_trainer
datasets:
- fleurs
metrics:
- wer
model-index:
- name: Whisper Base Hungarian v1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs
type: fleurs
config: hu_hu
split: test
args: hu_hu
metrics:
- name: Wer
type: wer
value: 29.48142356294297
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
I have removed all of the initial attempts; this is the best Hungarian fine-tuned Whisper base model that can currently be produced with the available tools and technology.
It achieves results that are orders of magnitude better than the other base models fine-tuned for Hungarian, on every dataset!
# Whisper Base Hungarian
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the sarpba/big_audio_data_hun dataset.
Test results:
("google/fleurs", "hu_hu", "test") (measured during training)
- Loss: 0.7999
- Wer Ortho: 33.8788
- Wer: 29.4814
("mozilla-foundation/common_voice_17_0", "hu", "test")
- WER: 25.58
- CER: 6.34
- Normalised WER: 21.18
- Normalised CER: 5.31
## Model description
A Whisper base model fine-tuned for Hungarian on a custom dataset.
## Intended uses & limitations
The model may not be used commercially without my consent! For private use it is freely usable under Whisper's original license terms. Commercial use of this fine-tuning is not permitted!
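A minimal transcription sketch using the transformers ASR pipeline (a sketch only: the audio path is a placeholder, and forcing the language via `generate_kwargs` is the standard Whisper pattern rather than something taken from this card):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="sarpba/whisper-base-hungarian_v1",
)

# "sample.wav" is a placeholder for your own Hungarian audio file.
result = asr(
    "sample.wav",
    generate_kwargs={"language": "hungarian", "task": "transcribe"},
)
print(result["text"])
```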
## Training and evaluation data
The model was trained on roughly 1,200 hours of carefully curated Hungarian audio. During training, google/fleurs was used to monitor progress;
below it are the mozilla-foundation/common_voice_17_0 results (see the test results above).
Neither dataset was included in the training data, so the model is not contaminated with its test material!
## Training procedure
Hyperparameter optimization for the training ran for 3 days using ray[tune]; with the optimal training parameters it found, the fine-tuning took roughly 17 hours!
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 8000
- mixed_precision_training: Native AMP
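For illustration, these settings map roughly onto `Seq2SeqTrainingArguments` as sketched below; this is a reconstruction from the list above, not the author's actual training script, and `output_dir` is a placeholder:
```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the hyperparameters listed above (assumed mapping).
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-hu",   # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,    # 64 * 4 = 256 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    max_steps=8000,
    fp16=True,                        # "Native AMP" mixed precision
    seed=42,
)
```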
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.2523 | 0.3770 | 1000 | 0.9703 | 50.8988 | 46.7185 |
| 0.1859 | 0.7539 | 2000 | 0.8605 | 43.4345 | 39.4103 |
| 0.127 | 1.1309 | 3000 | 0.8378 | 40.6107 | 36.0040 |
| 0.1226 | 1.5079 | 4000 | 0.8153 | 38.9189 | 34.1842 |
| 0.1105 | 1.8848 | 5000 | 0.7847 | 36.6018 | 32.1979 |
| 0.0659 | 2.2618 | 6000 | 0.8298 | 35.3752 | 30.6379 |
| 0.0594 | 2.6388 | 7000 | 0.8132 | 34.8255 | 30.2280 |
| 0.0316 | 3.0157 | 8000 | 0.7999 | 33.8788 | 29.4814 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.3.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Lyte/Llama-3.2-3B-Overthinker
|
Lyte
| 2024-10-20T15:09:51Z | 75 | 19 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"dataset:Lyte/Reasoning-Paused",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T22:49:53Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
datasets:
- Lyte/Reasoning-Paused
pipeline_tag: text-generation
model-index:
- name: Llama-3.2-3B-Overthinker
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 64.08
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.2-3B-Overthinker
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 20.1
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.2-3B-Overthinker
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 2.64
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.2-3B-Overthinker
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.23
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.2-3B-Overthinker
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.9
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.2-3B-Overthinker
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 22.06
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.2-3B-Overthinker
name: Open LLM Leaderboard
---
# Model Overview:
- **Training Data**: This model was trained on a dataset with columns for initial reasoning, step-by-step thinking, verifications after each step, and final answers based on full context. Is it better than the original base model? Hard to say without proper evaluations, and I don’t have the resources to run them manually.
- **Context Handling**: The model benefits from larger contexts (minimum 4k up to 16k tokens, though it was trained on 32k tokens). It tends to "overthink," so providing a longer context helps it perform better.
- **Performance**: Based on my very few manual tests, the model seems to excel in conversational settings—especially for mental health, creative tasks and explaining stuff. However, I encourage you to try it out yourself using this [Colab Notebook](https://colab.research.google.com/drive/1dcBbHAwYJuQJKqdPU570Hddv_F9wzjPO?usp=sharing).
- **Dataset Note**: The publicly available dataset is only a partial version. The full dataset was originally designed for a custom Mixture of Experts (MoE) architecture, but I couldn't afford to run the full experiment.
- **Acknowledgment**: Special thanks to KingNish for reigniting my passion to revisit this project. I almost abandoned it after my first attempt a month ago. Enjoy this experimental model!
# Inference Code:
- Feel free to make the steps, verifications, and initial reasoning collapsible; you can show only the final answer for an o1-style feel.
- **Note:** One feature here is that you can control how many steps and verifications the model produces.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Lyte/Llama-3.2-3B-Overthinker"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
def generate_response(prompt, max_tokens=16384, temperature=0.8, top_p=0.95, repeat_penalty=1.1, num_steps=3):
messages = [{"role": "user", "content": prompt}]
# Generate reasoning
reasoning_template = tokenizer.apply_chat_template(messages, tokenize=False, add_reasoning_prompt=True)
reasoning_inputs = tokenizer(reasoning_template, return_tensors="pt").to(model.device)
reasoning_ids = model.generate(
**reasoning_inputs,
max_new_tokens=max_tokens // 3,
temperature=temperature,
top_p=top_p,
repetition_penalty=repeat_penalty
)
reasoning_output = tokenizer.decode(reasoning_ids[0, reasoning_inputs.input_ids.shape[1]:], skip_special_tokens=True)
# Generate thinking (step-by-step and verifications)
messages.append({"role": "reasoning", "content": reasoning_output})
thinking_template = tokenizer.apply_chat_template(messages, tokenize=False, add_thinking_prompt=True, num_steps=num_steps)
thinking_inputs = tokenizer(thinking_template, return_tensors="pt").to(model.device)
thinking_ids = model.generate(
**thinking_inputs,
max_new_tokens=max_tokens // 3,
temperature=temperature,
top_p=top_p,
repetition_penalty=repeat_penalty
)
thinking_output = tokenizer.decode(thinking_ids[0, thinking_inputs.input_ids.shape[1]:], skip_special_tokens=True)
# Generate final answer
messages.append({"role": "thinking", "content": thinking_output})
answer_template = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
answer_inputs = tokenizer(answer_template, return_tensors="pt").to(model.device)
answer_ids = model.generate(
**answer_inputs,
max_new_tokens=max_tokens // 3,
temperature=temperature,
top_p=top_p,
repetition_penalty=repeat_penalty
)
answer_output = tokenizer.decode(answer_ids[0, answer_inputs.input_ids.shape[1]:], skip_special_tokens=True)
return reasoning_output, thinking_output, answer_output
# Example usage:
prompt = "Explain the process of photosynthesis."
reasoning, thinking, answer = generate_response(prompt, num_steps=5)
print("Reasoning:", reasoning)
print("Thinking:", thinking)
print("Answer:", answer)
```
# Uploaded model
- **Developed by:** Lyte
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# Notice:
- **The problem with running evals is that they don't use the model's custom chat template, so they aren't a true evaluation; these scores barely test the model.**
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Lyte__Llama-3.2-3B-Overthinker)
| Metric |Value|
|-------------------|----:|
|Avg. |19.00|
|IFEval (0-Shot) |64.08|
|BBH (3-Shot) |20.10|
|MATH Lvl 5 (4-Shot)| 2.64|
|GPQA (0-shot) | 1.23|
|MuSR (0-shot) | 3.90|
|MMLU-PRO (5-shot) |22.06|
|
noxinc/bitnet_b1_58-large-Q5_K_M-GGUF
|
noxinc
| 2024-10-20T15:08:06Z | 21 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:1bitLLM/bitnet_b1_58-large",
"base_model:quantized:1bitLLM/bitnet_b1_58-large",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-10-20T15:08:02Z |
---
license: mit
base_model: 1bitLLM/bitnet_b1_58-large
tags:
- llama-cpp
- gguf-my-repo
---
# noxinc/bitnet_b1_58-large-Q5_K_M-GGUF
This model was converted to GGUF format from [`1bitLLM/bitnet_b1_58-large`](https://huggingface.co/1bitLLM/bitnet_b1_58-large) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/1bitLLM/bitnet_b1_58-large) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo noxinc/bitnet_b1_58-large-Q5_K_M-GGUF --hf-file bitnet_b1_58-large-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo noxinc/bitnet_b1_58-large-Q5_K_M-GGUF --hf-file bitnet_b1_58-large-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo noxinc/bitnet_b1_58-large-Q5_K_M-GGUF --hf-file bitnet_b1_58-large-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo noxinc/bitnet_b1_58-large-Q5_K_M-GGUF --hf-file bitnet_b1_58-large-q5_k_m.gguf -c 2048
```
|
aashish1904/Ministral-8B-Instruct-2410-HF-Q2_K-GGUF
|
aashish1904
| 2024-10-20T14:59:20Z | 10 | 2 |
vllm
|
[
"vllm",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"base_model:TouchNight/Ministral-8B-Instruct-2410-HF",
"base_model:quantized:TouchNight/Ministral-8B-Instruct-2410-HF",
"license:other",
"region:us",
"conversational"
] | null | 2024-10-20T14:59:04Z |
---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
extra_gated_prompt: '# Mistral AI Research License
If You want to use a Mistral Model, a Derivative or an Output for any purpose that
is not expressly authorized under this Agreement, You must request a license from
Mistral AI, which Mistral AI may grant to You in Mistral AI''s sole discretion.
To discuss such a license, please contact Mistral AI via the website contact form:
https://mistral.ai/contact/
## 1. Scope and acceptance
**1.1. Scope of the Agreement.** This Agreement applies to any use, modification,
or Distribution of any Mistral Model by You, regardless of the source You obtained
a copy of such Mistral Model.
**1.2. Acceptance.** By accessing, using, modifying, Distributing a Mistral Model,
or by creating, using or distributing a Derivative of the Mistral Model, You agree
to be bound by this Agreement.
**1.3. Acceptance on behalf of a third-party.** If You accept this Agreement on
behalf of Your employer or another person or entity, You warrant and represent that
You have the authority to act and accept this Agreement on their behalf. In such
a case, the word "You" in this Agreement will refer to Your employer or such other
person or entity.
## 2. License
**2.1. Grant of rights**. Subject to Section 3 below, Mistral AI hereby grants
You a non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable,
limited license to use, copy, modify, and Distribute under the conditions provided
in Section 2.2 below, the Mistral Model and any Derivatives made by or for Mistral
AI and to create Derivatives of the Mistral Model.
**2.2. Distribution of Mistral Model and Derivatives made by or for Mistral AI.**
Subject to Section 3 below, You may Distribute copies of the Mistral Model and/or
Derivatives made by or for Mistral AI, under the following conditions: You must
make available a copy of this Agreement to third-party recipients of the Mistral
Models and/or Derivatives made by or for Mistral AI you Distribute, it being specified
that any rights to use the Mistral Models and/or Derivatives made by or for Mistral
AI shall be directly granted by Mistral AI to said third-party recipients pursuant
to the Mistral AI Research License agreement executed between these parties; You
must retain in all copies of the Mistral Models the following attribution notice
within a "Notice" text file distributed as part of such copies: "Licensed by Mistral
AI under the Mistral AI Research License".
**2.3. Distribution of Derivatives made by or for You.** Subject to Section 3 below,
You may Distribute any Derivatives made by or for You under additional or different
terms and conditions, provided that: In any event, the use and modification of Mistral
Model and/or Derivatives made by or for Mistral AI shall remain governed by the
terms and conditions of this Agreement; You include in any such Derivatives made
by or for You prominent notices stating that You modified the concerned Mistral
Model; and Any terms and conditions You impose on any third-party recipients relating
to Derivatives made by or for You shall neither limit such third-party recipients''
use of the Mistral Model or any Derivatives made by or for Mistral AI in accordance
with the Mistral AI Research License nor conflict with any of its terms and conditions.
## 3. Limitations
**3.1. Misrepresentation.** You must not misrepresent or imply, through any means,
that the Derivatives made by or for You and/or any modified version of the Mistral
Model You Distribute under your name and responsibility is an official product of
Mistral AI or has been endorsed, approved or validated by Mistral AI, unless You
are authorized by Us to do so in writing.
**3.2. Usage Limitation.** You shall only use the Mistral Models, Derivatives (whether
or not created by Mistral AI) and Outputs for Research Purposes.
## 4. Intellectual Property
**4.1. Trademarks.** No trademark licenses are granted under this Agreement, and
in connection with the Mistral Models, You may not use any name or mark owned by
or associated with Mistral AI or any of its affiliates, except (i) as required for
reasonable and customary use in describing and Distributing the Mistral Models and
Derivatives made by or for Mistral AI and (ii) for attribution purposes as required
by this Agreement.
**4.2. Outputs.** We claim no ownership rights in and to the Outputs. You are solely
responsible for the Outputs You generate and their subsequent uses in accordance
with this Agreement. Any Outputs shall be subject to the restrictions set out in
Section 3 of this Agreement.
**4.3. Derivatives.** By entering into this Agreement, You accept that any Derivatives
that You may create or that may be created for You shall be subject to the restrictions
set out in Section 3 of this Agreement.
## 5. Liability
**5.1. Limitation of liability.** In no event, unless required by applicable law
(such as deliberate and grossly negligent acts) or agreed to in writing, shall Mistral
AI be liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this Agreement
or out of the use or inability to use the Mistral Models and Derivatives (including
but not limited to damages for loss of data, loss of goodwill, loss of expected
profit or savings, work stoppage, computer failure or malfunction, or any damage
caused by malware or security breaches), even if Mistral AI has been advised of
the possibility of such damages.
**5.2. Indemnification.** You agree to indemnify and hold harmless Mistral AI from
and against any claims, damages, or losses arising out of or related to Your use
or Distribution of the Mistral Models and Derivatives.
## 6. Warranty
**6.1. Disclaimer.** Unless required by applicable law or prior agreed to by Mistral
AI in writing, Mistral AI provides the Mistral Models and Derivatives on an "AS
IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. Mistral AI does not represent
nor warrant that the Mistral Models and Derivatives will be error-free, meet Your
or any third party''s requirements, be secure or will allow You or any third party
to achieve any kind of result or generate any kind of content. You are solely responsible
for determining the appropriateness of using or Distributing the Mistral Models
and Derivatives and assume any risks associated with Your exercise of rights under
this Agreement.
## 7. Termination
**7.1. Term.** This Agreement is effective as of the date of your acceptance of
this Agreement or access to the concerned Mistral Models or Derivatives and will
continue until terminated in accordance with the following terms.
**7.2. Termination.** Mistral AI may terminate this Agreement at any time if You
are in breach of this Agreement. Upon termination of this Agreement, You must cease
to use all Mistral Models and Derivatives and shall permanently delete any copy
thereof. The following provisions, in their relevant parts, will survive any termination
or expiration of this Agreement, each for the duration necessary to achieve its
own intended purpose (e.g. the liability provision will survive until the end of
the applicable limitation period): Sections 5 (Liability), 6 (Warranty), 7 (Termination)
and 8 (General Provisions).
**7.3. Litigation.** If You initiate any legal action or proceedings against Us
or any other entity (including a cross-claim or counterclaim in a lawsuit), alleging
that the Model or a Derivative, or any part thereof, infringe upon intellectual
property or other rights owned or licensable by You, then any licenses granted to
You under this Agreement will immediately terminate as of the date such legal action
or claim is filed or initiated.
## 8. General provisions
**8.1. Governing laws.** This Agreement will be governed by the laws of France,
without regard to choice of law principles, and the UN Convention on Contracts for
the International Sale of Goods does not apply to this Agreement.
**8.2. Competent jurisdiction.** The courts of Paris shall have exclusive jurisdiction
of any dispute arising out of this Agreement.
**8.3. Severability.** If any provision of this Agreement is held to be invalid,
illegal or unenforceable, the remaining provisions shall be unaffected thereby and
remain valid as if such provision had not been set forth herein.
## 9. Definitions
"Agreement": means this Mistral AI Research License agreement governing the access,
use, and Distribution of the Mistral Models, Derivatives and Outputs.
"Derivative": means any (i) modified version of the Mistral Model (including but
not limited to any customized or fine-tuned version thereof), (ii) work based on
the Mistral Model, or (iii) any other derivative work thereof.
"Distribution", "Distributing", "Distribute" or "Distributed": means supplying,
providing or making available, by any means, a copy of the Mistral Models and/or
the Derivatives as the case may be, subject to Section 3 of this Agreement.
"Mistral AI", "We" or "Us": means Mistral AI, a French société par actions simplifiée
registered in the Paris commercial registry under the number 952 418 325, and having
its registered seat at 15, rue des Halles, 75001 Paris.
"Mistral Model": means the foundational large language model(s), and its elements
which include algorithms, software, instructed checkpoints, parameters, source code
(inference code, evaluation code and, if applicable, fine-tuning code) and any other
elements associated thereto made available by Mistral AI under this Agreement, including,
if any, the technical documentation, manuals and instructions for the use and operation
thereof.
"Research Purposes": means any use of a Mistral Model, Derivative, or Output that
is solely for (a) personal, scientific or academic research, and (b) for non-profit
and non-commercial purposes, and not directly or indirectly connected to any commercial
activities or business operations. For illustration purposes, Research Purposes
does not include (1) any usage of the Mistral Model, Derivative or Output by individuals
or contractors employed in or engaged by companies in the context of (a) their daily
tasks, or (b) any activity (including but not limited to any testing or proof-of-concept)
that is intended to generate revenue, nor (2) any Distribution by a commercial entity
of the Mistral Model, Derivative or Output whether in return for payment or free
of charge, in any medium or form, including but not limited to through a hosted
or managed service (e.g. SaaS, cloud instances, etc.), or behind a software layer.
"Outputs": means any content generated by the operation of the Mistral Models or
the Derivatives from a prompt (i.e., text instructions) provided by users. For
the avoidance of doubt, Outputs do not include any components of a Mistral Models,
such as any fine-tuned versions of the Mistral Models, the weights, or parameters.
"You": means the individual or entity entering into this Agreement with Mistral
AI.
*Mistral AI processes your personal data below to provide the model and enforce
its license. If you are affiliated with a commercial entity, we may also send you
communications about our models. For more information on your rights and data handling,
please see our <a href="https://mistral.ai/terms/">privacy policy</a>.*'
extra_gated_fields:
First Name: text
Last Name: text
Country: country
Affiliation: text
Job title: text
I understand that I can only use the model, any derivative versions and their outputs for non-commercial research purposes: checkbox
? I understand that if I am a commercial entity, I am not permitted to use or distribute
the model internally or externally, or expose it in my own offerings without a
commercial license
: checkbox
? I understand that if I upload the model, or any derivative version, on any platform,
I must include the Mistral Research License
: checkbox
? I understand that for commercial use of the model, I can contact Mistral or use
the Mistral AI API on la Plateforme or any of our cloud provider partners
: checkbox
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Mistral Privacy Policy
: checkbox
geo: ip_location
extra_gated_description: Mistral AI processes your personal data below to provide
the model and enforce its license. If you are affiliated with a commercial entity,
we may also send you communications about our models. For more information on your
rights and data handling, please see our <a href="https://mistral.ai/terms/">privacy
policy</a>.
extra_gated_button_content: Submit
library_name: vllm
base_model: TouchNight/Ministral-8B-Instruct-2410-HF
tags:
- llama-cpp
- gguf-my-repo
---
# aashish1904/Ministral-8B-Instruct-2410-HF-Q2_K-GGUF
This model was converted to GGUF format from [`TouchNight/Ministral-8B-Instruct-2410-HF`](https://huggingface.co/TouchNight/Ministral-8B-Instruct-2410-HF) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TouchNight/Ministral-8B-Instruct-2410-HF) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo aashish1904/Ministral-8B-Instruct-2410-HF-Q2_K-GGUF --hf-file ministral-8b-instruct-2410-hf-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo aashish1904/Ministral-8B-Instruct-2410-HF-Q2_K-GGUF --hf-file ministral-8b-instruct-2410-hf-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo aashish1904/Ministral-8B-Instruct-2410-HF-Q2_K-GGUF --hf-file ministral-8b-instruct-2410-hf-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo aashish1904/Ministral-8B-Instruct-2410-HF-Q2_K-GGUF --hf-file ministral-8b-instruct-2410-hf-q2_k.gguf -c 2048
```
|
HANTAEK/klue-roberta-large-korquad-v1-qa-finetuned
|
HANTAEK
| 2024-10-20T14:58:24Z | 31 | 1 | null |
[
"pytorch",
"roberta",
"question-answering",
"ko",
"dataset:KorQuAD/squad_kor_v1",
"base_model:CurtisJeon/klue-roberta-large-korquad_v1_qa",
"base_model:finetune:CurtisJeon/klue-roberta-large-korquad_v1_qa",
"license:unknown",
"region:us"
] |
question-answering
| 2024-10-17T13:41:13Z |
---
license: unknown
datasets:
- KorQuAD/squad_kor_v1
language:
- ko
base_model:
- CurtisJeon/klue-roberta-large-korquad_v1_qa
pipeline_tag: question-answering
---
# KLUE RoBERTa Large KorQuAD v1 QA - Fine-tuned
This model is a Korean question answering (QA) model, fine-tuned on additional data on top of [CurtisJeon/klue-roberta-large-korquad_v1_qa](https://huggingface.co/CurtisJeon/klue-roberta-large-korquad_v1_qa).
## Model Information
- Base model: KLUE RoBERTa Large
- Task: Question Answering
- Language: Korean
- Training data: KorQuAD v1 + [in-house data]
## Model Architecture
- RobertaForQuestionAnswering architecture plus a CNN layer (without dropout)
- 24 hidden layers
- Hidden size of 1024
- 16 attention heads
- Total parameters: approx. 355M
## Usage
This model can be loaded and used easily with the Hugging Face Transformers library:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model_name = "HANTAEK/klue-roberta-large-korquad-v1-qa-finetuned"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
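A quick inference check can then be run with the `question-answering` pipeline (the question/context strings below are illustrative):
```python
from transformers import pipeline
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa(
    question="대한민국의 수도는 어디인가요?",  # "What is the capital of South Korea?"
    context="대한민국의 수도는 서울이다.",  # "The capital of South Korea is Seoul."
)
print(result["answer"], result["score"])
```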
|
Irham13/PonXXI
|
Irham13
| 2024-10-20T14:51:52Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indolem/indobertweet-base-uncased",
"base_model:finetune:indolem/indobertweet-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-20T14:19:54Z |
---
library_name: transformers
license: apache-2.0
base_model: indolem/indobertweet-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: PonXXI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PonXXI
This model is a fine-tuned version of [indolem/indobertweet-base-uncased](https://huggingface.co/indolem/indobertweet-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3259
- Accuracy: 0.7457
- Precision: 0.7431
- Recall: 0.7429
- F1: 0.7422
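For reference, a minimal inference sketch (assuming the checkpoint is published under this repo id; the label mapping is whatever the Trainer saved in the config):
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="Irham13/PonXXI")
print(classifier("contoh teks untuk diklasifikasikan"))  # illustrative Indonesian input
```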
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.8207 | 1.0 | 214 | 0.7313 | 0.6928 | 0.7083 | 0.6928 | 0.6937 |
| 0.6159 | 2.0 | 428 | 0.6846 | 0.7270 | 0.7421 | 0.7305 | 0.7250 |
| 0.4666 | 3.0 | 642 | 0.7258 | 0.7270 | 0.7282 | 0.7243 | 0.7221 |
| 0.349 | 4.0 | 856 | 0.8328 | 0.7406 | 0.7403 | 0.7368 | 0.7356 |
| 0.2752 | 5.0 | 1070 | 0.8500 | 0.7406 | 0.7377 | 0.7387 | 0.7379 |
| 0.224 | 6.0 | 1284 | 1.0037 | 0.7457 | 0.7425 | 0.7435 | 0.7424 |
| 0.1883 | 7.0 | 1498 | 1.1039 | 0.7457 | 0.7446 | 0.7435 | 0.7437 |
| 0.1518 | 8.0 | 1712 | 1.1535 | 0.7457 | 0.7439 | 0.7426 | 0.7420 |
| 0.1372 | 9.0 | 1926 | 1.3070 | 0.7304 | 0.7273 | 0.7262 | 0.7241 |
| 0.1195 | 10.0 | 2140 | 1.3259 | 0.7457 | 0.7431 | 0.7429 | 0.7422 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
keithdrexel/meta-3.2-3b-tldr-sft-hf
|
keithdrexel
| 2024-10-20T14:48:26Z | 141 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-20T14:45:57Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
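In the absence of author-provided code, a generic causal-LM sketch (an assumption based on the repo's `llama`/`text-generation` tags; the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "keithdrexel/meta-3.2-3b-tldr-sft-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# The "tldr" in the repo name suggests a summarization SFT; this prompt is a guess.
prompt = "Summarize: The quick brown fox jumps over the lazy dog."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```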
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
radlab/pLLama3.2-3B-DPO
|
radlab
| 2024-10-20T14:45:53Z | 6 | 0 | null |
[
"safetensors",
"llama",
"pl",
"en",
"es",
"de",
"base_model:radlab/pLLama3.2-3B",
"base_model:finetune:radlab/pLLama3.2-3B",
"license:llama3.2",
"region:us"
] | null | 2024-10-17T07:38:06Z |
---
license: llama3.2
language:
- pl
- en
- es
- de
base_model:
- radlab/pLLama3.2-3B
---

### Intro
We have released a collection of radlab/pLLama3.2 models adapted to Polish. The trained versions communicate with the user in Polish more precisely than the base meta-llama/Llama-3.2 models. The collection provides models in the 1B and 3B architectures.
Each model size is available in two configurations:
- radlab/pLLama3.2-1B, a 1B model after fine-tuning only
- radlab/pLLama3.2-1B-DPO, a 1B model after fine-tuning and the DPO process
- radlab/pLLama3.2-3B, a 3B model after fine-tuning only
- radlab/pLLama3.2-3B-DPO, a 3B model after fine-tuning and the DPO process
### Dataset
In addition to the instruction datasets publicly available for Polish, we developed our own dataset, which contains about 650,000 instructions. This data was semi-automatically generated using other publicly available datasets.
In addition, we built a training dataset for the DPO process containing 100k examples, in which we taught the model to choose correctly written versions of texts over versions with language errors.
### Learning
The training process was divided into two stages:
- Supervised fine-tuning on the set of 650k Polish instructions, run for 5 epochs.
- After the fine-tuning stage, we further trained the model with DPO on the 100k correct-writing examples, for 15k steps.
### Proposed parameters:
* temperature: 0.6
* repetition_penalty: 1.0
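With these settings, a minimal generation sketch looks as follows (the Polish prompt is illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "radlab/pLLama3.2-3B-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
messages = [{"role": "user", "content": "Napisz krótki wiersz o jesieni."}]  # "Write a short poem about autumn."
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=128, do_sample=True,
                         temperature=0.6, repetition_penalty=1.0)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```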
### Outro
Enjoy!
|
radlab/pLLama3.2-3B
|
radlab
| 2024-10-20T14:44:42Z | 5 | 0 | null |
[
"safetensors",
"llama",
"pl",
"en",
"es",
"de",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"region:us"
] | null | 2024-10-17T07:37:21Z |
---
license: llama3.2
language:
- pl
- en
- es
- de
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---

### Intro
We have released a collection of radlab/pLLama3.2 models adapted to Polish. The trained versions communicate with the user in Polish more precisely than the base meta-llama/Llama-3.2 models. The collection provides models in the 1B and 3B architectures.
Each model size is available in two configurations:
- radlab/pLLama3.2-1B, a 1B model after fine-tuning only
- radlab/pLLama3.2-1B-DPO, a 1B model after fine-tuning and the DPO process
- radlab/pLLama3.2-3B, a 3B model after fine-tuning only
- radlab/pLLama3.2-3B-DPO, a 3B model after fine-tuning and the DPO process
### Dataset
In addition to the instruction datasets publicly available for Polish, we developed our own dataset, which contains about 650,000 instructions. This data was semi-automatically generated using other publicly available datasets.
In addition, we built a training dataset for the DPO process containing 100k examples, in which we taught the model to choose correctly written versions of texts over versions with language errors.
### Learning
The training process was divided into two stages:
- Supervised fine-tuning on the set of 650k Polish instructions, run for 5 epochs.
- After the fine-tuning stage, we further trained the model with DPO on the 100k correct-writing examples, for 15k steps.
### Proposed parameters:
* temperature: 0.6
* repetition_penalty: 1.0
### Outro
Enjoy!
|
douple/luxes-yacht
|
douple
| 2024-10-20T14:39:09Z | 5 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-20T14:39:06Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LLXYY2024
---
# Luxes Yacht
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LLXYY2024` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('douple/luxes-yacht', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
sahith2004/florence-2-ft-epoch-1
|
sahith2004
| 2024-10-20T14:37:26Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-10-20T14:32:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
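Pending author-provided code, a minimal sketch that assumes this fine-tune follows the standard Florence-2 API (the task prompt and image path are illustrative):
```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor
model_id = "sahith2004/florence-2-ft-epoch-1"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
image = Image.open("example.jpg")  # hypothetical local image
inputs = processor(text="<CAPTION>", images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=128,
)
print(processor.batch_decode(generated_ids, skip_special_tokens=False)[0])
```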
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BlackBeenie/llama-3.1-8B-Galore-openassistant-guanaco
|
BlackBeenie
| 2024-10-20T14:28:39Z | 14 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-16T18:51:10Z |
---
library_name: transformers
tags:
- trl
- sft
model-index:
- name: llama-3.1-8B-Galore-openassistant-guanaco
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 26.35
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3.1-8B-Galore-openassistant-guanaco
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 31.44
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3.1-8B-Galore-openassistant-guanaco
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 4.83
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3.1-8B-Galore-openassistant-guanaco
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.71
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3.1-8B-Galore-openassistant-guanaco
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 14.58
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3.1-8B-Galore-openassistant-guanaco
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.52
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3.1-8B-Galore-openassistant-guanaco
name: Open LLM Leaderboard
---
# Model Card for Model ID
## Model Details
## Training Details
### Training Data
[timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco)
### Training Procedure
Trained with TRL's `SFTTrainer` using the GaLore (gradient low-rank projection) optimizer.
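A minimal sketch of what such a run can look like (an illustrative setup, not the author's exact recipe; all hyperparameters here are assumptions):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")
args = SFTConfig(
    output_dir="llama-3.1-8B-galore-sft",
    dataset_text_field="text",
    optim="galore_adamw",                  # GaLore: low-rank gradient projection
    optim_target_modules=["attn", "mlp"],  # project gradients of attention/MLP weights
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
)
trainer = SFTTrainer(model="meta-llama/Llama-3.1-8B", train_dataset=dataset, args=args)
trainer.train()
```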
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BlackBeenie__llama-3.1-8B-Galore-openassistant-guanaco)
| Metric |Value|
|-------------------|----:|
|Avg. |18.07|
|IFEval (0-Shot) |26.35|
|BBH (3-Shot) |31.44|
|MATH Lvl 5 (4-Shot)| 4.83|
|GPQA (0-shot) | 6.71|
|MuSR (0-shot) |14.58|
|MMLU-PRO (5-shot) |24.52|
|
hazemessam/esm3_ddg_v2
|
hazemessam
| 2024-10-20T14:20:15Z | 72 | 0 |
transformers
|
[
"transformers",
"safetensors",
"feature-extraction",
"custom_code",
"region:us"
] |
feature-extraction
| 2024-10-20T13:37:05Z |
---
library_name: transformers
tags: []
---
## Model Details
### Model Description
This model was part of the Evolutionary Scale BioML Hackathon.
## Uses
Used for ΔΔG (ddG) prediction for single point mutations.
## How to Get Started with the Model
```python
# Make sure `esm` is installed; if not, use: `pip install esm`
import torch
from transformers import AutoModel
from esm.tokenization.sequence_tokenizer import EsmSequenceTokenizer
model = AutoModel.from_pretrained("hazemessam/esm3_ddg_v2", trust_remote_code=True)
tokenizer = EsmSequenceTokenizer()
# Illustrative inputs (hypothetical sequences): wild-type and single-point mutant,
# plus the 0-based index of the mutated residue (E24A here).
wt_seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
mut_seq = "MKTAYIAKQRQISFVKSHFSRQLEARLGLIEVQ"
mutation_position = torch.tensor([24])
tokenized_seq1 = torch.tensor([tokenizer.encode(wt_seq)])
tokenized_seq2 = torch.tensor([tokenizer.encode(mut_seq)])
model.eval()
with torch.no_grad():
    output = model(tokenized_seq1, tokenized_seq2, positions=mutation_position)  # predicted ddG
```
## Training Details
### Training Data
Training Data: https://huggingface.co/datasets/hazemessam/ddg/blob/main/S2648.csv
### Training Procedure
The results listed below are the best results for each evaluation dataset; this checkpoint is the best one as measured on the `Ssym` evaluation dataset.
#### Training Hyperparameters
* Scheduler: Cosine
* Warmup steps: 400
* Seed: 7
* Gradient accumulation steps: 16
* Batch size: 1
* DoRA rank: 16
* DoRA alpha: 32
* Updated Layers: ["layernorm_qkv.1", "ffn.1", "ffn.3"]
* DoRA bias: "none"
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The model was evaluated on the following:
* Ssym: https://huggingface.co/datasets/hazemessam/ddg/blob/main/ssym.csv
* Ssym_r: https://huggingface.co/datasets/hazemessam/ddg/blob/main/ssym_r.csv
* P53: https://huggingface.co/datasets/hazemessam/ddg/blob/main/p53.csv
* Myoglobin: https://huggingface.co/datasets/hazemessam/ddg/blob/main/myoglobin.csv
* Myoglobin_r: https://huggingface.co/datasets/hazemessam/ddg/blob/main/myoglobin_r.csv
### Results
| Dataset | Pearson correlation | RMSE |
|:------------|:-------------------:|:----:|
| Ssym | 0.85 | 0.83 |
| Ssym_r | 0.85 | 0.83 |
| Myoglobin | 0.65 | 0.83 |
| Myoglobin_r | 0.65 | 0.84 |
|
hawalurahman/mt5-small-qa_v2_enhanced
|
hawalurahman
| 2024-10-20T14:16:57Z | 23 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-10-20T06:49:28Z |
---
library_name: transformers
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: mt5-small-qa_v2_enhanced
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-qa_v2_enhanced
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1661
- Rouge1: 0.4013
- Rouge2: 0.2476
- Rougel: 0.4014
- Rougelsum: 0.4012
- Bleu: 0.2699
- Exact Match: 0.2782
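For reference, a minimal inference sketch (the `question: ... context: ...` input format is an assumption, as the card does not document the expected prompt format):
```python
from transformers import pipeline
qa = pipeline("text2text-generation", model="hawalurahman/mt5-small-qa_v2_enhanced")
result = qa("question: Siapa presiden pertama Indonesia? context: Presiden pertama Indonesia adalah Soekarno.")
print(result[0]["generated_text"])
```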
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Exact Match |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:------:|:-----------:|
| 1.9754 | 1.0 | 2500 | 1.5697 | 0.2190 | 0.2009 | 0.2192 | 0.2194 | 0.2048 | 0.2054 |
| 1.2673 | 2.0 | 5000 | 1.2572 | 0.3223 | 0.2141 | 0.3222 | 0.3225 | 0.2240 | 0.2312 |
| 0.9588 | 3.0 | 7500 | 1.1536 | 0.3697 | 0.2347 | 0.3695 | 0.3696 | 0.2445 | 0.255 |
| 0.7753 | 4.0 | 10000 | 1.1177 | 0.3952 | 0.2538 | 0.3949 | 0.3950 | 0.2741 | 0.2702 |
| 0.6586 | 5.0 | 12500 | 1.1198 | 0.4119 | 0.2597 | 0.4121 | 0.4119 | 0.2770 | 0.2778 |
| 0.5849 | 6.0 | 15000 | 1.1087 | 0.3951 | 0.2495 | 0.3956 | 0.3951 | 0.2714 | 0.2788 |
| 0.5109 | 7.0 | 17500 | 1.1544 | 0.4021 | 0.2462 | 0.4019 | 0.4021 | 0.2671 | 0.2702 |
| 0.4777 | 8.0 | 20000 | 1.1463 | 0.4022 | 0.2465 | 0.4022 | 0.4020 | 0.2682 | 0.2726 |
| 0.4829 | 9.0 | 22500 | 1.1517 | 0.4016 | 0.2437 | 0.4018 | 0.4018 | 0.2621 | 0.2762 |
| 0.4481 | 10.0 | 25000 | 1.1661 | 0.4013 | 0.2476 | 0.4014 | 0.4012 | 0.2699 | 0.2782 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
Hanisnabila/result3
|
Hanisnabila
| 2024-10-20T14:05:41Z | 183 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:cardiffnlp/twitter-roberta-base-sentiment-latest",
"base_model:finetune:cardiffnlp/twitter-roberta-base-sentiment-latest",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-20T12:53:37Z |
---
library_name: transformers
base_model: cardiffnlp/twitter-roberta-base-sentiment-latest
tags:
- generated_from_trainer
model-index:
- name: result3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result3
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7166
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7898 | 1.0 | 144 | 0.7518 |
| 0.6739 | 2.0 | 288 | 0.6244 |
| 0.5425 | 3.0 | 432 | 0.6693 |
| 0.3724 | 4.0 | 576 | 0.6817 |
| 0.3575 | 5.0 | 720 | 0.7166 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.2.2+cu118
- Datasets 3.0.1
- Tokenizers 0.20.0
|
cuongdev/vtthuc-v2
|
cuongdev
| 2024-10-20T14:05:17Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-10-20T14:01:46Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### vtthuc-v2 Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
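A minimal diffusers sketch for trying the concept locally (the instance prompt is a guess, since the card does not state the trained token):
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("cuongdev/vtthuc-v2", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of vtthuc person").images[0]  # hypothetical instance prompt
image.save("sample.png")
```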
Sample pictures of this concept:
|
cryotron/chatbot_academic_2nd_Year_GUFF
|
cryotron
| 2024-10-20T14:02:15Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-20T14:00:02Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** cryotron
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Anish/CHIPSAL-COLING-TASKB-BEST-F1-0.75230-MURIL-LARGE-CASED
|
Anish
| 2024-10-20T13:56:34Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-20T13:55:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
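Since the card does not provide code, a generic sequence-classification sketch (assumed from the repo's `bert`/`text-classification` tags; the input string is illustrative):
```python
from transformers import pipeline
clf = pipeline("text-classification",
               model="Anish/CHIPSAL-COLING-TASKB-BEST-F1-0.75230-MURIL-LARGE-CASED")
print(clf("उदाहरण वाक्य"))  # "example sentence" in Devanagari script
```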
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hkshawn/7b
|
hkshawn
| 2024-10-20T13:54:12Z | 16 | 1 | null |
[
"safetensors",
"qwen2",
"qwen",
"uncensored",
"text-generation",
"conversational",
"zh",
"en",
"dataset:NobodyExistsOnTheInternet/ToxicQAFinal",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:Orion-zhen/dpo-toxic-zh",
"dataset:unalignment/toxic-dpo-v0.2",
"dataset:Crystalcareai/Intel-DPO-Pairs-Norefusals",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:gpl-3.0",
"model-index",
"region:us"
] |
text-generation
| 2024-10-29T11:36:23Z |
---
language:
- zh
- en
license: gpl-3.0
tags:
- qwen
- uncensored
base_model:
- Qwen/Qwen2.5-7B-Instruct
datasets:
- NobodyExistsOnTheInternet/ToxicQAFinal
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Orion-zhen/dpo-toxic-zh
- unalignment/toxic-dpo-v0.2
- Crystalcareai/Intel-DPO-Pairs-Norefusals
pipeline_tag: text-generation
model-index:
- name: Qwen2.5-7B-Instruct-Uncensored
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 72.04
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 35.83
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 1.36
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 7.05
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.58
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 38.07
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
---
# Qwen2.5-7B-Instruct-Uncensored
This model is an uncensored fine-tune of Qwen2.5-7B-Instruct. However, even though it is uncensored, the model still fails to generate detailed descriptions of certain extreme scenarios, which might be associated with the deletion of some pretraining data during Qwen's pretraining stage.
Check out my roleplay&writing enhanced model based on this model: [Orion-zhen/Meissa-Qwen2.5-7B-Instruct](https://huggingface.co/Orion-zhen/Meissa-Qwen2.5-7B-Instruct)
## Training details
I used SFT + DPO to remove censorship while trying to preserve the original model's capabilities.
- SFT:
- NobodyExistsOnTheInternet/ToxicQAFinal
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- DPO:
- Orion-zhen/dpo-toxic-zh
- unalignment/toxic-dpo-v0.2
- Crystalcareai/Intel-DPO-Pairs-Norefusals
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Orion-zhen__Qwen2.5-7B-Instruct-Uncensored)
| Metric |Value|
|-------------------|----:|
|Avg. |27.99|
|IFEval (0-Shot) |72.04|
|BBH (3-Shot) |35.83|
|MATH Lvl 5 (4-Shot)| 1.36|
|GPQA (0-shot) | 7.05|
|MuSR (0-shot) |13.58|
|MMLU-PRO (5-shot) |38.07|
|
QuantFactory/komodo-7b-base-GGUF
|
QuantFactory
| 2024-10-20T13:53:07Z | 126 | 3 |
transformers
|
[
"transformers",
"gguf",
"komodo",
"id",
"en",
"jv",
"su",
"arxiv:2403.09362",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-10-20T13:22:25Z |
---
language:
- id
- en
- jv
- su
license: llama2
library_name: transformers
tags:
- komodo
---
[](https://hf.co/QuantFactory)
# QuantFactory/komodo-7b-base-GGUF
This is a quantized version of [Yellow-AI-NLP/komodo-7b-base](https://huggingface.co/Yellow-AI-NLP/komodo-7b-base) created using llama.cpp.
# Original Model Card
# Model Card for Komodo-7B-Base
Komodo-7B-Base is a large language model that is developed through incremental pretraining and vocabulary expansion on top of Llama-2-7B-Base. This model can handle Indonesian, English and 11 regional languages of Indonesia.
**Disclaimer**: This is not an instruction-tuned model; further fine-tuning is needed for downstream tasks. For example, people usually utilize the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) dataset for further fine-tuning on top of the Llama-2-7B-Base model. Hence, there is no prompt template for this model.
## Model Details
<h3 align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/638828121901766b88076aa1/eB0L_nmy3ZpwtGA6-vhbC.png" width="950" align="center">
</h3>
### Model Description
More details can be found in our paper: https://arxiv.org/abs/2403.09362
- **Developed by:** [Yellow.ai](https://yellow.ai/)
- **Model type:** Decoder
- **Languages:** English, Indonesian, Acehnese, Balinese, Banjarese, Buginese, Madurese, Minangkabau, Javanese, Dayak Ngaju, Sundanese, Toba Batak, Lampungnese
- **License:** llama2
## Usage Example
Since this is a gated model, you need to be logged in to your HF account before using the model. Below is one way to do this. You can get the HF token from your profile (Profile -> Settings -> Access Tokens).
```python
import huggingface_hub
huggingface_hub.login("YOUR_HF_TOKEN")
```
Once you are logged in, you can download and load the model & tokenizer. We wrote a custom decoding function for Komodo-7B, which is why we need to pass `trust_remote_code=True`. The code also works without this parameter, but the decoding process will not work as expected.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("Yellow-AI-NLP/komodo-7b-base",trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Yellow-AI-NLP/komodo-7b-base",trust_remote_code=True)
model = model.to(device)
```
Then, you can try using the model.
```python
full_prompt = "Candi borobudur adalah"
tokens = tokenizer(full_prompt, return_tensors="pt").to(device)
output = model.generate(tokens["input_ids"], eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
# Candi borobudur adalah candi yang terletak di Magelang, Jawa Tengah.
```
## Technical Specifications
### Model Architecture and Objective
Komodo-7B is a decoder model using the Llama-2 architecture.
| Parameter | Komodo-7B |
|-----------------|:-----------:|
| Layers | 32 |
| d_model | 4096 |
| head_dim | 32 |
| Vocabulary | 35008 |
| Sequence Length | 4096 |
### Tokenizer Details
Recognizing the importance of linguistic diversity, we focused on enhancing our language model's proficiency in both Indonesian and regional languages. To achieve this, we systematically expanded the tokenizer's vocabulary by identifying and incorporating approximately 2,000 frequently used words specific to Indonesian and 1,000 words for regional languages that were absent in the Llama-2 model.
The standard method for enhancing a vocabulary typically involves developing a new tokenizer and integrating it with the existing one. This technique has shown impressive results in projects like Chinese-LLaMA and Open-Hathi. The effectiveness of this strategy can be attributed to the significant linguistic distinctions between languages such as Chinese and Hindi when compared to English. In contrast, the Indonesian language employs the same Latin script as English, which presents a different set of challenges.
We tested the traditional method, as well as a new approach where we included the top n words (not tokens) from the Indonesian vocabulary. We discovered that with the new approach, we could achieve better fertility scores by adding around 3000 new vocabulary words. Adding more than 3000 words did not significantly improve the fertility score further, but it increased the size of the embedding matrix, leading to longer training times.
More details can be found in our paper: https://arxiv.org/abs/2403.09362
### Training Data
More details can be found in our paper: https://arxiv.org/abs/2403.09362
### Training Procedure
More details can be found in our paper: https://arxiv.org/abs/2403.09362
#### Preprocessing
More details can be found in our paper: https://arxiv.org/abs/2403.09362
## Evaluation & Results
Please note that the benchmarking values below are based on our SFT Model, Komodo-7B-Instruct, while here we only release the base model, Komodo-7B-base.
| Organization | Model Name | Indo MMLU | ID-EN | XCOPA-ID | Intent Classification | Colloquial Detection | NusaX-Senti | ID-Hate Speech | TydiQA-ID | Indosum | Average |
|--------------|--------------------|-----------|-------|----------|-----------------------|----------------------|-------------|----------------|-----------|---------|---------|
| OpenAI | GPT-3.5-turbo-0301 | 51.3 | 64.5 | 70.0 | 82.0 | 64.1 | 47.2 | 68.0 | 85.3 | 41.0 | 63.7 |
| OpenAI | GPT-3.5-turbo-0613 | 52.7 | 66.8 | 88.2 | 84.0 | 75.1 | 63.3 | 63.7 | 86.4 | 40.0 | 68.9 |
| OpenAI | GPT-3.5-turbo-1106 | 53.3 | 69.7 | 89.3 | 84.0 | 64.2 | 59.8 | 56.6 | 88.0 | 42.0 | 67.4 |
| OpenAI | GPT-4-preview-1106 | 69.8 | 78.0 | 98.3 | 89.0 | 92.7 | 66.1 | 73.4 | 72.0 | 33.0 | 74.7 |
| Meta | Llama-2-7B-Chat | 30.4 | 45.6 | 41.5 | 57.0 | 31.4 | 2.9 | 41.3 | 11.7 | 34.0 | 32.9 |
| Meta | Llama-2-13B-Chat | 32.0 | 61.7 | 38.0 | 59.0 | 31.1 | 58.7 | 57.2 | 71.9 | 40.0 | 50.0 |
| Google | Gemma-7B-it | 37.4 | 73.6 | 57.7 | 77.1 | 18.8 | 44.2 | 54.8 | 73.3 | 44.0 | 53.4 |
| Mistral | Mixtral-8x7B-v0.1-Instruct | 45.2 | 57.8 | 88.7 | 86.0 | 41.1 | 52.8 | 68.8 | 90.3 | 14.0 | 60.5 |
| AISingapore | Sealion-7B-Instruct-NC | 23.9 | 26.9 | 41.3 | 37.0 | 41.8 | 30.7 | 57.3 | 65.3 | 26.0 | 38.9 |
| Cohere | Aya-101-13B | 47.7 | 47.3 | 84.0 | 64.0 | 18.9 | 74.6 | 72.7 | 81.3 | 39.0 | 58.8 |
| MBZUAI | Bactrian-X-Llama-7B | 23.6 | 43.2 | 45.3 | 42.0 | 50.3 | 44.5 | 42.4 | 65.0 | 15.0 | 41.3 |
| Alibaba | Qwen-1.5-7B-chat | 40.0 | 56.0 | 29.5 | 85.0 | 41.8 | 58.7 | 63.9 | 51.22 | 29.0 | 50.6 |
| Yellow.ai | Komodo-7B-Instruct | 43.2 | 90.5 | 79.6 | 84.0 | 73.6 | 79.3 | 56.2 | 90.3 | 43.0 | 71.1 |
<h3 align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/638828121901766b88076aa1/CJkSjsVnC8MoMolIQ_Uv3.png" width="550" align="center">
</h3>
More details can be found in our paper: https://arxiv.org/abs/2403.09362
### Infrastructure
| Training Details | Komodo-7B |
|----------------------|:------------:|
| AWS EC2 p4d.24xlarge | 1 instances |
| Nvidia A100 40GB GPU | 8 |
| Training Duration | 300 hours |
## Citation
```
@misc{owen2024komodo,
title={Komodo: A Linguistic Expedition into Indonesia's Regional Languages},
author={Louis Owen and Vishesh Tripathi and Abhay Kumar and Biddwan Ahmed},
year={2024},
eprint={2403.09362},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Model Card Authors
[Louis Owen](https://www.linkedin.com/in/louisowen/) <br>
[Vishesh Tripathi](https://www.linkedin.com/in/vishesh-tripathi/) <br>
[Abhay Kumar](https://www.linkedin.com/in/akanyaani/) <br>
[Biddwan Ahmed](https://www.linkedin.com/in/biddwan-ahmed-917333126/) <br>
|
WYCN/testm
|
WYCN
| 2024-10-20T13:50:50Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gguf",
"llama",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-20T13:31:51Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MaziyarPanahi/Llama-3.1-8B-Lexi-Uncensored-V2-GGUF
|
MaziyarPanahi
| 2024-10-20T13:50:09Z | 157 | 2 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2",
"base_model:quantized:Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2",
"region:us",
"conversational"
] |
text-generation
| 2024-10-20T13:10:02Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: Llama-3.1-8B-Lexi-Uncensored-V2-GGUF
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
inference: false
model_creator: Orenguteng
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Llama-3.1-8B-Lexi-Uncensored-V2-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.1-8B-Lexi-Uncensored-V2-GGUF)
- Model creator: [Orenguteng](https://huggingface.co/Orenguteng)
- Original model: [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)
## Description
[MaziyarPanahi/Llama-3.1-8B-Lexi-Uncensored-V2-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.1-8B-Lexi-Uncensored-V2-GGUF) contains GGUF format model files for [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2).
### About GGUF
GGUF is a model file format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux support is available in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance (including GPU support) and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
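From Python, a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is shown below; the quant filename glob is an assumption, so check the repo's file listing for the exact name:
```python
# Minimal sketch: fetch a quant from this repo and chat with it via llama-cpp-python.
# The filename glob is an assumption; verify the actual GGUF file name in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Llama-3.1-8B-Lexi-Uncensored-V2-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level; pick any file the repo ships
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```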
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
kpkom/Flipkart2
|
kpkom
| 2024-10-20T13:47:38Z | 103 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-10-09T17:24:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
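Until the authors fill this in, here is a minimal sketch assuming the standard Florence-2 interface implied by the repo's `florence2`/`custom_code` tags; the `<CAPTION>` task token and image path are illustrative:
```python
# Minimal sketch; assumes the standard Florence-2 API implied by the repo tags.
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model = AutoModelForCausalLM.from_pretrained("kpkom/Flipkart2", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("kpkom/Flipkart2", trust_remote_code=True)

image = Image.open("product.jpg")  # hypothetical local image
inputs = processor(text="<CAPTION>", images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=128,
)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```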
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mrcuddle/Lumimaid-v0.2-12B-Q4_K_M-GGUF
|
mrcuddle
| 2024-10-20T13:30:36Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama.cpp",
"quantized",
"NeverSleep/Lumimaid-v0.2-12B",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-10-16T16:29:26Z |
---
tags:
- gguf
- llama.cpp
- quantized
- NeverSleep/Lumimaid-v0.2-12B
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
# mrcuddle/Lumimaid-v0.2-12B-Q4_K_M-GGUF
This model was converted to GGUF format from [`NeverSleep/Lumimaid-v0.2-12B`](https://huggingface.co/NeverSleep/Lumimaid-v0.2-12B) using llama.cpp via
[Convert Model to GGUF](https://github.com/ruslanmv/convert-model-to-GGUF).
**Key Features:**
* Quantized for reduced file size (GGUF format)
* Optimized for use with llama.cpp
* Compatible with llama-server for efficient serving
Refer to the [original model card](https://huggingface.co/NeverSleep/Lumimaid-v0.2-12B) for more details on the base model.
## Usage with llama.cpp
**1. Install llama.cpp:**
```bash
brew install llama.cpp # For macOS/Linux
```
**2. Run Inference:**
**CLI:**
```bash
llama-cli --hf-repo mrcuddle/Lumimaid-v0.2-12B-Q4_K_M-GGUF --hf-file lumimaid-v0.2-12b-q4_k_m.gguf -p "Your prompt here"
```
**Server:**
```bash
llama-server --hf-repo mrcuddle/Lumimaid-v0.2-12B-Q4_K_M-GGUF --hf-file lumimaid-v0.2-12b-q4_k_m.gguf -c 2048
```
For more advanced usage, refer to the [llama.cpp repository](https://github.com/ggerganov/llama.cpp).
|
Manas2708/Llama-3-medical
|
Manas2708
| 2024-10-20T13:28:52Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-20T13:28:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KoFinanceLLM/KRX-Qwen2.5-7B-IT-MN
|
KoFinanceLLM
| 2024-10-20T13:16:46Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"krx",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-20T13:11:24Z |
---
library_name: transformers
tags:
- krx
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Guilherme34/CODER-0.5b_awq
|
Guilherme34
| 2024-10-20T13:14:37Z | 76 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-10-20T13:14:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cuongdev/hntanh-v2
|
cuongdev
| 2024-10-20T13:13:52Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-10-20T13:10:24Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### hntanh-v2 Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
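To generate images locally, a minimal `diffusers` sketch (standard `StableDiffusionPipeline` loading per the repo tags; the prompt token is illustrative):
```python
# Minimal sketch; assumes standard StableDiffusionPipeline loading per the repo tags.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "cuongdev/hntanh-v2", torch_dtype=torch.float16
).to("cuda")

prompt = "photo of hntanh-v2"  # illustrative; use the instance token the concept was trained on
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("sample.png")
```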
Sample pictures of this concept:
|
QuantFactory/Meraj-Mini-GGUF
|
QuantFactory
| 2024-10-20T13:12:27Z | 147 | 3 |
transformers
|
[
"transformers",
"gguf",
"qwen",
"text-generation-inference",
"text2text-generation",
"ar",
"en",
"arxiv:2305.18290",
"arxiv:2403.13257",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text2text-generation
| 2024-10-20T12:35:24Z |
---
license: apache-2.0
language:
- ar
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text2text-generation
library_name: transformers
tags:
- qwen
- text-generation-inference
---
[](https://hf.co/QuantFactory)
# QuantFactory/Meraj-Mini-GGUF
This is a quantized version of [arcee-ai/Meraj-Mini](https://huggingface.co/arcee-ai/Meraj-Mini), created using llama.cpp.
# Original Model Card
<div align="center">
<img src="https://i.ibb.co/CmPSSpq/Screenshot-2024-10-06-at-9-45-06-PM.png" alt="Arcee Meraj Mini" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;">
</div>
Following the release of [Arcee Meraj](https://meraj.arcee.ai/), our enterprise's globally top-performing Arabic LLM, we are thrilled to unveil Arcee Meraj Mini. This open-source model, meticulously fine-tuned from [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct), is expertly designed for both Arabic and English. This model has undergone rigorous evaluation across multiple benchmarks in both languages, demonstrating top-tier performance in Arabic and competitive results in English. Arcee Meraj Mini’s primary objective is to enhance Arabic capabilities while maintaining robust English language proficiency. Benchmark results confirm that Arcee Meraj Mini excels in Arabic, with English performance comparable to leading models — perfectly aligning with our vision for balanced bilingual strength.
## Technical Details
Below is an overview of the key stages in Meraj Mini’s development:
1. **Data Preparation:** We filter candidate samples from diverse English and Arabic sources to ensure high-quality data. Some of the selected English datasets are translated into Arabic to increase the quantity of Arabic samples and improve the model's bilingual performance. Then, new [Direct Preference Optimization (DPO)](https://arxiv.org/pdf/2305.18290) datasets are continuously prepared, filtered, and translated to maintain a fresh and diverse dataset that supports better generalization across domains.
2. **Initial Training:** We train the Qwen2.5 model with 7 billion parameters using these high-quality datasets in both languages. This allows the model to handle diverse linguistic patterns from over 500 million tokens, ensuring strong performance in Arabic and English tasks.
3. **Iterative Training and Post-Training:** Iterative training and post-training iterations refine the model, enhancing its accuracy and adaptability to ensure it can perform well across varied tasks and language contexts.
4. **Evaluation:** We trained and evaluated 15 different variants of Arcee Meraj Mini to explore optimal configurations, with assessments on both Arabic and English benchmarks and leaderboards. This step ensures the model is robust in handling both general and domain-specific tasks.
5. **Final Model Creation:** We select the best-performing variant and use the [MergeKit](https://arxiv.org/pdf/2403.13257) library to merge the configurations, resulting in the final Arcee Meraj Mini model. This model is not only optimized for language understanding but also serves as a starting point for domain adaptation in different areas.
With this process, Arcee Meraj Mini is crafted to be more than just a general-purpose language model—it’s an adaptable tool, ready to be fine-tuned for specific industries and applications, empowering users to extend its capabilities for domain-specific tasks.
## Capabilities and Use Cases
Arcee Meraj Mini can handle a wide range of language tasks, including the following:
1. **Arabic Language Understanding**: Arcee Meraj Mini excels in general language comprehension, reading comprehension, and common-sense reasoning, all tailored to the Arabic language, providing strong performance in a variety of linguistic tasks.
2. **Cultural Adaptation**: The model ensures content creation that goes beyond linguistic accuracy, incorporating cultural nuances to align with Arabic norms and values, making it suitable for culturally relevant applications.
3. **Education**: It enables personalized, adaptive learning experiences for Arabic speakers by generating high-quality educational content across diverse subjects, enhancing the overall learning journey.
4. **Mathematics and Coding**: With robust support for mathematical reasoning and problem-solving, as well as code generation in Arabic, Arcee Meraj Mini serves as a valuable tool for developers and professionals in technical fields.
5. **Customer Service**: The model facilitates the development of advanced Arabic-speaking chatbots and virtual assistants, capable of managing customer queries with a high degree of natural language understanding and precision.
6. **Content Creation**: Arcee Meraj Mini generates high-quality Arabic content for various needs, from marketing materials and technical documentation to creative writing, ensuring impactful communication and engagement in the Arabic-speaking world.
## Quantized GGUF
The following GGUF quantizations are available:
- [Meraj-Mini-GGUF](https://huggingface.co/MaziyarPanahi/Meraj-Mini-GGUF)
## How to Use
This model uses the ChatML prompt template:
```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
messages = [
{"role": "user", "content": "مرحبا، كيف حالك؟"},
]
pipe = pipeline("text-generation", model="arcee-ai/Meraj-Mini")
pipe(messages)
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("arcee-ai/Meraj-Mini")
model = AutoModelForCausalLM.from_pretrained("arcee-ai/Meraj-Mini")
```
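The tokenizer applies this ChatML format automatically through its chat template; a short sketch (the system prompt is illustrative):
```python
# Minimal sketch: generate with the ChatML template via apply_chat_template.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("arcee-ai/Meraj-Mini")
model = AutoModelForCausalLM.from_pretrained("arcee-ai/Meraj-Mini", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful bilingual assistant."},  # illustrative
    {"role": "user", "content": "مرحبا، كيف حالك؟"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```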
## Evaluations
#### Open Arabic LLM Leaderboard (OALL) Benchmarks
The Arcee Meraj Mini model consistently outperforms state-of-the-art models on most of the Open Arabic LLM Leaderboard (OALL) benchmarks, highlighting its effectiveness on Arabic-language content and securing the top average position among the compared models.
<div align="center">
<img src="https://i.ibb.co/LQ0z7fH/Screenshot-2024-10-15-at-2-53-45-PM.png" alt="Arcee Meraj Mini Open Arabic LLM Leaderboard (OALL) - table 1" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 80%; height: auto;">
</div>
<div align="center">
<img src="https://i.ibb.co/fM6VQR7/Screenshot-2024-10-15-at-2-53-55-PM.png" alt="Arcee Meraj Mini Open Arabic LLM Leaderboard (OALL) - table 2" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 80%; height: auto;">
</div>
#### Translated MMLU
We focused on the multilingual MMLU dataset, as distributed through the LM Evaluation Harness repository, to compare the multilingual strength of different models on this benchmark. Arcee Meraj Mini outperforms the other models, demonstrating superior performance compared to other state-of-the-art models.
<div align="center">
<img src="https://i.ibb.co/dfwW1W5/W-B-Chart-10-15-2024-2-07-12-PM.png" alt="Arcee Meraj Mini Trnalsated MMLU" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 80%; height: auto;">
</div>
#### English Benchmarks
Arcee Meraj Mini performs comparably to state-of-the-art models, demonstrating how the model retains its English language knowledge and capabilities while learning Arabic.
<div align="center">
<img src="https://i.ibb.co/mTcLFzt/W-B-Chart-10-15-2024-2-15-57-PM.png" alt="Arcee Meraj Mini Winogrande" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 80%; height: auto;">
</div>
<div align="center">
<img src="https://i.ibb.co/GRBjjGN/W-B-Chart-10-15-2024-2-17-34-PM.png" alt="Arcee Meraj Mini Arc Challenge" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 80%; height: auto;">
</div>
<div align="center">
<img src="https://i.ibb.co/98s0qTf/W-B-Chart-10-15-2024-2-17-46-PM.png" alt="Arcee Meraj Mini TruthfulQA" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 80%; height: auto;">
</div>
<div align="center">
<img src="https://i.ibb.co/yqvRK3L/W-B-Chart-10-15-2024-2-17-57-PM.png" alt="Arcee Meraj Mini GSM8K" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 80%; height: auto;">
</div>
## Model Usage
For a detailed explanation of the model's capabilities, architecture, and applications, please refer to our blog post: https://blog.arcee.ai/arcee-meraj-mini-2/
To test the model directly, you can try it out using this Google Colab notebook: https://colab.research.google.com/drive/1hXXyNM-X0eKwlZ5OwqhZfO0U8CBq8pFO?usp=sharing
## Acknowledgements
We are grateful to the open-source AI community for their continuous contributions and to the Qwen team for their foundational efforts on the Qwen2.5 model series.
## Future Directions
As we release Arcee Meraj Mini to the public, we invite researchers, developers, and businesses to engage with the model, particularly to enhance Arabic-language support and foster domain adaptation. We are committed to advancing open-source AI technology and encourage the community to explore, contribute to, and build upon Arcee Meraj Mini.
|
SkylerChew/speecht5_tts_genshin
|
SkylerChew
| 2024-10-20T13:02:10Z | 77 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"eg",
"dataset:simon3000/genshin-voice",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-10-20T09:52:11Z |
---
library_name: transformers
language:
- en
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- simon3000/genshin-voice
model-index:
- name: SpeechT5 TTS English Genshin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS English Genshin
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the genshin-voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5421
## Model description
More information needed
## Intended uses & limitations
More information needed
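Pending further documentation, a minimal inference sketch following the standard SpeechT5 TTS pipeline; the zero speaker embedding is a placeholder, and a real 512-dimensional x-vector gives better voices:
```python
# Minimal sketch following the standard SpeechT5 TTS pipeline.
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("SkylerChew/speecht5_tts_genshin")
model = SpeechT5ForTextToSpeech.from_pretrained("SkylerChew/speecht5_tts_genshin")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Welcome to Teyvat, traveler!", return_tensors="pt")  # illustrative text
speaker_embeddings = torch.zeros((1, 512))  # placeholder; substitute a real x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```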
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6689 | 1.9550 | 1000 | 0.5954 |
| 0.6139 | 3.9101 | 2000 | 0.5623 |
| 0.6054 | 5.8651 | 3000 | 0.5481 |
| 0.5932 | 7.8201 | 4000 | 0.5421 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Gummybear05/wav2vec2-E50_freq_speed
|
Gummybear05
| 2024-10-20T12:44:35Z | 23 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-10-20T11:00:15Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-E50_freq_speed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-E50_freq_speed
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2867
- Cer: 26.4509
## Model description
More information needed
## Intended uses & limitations
More information needed
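Pending further documentation, a minimal transcription sketch; it assumes the processor was pushed with the checkpoint and that input audio is 16 kHz mono, and the file path is illustrative:
```python
# Minimal sketch; assumes the processor was saved alongside the fine-tuned checkpoint.
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "Gummybear05/wav2vec2-E50_freq_speed"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

audio, _ = librosa.load("sample.wav", sr=16000)  # hypothetical file; 16 kHz mono expected
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```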
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 36.1912 | 0.1289 | 200 | 4.9281 | 100.0 |
| 4.8744 | 0.2579 | 400 | 4.6344 | 100.0 |
| 4.7389 | 0.3868 | 600 | 4.6396 | 100.0 |
| 4.7053 | 0.5158 | 800 | 4.6142 | 100.0 |
| 4.6271 | 0.6447 | 1000 | 4.5886 | 98.9779 |
| 4.5325 | 0.7737 | 1200 | 4.3545 | 97.4213 |
| 4.0987 | 0.9026 | 1400 | 3.3740 | 62.2768 |
| 3.1619 | 1.0316 | 1600 | 2.8281 | 48.0733 |
| 2.8074 | 1.1605 | 1800 | 2.4434 | 44.3257 |
| 2.5099 | 1.2895 | 2000 | 2.2456 | 40.7542 |
| 2.3202 | 1.4184 | 2200 | 2.0216 | 38.0169 |
| 2.2438 | 1.5474 | 2400 | 1.8903 | 35.2796 |
| 2.0245 | 1.6763 | 2600 | 1.8335 | 34.7098 |
| 1.9285 | 1.8053 | 2800 | 1.7468 | 34.4690 |
| 1.83 | 1.9342 | 3000 | 1.5999 | 31.0503 |
| 1.6842 | 2.0632 | 3200 | 1.5379 | 30.3454 |
| 1.5576 | 2.1921 | 3400 | 1.4967 | 29.8755 |
| 1.4787 | 2.3211 | 3600 | 1.3937 | 28.0721 |
| 1.458 | 2.4500 | 3800 | 1.3861 | 27.7079 |
| 1.3927 | 2.5790 | 4000 | 1.2945 | 26.1631 |
| 1.3578 | 2.7079 | 4200 | 1.3155 | 27.2968 |
| 1.3148 | 2.8369 | 4400 | 1.2744 | 26.1454 |
| 1.3245 | 2.9658 | 4600 | 1.2867 | 26.4509 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
ashokpoudel/retail-banking-llm-chatbot-translation-0.0.1
|
ashokpoudel
| 2024-10-20T12:42:40Z | 7 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-20T12:40:30Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** ashokpoudel
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
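Since the repo ships GGUF weights, a minimal llama-cpp-python sketch is shown below; the filename glob and prompt are assumptions, so check the repo's file listing:
```python
# Minimal sketch; the filename glob is an assumption, verify the actual GGUF name.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ashokpoudel/retail-banking-llm-chatbot-translation-0.0.1",
    filename="*.gguf",  # assumed; pick the exact GGUF file the repo ships
)
out = llm("Customer: How do I reset my online banking password?\nAgent:", max_tokens=128)
print(out["choices"][0]["text"])
```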
|
elisaklunder/finetuned_bert_for_asap_sas_essayset_9
|
elisaklunder
| 2024-10-20T12:42:27Z | 160 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-10-20T12:42:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
elisaklunder/finetuned_bert_for_asap_sas_essayset_8
|
elisaklunder
| 2024-10-20T12:42:03Z | 161 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-10-20T12:41:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
elisaklunder/finetuned_bert_for_asap_sas_essayset_7
|
elisaklunder
| 2024-10-20T12:41:43Z | 159 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-10-20T12:41:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xonic48/codeparrot-ds
|
xonic48
| 2024-10-20T12:41:00Z | 143 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-20T05:17:00Z |
---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
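For a quick test, here is a minimal sketch using the text-generation pipeline; the code-style prompt is an assumption based on the model's CodeParrot-style name.
```python
from transformers import pipeline

# Sample a continuation from the fine-tuned GPT-2 checkpoint.
generator = pipeline("text-generation", model="xonic48/codeparrot-ds")
print(generator("import numpy as np", max_new_tokens=40)[0]["generated_text"])
```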
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
elisaklunder/finetuned_bert_for_asap_sas_essayset_5
|
elisaklunder
| 2024-10-20T12:40:37Z | 160 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-10-20T12:40:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
elisaklunder/finetuned_bert_for_asap_sas_essayset_4
|
elisaklunder
| 2024-10-20T12:40:15Z | 160 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-10-20T12:39:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Odin-9B-i1-GGUF
|
mradermacher
| 2024-10-20T12:40:06Z | 101 | 1 |
transformers
|
[
"transformers",
"gguf",
"chat",
"en",
"dataset:anthracite-org/c2_logs_16k_llama_v1.1",
"dataset:NewEden/Claude-Instruct-5K",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:lodrick-the-lafted/kalo-opus-instruct-3k-filtered",
"dataset:anthracite-org/nopm_claude_writing_fixed",
"dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned",
"dataset:anthracite-org/kalo_opus_misc_240827",
"dataset:anthracite-org/kalo_misc_part2",
"base_model:Delta-Vector/Odin-9B",
"base_model:quantized:Delta-Vector/Odin-9B",
"license:agpl-3.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-13T09:59:20Z |
---
base_model: Delta-Vector/Odin-9B
datasets:
- anthracite-org/c2_logs_16k_llama_v1.1
- NewEden/Claude-Instruct-5K
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- lodrick-the-lafted/kalo-opus-instruct-3k-filtered
- anthracite-org/nopm_claude_writing_fixed
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo_opus_misc_240827
- anthracite-org/kalo_misc_part2
language:
- en
library_name: transformers
license: agpl-3.0
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Delta-Vector/Odin-9B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Odin-9B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
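As one concrete route, here is a minimal sketch with the llama-cpp-python bindings; the file name assumes you downloaded the Q4_K_M quant from the table below, and the context size is an arbitrary choice.
```python
from llama_cpp import Llama

# Load a local GGUF quant and run a single chat turn.
llm = Llama(model_path="Odin-9B.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself briefly."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```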
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-9B-i1-GGUF/resolve/main/Odin-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
elisaklunder/finetuned_bert_for_asap_sas_essayset_3
|
elisaklunder
| 2024-10-20T12:39:54Z | 160 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-10-20T12:39:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thangtrungnguyen/vietnamese-regional-accent-classification-model
|
thangtrungnguyen
| 2024-10-20T12:36:20Z | 133 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-10-07T08:11:07Z |
---
library_name: transformers
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: vietnamese-regional-accent-classification-model
results:
- task:
name: Audio Classification
type: audio-classification
metrics:
- name: F1
type: f1
value: 0.8217287598030195
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vietnamese-regional-accent-classification-model
This model achieves the following results on the evaluation set:
- Loss: 0.5951
- F1: 0.8217
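For reference, here is a minimal inference sketch using the audio-classification pipeline; `speech.wav` is a placeholder path for a local recording.
```python
from transformers import pipeline

# Classify the regional accent of a local audio file.
classifier = pipeline(
    "audio-classification",
    model="thangtrungnguyen/vietnamese-regional-accent-classification-model",
)
print(classifier("speech.wav"))  # ranked accent labels with scores
```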
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | F1 | Validation Loss |
|:-------------:|:-----:|:----:|:------:|:---------------:|
| 1.0507 | 1.0 | 44 | 0.7563 | 0.8615 |
| 0.8157 | 2.0 | 88 | 0.7804 | 0.7120 |
| 0.7555 | 3.0 | 132 | 0.7981 | 0.6501 |
| 0.7006 | 4.0 | 176 | 0.7635 | 0.6767 |
| 0.6825 | 5.0 | 220 | 0.8005 | 0.6370 |
| 0.6595 | 6.0 | 264 | 0.7735 | 0.6832 |
| 0.6634 | 7.0 | 308 | 0.8078 | 0.6044 |
| 0.627 | 8.0 | 352 | 0.7873 | 0.6399 |
| 0.603 | 9.0 | 396 | 0.8255 | 0.5825 |
| 0.5977 | 10.0 | 440 | 0.8180 | 0.5931 |
| 0.5635 | 11.0 | 484 | 0.8217 | 0.5951 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
MaziyarPanahi/Llama-3.2-3B-Instruct-abliterated-GGUF
|
MaziyarPanahi
| 2024-10-20T12:30:29Z | 172 | 1 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"region:us",
"conversational"
] |
text-generation
| 2024-10-20T12:11:46Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: Llama-3.2-3B-Instruct-abliterated-GGUF
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
inference: false
model_creator: huihui-ai
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Llama-3.2-3B-Instruct-abliterated-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Instruct-abliterated-GGUF)
- Model creator: [huihui-ai](https://huggingface.co/huihui-ai)
- Original model: [huihui-ai/Llama-3.2-3B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated)
## Description
[MaziyarPanahi/Llama-3.2-3B-Instruct-abliterated-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3.2-3B-Instruct-abliterated-GGUF) contains GGUF format model files for [huihui-ai/Llama-3.2-3B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
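As a minimal sketch with llama-cpp-python from the list above: the `.gguf` file name is an assumption, so substitute whichever quant you downloaded from this repository.
```python
from llama_cpp import Llama

# Load a local GGUF quant and run a plain completion.
llm = Llama(model_path="Llama-3.2-3B-Instruct-abliterated.Q4_K_M.gguf")
out = llm("Q: What is the GGUF format?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```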
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
QuantFactory/SecurityLLM-GGUF
|
QuantFactory
| 2024-10-20T12:16:54Z | 167 | 4 |
transformers
|
[
"transformers",
"gguf",
"security",
"cybersecwithai",
"threat",
"vulnerability",
"infosec",
"zysec.ai",
"cyber security",
"ai4security",
"llmsecurity",
"cyber",
"malware analysis",
"exploitdev",
"ai4good",
"aisecurity",
"cybersec",
"cybersecurity",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-20T11:42:56Z |
---
library_name: transformers
license: apache-2.0
tags:
- security
- cybersecwithai
- threat
- vulnerability
- infosec
- zysec.ai
- cyber security
- ai4security
- llmsecurity
- cyber
- malware analysis
- exploitdev
- ai4good
- aisecurity
- cybersec
- cybersecurity
---
[](https://hf.co/QuantFactory)
# QuantFactory/SecurityLLM-GGUF
This is quantized version of [ZySec-AI/SecurityLLM](https://huggingface.co/ZySec-AI/SecurityLLM) created using llama.cpp
# Original Model Card
# ZySec-7B
ZySec-7B stands as a pivotal innovation for security professionals, leveraging the advanced capabilities of HuggingFace's Zephyr language model series. This AI model is crafted to be an omnipresent cybersecurity ally, offering on-demand, expert guidance on cybersecurity issues. Picture ZySec-7B as an ever-present digital teammate, adept at navigating the complexities of security challenges.
The efficacy of ZySec-7B lies in its comprehensive training across numerous cybersecurity fields, providing a deep and wide-ranging understanding of the sector. ZySec is developed using the DPO technique, utilizing a varied dataset encompassing critical topics such as:
- Sophisticated areas like Attack Surface Threats, Cloud Security, and the Cyber Kill Chain.
- Key compliance and regulatory frameworks, including CIS Controls, FedRAMP, PCI DSS, and ISO/IEC 27001.
- Practical aspects like Cloud Secure Migration, Data Exfiltration Techniques, and Security Incident Handling.
- Crucial strategic fields such as Security Governance, Risk Management, and Security Architecture Review.
ZySec-7B's training spans over 30 unique domains, each enriched with thousands of data points, delivering unparalleled expertise.
As the first of its kind in an open-source, AI-driven cybersecurity series, ZySec-7B transcends the conventional role of a support tool, redefining organizational security approaches. Its open-source nature not only invites community contributions but also enhances its flexibility and transparency in managing vast cybersecurity data. ZySec-7B is instrumental in providing vital, actionable insights for strategic decision-making and advanced risk management. More than mere software, ZySec-7B is a community-enhanced strategic tool, equipping your team to proactively confront and stay ahead of the dynamic landscape of cyber threats and regulatory demands.
# For suggestions please use [Road Map](https://zysec-ai.productlift.dev/t/roadmap)
<img src="https://huggingface.co/aihub-app/ZySec-7B-v1/resolve/main/ZySec-7B-dataset-composition.png?download=true" alt="Dataset Distribution" width="90%"/>
Details of dataset distribution here - [Dataset Distribution](https://huggingface.co/aihub-app/ZySec-7B/resolve/main/ZySec-7B-dataset-composition.png?download=true)
Fully compatible with [LM Studio](https://lmstudio.ai); search for “ZySec” to find it. Here is a sample output of ZySec writing an email to John about database security in LM Studio:
<img src="https://huggingface.co/aihub-app/ZySec-7B-v1/resolve/main/sample-output.png" alt="Sample Output" width="90%"/>
---
The training is funded by [ZySec AI](https://www.zysec.app), the mobile app for Cyber Security professionals.
Official GGUF version is hosted here - [ZySec-7B-v1-GGUF on HuggingFace](https://huggingface.co/aihub-app/ZySec-7B-v1-GGUF)
## [ZySec AI: Unleashing the Potential of the ZySec Series Model](https://github.com/ZySec-AI/ZySec)
Project ZySec, an integral part of ZySec AI, stands at the forefront of integrating Artificial Intelligence into Cybersecurity. Centered around the innovative ZySec 7B model, it's designed to revolutionize the cybersecurity landscape with AI-driven solutions. ZySec AI isn't just a tool; it's a transformative approach, blending AI's cutting-edge capabilities with the unique intricacies of cybersecurity while ensuring privacy and security.
### Discover the Key Features of Project ZySec
- **AI-Driven Cybersecurity:** Tap into the power of the ZySec 7B model, a bespoke AI solution fine-tuned for cybersecurity.
- **24/7 Expert Assistance:** Benefit from round-the-clock support and expert advice, guaranteeing smooth operations during any SOC shift.
- **Efficient Playbook Access:** Streamline your workflow with quick and easy access to playbooks and documents, enhancing information retrieval.
- **Standards Explorer:** Navigate various standards with ease, akin to a seasoned expert's proficiency.
- **Ongoing Internet Research:** Leverage AI-enabled, thorough internet research for exhaustive insights. (Note: Internet use is optional and specific to this feature).
### About Project ZySec by ZySec AI
ZySec AI is an open-source project with a vision of fusing cybersecurity with artificial intelligence. Our goal is to transform the way security professionals engage with technology. More than a mere tool, ZySec AI represents a comprehensive strategy to augment security operations, merging the innovative essence of AI with cybersecurity's distinctive challenges while always ensuring privacy and security.
https://github.com/ZySec-AI/ZySec
### The ZySec Roadmap
https://github.com/ZySec-AI/.github/blob/main/roadmap.md
|
anno2021/distilbert-base-uncased-finetuned-emotion
|
anno2021
| 2024-10-20T12:12:39Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-20T12:05:29Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2101
- Accuracy: 0.9295
- F1: 0.9293
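For reference, here is a minimal inference sketch with the text-classification pipeline; the input sentence is an invented example.
```python
from transformers import pipeline

# Score a sentence with the fine-tuned emotion classifier.
classifier = pipeline(
    "text-classification",
    model="anno2021/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see the results!"))
```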
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8395 | 1.0 | 250 | 0.3061 | 0.9105 | 0.9097 |
| 0.2528 | 2.0 | 500 | 0.2101 | 0.9295 | 0.9293 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
frtcek95/qwen2.5-coder-text2nosql
|
frtcek95
| 2024-10-20T11:53:07Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-20T11:46:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
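Pending the authors' own snippet, here is a minimal sketch assuming the usual chat-style Transformers interface; the natural-language query is invented for illustration.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "frtcek95/qwen2.5-coder-text2nosql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Ask for a NoSQL query in natural language.
messages = [{"role": "user", "content": "Find all users older than 30 in the 'users' collection."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```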
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kerk86/bevzyukn-1
|
kerk86
| 2024-10-20T11:44:36Z | 29 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-20T11:44:07Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: BevzyukN
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# BevzyukN_1
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `BevzyukN` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
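For diffusers users, here is a minimal sketch assuming the Flux pipeline and sufficient VRAM; the repo id is assumed to resolve to a single LoRA file (pass `weight_name` otherwise), and the prompt simply combines the trigger word above with a generic description.
```python
import torch
from diffusers import FluxPipeline

# Load the base model and apply this LoRA on top.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("kerk86/bevzyukn-1")

image = pipe("portrait photo of BevzyukN", num_inference_steps=28).images[0]
image.save("bevzyukn.png")
```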
|
ssdv8/output
|
ssdv8
| 2024-10-20T11:32:15Z | 22 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-10-17T15:16:42Z |
---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
- diffusers-training
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - ssdv8/output
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
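Until the author adds a snippet, here is a minimal sketch assuming the standard diffusers textual-inversion loader; `<concept>` is a hypothetical placeholder, so use the token this embedding was actually trained with.
```python
from diffusers import StableDiffusionPipeline

# Load the base model, then attach the learned embedding from this repo.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("ssdv8/output")

# "<concept>" stands in for the learned placeholder token.
image = pipe("a photo of <concept>").images[0]
image.save("concept.png")
```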
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
legionlm/strawberry-llama-3.2-3b
|
legionlm
| 2024-10-20T11:22:16Z | 141 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"dataset:legionlm/strawberry",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-20T03:31:35Z |
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
datasets:
- legionlm/strawberry
---
# Uploaded model
- **Developed by:** legionlm
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
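For inference, here is a minimal sketch assuming a recent Transformers release with chat-aware pipelines; the question is an invented example.
```python
from transformers import pipeline

chat = pipeline("text-generation", model="legionlm/strawberry-llama-3.2-3b")
messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
# The pipeline returns the whole conversation; the last message is the reply.
print(chat(messages, max_new_tokens=64)[0]["generated_text"][-1]["content"])
```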
|
cuongdev/4nguoi
|
cuongdev
| 2024-10-20T11:04:45Z | 37 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-10-20T10:59:37Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### 4nguoi Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
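For a quick script-based test, here is a minimal sketch assuming the repo hosts a full StableDiffusionPipeline and that the concept name doubles as the prompt token.
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("cuongdev/4nguoi")
# "4nguoi" is assumed to be the trained concept token.
image = pipe("a photo of 4nguoi").images[0]
image.save("4nguoi.png")
```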
Sample pictures of this concept:
|
SebastianG-J/distilbert-base-uncased-distilled-clinc
|
SebastianG-J
| 2024-10-20T10:57:17Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-20T10:51:44Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1825
- Accuracy: 0.9490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.557855667877436e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 14
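For reproduction, here is a minimal sketch mapping the list above onto `TrainingArguments`; the `output_dir` is a placeholder, and the Adam betas/epsilon shown above are the library defaults.
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-uncased-distilled-clinc",  # placeholder
    learning_rate=4.557855667877436e-05,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=14,
)
```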
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6183 | 1.0 | 318 | 0.4643 | 0.8413 |
| 0.3173 | 2.0 | 636 | 0.2425 | 0.9352 |
| 0.1955 | 3.0 | 954 | 0.2108 | 0.9474 |
| 0.1696 | 4.0 | 1272 | 0.1982 | 0.9455 |
| 0.1591 | 5.0 | 1590 | 0.1954 | 0.9471 |
| 0.1535 | 6.0 | 1908 | 0.1935 | 0.9445 |
| 0.1505 | 7.0 | 2226 | 0.1876 | 0.9506 |
| 0.1479 | 8.0 | 2544 | 0.1886 | 0.9477 |
| 0.146 | 9.0 | 2862 | 0.1861 | 0.9477 |
| 0.1446 | 10.0 | 3180 | 0.1855 | 0.9487 |
| 0.1433 | 11.0 | 3498 | 0.1846 | 0.9468 |
| 0.1424 | 12.0 | 3816 | 0.1829 | 0.9497 |
| 0.1416 | 13.0 | 4134 | 0.1824 | 0.9487 |
| 0.1409 | 14.0 | 4452 | 0.1825 | 0.9490 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Diego2703/tfg_tercerft
|
Diego2703
| 2024-10-20T10:56:32Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"internlm2",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] |
feature-extraction
| 2024-10-20T10:52:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
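Pending the authors' own snippet, here is a minimal sketch for this InternLM2-based checkpoint; because the repo ships custom code, `trust_remote_code=True` is required, and the input sentence is invented.
```python
from transformers import AutoModel, AutoTokenizer

model_id = "Diego2703/tfg_tercerft"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Example sentence for feature extraction.", return_tensors="pt")
print(model(**inputs).last_hidden_state.shape)
```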
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| modelId: mradermacher/Lexora-Medium-7B-GGUF | author: mradermacher | last_modified: 2024-10-20T10:41:40Z | downloads: 52 | likes: 1 | library_name: transformers | tags: ["transformers", "gguf", "it", "en", "dataset:DeepMount00/Sonnet-3.5-ITA-INSTRUCTION", "dataset:DeepMount00/Sonnet-3.5-ITA-DPO", "base_model:DeepMount00/Lexora-Medium-7B", "base_model:quantized:DeepMount00/Lexora-Medium-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | pipeline_tag: null | createdAt: 2024-09-24T20:36:19Z |
---
base_model: DeepMount00/Lexora-Medium-7B
datasets:
- DeepMount00/Sonnet-3.5-ITA-INSTRUCTION
- DeepMount00/Sonnet-3.5-ITA-DPO
language:
- it
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DeepMount00/Lexora-Medium-7B
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
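If you would rather script against these files than use a CLI, here is a minimal sketch using the llama-cpp-python bindings (one GGUF runtime among several); the quant file name is taken from the table below, and the prompt is illustrative:

```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python huggingface_hub).
# Any quant file from the table below can be substituted for the Q4_K_M one used here.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Lexora-Medium-7B-GGUF",
    filename="Lexora-Medium-7B.Q4_K_M.gguf",
    n_ctx=4096,  # context window; lower this if memory is tight
)
out = llm("Write one sentence about the sea.", max_tokens=64)
print(out["choices"][0]["text"])
```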
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.IQ3_XS.gguf) | IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.IQ3_S.gguf) | IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.IQ3_M.gguf) | IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Lexora-Medium-7B-GGUF/resolve/main/Lexora-Medium-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| modelId: allknowingroger/WeirdSlerp-30B | author: allknowingroger | last_modified: 2024-10-20T10:41:30Z | downloads: 5 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct", "base_model:merge:VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct", "base_model:rombodawg/Rombos-LLM-V2.6-Nemotron-70b", "base_model:merge:rombodawg/Rombos-LLM-V2.6-Nemotron-70b", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | pipeline_tag: text-generation | createdAt: 2024-10-20T10:06:36Z |
---
base_model:
- rombodawg/Rombos-LLM-V2.6-Nemotron-70b
- VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [rombodawg/Rombos-LLM-V2.6-Nemotron-70b](https://huggingface.co/rombodawg/Rombos-LLM-V2.6-Nemotron-70b)
* [VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: rombodawg/Rombos-LLM-V2.6-Nemotron-70b
layer_range: [0, 32]
- model: VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct
layer_range: [0, 32]
merge_method: slerp
base_model: rombodawg/Rombos-LLM-V2.6-Nemotron-70b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float32
```
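To reproduce a merge like this one, the saved configuration can be fed back to mergekit's CLI, for example `mergekit-yaml config.yaml ./merged-model --cuda` (the config path and output directory here are placeholders, not files from this repository).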
| modelId: Dmitry-lab/my_model | author: Dmitry-lab | last_modified: 2024-10-20T10:40:13Z | downloads: 72 | likes: 0 | library_name: transformers | tags: ["transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us"] | pipeline_tag: question-answering | createdAt: 2024-10-14T06:36:57Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Dmitry-lab/my_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Dmitry-lab/my_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5564
- Validation Loss: 1.8377
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.4595 | 2.1986 | 0 |
| 1.8084 | 1.8377 | 1 |
| 1.5635 | 1.8377 | 2 |
| 1.5681 | 1.8377 | 3 |
| 1.5564 | 1.8377 | 4 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.0.1
- Tokenizers 0.19.1
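Since the card omits a usage snippet, here is a minimal inference sketch; the question/context pair is made up for illustration:

```python
# Minimal QA inference sketch; framework="tf" matches the TensorFlow
# weights in this repository, and the inputs below are illustrative only.
from transformers import pipeline

qa = pipeline("question-answering", model="Dmitry-lab/my_model", framework="tf")
result = qa(question="Who wrote the report?", context="The report was written by Ana.")
print(result["answer"], result["score"])
```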
| modelId: Diego2703/tfg_segundoft | author: Diego2703 | last_modified: 2024-10-20T10:39:26Z | downloads: 108 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "internlm2", "feature-extraction", "custom_code", "arxiv:1910.09700", "4-bit", "bitsandbytes", "region:us"] | pipeline_tag: feature-extraction | createdAt: 2024-10-20T10:37:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
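In the absence of an official snippet, the repository tags (internlm2, custom_code, 4-bit bitsandbytes) suggest something along these lines, offered strictly as a sketch:

```python
# Sketch only: inferred from the repo tags, not from documented usage.
# trust_remote_code is likely required because of the custom InternLM2 code.
from transformers import AutoModel, AutoTokenizer

model_id = "Diego2703/tfg_segundoft"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

inputs = tokenizer("An example sentence.", return_tensors="pt").to(model.device)
features = model(**inputs).last_hidden_state  # token-level feature vectors
print(features.shape)
```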
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| modelId: mradermacher/RocRacoon-3b-GGUF | author: mradermacher | last_modified: 2024-10-20T10:32:39Z | downloads: 144 | likes: 1 | library_name: transformers | tags: ["transformers", "gguf", "en", "base_model:aixonlab/RocRacoon-3b", "base_model:quantized:aixonlab/RocRacoon-3b", "license:mit", "endpoints_compatible", "region:us", "conversational"] | pipeline_tag: null | createdAt: 2024-10-18T07:14:25Z |
---
base_model: aixonlab/RocRacoon-3b
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/aixonlab/RocRacoon-3b
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
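Recent llama.cpp builds can also fetch and run these single-file quants straight from the Hub, for example `llama-cli --hf-repo mradermacher/RocRacoon-3b-GGUF --hf-file RocRacoon-3b.Q4_K_M.gguf -p "Hello"`; the Q4_K_M file is one entry from the table below, and any listed quant works the same way.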
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/RocRacoon-3b-GGUF/resolve/main/RocRacoon-3b.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| modelId: ppicazo/autotrain-5ica5-rokd7 | author: ppicazo | last_modified: 2024-10-20T10:31:31Z | downloads: 8 | likes: 0 | library_name: null | tags: ["tensorboard", "safetensors", "vit", "autotrain", "image-classification", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "region:us"] | pipeline_tag: image-classification | createdAt: 2024-10-20T08:22:12Z |
---
tags:
- autotrain
- image-classification
base_model: google/vit-base-patch16-224-in21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 0.0028711396735161543
- f1: 1.0
- precision: 1.0
- recall: 1.0
- auc: 1.0
- accuracy: 1.0
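For inference, a short sketch (the image path is a placeholder):

```python
# Minimal inference sketch for this AutoTrain image classifier;
# "photo.jpg" is a placeholder path, not a file from this repository.
from transformers import pipeline

classifier = pipeline("image-classification", model="ppicazo/autotrain-5ica5-rokd7")
print(classifier("photo.jpg"))
```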
| modelId: TheImam/Burkan_2 | author: TheImam | last_modified: 2024-10-20T10:22:46Z | downloads: 36 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | pipeline_tag: text-generation | createdAt: 2024-10-20T10:17:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
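Pending an official snippet, a generic causal-LM recipe inferred from the repository tags may serve as a starting point; the prompt and generation settings are illustrative:

```python
# Sketch only: generic causal-LM loading inferred from the repo tags.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheImam/Burkan_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```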
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| modelId: meandyou200175/e5-finetune | author: meandyou200175 | last_modified: 2024-10-20T10:20:14Z | downloads: 6 | likes: 0 | library_name: sentence-transformers | tags: ["sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:43804", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:intfloat/multilingual-e5-base", "base_model:finetune:intfloat/multilingual-e5-base", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | pipeline_tag: sentence-similarity | createdAt: 2024-10-20T10:19:44Z |
---
base_model: intfloat/multilingual-e5-base
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:43804
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Nhờ bác sĩ cho biết việc lựa chọn đóng đinh nội tủy và nẹp vít
để kết hợp xương đòn dựa trên cơ sở nào ạ? Ca phẫu thuật thường kéo dài trong
bao lâu? Bệnh nhân nằm viện mấy ngày?
sentences:
- ' Chào em, là bệnh mãn tính phải điều trị suốt đời, phải kiên nhẫn và kiên trì
nên đôi khi lượng đường trong cơ thể không ổn định. Lúc đi khám xét nghiệm thì
ổn do bản thân biết mai đi khám nên sẽ kiêng ăn, ăn ít... còn bệnh lâu dài nên
trong ngày đôi khi thèm chút này hay thích ăn chút kia, quên uống thuốc, suy
nghĩ, mất ngủ cũng làm đường không ổn định. Đường trong cơ thể lúc lên lúc xuống
dễ đưa đến biến chứng. Em hay thấy bệnh nhân tiểu đường tháo khớp ngón chân, ngón
tay, đôi khi tháo khớp gối, khớp háng, đây là do tê liệt hệ thần kinh nên khi
va chạm bệnh nhân không phát hiện. Đến khi phát hiện thì đã nhiễm trùng nặng phải
tháo khớp. Theo BS mẹ em có khả năng do biến chứng tiểu đường vì mẹ em bị bệnh
khá lâu nên ít nhiều ảnh hưởng thần kinh bị tê liệt gây đau. Em nên nhớ dặn mẹ
đi tái khám và điều trị cho thật ổn định nhé! Thân mến!'
- ' Để lựa chọn phương pháp đóng đinh nội tủy hay nẹp vít cho bệnh nhân cần dựa
vào nhiều yếu tố. Trong lòng tủy xương có một cái ống, nếu lòng tủy bệnh nhân
nhỏ mà đường gãy không bị gãy thành nhiều mảnh thì nên lựa chọn phương pháp đóng
đinh. Phương pháp này có nhược điểm dễ bị lộ phần đinh khi đinh vừa đóng, chưa
chắc vào xương. Tuy nhiên, ưu điểm là khi đóng đinh, đường mổ sẽ nhỏ, đơn giản.
Đối với nẹp vít, đường mổ dài hơn nhưng phần nắn chỉnh sẽ tuyệt đối, vững chắc
hơn. Nhìn chung, giữa 2 phương pháp thời gian mổ không khác biệt nhau nhiều, từ
30-45 phút sẽ hoàn thành cuộc phẫu thuật kết hợp xương. Tại bệnh viện Nhân dân
115, sau khi bệnh nhân được làm phẫu thuật có thể xuất viện rất sớm trong vòng
khoảng 3-5 ngày, tùy theo đường mổ lớn hay nhỏ. Giữa việc lựa chọn phẫu thuật
hay bảo tồn, đinh nội tủy hay nẹp vít phụ thuộc vào lòng tủy của bệnh nhân và
thói quen, sự đánh giá của phẫu thuật viên. Cá nhân tôi thường lựa chọn phương
pháp phẫu thuật nẹp vít sẽ cho kết quả nắn chỉnh tốt, chắc hơn và bệnh nhân không
bị biến chứng trồi đinh về sau. Thân mến.'
- Chào em, Tình trạng người mệt mỏi, khó thở, tim đập nhanh xảy ra khi không gắng
sức có thể do nhiều nguyên nhân, gồm tim mạch, hô hấp, thần kinh cơ, tiêu hóa
(chủ yếu là ống tiêu hóa trên), tâm lý, bệnh lý nội tiết tố… Viêm dạ dày trào
ngược có thể gây các triệu chứng này do dịch acid trào ngược từ dạ dày lên thực
quản kích thích thần kinh tim. Mặt khác bệnh dạ dày là bệnh có thể tái phát, điều
trị hết bệnh rồi thì bệnh vẫn có thể tái lại. Do đó, nếu em đã khám tim mạch và
hô hấp bình thường, để biết có phải mình mệt mỏi do bệnh dạ dày gây ra hay không
thì tốt nhất là em khám chuyên khoa nội tiêu hóa và điều trị trào ngược dạ dày
thực quản thử, nếu triệu chứng cải thiện nhanh chóng thì chính hắn là nguyên nhân,
em nhé.
- source_sentence: Tôi bị tình trạng nuốt nước miếng có cảm giác bị vướng ở cổ, không
đau rát, không ho sốt, ăn uống bình thường đã 1 ngày nay. Chỉ có nuốt nước miếng
là có cảm giác vướng thôi, lỗ tai bên trái thì cảm giác ngứa nhẹ. Xin hỏi là bệnh
gì vậy ạ?
sentences:
- "Em Lan thân mến, Hiện nay, xét nghiệm được xem là một xét nghiệm\r\nthường quy,\
\ nên thai kỳ của em cũng rất cần được làm những xét nghiệm này mặc\r\ndù gia\
\ đình em không có bệnh lý bất thường. Tuy nhiên, thai kỳ của em đã qua thời gian\
\ làm xét nghiệm Double test, bây\r\ngiờ em phải chờ đến lúc thai được 16 – 18\
\ tuần tuổi, làm xét nghiệm Triple test\r\nem nhé! Chúc em và bé khỏe mạnh!"
- 'Trường hợp thoái hóa cột sống thắt lưng gây đau mỏi liên tục dù đã dùng thuốc
giảm đau liều cao Chào em, Thoái hóa khớp, thoái hóa cột sống là tiến trình lão
hóa không thể tránh khỏi của con người, đặc biệt có thể xảy ra sớm và nhanh hơn
ở người nữ sau mãn kinh, sinh nở nhiều, suy dinh dưỡng hay ăn uống thiếu chất
khoáng, lao động vất vả lúc còn trẻ. Trường hợp thoái hóa cột sống thắt lưng gây
đau mỏi liên tục dù đã dùng thuốc giảm đau liều cao, đặc biệt là đau lan xuống
hai chân, tê yếu hai chân thì cần chụp MRI cột sống để tầm soát thoát vị đĩa đệm
chèn ép tủy sống. Trường hợp của em, mới phát hiện thoái hóa cột sống thắt lưng
gần đây, cũng mới uống thuốc 1 tuần và không duy trì nữa, việc đau lưng vẫn còn
âm ỉ nhưng không lan xuống hai chân thì chưa đến mức cần chụp MRI cột sống thắt
lưng. Nhưng mà, em cần tích cực điều trị để bệnh thoái hóa cột sống thắt lưng
không tiến triển nặng hơn. Bệnh này trị khỏi hoàn toàn là không thể, vì sinh lão
bệnh tử không thể cải hoàn, nhưng mà việc điều trị tích cực sẽ giúp khống chế
được bệnh, giảm đau và giảm tốc độ tiến triển của bệnh. Về việc sử dụng thuốc,
dù là thuốc Tây hay thuốc Đông y, em cũng cần phải thăm khám bs ck cơ xương khớp
(Tây y) hay ck y học cổ truyền (Đông y) để được kê thuốc phù hợp. các thuốc thường
dùng là giảm đau, giãn cơ, bổ sung vi khoáng chất (canxi, vitamin D3, magie...).
Bên cạnh đó, về phương pháp giảm đau hỗ trợ không dùng thuốc, em nên chú ý: -
Chú ý thay đổi tư thế trong quá trình làm việc, không giữ mãi một tư thế trong
nhiều giờ liền. Ngồi làm việc đúng tư thế để tránh các bệnh cột sống. - Vận động
đúng cách, khi vác vật nặng không vặn cột sống. - Thường xuyên tập thể dục rèn
luyện để cột sống vững chắc, cơ thể dẻo dai, bơi cũng được mà yoga là tốt nhất.
- Ăn uống khoa học, xây dựng chế độ dinh dưỡng hợp lý, tăng cường nhóm thực phẩm
giàu canxi, vitamin D, omega 3… giúp nâng cao độ chắc khỏe của đĩa đệm cũng như
xương khớp. - Duy trì cân nặng bình thường, tránh để tăng cân quá mức. - Tư thế
ngủ: nằm ngửa trên ván cứng hay nệm bông ép chặt, tránh nệm lò xo hay nệm cao
su quá mềm, có thể đệm ở vùng khoeo làm co nhẹ khớp gối và khớp háng, nên nằm
đầu thấp không gối sẽ tốt cho cột sống cổ. - Có thể thực hiện điều trị vật lý
và các liệu pháp phản xạ: bao gồm phương pháp nhiệt như chườm nóng (túi nước,
muối rang, cám rang, lá lốt, lá ngải cứu nóng); dùng các dòng điện tại khoa vật
lý trị liệu, điều trị bằng laser; châm cứu, kéo cơ để hỗ trợ giảm đau cơ cạnh
sống. Trân trọng!'
- Chào bạn, Nuốt vướng ở cổ thường gặp trong một số bệnh lý viêm nhiễm hầu họng
như viêm họng, viêm amidan mạn, trào ngược dạ dày thực quản, hội chứng chảy mũi
sau… Đây là có thể là triệu chứng đầu tiên báo hiệu một đợt bùng phát cấp tính
của viêm nhiễm hô hấp trên do triệu chứng mới chỉ xuất hiện 1 ngày. Bạn nên khám
bác sĩ Tai mũi họng để thăm khám trực tiếp, đánh giá và kê toa điều trị bạn nhé!
Thân mến.
- source_sentence: Chào bác sĩ, em bị gãy xương gót, đã đóng đinh đến nay được gần
5 tuần. Vậy 6 tuần em tháo đinh được chưa ạ?
sentences:
- ' Chào em, gồm 2 trị số, trị số lớn nhất gọi là huyết áp tâm thu, bình thường
< 140 và > 90 mmHg; trị số thấp nhất gọi là huyết áp tâm trương, bình thường <
90 và > 60 mmHg. Huyết áp có thể tăng khi căng thẳng, do lo lắng, do hội chứng
áo choàng trắng (khi vào bv, khi gặp bác sĩ thì huyết áp cao), bệnh lý viêm nhiễm,
do cafe, khi khó thở... nhìn chung là các stress đối với cơ thể. Như vậy, huyết
áp ghi nhận ở những lúc cơ thể đang lo lắng, bồn chồn, có bệnh thì sẽ không phản
ánh chính xác được huyết áp dao động bình thường của người bệnh. Do vậy em nên
khám chuyên khoa tim mạch, bác sĩ sẽ thăm khám và làm xét nghiệm kiểm tra xem
em có các dấu chứng của tăng huyết áp hay không (như dày thành tim, tiểu đạm,
đo huyết áp 24 giờ...) để xác định em có tăng huyết áp hay không và điều trị thích
hợp. Những triệu chứng hoa mắt, chóng mặt, đau đầu, đau 1 bên mắt, tiểu nhiều
có thể là do bệnh tăng huyết áp gây ra (ảnh hưởng lên mạch máu não, lên thận...)
hoặc là 1 bệnh lý khác như thiếu máu, rối loạn tiền đình, viêm nhiễm hệ thống,
viêm mũi xoang, bệnh lý mạch máu não... (và tăng huyết áp chỉ là phản ứng của
cơ thể khi có stress). Để tìm ra bệnh và giải quyết nỗi lo về bệnh, em nên đến
bệnh viện để kiểm tra sức khỏe em nhé. Thân mến! '
- ' Chào em, Thời điểm 6 tuần là quá sớm để rút đinh cố định xương gót (trừ trường
hợp khung cố định xương bên ngoài). Tháo đinh vít kim loại chỉ bắt buộc thực hiện
sớm trong những trường hợp bất thường như gãy vít, nhiễm trùng, khớp giả... gây
ra các triệu chứng bất thường với bệnh nhân mà thôi. Em nên tái khám tại chuyên
khoa Chấn thương Chỉnh hình để bác sĩ kiểm tra lại việc lành xương của em tốt
chưa và dặn em lịch trình rút đinh phù hợp, em nhé. Thân mến.'
- K dạ dày không điều trị tiên lượng sống khá ngắn Chào em, K dạ dày là ung thư
dạ dày. Bệnh ung thư dạ dày là bệnh lý ác tính và có chỉ định phẫu thuật cắt khối
u – cắt dạ dày khi còn có thể cắt được. Nếu đã phát hiện ung thư dạ dày mà không
điều trị phẫu thuật thì thời gian sống của bệnh nhân trung bình là 6 tháng đến
1 năm tùy loại ung thư dạ dày, khi ung thư tiến triển di căn có thể gây nhiều
đau đớn hơn. Hiện tại chị em đang bị suy nhược cơ thể nhiều, không ăn uống được,
đau nhiều do ung thư dạ dày là có chỉ định vào bệnh viện nằm điều trị luôn rồi,
chứ không thể nào lấy thuốc mà không tới phòng khám được đâu. Vô bệnh viện chị
em sẽ được truyền dịch, chích thuốc, nâng thể trạng lên rồi mới tính đến chuyện
điều trị khối ung thư kia. Em đưa chị em đến bệnh viện càng sớm càng tốt, tốt
nhất là bệnh viện Ung bướu, em nhé.
- source_sentence: "Thưa bác sĩ,\r\n\r\nEm bị đục thủy tinh thể do chấn thương và\
\ vừa mổ mắt về và em cũng bị cận thị. Thời gian khoảng 1 tuần em thấy mắt mình\
\ nhìn chỉ rõ hơn được 1 phần nào. Nhìn xa thì vẫn thấy nhưng vẫn mờ mờ. Bác sĩ\
\ cho em lời khuyên nên làm cách nào và mắt em có thể sáng lại như bình thường\
\ được không ạ?\r\n\r\nEm xin chân thành cảm ơn! (Minh Tiến - Bình Định)"
sentences:
- Bạn Minh Tiến thân mến, Hiện nay phẫu thuật đục thủy tinh thể đã được y học nói
chung và ngành Nhãn khoa Việt Nam thực hiện hoàn chỉnh đến mức tuyệt vời. Phẫu
thuật này được xem như một cuộc cách mạng rất đáng tự hào của ngành nhãn khoa.
Hàng ngày có thể tới hàng ngàn ca phẫu thuật đem lại ánh sáng cho người mù lòa
đục thể thủy tinh tại Việt Nam. Nói như vậy để giúp cho bạn hiểu rõ phẫu thuật
này các bác sĩ Việt Nam thực hiện rất thường xuyên và rất tốt. Tuy nhiên, với
mắt đục thủy tinh thể do chấn thương của bạn là ca phẫu thuật tương đối không
đơn giản. Thêm vào đó ngoài đục thủy tinh thể do chấn thương, mắt bạn cũng có
thể kèm theo tổn thương ở các bộ phận khác của mắt mà trước mổ bác sĩ khó có thể
chẩn đoán được. Với hai lý do nêu trên, nên đôi khi mắt mổ khó có thể tốt theo
ý muốn của cả bệnh nhân lẫn thầy thuốc. Bạn cần có thời gian theo dõi và điều
trị tiếp sau mổ. Sau thời gian ổn định khoảng 1 tháng, bạn cần đo thử kính xem
có cải thiện thị lực thêm không? Chúc bạn may mắn!
- Chào em, Bình thường các hạch trong cơ thể không sưng to lên đến mức có thể sờ
chạm hay nhận biết được. Vì thế, hạch sưng lên, hay thường gọi là nổi hạch, là
một triệu chứng bất thường của cơ thể. Cho nên, em lo lắng là đúng khi phát hiện
hạch ở vùng cổ. Hạch bạch huyết đóng vai trò quan trọng đối với hoạt động của
hệ miễn dịch. Chúng chứa các tế bào miễn dịch như lympho bào, đại thực bào...
có chức năng miễn dịch chống lại các yếu tố lạ như vi khuẩn, virus, kí sinh trùng...
xâm nhập vào cơ thể. Trong quá trình đó các hạch có thể bị viêm và sưng lên. Một
số trường hợp hạch sưng có thể là hạch ung thư hoặc di căn. Đặc điểm của hạch
viêm là nhỏ, số lượng ít, bờ tròn đều, không phát triển theo thời gian, không
xâm lấn da xung quanh. Thông thường đối với hạch viêm thì nguồn viêm có thể tấn
công tại hạch, cũng có khi là hạch viêm phản ứng với ổ viêm nhiễm cạnh đó, điều
trị hết viêm thì hạch sẽ lặn dần, có thể lặn chậm hơn vài tuần đến vài tháng,
có một số loại hạch cũng là hạch viêm nhưng mà chỉ giảm kích thước rồi cứ "lì"
vậy luôn - không lặn hẳn nhưng không còn sưng như trước và vẫn giữ hình ảnh của
hạch viêm, cũng có loại hạch viêm sau lại chuyển sang xơ chai hóa như sẹo cũ và
không lặn. Như vậy, em có 1 hạch vùng cổ đã được xác định là hạch viêm thông qua
sinh thiết hạch cách đây 10 năm. Trong vòng 10 năm nay, hạch cổ đó không có triệu
chứng bất thường. Gần đây, hạch cổ đó có biểu hiện viêm trở lại, mặc dù em uống
thuốc (tự mua) thì hạch hết sưng đau, nhưng em cũng cần khám lại bên chuyên khoa
ung bướu để kiểm tra tổng quát lại 1 lần, tìm nguyên nhân gây kích thích hạch
viêm này tái hoạt động, xem là nguyên nhân lành tính hay tiềm ẩn nguyên nhân khác
(vì lần kiểm tra trước đã cách đây 10 năm rồi), em nhé.
- ' Chào em, Trường hợp em mô tả là những bất thường của hệ hô hấp có thể là bệnh
lý tai mũi họng hay hô hấp dưới như viêm phổi, viêm phế quản, em cần đến các cơ
sở y tế chuyên sâu tai mũi họng hay hô hấp để khám thêm. Những biểu hiện đó hoàn
toàn không có cơ sở nghĩ . Thân mến!'
- source_sentence: Bác sĩ cho em hỏi, em bị rạn nứt xương gót chân bên phải. Em bị
hơn 1 tháng nay rồi. Em bỏ thuốc lá. Em muốn hỏi bác sĩ thông thường bó bột hơn
hay thuốc lá hơn? Như của em khoảng bao lâu thì khỏi? Và giờ em vẫn chưa đi được
bác sĩ ạ. Em cảm ơn.
sentences:
- 'Câu hỏi của em rất chân thành. Tự ý thức quyết tâm cai nghiệm là điều đáng quý.
Nếu em tiếp tục sử dụng thì tình trạng sẽ tồi tệ hơn rất nhiều. Ba yếu tố quan
trọng nhất và tiến hành đồng thời để cai nghiện thành công, đó là: 1. Ý chí 2.
Sự hiểu biết thấu đáo 3. Môi trường thân thiện. Các Trung tâm cai nghiện sẽ giúp
em phần 2 và phần 3, từ đó sẽ củng cố phần 1 của em. Trường hợp ở nhà mà em tự
cai, thực hành mỗi ngày với 3 điều kiện trên, em sẽ thành công như nhiều bạn khác.
Không nên nôn nóng, sốt ruột. Trước tiên em phải thuộc lòng và thực hành những
quy tắc này thành thói quen và áp dụng suốt đời. Nhiều trường hợp cai được vài
năm vẫn tái nghiện. Do đó, nên tránh xa những "nguồn" khiến em tái nghiện, tránh
xa bạn bè nghiện ngập em nhé. Chúc em quyết tâm và đem lại niềm vui cho bố mẹ.'
- Chào em, Thứ nhất, bắt buộc phải có phim Xquang để biết em có thực sự nứt xương
gót hay bị gãy phức tạp hơn, vì nhiều trường hợp tưởng chỉ nứt xương thôi nhưng
thật ra là vỡ phức tạp, phải phẫu thuật mới nhanh ổn được. Thứ hai, theo nguyên
tắc điều trị nứt gãy xương là phải cố định tốt để can xương mọc ra, chỗ nứt gãy
mới được nối liền. Do đó, nếu bó bột thì chân sẽ được cố định liên tục trong 4-6
tuần, còn bó lá thì phải thay thường xuyên, mỗi lần thay là 1 lần xê dịch nên
xương khó lành. Tốt hơn hết em nên đến Bệnh viện Chấn thương Chỉnh hình để được
kiểm tra và điều trị thích hợp, em nhé. Thân mến.
- Chào bạn, Qua hình ảnh sang thương và mô tả triệu chứng, bệnh lý của bạn có khả
năng là chàm hay còn gọi là viêm da dị ứng với đặc điểm là viêm và nổi mụn nhỏ,
ngứa ngáy. Nguyên nhân của chàm hiện nay chưa rõ nhưng có thể do cơ địa dị ứng
(người mắc hen, viêm mũi dị ứng có nguy cơ cao mắc chàm), do kích thích của hóa
chất như nước rửa chén, bột giặt, cao su, kim loại, chất liệu giày dép (chàm tiếp
xúc),... Thời tiết lạnh, stress, đổ mồ hôi nhiều và phấn hoa... cũng là những
nguyên nhân có thể khiến da bị chàm. Chàm cũng có thể gặp ở người bị suy van tĩnh
mạch, giãn tĩnh mạch chân khiến tình trạng bệnh dai dẳng, kém đáp ứng điều trị.
Điều trị chàm thường phải sử dụng một số loại thuốc bôi da kéo dài, có thể để
lại tác dụng phụ, do đó bạn nên khám BS Da liễu để kê toa loại thuốc phù hợp.
Ngoài ra, bạn nên chú ý xem có yếu tố nào thường kích thích khởi phát chàm để
tránh cho bệnh tái phát bạn nhé! Thân mến.
---
# SentenceTransformer based on intfloat/multilingual-e5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) <!-- at revision d13f1b27baf31030b7fd040960d60d909913633f -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("meandyou200175/e5-finetune")
# Run inference
sentences = [
'Bác sĩ cho em hỏi, em bị rạn nứt xương gót chân bên phải. Em bị hơn 1 tháng nay rồi. Em bỏ thuốc lá. Em muốn hỏi bác sĩ thông thường bó bột hơn hay thuốc lá hơn? Như của em khoảng bao lâu thì khỏi? Và giờ em vẫn chưa đi được bác sĩ ạ. Em cảm ơn.',
'Chào em, Thứ nhất, bắt buộc phải có phim Xquang để biết em có thực sự nứt xương gót hay bị gãy phức tạp hơn, vì nhiều trường hợp tưởng chỉ nứt xương thôi nhưng thật ra là vỡ phức tạp, phải phẫu thuật mới nhanh ổn được. Thứ hai, theo nguyên tắc điều trị nứt gãy xương là phải cố định tốt để can xương mọc ra, chỗ nứt gãy mới được nối liền. Do đó, nếu bó bột thì chân sẽ được cố định liên tục trong 4-6 tuần, còn bó lá thì phải thay thường xuyên, mỗi lần thay là 1 lần xê dịch nên xương khó lành. Tốt hơn hết em nên đến Bệnh viện Chấn thương Chỉnh hình để được kiểm tra và điều trị thích hợp, em nhé. Thân mến.',
'Chào bạn, Qua hình ảnh sang thương và mô tả triệu chứng, bệnh lý của bạn có khả năng là chàm hay còn gọi là viêm da dị ứng với đặc điểm là viêm và nổi mụn nhỏ, ngứa ngáy. Nguyên nhân của chàm hiện nay chưa rõ nhưng có thể do cơ địa dị ứng (người mắc hen, viêm mũi dị ứng có nguy cơ cao mắc chàm), do kích thích của hóa chất như nước rửa chén, bột giặt, cao su, kim loại, chất liệu giày dép (chàm tiếp xúc),... Thời tiết lạnh, stress, đổ mồ hôi nhiều và phấn hoa... cũng là những nguyên nhân có thể khiến da bị chàm. Chàm cũng có thể gặp ở người bị suy van tĩnh mạch, giãn tĩnh mạch chân khiến tình trạng bệnh dai dẳng, kém đáp ứng điều trị. Điều trị chàm thường phải sử dụng một số loại thuốc bôi da kéo dài, có thể để lại tác dụng phụ, do đó bạn nên khám BS Da liễu để kê toa loại thuốc phù hợp. Ngoài ra, bạn nên chú ý xem có yếu tố nào thường kích thích khởi phát chàm để tránh cho bệnh tái phát bạn nhé! Thân mến.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
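For orientation, these non-default values translate roughly into the following sentence-transformers v3 training sketch; the toy dataset below is a stand-in, not the data used for this model:

```python
# Rough reconstruction of the non-default settings listed above.
# The tiny in-memory dataset is a placeholder for the real training pairs.
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("intfloat/multilingual-e5-base")
train_dataset = Dataset.from_dict({
    "anchor": ["query: example question"],
    "positive": ["passage: example answer"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="e5-finetune",
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=5,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # stand-in; a held-out split belongs here
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()
```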
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0365 | 100 | 1.9653 | - |
| 0.0730 | 200 | 0.5908 | - |
| 0.1096 | 300 | 0.1976 | - |
| 0.1461 | 400 | 0.1503 | - |
| 0.1826 | 500 | 0.118 | - |
| 0.2191 | 600 | 0.1347 | - |
| 0.2557 | 700 | 0.1303 | - |
| 0.2922 | 800 | 0.1133 | - |
| 0.3287 | 900 | 0.1208 | - |
| 0.3652 | 1000 | 0.0909 | 0.0738 |
| 0.4018 | 1100 | 0.0901 | - |
| 0.4383 | 1200 | 0.1026 | - |
| 0.4748 | 1300 | 0.1049 | - |
| 0.5113 | 1400 | 0.079 | - |
| 0.5478 | 1500 | 0.0963 | - |
| 0.5844 | 1600 | 0.0994 | - |
| 0.6209 | 1700 | 0.0858 | - |
| 0.6574 | 1800 | 0.0948 | - |
| 0.6939 | 1900 | 0.0776 | - |
| 0.7305 | 2000 | 0.0822 | 0.0691 |
| 0.7670 | 2100 | 0.0872 | - |
| 0.8035 | 2200 | 0.0687 | - |
| 0.8400 | 2300 | 0.0713 | - |
| 0.8766 | 2400 | 0.0746 | - |
| 0.9131 | 2500 | 0.085 | - |
| 0.9496 | 2600 | 0.0809 | - |
| 0.9861 | 2700 | 0.0868 | - |
| 1.0226 | 2800 | 0.07 | - |
| 1.0592 | 2900 | 0.0572 | - |
| 1.0957 | 3000 | 0.0651 | 0.0558 |
| 1.1322 | 3100 | 0.0487 | - |
| 1.1687 | 3200 | 0.0554 | - |
| 1.2053 | 3300 | 0.0551 | - |
| 1.2418 | 3400 | 0.0524 | - |
| 1.2783 | 3500 | 0.0563 | - |
| 1.3148 | 3600 | 0.0394 | - |
| 1.3514 | 3700 | 0.0492 | - |
| 1.3879 | 3800 | 0.0239 | - |
| 1.4244 | 3900 | 0.0359 | - |
| 1.4609 | 4000 | 0.0343 | 0.0483 |
| 1.4974 | 4100 | 0.0239 | - |
| 1.5340 | 4200 | 0.0246 | - |
| 1.5705 | 4300 | 0.0323 | - |
| 1.6070 | 4400 | 0.0233 | - |
| 1.6435 | 4500 | 0.0198 | - |
| 1.6801 | 4600 | 0.0263 | - |
| 1.7166 | 4700 | 0.0232 | - |
| 1.7531 | 4800 | 0.0263 | - |
| 1.7896 | 4900 | 0.0201 | - |
| 1.8262 | 5000 | 0.0155 | 0.0506 |
| 1.8627 | 5100 | 0.0185 | - |
| 1.8992 | 5200 | 0.0241 | - |
| 1.9357 | 5300 | 0.0215 | - |
| 1.9722 | 5400 | 0.0301 | - |
| 2.0088 | 5500 | 0.0229 | - |
| 2.0453 | 5600 | 0.018 | - |
| 2.0818 | 5700 | 0.0178 | - |
| 2.1183 | 5800 | 0.02 | - |
| 2.1549 | 5900 | 0.0164 | - |
| 2.1914 | 6000 | 0.0155 | 0.0446 |
| 2.2279 | 6100 | 0.0202 | - |
| 2.2644 | 6200 | 0.0131 | - |
| 2.3009 | 6300 | 0.0159 | - |
| 2.3375 | 6400 | 0.0183 | - |
| 2.3740 | 6500 | 0.0081 | - |
| 2.4105 | 6600 | 0.0119 | - |
| 2.4470 | 6700 | 0.0108 | - |
| 2.4836 | 6800 | 0.0128 | - |
| 2.5201 | 6900 | 0.0068 | - |
| 2.5566 | 7000 | 0.0107 | 0.0425 |
| 2.5931 | 7100 | 0.0086 | - |
| 2.6297 | 7200 | 0.0073 | - |
| 2.6662 | 7300 | 0.0072 | - |
| 2.7027 | 7400 | 0.0056 | - |
| 2.7392 | 7500 | 0.0069 | - |
| 2.7757 | 7600 | 0.0077 | - |
| 2.8123 | 7700 | 0.0054 | - |
| 2.8488 | 7800 | 0.0055 | - |
| 2.8853 | 7900 | 0.0087 | - |
| 2.9218 | 8000 | 0.006 | 0.0457 |
| 2.9584 | 8100 | 0.0065 | - |
| 2.9949 | 8200 | 0.0112 | - |
| 3.0314 | 8300 | 0.0065 | - |
| 3.0679 | 8400 | 0.0045 | - |
| 3.1045 | 8500 | 0.007 | - |
| 3.1410 | 8600 | 0.0053 | - |
| 3.1775 | 8700 | 0.0053 | - |
| 3.2140 | 8800 | 0.0062 | - |
| 3.2505 | 8900 | 0.0055 | - |
| 3.2871 | 9000 | 0.0074 | 0.0414 |
| 3.3236 | 9100 | 0.0061 | - |
| 3.3601 | 9200 | 0.0047 | - |
| 3.3966 | 9300 | 0.0034 | - |
| 3.4332 | 9400 | 0.0037 | - |
| 3.4697 | 9500 | 0.0043 | - |
| 3.5062 | 9600 | 0.0035 | - |
| 3.5427 | 9700 | 0.0043 | - |
| 3.5793 | 9800 | 0.0035 | - |
| 3.6158 | 9900 | 0.0035 | - |
| 3.6523 | 10000 | 0.0028 | 0.0395 |
| 3.6888 | 10100 | 0.0029 | - |
| 3.7253 | 10200 | 0.0032 | - |
| 3.7619 | 10300 | 0.003 | - |
| 3.7984 | 10400 | 0.0024 | - |
| 3.8349 | 10500 | 0.0035 | - |
| 3.8714 | 10600 | 0.0031 | - |
| 3.9080 | 10700 | 0.0028 | - |
| 3.9445 | 10800 | 0.0027 | - |
| 3.9810 | 10900 | 0.0038 | - |
| 4.0175 | 11000 | 0.0026 | 0.0392 |
| 4.0541 | 11100 | 0.0022 | - |
| 4.0906 | 11200 | 0.0025 | - |
| 4.1271 | 11300 | 0.0023 | - |
| 4.1636 | 11400 | 0.0022 | - |
| 4.2001 | 11500 | 0.0026 | - |
| 4.2367 | 11600 | 0.0028 | - |
| 4.2732 | 11700 | 0.0022 | - |
| 4.3097 | 11800 | 0.0027 | - |
| 4.3462 | 11900 | 0.0023 | - |
| 4.3828 | 12000 | 0.0016 | 0.0384 |
| 4.4193 | 12100 | 0.0022 | - |
| 4.4558 | 12200 | 0.0018 | - |
| 4.4923 | 12300 | 0.002 | - |
| 4.5289 | 12400 | 0.0017 | - |
| 4.5654 | 12500 | 0.002 | - |
| 4.6019 | 12600 | 0.0021 | - |
| 4.6384 | 12700 | 0.0019 | - |
| 4.6749 | 12800 | 0.0016 | - |
| 4.7115 | 12900 | 0.0013 | - |
| 4.7480 | 13000 | 0.0022 | 0.0367 |
| 4.7845 | 13100 | 0.0016 | - |
| 4.8210 | 13200 | 0.0013 | - |
| 4.8576 | 13300 | 0.0019 | - |
| 4.8941 | 13400 | 0.002 | - |
| 4.9306 | 13500 | 0.0015 | - |
| 4.9671 | 13600 | 0.0017 | - |
</details>
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.2.0
- Transformers: 4.45.1
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| modelId: bunnycore/Llama-3.2-3B-Pure-RP-Q5_K_M-GGUF | author: bunnycore | last_modified: 2024-10-20T10:13:50Z | downloads: 5 | likes: 2 | library_name: transformers | tags: ["transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:bunnycore/Llama-3.2-3B-Pure-RP", "base_model:quantized:bunnycore/Llama-3.2-3B-Pure-RP", "endpoints_compatible", "region:us", "imatrix", "conversational"] | pipeline_tag: null | createdAt: 2024-10-20T10:13:30Z |
---
base_model: bunnycore/Llama-3.2-3B-Pure-RP
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# bunnycore/Llama-3.2-3B-Pure-RP-Q5_K_M-GGUF
This model was converted to GGUF format from [`bunnycore/Llama-3.2-3B-Pure-RP`](https://huggingface.co/bunnycore/Llama-3.2-3B-Pure-RP) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunnycore/Llama-3.2-3B-Pure-RP) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo bunnycore/Llama-3.2-3B-Pure-RP-Q5_K_M-GGUF --hf-file llama-3.2-3b-pure-rp-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo bunnycore/Llama-3.2-3B-Pure-RP-Q5_K_M-GGUF --hf-file llama-3.2-3b-pure-rp-q5_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo bunnycore/Llama-3.2-3B-Pure-RP-Q5_K_M-GGUF --hf-file llama-3.2-3b-pure-rp-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo bunnycore/Llama-3.2-3B-Pure-RP-Q5_K_M-GGUF --hf-file llama-3.2-3b-pure-rp-q5_k_m-imat.gguf -c 2048
```
| modelId: ashokpoudel/retail-banking-llm-chatbot-translation | author: ashokpoudel | last_modified: 2024-10-20T10:02:30Z | downloads: 13 | likes: 0 | library_name: transformers | tags: ["transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | pipeline_tag: null | createdAt: 2024-10-20T09:58:29Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** ashokpoudel
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| bbunzeck/baby_llama | bbunzeck | 2024-10-20T10:02:11Z | 166 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "dataset:nilq/babylm-10M", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-14T21:46:40Z |
---
datasets:
- nilq/babylm-10M
language:
- en
---
This autoregressive model belongs to a series of rather small language models trained on the [BabyLM](https://babylm.github.io) data:
- the [baby_llama](https://huggingface.co/bbunzeck/baby_llama) model has few parameters and was trained on a small data set (10M tokens)
- the [**t**eenie_llama](https://huggingface.co/bbunzeck/teenie_llama) model has the same number of parameters but was trained on more **t**okens of text (100M)
- the [**w**eenie_llama](https://huggingface.co/bbunzeck/weenie_llama) model was trained on the small data set, but has more parameters/**w**eights
- the [**tw**eenie_llama](https://huggingface.co/bbunzeck/tweenie_llama) model features both -- more **t**okens (the larger data set) and more **w**eights (*viz.* parameters)
| | baby_llama | teenie_llama | weenie_llama | tweenie_llama |
|-----------------|-----------|-------------|-------------|--------------|
| Parameters | 2.97M | 2.97M | 11.44M | 11.44M |
| Hidden layers | 8 | 8 | 16 | 16 |
| Attention heads | 8 | 8 | 16 | 16 |
| Embedding size | 128 | 128 | 256 | 256 |
| Context size | 128 | 128 | 256 | 256 |
| Vocab size | 16k | 16k | 16k | 16k |
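A minimal sketch for sampling from the model with the Hugging Face Transformers auto classes (an assumption based on the checkpoint's `transformers`/`pytorch` tags, not a loading recipe given by the authors):
```python
# A minimal sketch, assuming the checkpoint loads with the standard auto classes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bbunzeck/baby_llama")
model = AutoModelForCausalLM.from_pretrained("bbunzeck/baby_llama")

# Generate a short continuation; note the small context size (128 tokens).
inputs = tok("The cat sat on the", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20, do_sample=True)
print(tok.decode(out[0], skip_special_tokens=True))
```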
If you use this model in your research, please cite the following publication:
```
@inproceedings{bunzeck-zarriess-2024-fifty,
title = "Fifty shapes of {BL}i{MP}: syntactic learning curves in language models are not uniform, but sometimes unruly",
author = "Bunzeck, Bastian and
Zarrie{\ss}, Sina",
editor = "Qiu, Amy and
Noble, Bill and
Pagmar, David and
Maraev, Vladislav and
Ilinykh, Nikolai",
booktitle = "Proceedings of the 2024 CLASP Conference on Multimodality and Interaction in Language Learning",
month = oct,
year = "2024",
address = "Gothenburg, Sweden",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.clasp-1.7",
pages = "39--55",
}
```
|