| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-20 18:29:57) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 566 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-20 18:29:15) | card (string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
mergekit-community/mergekit-slerp-anaazls
|
mergekit-community
| 2024-09-27T17:01:05Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:abacusai/Smaug-34B-v0.1",
"base_model:merge:abacusai/Smaug-34B-v0.1",
"base_model:anthracite-org/magnum-v3-34b",
"base_model:merge:anthracite-org/magnum-v3-34b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-27T16:39:21Z |
---
base_model:
- anthracite-org/magnum-v3-34b
- abacusai/Smaug-34B-v0.1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
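For intuition, here is a minimal sketch of SLERP (spherical linear interpolation) applied to a single pair of weight tensors. This is an illustrative simplification, not mergekit's actual implementation, which applies per-layer interpolation factors such as the `t` curve shown in the configuration below:
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors (illustrative sketch)."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two parameter vectors.
    omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    if torch.sin(omega).abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        merged = (1.0 - t) * a_flat + t * b_flat
    else:
        merged = (torch.sin((1.0 - t) * omega) * a_flat
                  + torch.sin(t * omega) * b_flat) / torch.sin(omega)
    return merged.reshape(a.shape).to(a.dtype)
```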
### Models Merged
The following models were included in the merge:
* [anthracite-org/magnum-v3-34b](https://huggingface.co/anthracite-org/magnum-v3-34b)
* [abacusai/Smaug-34B-v0.1](https://huggingface.co/abacusai/Smaug-34B-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: abacusai/Smaug-34B-v0.1
- model: anthracite-org/magnum-v3-34b
merge_method: slerp
base_model: abacusai/Smaug-34B-v0.1
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: Smaug-34B (the base) for input & output layers, magnum-v3-34b in the middle layers
```
|
Xu-Ouyang/pythia-1b-deduped-int4-step57000-AWQ
|
Xu-Ouyang
| 2024-09-27T17:00:45Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-26T02:59:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
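In the absence of an official snippet, here is a minimal sketch for loading this checkpoint with 🤗 Transformers. It assumes the checkpoint loads through the Transformers AWQ integration (e.g. with `autoawq` installed), which this card does not confirm; the repository id is the only detail taken from the card.
```python
# pip install transformers autoawq   (assumed tooling for AWQ checkpoints)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/pythia-1b-deduped-int4-step57000-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The Pythia suite was trained on", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```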
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xu-Ouyang/pythia-1b-deduped-int4-step2000-AWQ
|
Xu-Ouyang
| 2024-09-27T16:56:48Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-26T02:55:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hensam92/wk-llama3.2-1b
|
hensam92
| 2024-09-27T16:56:39Z | 157 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"llama-3",
"meta",
"facebook",
"unsloth",
"mlx",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-27T16:33:48Z |
---
base_model: unsloth/Llama-3.2-1B-Instruct
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- mlx
---
# hensam92/wk-llama3.2-1b
The model [hensam92/wk-llama3.2-1b](https://huggingface.co/hensam92/wk-llama3.2-1b) was converted to MLX format from [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) using mlx-lm version **0.18.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("hensam92/wk-llama3.2-1b")
prompt = "hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
RichardErkhov/google_-_gemma-2-27b-it-gguf
|
RichardErkhov
| 2024-09-27T16:53:49Z | 1,026 | 0 | null |
[
"gguf",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-27T08:35:49Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma-2-27b-it - GGUF
- Model creator: https://huggingface.co/google/
- Original model: https://huggingface.co/google/gemma-2-27b-it/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-2-27b-it.Q2_K.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q2_K.gguf) | Q2_K | 9.73GB |
| [gemma-2-27b-it.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.IQ3_XS.gguf) | IQ3_XS | 10.76GB |
| [gemma-2-27b-it.IQ3_S.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.IQ3_S.gguf) | IQ3_S | 11.33GB |
| [gemma-2-27b-it.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q3_K_S.gguf) | Q3_K_S | 11.33GB |
| [gemma-2-27b-it.IQ3_M.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.IQ3_M.gguf) | IQ3_M | 11.6GB |
| [gemma-2-27b-it.Q3_K.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q3_K.gguf) | Q3_K | 12.5GB |
| [gemma-2-27b-it.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q3_K_M.gguf) | Q3_K_M | 12.5GB |
| [gemma-2-27b-it.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q3_K_L.gguf) | Q3_K_L | 13.52GB |
| [gemma-2-27b-it.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.IQ4_XS.gguf) | IQ4_XS | 13.92GB |
| [gemma-2-27b-it.Q4_0.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q4_0.gguf) | Q4_0 | 14.56GB |
| [gemma-2-27b-it.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.IQ4_NL.gguf) | IQ4_NL | 14.65GB |
| [gemma-2-27b-it.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q4_K_S.gguf) | Q4_K_S | 14.66GB |
| [gemma-2-27b-it.Q4_K.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q4_K.gguf) | Q4_K | 15.5GB |
| [gemma-2-27b-it.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q4_K_M.gguf) | Q4_K_M | 15.5GB |
| [gemma-2-27b-it.Q4_1.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q4_1.gguf) | Q4_1 | 16.07GB |
| [gemma-2-27b-it.Q5_0.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q5_0.gguf) | Q5_0 | 17.59GB |
| [gemma-2-27b-it.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q5_K_S.gguf) | Q5_K_S | 17.59GB |
| [gemma-2-27b-it.Q5_K.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q5_K.gguf) | Q5_K | 18.08GB |
| [gemma-2-27b-it.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q5_K_M.gguf) | Q5_K_M | 18.08GB |
| [gemma-2-27b-it.Q5_1.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q5_1.gguf) | Q5_1 | 19.1GB |
| [gemma-2-27b-it.Q6_K.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q6_K.gguf) | Q6_K | 20.81GB |
| [gemma-2-27b-it.Q8_0.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-2-27b-it-gguf/blob/main/gemma-2-27b-it.Q8_0.gguf) | Q8_0 | 26.95GB |
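As an example of how one of these files might be used locally, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python`; the tooling choice and parameters are assumptions, not part of this repository, and any GGUF-compatible runtime would work.
```python
# pip install huggingface_hub llama-cpp-python   (assumed tooling)
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant from the table above, then run a short completion.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/google_-_gemma-2-27b-it-gguf",
    filename="gemma-2-27b-it.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```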
Original model description:
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-2-27b
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-27b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below are some code snippets to help you quickly get started running the model. First, install the Transformers library with:
```sh
pip install -U transformers
```
Then, copy the snippet from the section that is relevant to your use case.
#### Running with the `pipeline` API
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="google/gemma-2-27b-it",
model_kwargs={"torch_dtype": torch.bfloat16},
device="cuda", # replace with "mps" to run on a Mac device
)
messages = [
{"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]
outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
torch_dtype=torch.bfloat16,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:
```python
messages = [
{"role": "user", "content": "Write me a poem about Machine Learning."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
#### Running the model through a CLI
The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers
for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage)
for getting started, then launch the CLI through the following command:
```shell
local-gemma --model 27b --preset speed
```
#### Quantized Versions through `bitsandbytes`
<details>
<summary>
Using 8-bit precision (int8)
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
<details>
<summary>
Using 4-bit precision
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
#### Advanced Usage
<details>
<summary>
Torch compile
</summary>
[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding up the
inference of PyTorch modules. The Gemma 2 model can run up to 6x faster by leveraging torch compile.
Note that two warm-up steps are required before the full inference speed is realised:
```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch
torch.set_float32_matmul_precision("high")
# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-27b-it", torch_dtype=torch.bfloat16)
model.to("cuda")
# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]
# set-up k/v cache
past_key_values = HybridCache(
config=model.config,
max_batch_size=1,
max_cache_len=model.config.max_position_embeddings,
device=model.device,
dtype=model.dtype
)
# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None
# two warm-up steps
for idx in range(2):
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
past_key_values.reset()
# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).
</details>
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-27b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
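For instance, the same single-turn prompt can be built by hand as a plain string, equivalent to the `apply_chat_template` output shown above:
```py
# Manual construction of the prompt, without the tokenizer helper.
prompt = (
    "<bos><start_of_turn>user\n"
    "Write a hello world program<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```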
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. It is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
|
busetolunay/brawl
|
busetolunay
| 2024-09-27T16:51:24Z | 15 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-09-27T16:50:55Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
widget:
- text: '[trigger] style, A tall, slender elf with emerald green eyes and silver
hair braided down to the waist. They are dressed in ornate golden armor adorned
with leaf motifs, and a flowing teal cape that catches the wind. Holding a
bow made of ancient wood, they exude an aura of grace and vigilance.'
output:
url: samples/1727455693708__000002500_0.jpg
- text: '[trigger] style, A steampunk-inspired robot with bronze gears and pipes
visible on its exterior. It wears a top hat with goggles and a tattered scarf
around its neck. In one hand, it holds a pocket watch, and in the other, a
small umbrella. Its glowing eyes give it a curious and adventurous expression.'
output:
url: samples/1727455703861__000002500_1.jpg
- text: '[trigger] style, A fierce warrior princess with a determined gaze. She
has fiery red hair tied back with a silver circlet, and wears intricate armor
made of dragon scales. Holding a gleaming sword with runes etched into the
blade, she stands ready to defend her kingdom.'
output:
url: samples/1727455714046__000002500_2.jpg
- text: '[trigger] style, A mysterious magician shrouded in shadows. He wears a
long dark cloak with glowing runes along the edges, and a wide-brimmed hat
that hides his eyes. Floating around him are ethereal playing cards, each
emitting a soft glow. His smirk suggests he knows secrets untold.'
output:
url: samples/1727455724261__000002500_3.jpg
- text: '[trigger] style, A futuristic astronaut exploring an alien planet. Clad
in a sleek white spacesuit with blue luminescent lines, they carry a transparent
helmet showing curious eyes gazing at the strange flora. A floating drone
accompanies them, capturing data and images.'
output:
url: samples/1727455734445__000002500_4.jpg
- text: '[trigger] style, A jovial pirate captain with a hearty laugh. He sports
a tricorn hat adorned with feathers, a striped shirt under a rugged coat,
and boots that have seen many voyages. With a parrot on his shoulder and a
map in hand, he''s ready for his next adventure.'
output:
url: samples/1727455744641__000002500_5.jpg
- text: '[trigger] style, A gentle giant of a tree creature, with branches for arms
and roots for legs. Leaves and flowers bloom across its body, and small woodland
animals perch on its shoulders. Its kind eyes and slow movements convey a
deep connection with nature.'
output:
url: samples/1727455754853__000002500_6.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: bra2wl
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# brawl_flux_lora
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `bra2wl` to trigger the image generation.
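For example, here is a minimal diffusers sketch; it is an assumption based on the metadata above (FLUX.1-dev base model, LoRA adapter in this repo) and requires access to the gated base model and a capable GPU.
```python
import torch
from diffusers import FluxPipeline

# Load the gated base model, then attach this LoRA adapter.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("busetolunay/brawl")
pipe.to("cuda")

# The `bra2wl` trigger word activates the trained style.
image = pipe(
    "bra2wl style, a jovial pirate captain with a parrot on his shoulder"
).images[0]
image.save("brawl_sample.png")
```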
## Download model
Weights for this model are available in Safetensors format.
[Download](/busetolunay/brawl/tree/main) them in the Files & versions tab.
|
Xu-Ouyang/pythia-2.8b-deduped-int4-step98000-AWQ
|
Xu-Ouyang
| 2024-09-27T16:47:36Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-26T02:50:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xu-Ouyang/pythia-2.8b-deduped-int4-step2000-AWQ
|
Xu-Ouyang
| 2024-09-27T16:33:40Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-26T02:27:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
2exist1/whisper-medium-meeting
|
2exist1
| 2024-09-27T16:33:37Z | 6 | 0 | null |
[
"safetensors",
"whisper",
"license:apache-2.0",
"region:us"
] | null | 2024-09-27T16:14:05Z |
---
license: apache-2.0
---
|
mergekit-community/L3.1-Artemis-faustus-8B
|
mergekit-community
| 2024-09-27T16:32:59Z | 5 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:DreadPoor/Aurora_faustus-8B-LINEAR",
"base_model:merge:DreadPoor/Aurora_faustus-8B-LINEAR",
"base_model:mergekit-community/L3.1-Artemis-d-8B",
"base_model:merge:mergekit-community/L3.1-Artemis-d-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-27T16:27:50Z |
---
base_model:
- mergekit-community/L3.1-Artemis-d-8B
- DreadPoor/Aurora_faustus-8B-LINEAR
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [mergekit-community/L3.1-Artemis-d-8B](https://huggingface.co/mergekit-community/L3.1-Artemis-d-8B)
* [DreadPoor/Aurora_faustus-8B-LINEAR](https://huggingface.co/DreadPoor/Aurora_faustus-8B-LINEAR)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mergekit-community/L3.1-Artemis-d-8B
merge_method: slerp
base_model: DreadPoor/Aurora_faustus-8B-LINEAR
parameters:
t:
- value: [0.2, 0.2, 0.4, 0.4, 0.55, 0.55, 0.45, 0.45, 0.288, 0.288]
dtype: bfloat16
```
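For intuition, SLERP interpolates each pair of weight tensors along the arc of the unit sphere rather than along a straight line. A minimal per-tensor sketch (illustrative only; mergekit's implementation adds further edge-case handling):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    # Angle between the two (flattened) tensors.
    omega = np.arccos(np.clip(np.dot(a_n.ravel(), b_n.ravel()), -1.0, 1.0))
    if omega < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
```

In the config above, `t=0` keeps the base model's weights and `t=1` takes the other model's, so the listed values blend the two differently across layer groups.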
|
ngwgsang/bartpho-word-large-visp-s5
|
ngwgsang
| 2024-09-27T16:21:25Z | 93 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-09-27T16:20:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mergekit-community/mergekit-slerp-xirdwrw
|
mergekit-community
| 2024-09-27T16:21:17Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:abacusai/Smaug-34B-v0.1",
"base_model:merge:abacusai/Smaug-34B-v0.1",
"base_model:anthracite-org/magnum-v3-34b",
"base_model:merge:anthracite-org/magnum-v3-34b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-27T15:59:31Z |
---
base_model:
- anthracite-org/magnum-v3-34b
- abacusai/Smaug-34B-v0.1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [anthracite-org/magnum-v3-34b](https://huggingface.co/anthracite-org/magnum-v3-34b)
* [abacusai/Smaug-34B-v0.1](https://huggingface.co/abacusai/Smaug-34B-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: anthracite-org/magnum-v3-34b
- model: abacusai/Smaug-34B-v0.1
merge_method: slerp
base_model: anthracite-org/magnum-v3-34b
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: magnum-v3 for input & output, Smaug in the middle layers
```
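To try the merged checkpoint, a minimal generation sketch (not from the original card; it assumes the tokenizer ships a chat template, as the `conversational` tag suggests):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mergekit-community/mergekit-slerp-xirdwrw"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize the idea behind SLERP merging in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```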
|
pilotj/distilbert-base-uncased-fibe-v7-finetuned_rerun
|
pilotj
| 2024-09-27T16:20:37Z | 203 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:biggy-smiley/distilbert-base-uncased-fibe-v7",
"base_model:finetune:biggy-smiley/distilbert-base-uncased-fibe-v7",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-27T15:56:39Z |
---
library_name: transformers
base_model: biggy-smiley/distilbert-base-uncased-fibe-v7
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-fibe-v7-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-fibe-v7-finetuned
This model is a fine-tuned version of [biggy-smiley/distilbert-base-uncased-fibe-v7](https://huggingface.co/biggy-smiley/distilbert-base-uncased-fibe-v7) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.8349
- eval_runtime: 22.7907
- eval_samples_per_second: 114.082
- eval_steps_per_second: 1.799
- epoch: 6.3215
- step: 10500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
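For reference, these settings map onto `transformers.TrainingArguments` roughly as follows (a reconstruction for illustration, not the authors' actual training script):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-fibe-v7-finetuned",
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers defaults.
)
```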
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0
- Datasets 3.0.0
- Tokenizers 0.19.1
|
EmanDev/news_summary_model_trained_on_reduced_data
|
EmanDev
| 2024-09-27T16:20:28Z | 99 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:gsarti/it5-small-news-summarization",
"base_model:finetune:gsarti/it5-small-news-summarization",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-09-27T13:04:25Z |
---
library_name: transformers
license: apache-2.0
base_model: gsarti/it5-small-news-summarization
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: news_summary_model_trained_on_reduced_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# news_summary_model_trained_on_reduced_data
This model is a fine-tuned version of [gsarti/it5-small-news-summarization](https://huggingface.co/gsarti/it5-small-news-summarization) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.1141
- Rouge2: 0.0402
- Rougel: 0.1005
- Rougelsum: 0.1018
- Generated Length: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Generated Length |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:|
| No log | 1.0 | 9 | nan | 0.1141 | 0.0402 | 0.1005 | 0.1018 | 19.0 |
| No log | 2.0 | 18 | nan | 0.1141 | 0.0402 | 0.1005 | 0.1018 | 19.0 |
| No log | 3.0 | 27 | nan | 0.1141 | 0.0402 | 0.1005 | 0.1018 | 19.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
p84rn/tarpulagpt-1
|
p84rn
| 2024-09-27T16:14:04Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gguf",
"llama",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2024-09-27T11:55:43Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shopitalic/noir-candle-rafael
|
shopitalic
| 2024-09-27T16:12:42Z | 6 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-09-27T16:12:39Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# noir swiss
<Gallery />
## Model description
## Trigger words
You should use `` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/buhofausto/noir-swiss/tree/main) them in the Files & versions tab.
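Pending an official example, a minimal diffusers sketch (assumptions: the standard FLUX LoRA loading path, and the repo id from this listing — note the card itself links a different path):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("shopitalic/noir-candle-rafael")  # repo id taken from this listing

image = pipe("a noir-style candle on a marble table", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("candle.png")
```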
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
ars1122/llava-next-resume-parser
|
ars1122
| 2024-09-27T16:05:41Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava_next",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-09-27T15:27:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chohtet/Mistral-Small-Instruct-2409-H3-VLLM
|
chohtet
| 2024-09-27T16:03:56Z | 28 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Mistral-Small-Instruct-2409-bnb-4bit",
"base_model:finetune:unsloth/Mistral-Small-Instruct-2409-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-27T15:53:59Z |
---
base_model: unsloth/Mistral-Small-Instruct-2409-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
# Uploaded model
- **Developed by:** chohtet
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Mistral-Small-Instruct-2409-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
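For inference, a minimal sketch with Unsloth's loader (an assumption: the standard `FastLanguageModel` path; the settings shown are illustrative):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="chohtet/Mistral-Small-Instruct-2409-H3-VLLM",
    max_seq_length=4096,   # illustrative context length
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference kernels
```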
|
frett/chinese_extract_longbert
|
frett
| 2024-09-27T16:03:23Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"custom_code",
"base_model:OctopusMind/longbert-embedding-8k-zh",
"base_model:finetune:OctopusMind/longbert-embedding-8k-zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-09-27T15:12:56Z |
---
library_name: transformers
license: apache-2.0
base_model: OctopusMind/longbert-embedding-8k-zh
tags:
- generated_from_trainer
model-index:
- name: chinese_extract_longbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese_extract_longbert
This model is a fine-tuned version of [OctopusMind/longbert-embedding-8k-zh](https://huggingface.co/OctopusMind/longbert-embedding-8k-zh) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
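Pending an official example, a minimal inference sketch (the question/context strings are placeholders; `trust_remote_code=True` reflects the base model's custom code):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="frett/chinese_extract_longbert",
    trust_remote_code=True,  # the base model ships custom modeling code
)
result = qa(
    question="合同的签订日期是哪一天?",  # "On what date was the contract signed?"
    context="本协议由甲方与乙方于2023年5月1日在北京签订。",
)
print(result["answer"], result["score"])
```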
|
shopitalic/remi-throw-gray-rafael
|
shopitalic
| 2024-09-27T15:59:35Z | 7 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-09-27T15:59:31Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# remi gray
<Gallery />
## Model description
## Trigger words
You should use `` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/buhofausto/remi-gray/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
leap-llm/Meta-Llama-3.1-8B-Instruct-sft-intercode-bash-iter0
|
leap-llm
| 2024-09-27T15:58:22Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-15T14:03:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Knut-J/xlm-roberta-base-finetuned-panx-all
|
Knut-J
| 2024-09-27T15:57:12Z | 8 | 0 | null |
[
"safetensors",
"xlm-roberta",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2024-09-27T15:53:51Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1721
- F1: 0.8525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2974 | 1.0 | 835 | 0.2015 | 0.8069 |
| 0.1575 | 2.0 | 1670 | 0.1687 | 0.8432 |
| 0.1027 | 3.0 | 2505 | 0.1721 | 0.8525 |
### Framework versions
- Transformers 4.42.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
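Pending an official example, a minimal inference sketch using the standard token-classification pipeline (the sample sentence is a placeholder):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Knut-J/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Angela Merkel hat Paris im Mai besucht."))
```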
|
trl-lib/Qwen2-0.5B-DPO
|
trl-lib
| 2024-09-27T15:54:37Z | 13 | 4 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/Capybara-Preferences",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-26T14:56:38Z |
---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: trl-lib/Capybara-Preferences
library_name: transformers
model_name: dpo-qwen2
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for dpo-qwen2
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [trl-lib/Capybara-Preferences](https://huggingface.co/datasets/trl-lib/Capybara-Preferences) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="trl-lib/Qwen2-0.5B-DPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/huggingface/trl/runs/8g0pylqi)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
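A minimal reconstruction of such a run with TRL's `DPOTrainer` (illustrative only; the card does not list the exact hyperparameters used for this checkpoint):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
dataset = load_dataset("trl-lib/Capybara-Preferences", split="train")

args = DPOConfig(output_dir="dpo-qwen2", beta=0.1)  # beta=0.1 is the TRL default, shown for clarity
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL versions call this argument `tokenizer`
)
trainer.train()
```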
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.0.dev0
- Pytorch: 2.4.1
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
jialei12138/Qwen-Qwen1.5-1.8B-1727452360
|
jialei12138
| 2024-09-27T15:52:45Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | 2024-09-27T15:52:40Z |
---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.0
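Pending the authors' example, a minimal sketch for attaching this adapter to its base model (an assumption: it is a standard causal-LM PEFT adapter):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.8B", device_map="auto")
model = PeftModel.from_pretrained(base, "jialei12138/Qwen-Qwen1.5-1.8B-1727452360")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B")
```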
|
Knut-J/xlm-roberta-base-finetuned-panx-it
|
Knut-J
| 2024-09-27T15:52:38Z | 5 | 0 | null |
[
"safetensors",
"xlm-roberta",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2024-09-27T15:51:23Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2714
- F1: 0.8212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7111 | 1.0 | 70 | 0.3311 | 0.7243 |
| 0.2918 | 2.0 | 140 | 0.2697 | 0.7947 |
| 0.1795 | 3.0 | 210 | 0.2714 | 0.8212 |
### Framework versions
- Transformers 4.42.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Knut-J/xlm-roberta-base-finetuned-panx-fr
|
Knut-J
| 2024-09-27T15:51:22Z | 6 | 0 | null |
[
"safetensors",
"xlm-roberta",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2024-09-27T15:49:31Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2792
- F1: 0.8358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.572 | 1.0 | 191 | 0.3533 | 0.7615 |
| 0.2769 | 2.0 | 382 | 0.2787 | 0.8173 |
| 0.1834 | 3.0 | 573 | 0.2792 | 0.8358 |
### Framework versions
- Transformers 4.42.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Knut-J/xlm-roberta-base-finetuned-panx-de-fr
|
Knut-J
| 2024-09-27T15:49:25Z | 5 | 0 | null |
[
"safetensors",
"xlm-roberta",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2024-09-27T15:46:13Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1626
- F1: 0.8598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2852 | 1.0 | 715 | 0.1750 | 0.8236 |
| 0.1458 | 2.0 | 1430 | 0.1585 | 0.8533 |
| 0.0934 | 3.0 | 2145 | 0.1626 | 0.8598 |
### Framework versions
- Transformers 4.42.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
tattabio/gLM2_150M
|
tattabio
| 2024-09-27T15:34:12Z | 265 | 1 | null |
[
"safetensors",
"gLM2",
"custom_code",
"dataset:tattabio/OMG",
"arxiv:2303.09540",
"license:apache-2.0",
"region:us"
] | null | 2024-09-14T15:27:15Z |
---
datasets:
- tattabio/OMG
license: apache-2.0
---
# gLM2_150M
gLM2 is a mixed-modality genomic language model, trained on the [`OMG Dataset`](https://huggingface.co/datasets/tattabio/OMG).
The model encodes a genomic scaffold with both amino-acid and DNA tokens.
gLM2 is trained at two scales: 150M and 650M parameters (available at [`tattabio/gLM2_650M`](https://huggingface.co/tattabio/gLM2_650M)).
See [https://github.com/TattaBio/gLM2](https://github.com/TattaBio/gLM2) for inference scripts.
### Model Description
gLM2 is a transformer encoder trained with the masked language modeling objective.
It encodes a genomic contig as a sequence of protein coding sequences (CDS) and DNA inter-genic sequences (IGS).
CDS elements are tokenized using per-amino acid tokens, and IGS elements are tokenized using per-nucleotide tokens.
- To encode the genomic strand, we prepended each genomic element with a special token, either `<+>` or `<->` to indicate the positive and negative strands.
- To avoid collision between amino acid and nucleotide tokens, the tokenizer expects all amino acids to be uppercase, and all nucleotides to be lowercase.
UPDATE(09/2024): We updated the model with longer context length (4096 tokens vs. 2048 tokens) and per-nucleotide IGS tokenization instead of BPE.
## Getting Started
```python
import torch
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('tattabio/gLM2_150M', torch_dtype=torch.bfloat16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained('tattabio/gLM2_150M', trust_remote_code=True)
# A contig with two proteins and an inter-genic sequence.
# NOTE: Nucleotides should always be lowercase, and prepended with `<+>`.
sequence = "<+>MALTKVEKRNRIKRRVRGKISGTQASPRLSVYKSNK<+>aatttaaggaa<->MLGIDNIERVKPGGLELVDRLVAVNRVTKVTKGGRAFGFSAIVVVGNED"
# Tokenize the sequence.
encodings = tokenizer([sequence], return_tensors='pt')
# Extract embeddings.
with torch.no_grad():
embeddings = model(encodings.input_ids.cuda(), output_hidden_states=True).last_hidden_state
```
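A common follow-on step, continuing from the snippet above (an assumption, not part of the card), is to mean-pool the token embeddings into a single contig-level vector:

```python
# Mean-pool over non-padding tokens for one embedding per contig.
mask = encodings.attention_mask.cuda().unsqueeze(-1).to(embeddings.dtype)  # [1, seq_len, 1]
contig_embedding = (embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(contig_embedding.shape)  # (1, hidden_dim)
```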
### Training Data
gLM2 is trained on the [`OMG`](https://huggingface.co/datasets/tattabio/OMG) dataset.
To improve the dataset balance and remove near-duplicate examples, the data is tokenized and pruned by applying Semantic Deduplication [SemDedup](https://arxiv.org/abs/2303.09540).
We use an embedding distance threshold of 2e-3, resulting in 49% of the dataset being pruned.
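The distance-threshold idea can be illustrated with a simplified greedy variant (SemDedup proper clusters embeddings first; this sketch is illustrative, not the pipeline used for OMG):

```python
import numpy as np

def prune_near_duplicates(embeddings: np.ndarray, eps: float = 2e-3) -> list[int]:
    """Keep an example only if its cosine distance to every kept example exceeds eps."""
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept: list[int] = []
    for i in range(len(unit)):
        if all(1.0 - float(unit[i] @ unit[j]) > eps for j in kept):
            kept.append(i)
    return kept
```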
## Training Details
- Pretraining tokens: 315B
- Context length: 4096
- Masking rate: 30%
- Learning rate: 1e-3
- Optimizer: AdamW (betas = (0.9, 0.95))
- Mixed precision training: bfloat16
- Weight decay: 0.1
## Citation
**BioRxiv:**
[https://www.biorxiv.org/content/10.1101/2024.08.14.607850](https://www.biorxiv.org/content/10.1101/2024.08.14.607850)
**BibTeX:**
```bibtex
@article{Cornman2024.08.14.607850,
  author = {Cornman, Andre and West-Roberts, Jacob and Camargo, Antonio Pedro and Roux, Simon and Beracochea, Martin and Mirdita, Milot and Ovchinnikov, Sergey and Hwang, Yunha},
  title = {The OMG dataset: An Open MetaGenomic corpus for mixed-modality genomic language modeling},
  elocation-id = {2024.08.14.607850},
  year = {2024},
  doi = {10.1101/2024.08.14.607850},
  publisher = {Cold Spring Harbor Laboratory},
  URL = {https://www.biorxiv.org/content/early/2024/08/17/2024.08.14.607850},
  eprint = {https://www.biorxiv.org/content/early/2024/08/17/2024.08.14.607850.full.pdf},
  journal = {bioRxiv}
}
```
|
selectorseb/s2-oracle-llama3.1_test_4bnb
|
selectorseb
| 2024-09-27T15:33:57Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-27T15:29:33Z |
---
base_model: unsloth/llama-3-8b-instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** selectorseb
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tattabio/gLM2_650M
|
tattabio
| 2024-09-27T15:33:26Z | 5,114 | 3 | null |
[
"safetensors",
"gLM2",
"custom_code",
"dataset:tattabio/OMG",
"arxiv:2303.09540",
"license:apache-2.0",
"region:us"
] | null | 2024-09-23T02:04:54Z |
---
datasets:
- tattabio/OMG
license: apache-2.0
---
# gLM2_650M
gLM2 is a mixed-modality genomic language model, trained on the [`OMG Dataset`](https://huggingface.co/datasets/tattabio/OMG).
The model encodes a genomic scaffold with both amino-acid and DNA tokens.
gLM2 is trained at two scales: 150M (available at [`tattabio/gLM2_150M`](https://huggingface.co/tattabio/gLM2_150M)) and 650M parameters.
See [https://github.com/TattaBio/gLM2](https://github.com/TattaBio/gLM2) for inference scripts.
### Model Description
gLM2 is a transformer encoder trained with the masked language modeling objective.
It encodes a genomic contig as a sequence of protein coding sequences (CDS) and DNA inter-genic sequences (IGS).
CDS elements are tokenized using per-amino acid tokens, and IGS elements are tokenized using per-nucleotide tokens.
- To encode the genomic strand, we prepended each genomic element with a special token, either `<+>` or `<->` to indicate the positive and negative strands.
- To avoid collision between amino acid and nucleotide tokens, the tokenizer expects all amino acids to be uppercase, and all nucleotides to be lowercase.
UPDATE(09/2024): We updated the model with longer context length (4096 tokens vs. 2048 tokens) and per-nucleotide IGS tokenization instead of BPE.
## Getting Started
```python
import torch
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('tattabio/gLM2_650M', torch_dtype=torch.bfloat16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained('tattabio/gLM2_650M', trust_remote_code=True)
# A contig with two proteins and an inter-genic sequence.
# NOTE: Nucleotides should always be lowercase, and prepended with `<+>`.
sequence = "<+>MALTKVEKRNRIKRRVRGKISGTQASPRLSVYKSNK<+>aatttaaggaa<->MLGIDNIERVKPGGLELVDRLVAVNRVTKVTKGGRAFGFSAIVVVGNED"
# Tokenize the sequence.
encodings = tokenizer([sequence], return_tensors='pt')
# Extract embeddings.
with torch.no_grad():
embeddings = model(encodings.input_ids.cuda(), output_hidden_states=True).last_hidden_state
```
### Training Data
gLM2 is trained on the [`OMG`](https://huggingface.co/datasets/tattabio/OMG) dataset.
To improve the dataset balance and remove near-duplicate examples, the data is tokenized and pruned by applying Semantic Deduplication [SemDedup](https://arxiv.org/abs/2303.09540).
We use an embedding distance threshold of 2e-3, resulting in 49% of the dataset being pruned.
## Training Details
- Pretraining tokens: 315B
- Context length: 4096
- Masking rate: 30%
- Learning rate: 1e-3
- Optimizer: AdamW (betas = (0.9, 0.95))
- Mixed precision training: bfloat16
- Weight decay: 0.1
## Citation
**BioRxiv:**
[https://www.biorxiv.org/content/10.1101/2024.08.14.607850](https://www.biorxiv.org/content/10.1101/2024.08.14.607850)
**BibTeX:**
```bibtex
@article{Cornman2024.08.14.607850,
  author = {Cornman, Andre and West-Roberts, Jacob and Camargo, Antonio Pedro and Roux, Simon and Beracochea, Martin and Mirdita, Milot and Ovchinnikov, Sergey and Hwang, Yunha},
  title = {The OMG dataset: An Open MetaGenomic corpus for mixed-modality genomic language modeling},
  elocation-id = {2024.08.14.607850},
  year = {2024},
  doi = {10.1101/2024.08.14.607850},
  publisher = {Cold Spring Harbor Laboratory},
  URL = {https://www.biorxiv.org/content/early/2024/08/17/2024.08.14.607850},
  eprint = {https://www.biorxiv.org/content/early/2024/08/17/2024.08.14.607850.full.pdf},
  journal = {bioRxiv}
}
```
|
zeeshanali01/lora_model
|
zeeshanali01
| 2024-09-27T15:29:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T18:20:08Z |
---
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** zeeshanali01
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
QuantFactory/EuroLLM-1.7B-Instruct-GGUF
|
QuantFactory
| 2024-09-27T15:27:20Z | 115 | 2 | null |
[
"gguf",
"en",
"de",
"es",
"fr",
"it",
"pt",
"pl",
"nl",
"tr",
"sv",
"cs",
"el",
"hu",
"ro",
"fi",
"uk",
"sl",
"sk",
"da",
"lt",
"lv",
"et",
"bg",
"no",
"ca",
"hr",
"ga",
"mt",
"gl",
"zh",
"ru",
"ko",
"ja",
"ar",
"hi",
"base_model:utter-project/EuroLLM-1.7B",
"base_model:quantized:utter-project/EuroLLM-1.7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-27T15:18:35Z |
---
license: apache-2.0
language:
- en
- de
- es
- fr
- it
- pt
- pl
- nl
- tr
- sv
- cs
- el
- hu
- ro
- fi
- uk
- sl
- sk
- da
- lt
- lv
- et
- bg
- 'no'
- ca
- hr
- ga
- mt
- gl
- zh
- ru
- ko
- ja
- ar
- hi
base_model:
- utter-project/EuroLLM-1.7B
---
[](https://hf.co/QuantFactory)
# QuantFactory/EuroLLM-1.7B-Instruct-GGUF
This is quantized version of [utter-project/EuroLLM-1.7B-Instruct](https://huggingface.co/utter-project/EuroLLM-1.7B-Instruct) created using llama.cpp
# Original Model Card
## *Model updated on September 24*
# Model Card for EuroLLM-1.7B-Instruct
This is the model card for the first instruction tuned model of the EuroLLM series: EuroLLM-1.7B-Instruct. You can also check the pre-trained version: [EuroLLM-1.7B](https://huggingface.co/utter-project/EuroLLM-1.7B).
- **Developed by:** Unbabel, Instituto Superior Técnico, University of Edinburgh, Aveni, University of Paris-Saclay, University of Amsterdam, Naver Labs, Sorbonne Université.
- **Funded by:** European Union.
- **Model type:** A 1.7B-parameter instruction-tuned multilingual transformer LLM.
- **Language(s) (NLP):** Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish, Arabic, Catalan, Chinese, Galician, Hindi, Japanese, Korean, Norwegian, Russian, Turkish, and Ukrainian.
- **License:** Apache License 2.0.
## Model Details
The EuroLLM project has the goal of creating a suite of LLMs capable of understanding and generating text in all European Union languages as well as some additional relevant languages.
EuroLLM-1.7B is a 1.7B parameter model trained on 4 trillion tokens divided across the considered languages and several data sources: Web data, parallel data (en-xx and xx-en), and high-quality datasets.
EuroLLM-1.7B-Instruct was further instruction tuned on EuroBlocks, an instruction tuning dataset with focus on general instruction-following and machine translation.
### Model Description
EuroLLM uses a standard, dense Transformer architecture:
- We use grouped query attention (GQA) with 8 key-value heads, since it has been shown to increase speed at inference time while maintaining downstream performance (a shape-level sketch follows this list).
- We perform pre-layer normalization, since it improves training stability, and use RMSNorm, which is faster.
- We use the SwiGLU activation function, since it has been shown to lead to good results on downstream tasks.
- We use rotary positional embeddings (RoPE) in every layer, since these have been shown to lead to good performances while allowing the extension of the context length.
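As a shape-level illustration of GQA with 16 query heads and 8 key-value heads (a toy sketch of the mechanism, not EuroLLM's actual implementation):
```python
# Grouped-query attention at the shape level: 8 KV heads serve 16 query heads.
import torch

q = torch.randn(1, 16, 10, 128)           # (batch, query_heads, seq, head_dim)
kv = torch.randn(1, 8, 10, 128)           # (batch, kv_heads, seq, head_dim)
k = kv.repeat_interleave(16 // 8, dim=1)  # each KV head is shared by 2 query heads
attn = torch.softmax(q @ k.transpose(-2, -1) / 128**0.5, dim=-1)
print(attn.shape)                         # torch.Size([1, 16, 10, 10])
```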
For pre-training, we use 256 Nvidia H100 GPUs of the Marenostrum 5 supercomputer, training the model with a constant batch size of 3,072 sequences, which corresponds to approximately 12 million tokens, using the Adam optimizer, and BF16 precision.
Here is a summary of the model hyper-parameters:
| | |
|--------------------------------------|----------------------|
| Sequence Length | 4,096 |
| Number of Layers | 24 |
| Embedding Size | 2,048 |
| FFN Hidden Size | 5,632 |
| Number of Heads | 16 |
| Number of KV Heads (GQA) | 8 |
| Activation Function | SwiGLU |
| Position Encodings | RoPE (\Theta=10,000) |
| Layer Norm | RMSNorm |
| Tied Embeddings | No |
| Embedding Parameters | 0.262B |
| LM Head Parameters | 0.262B |
| Non-embedding Parameters | 1.133B |
| Total Parameters | 1.657B |
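As a quick sanity check on these figures, the embedding and LM-head counts imply a vocabulary of roughly 128k tokens (an inference from the table, not an officially stated number):
```python
# Back-of-the-envelope check of the hyper-parameter table (the vocabulary
# size is inferred from the counts, not an official EuroLLM figure).
emb_size = 2048
vocab_size = 0.262e9 / emb_size              # ≈ 128,000 tokens
total_b = 0.262 + 0.262 + 1.133              # embeddings + LM head + body
print(round(vocab_size), f"{total_b:.3f}B")  # 127930 1.657B
```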
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "utter-project/EuroLLM-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

text = '<|im_start|>system\n<|im_end|>\n<|im_start|>user\nTranslate the following English source text to Portuguese:\nEnglish: I am a language model for european languages. \nPortuguese: <|im_end|>\n<|im_start|>assistant\n'

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
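If the released tokenizer ships a matching chat template, the same prompt can also be built programmatically (an assumption worth verifying against the literal string above):
```python
# Build the ChatML-style prompt via the tokenizer's chat template
# (assumes the released tokenizer defines one; otherwise use the literal string).
messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "Translate the following English source text to Portuguese:\nEnglish: I am a language model for european languages. \nPortuguese: "},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```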
## Results
### Machine Translation
We evaluate EuroLLM-1.7B-Instruct on several machine translation benchmarks: FLORES-200, WMT-23, and WMT-24, comparing it with [Gemma-2B](https://huggingface.co/google/gemma-2b) and [Gemma-7B](https://huggingface.co/google/gemma-7b) (both also instruction tuned on EuroBlocks).
The results show that EuroLLM-1.7B is substantially better than Gemma-2B in Machine Translation and competitive with Gemma-7B.
#### Flores-200
| Model | AVG | AVG en-xx | AVG xx-en | en-ar | en-bg | en-ca | en-cs | en-da | en-de | en-el | en-es-latam | en-et | en-fi | en-fr | en-ga | en-gl | en-hi | en-hr | en-hu | en-it | en-ja | en-ko | en-lt | en-lv | en-mt | en-nl | en-no | en-pl | en-pt-br | en-ro | en-ru | en-sk | en-sl | en-sv | en-tr | en-uk | en-zh-cn | ar-en | bg-en | ca-en | cs-en | da-en | de-en | el-en | es-latam-en | et-en | fi-en | fr-en | ga-en | gl-en | hi-en | hr-en | hu-en | it-en | ja-en | ko-en | lt-en | lv-en | mt-en | nl-en | no-en | pl-en | pt-br-en | ro-en | ru-en | sk-en | sl-en | sv-en | tr-en | uk-en | zh-cn-en |
|--------------------------------|------|-----------|-----------|-------|-------|-------|-------|-------|-------|-------|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|----------|-------|-------|-------|-------|-------|-------|-------|----------|-------|-------|-------|-------|-------|-------|-------|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|----------|-------|-------|-------|-------|-------|-------|-------|----------|
| EuroLLM-1.7B-Instruct |86.89 | 86.53 | 87.25 | 85.17 | 89.42 | 84.72 | 89.13 | 89.47 | 86.90 | 87.60 | 86.29 | 88.95 | 89.40 | 87.69 | 74.89 | 86.41 | 76.92 | 84.79 | 86.78 | 88.17 | 89.76 | 87.70 | 87.27 | 87.62 | 67.84 | 87.10 | 90.00 | 88.18 | 89.29 | 89.49 | 88.32 | 88.18 | 86.85 | 90.00 | 87.31 | 87.89 | 86.60 | 86.34 | 87.45 | 87.57 | 87.95 | 89.72 | 88.80 | 87.00 | 86.77 | 88.34 | 89.09 | 88.95 | 82.69 | 87.80 | 88.37 | 86.71 | 87.20 | 87.81 | 86.79 | 86.79 | 85.62 | 86.48 | 81.10 | 86.97 | 90.25 | 85.75 | 89.20 | 88.88 | 86.00 | 87.38 | 86.76 | 89.61 | 87.94 |
| Gemma-2B-EuroBlocks | 81.59 | 78.97 | 84.21 | 76.68 | 82.73 | 83.14 | 81.63 | 84.63 | 83.15 | 79.42 | 84.05 | 72.58 | 79.73 | 84.97 | 40.50 | 82.13 | 67.79 | 80.53 | 78.36 | 84.90 | 87.43 | 82.98 | 72.29 | 68.68 | 58.55 | 83.13 | 86.15 | 82.78 | 86.79 | 83.14 | 84.61 | 78.18 | 75.37 | 80.89 | 78.38 | 84.38 | 84.35 | 83.88 | 85.77 | 86.85 | 86.31 | 88.24 | 88.12 | 84.79 | 84.90 | 82.51 | 86.32 | 88.29 | 54.78 | 86.53 | 85.83 | 85.41 | 85.18 | 86.77 | 85.78 | 84.99 | 81.65 | 81.78 | 67.27 | 85.92 | 89.07 | 84.14 | 88.07 | 87.17 | 85.23 | 85.09 | 83.95 | 87.57 | 84.77 |
| Gemma-7B-EuroBlocks |85.27 | 83.90 | 86.64 | 86.38 | 87.87 | 85.74 | 84.25 | 85.69 | 81.49 | 85.52 | 86.93 | 62.83 | 84.96 | 75.34 | 84.93 | 83.91 | 86.92 | 88.19 | 86.11 | 81.73 | 80.55 | 66.85 | 85.31 | 89.36 | 85.87 | 88.62 | 88.06 | 86.67 | 84.79 | 82.71 | 86.45 | 85.19 | 86.67 | 85.77 | 86.36 | 87.21 | 88.09 | 87.17 | 89.40 | 88.26 | 86.74 | 86.73 | 87.25 | 88.87 | 88.81 | 72.45 | 87.62 | 87.86 | 87.08 | 87.01 | 87.58 | 86.92 | 86.70 | 85.10 | 85.74 | 77.81 | 86.83 | 90.40 | 85.41 | 89.04 | 88.77 | 86.13 | 86.67 | 86.32 | 89.27 | 87.92 |
#### WMT-23
| Model | AVG | AVG en-xx | AVG xx-en | AVG xx-xx | en-de | en-cs | en-uk | en-ru | en-zh-cn | de-en | uk-en | ru-en | zh-cn-en | cs-uk |
|--------------------------------|------|-----------|-----------|-----------|-------|-------|-------|-------|----------|-------|-------|-------|----------|-------|
| EuroLLM-1.7B-Instruct | 82.91 | 83.20 | 81.77 | 86.82 | 81.56 | 85.23 | 81.30 | 82.47 | 83.61 | 85.03 | 84.06 | 85.25 | 81.31 | 78.83 | 79.42 | 86.82 |
| Gemma-2B-EuroBlocks | 79.96 | 79.01 | 80.86 | 81.15 | 76.82 | 76.05 | 77.92 | 78.98 | 81.58 | 82.73 | 82.71 | 83.99 | 80.35 | 78.27 | 78.99 | 81.15 |
| Gemma-7B-EuroBlocks | 82.76 | 82.26 | 82.70 | 85.98 | 81.37 | 82.42 | 81.54 | 82.18 | 82.90 | 83.17 | 84.29 | 85.70 | 82.46 | 79.73 | 81.33 | 85.98 |
#### WMT-24
| Model | AVG | AVG en-xx | AVG xx-xx | en-de | en-es-latam | en-cs | en-ru | en-uk | en-ja | en-zh-cn | en-hi | cs-uk | ja-zh-cn |
|---------|------|------|-------|----|---|-------|-------|--------|--------|-------|-------|-------|-----|
| EuroLLM-1.7B-Instruct|79.32 | 79.32 | 79.34 | 79.42 | 80.67 | 80.55 | 78.65 | 80.12 | 82.96 | 80.60 | 71.59 | 83.48 | 75.20 |
|Gemma-2B-EuroBlocks| 74.72 | 74.41 | 75.97 | 74.93 | 78.81 | 70.54 | 74.90 | 75.84 | 79.48 | 78.06 | 62.70 | 79.87 | 72.07 |
|Gemma-7B-EuroBlocks| 78.67 | 78.34 | 80.00 | 78.88 | 80.47 | 78.55 | 78.55 | 80.12 | 80.55 | 78.90 | 70.71 | 84.33 | 75.66 |
### General Benchmarks
We also compare EuroLLM-1.7B with [TinyLlama-v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) and [Gemma-2B](https://huggingface.co/google/gemma-2b) on two general benchmarks: Arc Challenge and Hellaswag.
For the non-English languages we use the [Okapi](https://aclanthology.org/2023.emnlp-demo.28.pdf) datasets.
Results show that EuroLLM-1.7B is superior to TinyLlama-v1.1 and similar to Gemma-2B on Hellaswag but worse on Arc Challenge. This may be due to EuroLLM-1.7B's lower parameter count (1.133B non-embedding parameters versus Gemma-2B's 1.981B).
#### Arc Challenge
| Model | Average | English | German | Spanish | French | Italian | Portuguese | Chinese | Russian | Dutch | Arabic | Swedish | Hindi | Hungarian | Romanian | Ukrainian | Danish | Catalan |
|--------------------|---------|---------|--------|---------|--------|---------|------------|---------|---------|-------|--------|---------|--------|-----------|----------|-----------|--------|---------|
| EuroLLM-1.7B | 0.3496 | 0.4061 | 0.3464 | 0.3684 | 0.3627 | 0.3738 | 0.3855 | 0.3521 | 0.3208 | 0.3507 | 0.3045 | 0.3605 | 0.2928 | 0.3271 | 0.3488 | 0.3516 | 0.3513 | 0.3396 |
| TinyLlama-v1.1 | 0.2650 | 0.3712 | 0.2524 | 0.2795 | 0.2883 | 0.2652 | 0.2906 | 0.2410 | 0.2669 | 0.2404 | 0.2310 | 0.2687 | 0.2354 | 0.2449 | 0.2476 | 0.2524 | 0.2494 | 0.2796 |
| Gemma-2B | 0.3617 | 0.4846 | 0.3755 | 0.3940 | 0.4080 | 0.3687 | 0.3872 | 0.3726 | 0.3456 | 0.3328 | 0.3122 | 0.3519 | 0.2851 | 0.3039 | 0.3590 | 0.3601 | 0.3565 | 0.3516 |
#### Hellaswag
| Model | Average | English | German | Spanish | French | Italian | Portuguese | Russian | Dutch | Arabic | Swedish | Hindi | Hungarian | Romanian | Ukrainian | Danish | Catalan |
|--------------------|---------|---------|--------|---------|--------|---------|------------|---------|--------|--------|---------|--------|-----------|----------|-----------|--------|---------|
| EuroLLM-1.7B | 0.4744 | 0.4760 | 0.6057 | 0.4793 | 0.5337 | 0.5298 | 0.5085 | 0.5224 | 0.4654 | 0.4949 | 0.4104 | 0.4800 | 0.3655 | 0.4097 | 0.4606 | 0.436 | 0.4702 | 0.4445 |
| TinyLlama-v1.1 |0.3674 | 0.6248 | 0.3650 | 0.4137 | 0.4010 | 0.3780 | 0.3892 | 0.3494 | 0.3588 | 0.2880 | 0.3561 | 0.2841 | 0.3073 | 0.3267 | 0.3349 | 0.3408 | 0.3613 |
| Gemma-2B |0.4666 | 0.7165 | 0.4756 | 0.5414 | 0.5180 | 0.4841 | 0.5081 | 0.4664 | 0.4655 | 0.3868 | 0.4383 | 0.3413 | 0.3710 | 0.4316 | 0.4291 | 0.4471 | 0.4448 |
## Bias, Risks, and Limitations
EuroLLM-1.7B-Instruct has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
|
grabbe-gymnasium-detmold/grabbe-ai-llama-3-2-1b
|
grabbe-gymnasium-detmold
| 2024-09-27T15:20:25Z | 58 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-09-27T15:15:36Z |
---
base_model: unsloth/llama-3.2-1b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** grabbe-gymnasium-detmold
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
simmo/llama3.2-pyfim-3b
|
simmo
| 2024-09-27T15:19:07Z | 9 | 0 | null |
[
"safetensors",
"gguf",
"llama",
"unsloth",
"trl",
"sft",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-26T10:19:52Z |
---
license: apache-2.0
tags:
- unsloth
- trl
- sft
---
|
SantiagoMJ/Mistral-7b-retie-serV2
|
SantiagoMJ
| 2024-09-27T15:17:52Z | 5 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-27T15:13:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kejian/gemma-2b-gsm8k
|
kejian
| 2024-09-27T15:11:53Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-27T15:05:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jfiekdjdk/Qwen2.5-14B-Instruct-abliterated-4.0bpw-h6-exl2
|
jfiekdjdk
| 2024-09-27T15:08:31Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"abliterated",
"uncensored",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-09-27T15:03:55Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-14B-Instruct
tags:
- chat
- abliterated
- uncensored
---
# huihui-ai/Qwen2.5-14B-Instruct-abliterated
This is an uncensored version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more about it).
Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.
## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-14B-Instruct-abliterated"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Initialize conversation context
initial_messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy() # Copy the initial conversation context
# Enter conversation loop
while True:
# Get user input
user_input = input("User: ").strip() # Strip leading and trailing spaces
# If the user types '/exit', end the conversation
if user_input.lower() == "/exit":
print("Exiting chat.")
break
# If the user types '/clean', reset the conversation context
if user_input.lower() == "/clean":
messages = initial_messages.copy() # Reset conversation context
print("Chat history cleared. Starting a new conversation.")
continue
# If input is empty, prompt the user and continue
if not user_input:
print("Input cannot be empty. Please enter something.")
continue
# Add user input to the conversation
messages.append({"role": "user", "content": user_input})
# Build the chat template
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# Tokenize input and prepare it for the model
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# Generate a response from the model
generated_ids = model.generate(
**model_inputs,
max_new_tokens=8192
)
# Extract model output, removing special tokens
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
# Add the model's response to the conversation
messages.append({"role": "assistant", "content": response})
# Print the model's response
print(f"Qwen: {response}")
```
## Evaluations
Evaluations are ongoing; results will be added later.
|
S-ch/distilbert-base-uncased-finetuned-imdb
|
S-ch
| 2024-09-27T14:59:05Z | 209 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-09-27T14:46:32Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4869
- Model Preparation Time: 0.0027
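For reference, here is a minimal way to query the fine-tuned masked-language model (a usage sketch, assuming the checkpoint is loadable under this repo id):
```python
# Fill-mask sketch for the fine-tuned checkpoint (repo id assumed public).
from transformers import pipeline

fill = pipeline("fill-mask", model="S-ch/distilbert-base-uncased-finetuned-imdb")
for pred in fill("This movie was a great [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```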
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|
| 2.6773 | 1.0 | 157 | 2.4911 | 0.0027 |
| 2.5839 | 2.0 | 314 | 2.4472 | 0.0027 |
| 2.5277 | 3.0 | 471 | 2.4799 | 0.0027 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 3.0.0
- Tokenizers 0.19.1
|
SongTonyLi/Phi-3.5-mini-instruct-CPT-D1_chosen-then-DPO-D2a-dpo-mix-shuffled5
|
SongTonyLi
| 2024-09-27T14:55:39Z | 88 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"trl",
"dpo",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-27T01:48:00Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/LiquidCrystal_V3-20B-GGUF
|
mradermacher
| 2024-09-27T14:49:34Z | 155 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Elfrino/LiquidCrystal_V3-20B",
"base_model:quantized:Elfrino/LiquidCrystal_V3-20B",
"endpoints_compatible",
"region:us"
] | null | 2024-09-27T02:03:25Z |
---
base_model: Elfrino/LiquidCrystal_V3-20B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Elfrino/LiquidCrystal_V3-20B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/LiquidCrystal_V3-20B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
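As a quick illustration of the multi-part case, a Python sketch (the part-file names are hypothetical; defer to the linked READMEs for the canonical procedure):
```python
# Join split GGUF parts back into one file (hypothetical part names).
import glob
import shutil

parts = sorted(glob.glob("LiquidCrystal_V3-20B.Q8_0.gguf.part*"))
with open("LiquidCrystal_V3-20B.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream-copy to keep memory usage flat
```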
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LiquidCrystal_V3-20B-GGUF/resolve/main/LiquidCrystal_V3-20B.Q2_K.gguf) | Q2_K | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/LiquidCrystal_V3-20B-GGUF/resolve/main/LiquidCrystal_V3-20B.IQ3_XS.gguf) | IQ3_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/LiquidCrystal_V3-20B-GGUF/resolve/main/LiquidCrystal_V3-20B.IQ3_S.gguf) | IQ3_S | 8.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LiquidCrystal_V3-20B-GGUF/resolve/main/LiquidCrystal_V3-20B.Q3_K_S.gguf) | Q3_K_S | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/LiquidCrystal_V3-20B-GGUF/resolve/main/LiquidCrystal_V3-20B.IQ3_M.gguf) | IQ3_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/LiquidCrystal_V3-20B-GGUF/resolve/main/LiquidCrystal_V3-20B.Q3_K_M.gguf) | Q3_K_M | 9.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LiquidCrystal_V3-20B-GGUF/resolve/main/LiquidCrystal_V3-20B.Q3_K_L.gguf) | Q3_K_L | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/LiquidCrystal_V3-20B-GGUF/resolve/main/LiquidCrystal_V3-20B.IQ4_XS.gguf) | IQ4_XS | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/LiquidCrystal_V3-20B-GGUF/resolve/main/LiquidCrystal_V3-20B.Q4_K_S.gguf) | Q4_K_S | 11.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LiquidCrystal_V3-20B-GGUF/resolve/main/LiquidCrystal_V3-20B.Q4_K_M.gguf) | Q4_K_M | 12.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LiquidCrystal_V3-20B-GGUF/resolve/main/LiquidCrystal_V3-20B.Q5_K_S.gguf) | Q5_K_S | 13.9 | |
| [GGUF](https://huggingface.co/mradermacher/LiquidCrystal_V3-20B-GGUF/resolve/main/LiquidCrystal_V3-20B.Q5_K_M.gguf) | Q5_K_M | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/LiquidCrystal_V3-20B-GGUF/resolve/main/LiquidCrystal_V3-20B.Q6_K.gguf) | Q6_K | 16.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LiquidCrystal_V3-20B-GGUF/resolve/main/LiquidCrystal_V3-20B.Q8_0.gguf) | Q8_0 | 21.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF
|
bartowski
| 2024-09-27T14:45:20Z | 580 | 3 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"base_model:nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B",
"base_model:quantized:nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-09-27T14:03:28Z |
---
base_model: nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of Mistral-Nemo-Gutenberg-Doppel-12B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3825">b3825</a> for quantization.
Original model: https://huggingface.co/nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<s>[INST]{prompt}[/INST]
```
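One way to exercise this prompt format locally is via the third-party llama-cpp-python binding (a sketch; the file name is one of the quants listed below):
```python
# Minimal llama-cpp-python sketch using the [INST] prompt format above.
from llama_cpp import Llama

llm = Llama(model_path="Mistral-Nemo-Gutenberg-Doppel-12B-Q4_K_M.gguf", n_ctx=4096)
out = llm("<s>[INST]Write two sentences about lighthouses.[/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```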
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Mistral-Nemo-Gutenberg-Doppel-12B-f16.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-f16.gguf) | f16 | 24.50GB | false | Full F16 weights. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q8_0.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q8_0.gguf) | Q8_0 | 13.02GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q6_K_L.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q6_K_L.gguf) | Q6_K_L | 10.38GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q6_K.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q6_K.gguf) | Q6_K | 10.06GB | false | Very high quality, near perfect, *recommended*. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q5_K_L.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q5_K_L.gguf) | Q5_K_L | 9.14GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q5_K_M.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q5_K_M.gguf) | Q5_K_M | 8.73GB | false | High quality, *recommended*. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q5_K_S.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q5_K_S.gguf) | Q5_K_S | 8.52GB | false | High quality, *recommended*. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q4_K_L.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q4_K_L.gguf) | Q4_K_L | 7.98GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q4_K_M.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q4_K_M.gguf) | Q4_K_M | 7.48GB | false | Good quality, default size for most use cases, *recommended*. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q3_K_XL.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q3_K_XL.gguf) | Q3_K_XL | 7.15GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q4_K_S.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q4_K_S.gguf) | Q4_K_S | 7.12GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q4_0.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q4_0.gguf) | Q4_0 | 7.09GB | false | Legacy format, generally not worth using over similarly sized formats |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q4_0_8_8.gguf) | Q4_0_8_8 | 7.07GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q4_0_4_8.gguf) | Q4_0_4_8 | 7.07GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q4_0_4_4.gguf) | Q4_0_4_4 | 7.07GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-IQ4_XS.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-IQ4_XS.gguf) | IQ4_XS | 6.74GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q3_K_L.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q3_K_L.gguf) | Q3_K_L | 6.56GB | false | Lower quality but usable, good for low RAM availability. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q3_K_M.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q3_K_M.gguf) | Q3_K_M | 6.08GB | false | Low quality. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-IQ3_M.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-IQ3_M.gguf) | IQ3_M | 5.72GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q3_K_S.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q3_K_S.gguf) | Q3_K_S | 5.53GB | false | Low quality, not recommended. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q2_K_L.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q2_K_L.gguf) | Q2_K_L | 5.45GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-IQ3_XS.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-IQ3_XS.gguf) | IQ3_XS | 5.31GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-Q2_K.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-Q2_K.gguf) | Q2_K | 4.79GB | false | Very low quality but surprisingly usable. |
| [Mistral-Nemo-Gutenberg-Doppel-12B-IQ2_M.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF/blob/main/Mistral-Nemo-Gutenberg-Doppel-12B-IQ2_M.gguf) | IQ2_M | 4.44GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF --include "Mistral-Nemo-Gutenberg-Doppel-12B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF --include "Mistral-Nemo-Gutenberg-Doppel-12B-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Mistral-Nemo-Gutenberg-Doppel-12B-Q8_0) or download them all in place (./)
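The same download can also be scripted with the `huggingface_hub` Python API:
```python
# Python equivalent of the CLI download above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Mistral-Nemo-Gutenberg-Doppel-12B-GGUF",
    filename="Mistral-Nemo-Gutenberg-Doppel-12B-Q4_K_M.gguf",
    local_dir="./",
)
print(path)
```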
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mergekit-community/OpenGPT-3
|
mergekit-community
| 2024-09-27T14:32:41Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:mergekit-community/BetterGPT2",
"base_model:finetune:mergekit-community/BetterGPT2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-27T14:31:46Z |
---
base_model:
- mergekit-community/BetterGPT2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mergekit-community/BetterGPT2](https://huggingface.co/mergekit-community/BetterGPT2) as a base.
### Models Merged
The following models were included in the merge:
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mergekit-community/BetterGPT2
parameters:
density: 0.5
weight: 0.5
- model: mergekit-community/BetterGPT2
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: mergekit-community/BetterGPT2
parameters:
normalize: false
int8_mask: true
dtype: float16
```
|
mateiaassAI/T5_MEID-new-MT-RONACC-MT-16
|
mateiaassAI
| 2024-09-27T14:26:46Z | 128 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-09-27T14:26:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QuantFactory/Vikhr-Llama-3.2-1B-Instruct-GGUF
|
QuantFactory
| 2024-09-27T14:26:44Z | 105 | 2 |
transformers
|
[
"transformers",
"gguf",
"ru",
"en",
"dataset:Vikhrmodels/GrandMaster-PRO-MAX",
"arxiv:2405.13929",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-27T14:19:39Z |
---
library_name: transformers
model_name: Vikhr-Llama-3.2-1B-instruct
base_model:
- meta-llama/Llama-3.2-1B-Instruct
language:
- ru
- en
license: llama3.2
datasets:
- Vikhrmodels/GrandMaster-PRO-MAX
---
[](https://hf.co/QuantFactory)
# QuantFactory/Vikhr-Llama-3.2-1B-Instruct-GGUF
This is a quantized version of [Vikhrmodels/Vikhr-Llama-3.2-1B-Instruct](https://huggingface.co/Vikhrmodels/Vikhr-Llama-3.2-1B-Instruct) created using llama.cpp.
# Original Model Card
# 💨📱 Vikhr-Llama-3.2-1B-instruct
#### RU
Инструктивная модель на основе Llama-3.2-1B-Instruct, обученная на русскоязычном датасете GrandMaster-PRO-MAX. В 5 раз эффективнее базовой модели, и идеально подходит для запуска на слабых или мобильных устройствах.
#### EN
Instructive model based on Llama-3.2-1B-Instruct, trained on the Russian-language dataset GrandMaster-PRO-MAX. It is 5 times more efficient than the base model, making it perfect for deployment on low-power or mobile devices.
## GGUF
- [Vikhrmodels/Vikhr-Llama-3.2-1B-instruct-GGUF](https://huggingface.co/Vikhrmodels/Vikhr-Llama-3.2-1B-instruct-GGUF)
## Особенности / Features:
- 📚 Основа / Base: [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
- 🇷🇺 Специализация / Specialization: **RU**
- 💾 Датасет / Dataset: [GrandMaster-PRO-MAX](https://huggingface.co/datasets/Vikhrmodels/GrandMaster-PRO-MAX)
## Попробовать / Try now:
[](https://colab.research.google.com/drive/1bJpLmplDGkMbfOLO2CH6IO-2uUZEaknf?usp=sharing)
## Описание / Description:
#### RU
Vikhr-Llama-3.2-1B-instruct — это компактная языковая модель, обученная на датасете GrandMaster-PRO-MAX, специально доученная для обработки русского языка. Эффективность модели в 5 раз превышает базовую модель, а её размер не превышает 3GB, что делает её отличным выбором для запуска на слабых и мобильных устройствах.
#### EN
Vikhr-Llama-3.2-1B-instruct is a compact language model trained on the GrandMaster-PRO-MAX dataset, specifically designed for processing the Russian language. Its efficiency is 5 times higher than the base model, and its size does not exceed 3GB, making it an excellent choice for deployment on low-power and mobile devices.
## Обучение / Train:
#### RU
Для создания **Vikhr-Llama-3.2-1B-instruct** использовался метод SFT (Supervised Fine-Tuning). Мы обучили модель на синтетическом датасете **Vikhrmodels/GrandMaster-PRO-MAX** (150k инструкций) с поддержкой CoT (Chain-Of-Thought), используя промпты для GPT-4-turbo.
Скрипт для запуска SFT можно найти в нашей библиотеке на GitHub: [effective_llm_alignment](https://github.com/VikhrModels/effective_llm_alignment/).
#### EN
To create **Vikhr-Llama-3.2-1B-instruct**, the SFT (Supervised Fine-Tuning) method was used. We trained the model on a synthetic dataset **Vikhrmodels/GrandMaster-PRO-MAX** (150k instructions) with support for CoT (Chain-Of-Thought), utilizing prompts for GPT-4-turbo.
The script for running SFT can be found in our GitHub repository: [effective_llm_alignment](https://github.com/VikhrModels/effective_llm_alignment/).
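For reference, below is a minimal SFT sketch in the spirit of this description, using TRL's `SFTTrainer` (a hedged illustration with placeholder hyper-parameters, not the actual training script; the real one lives in the repository linked above):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical minimal SFT run on the instruction dataset; how the dataset's
# chat fields map onto the template is assumed here, not taken from the card.
dataset = load_dataset("Vikhrmodels/GrandMaster-PRO-MAX", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B-Instruct",             # base model named in the card
    train_dataset=dataset,
    args=SFTConfig(output_dir="vikhr-llama-3.2-1b-sft"),  # placeholder config
)
trainer.train()
```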
## Пример кода для запуска / Sample code to run:
**Рекомендуемая температура для генерации: 0.3** / **Recommended generation temperature: 0.3**.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "Vikhrmodels/Vikhr-Llama-3.2-1B-instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prepare the input text
input_text = "Напиши очень краткую рецензию о книге гарри поттер."

# Tokenize and generate text
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=1512,
    do_sample=True,  # required for temperature/top_k/top_p to take effect
    temperature=0.3,
    num_return_sequences=1,
    no_repeat_ngram_size=2,
    top_k=50,
    top_p=0.95,
)

# Decode and print the result
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
#### Ответ модели / Model response:
> **Краткая рецензия на книгу "Гарри Поттер"**
>
> "Гарри Поттер" — это серия книг, написанная Дж. К. Роулинг, которая стала культовой в мире детских литературы. Книги рассказывают о жизни и приключениях молодого ученика по имени Гарри Поттер, который стал знаменитым по своей способности к магии.
>
> **Основные моменты:**
>
> 1. **Введение в мир Гарри Поттера:** Книги начинаются с описания Гарри, его семьи и школы, где он изучает магию. Гарри — необычный ученик, который не имеет магических способностей, но обладает уникальным умом и способностью к решению проблем.
>
> 2. **Социальные и политические аспекты:** В книгах рассматриваются социальные и политические аспекты, такие как правительство, магические общества, и их взаимодействие.
>
> 3. **Магические приключения:** Гарри и его друзья, включая Рон и Хэл, сталкиваются с множеством магических угроз, включая злодеев, такие как Волшебный Войнук и Сатан.
>
> 4. **Развитие персонажей:** В книгах развиваются персонажи, их мотивации и отношения с другими персонажами.
>
> 5. **Философские и моральные вопросы:** Книги затрагивают темы, такие как вера, доброта, справедливость и моральные дилеммы.
>
> **Заключение:**
>
> "Гарри Поттер" — это не только история о молодом ученике, но и глубокое исследование человеческого опыта, социальных норм и моральных дилемм. Книги привлекают читателей своими захватывающими сюжетами, яркими персонажами и глубокими философскими размышлениями. Они являются не только увлекательным приключением, но и важным источником вдохновения для многих людей.
## Метрики на ru_arena_general / Metrics on ru_arena_general
| **Model** | **Score** | **95% CI** | **Avg Tokens** | **Std Tokens** | **LC Score** |
| ------------------------------------------- | --------- | --------------- | -------------- | -------------- | ------------ |
| kolibri-vikhr-mistral-0427 | 22.41 | +1.6 / -1.6 | 489.89 | 566.29 | 46.04 |
| storm-7b | 20.62 | +2.0 / -1.6 | 419.32 | 190.85 | 45.78 |
| neural-chat-7b-v3-3 | 19.04 | +2.0 / -1.7 | 927.21 | 1211.62 | 45.56 |
| **Vikhrmodels-Vikhr-Llama-3.2-1B-instruct** | **19.04** | **+1.3 / -1.6** | **958.63** | **1297.33** | **45.56** |
| gigachat_lite | 17.2 | +1.4 / -1.4 | 276.81 | 329.66 | 45.29 |
| Vikhrmodels-vikhr-qwen-1.5b-it | 13.19 | +1.4 / -1.6 | 2495.38 | 741.45 | 44.72 |
| meta-llama-Llama-3.2-1B-Instruct | 4.04 | +0.8 / -0.6 | 1240.53 | 1783.08 | 43.42 |
### Авторы / Authors
- Sergei Bratchikov, [NLP Wanderer](https://t.me/nlpwanderer), [Vikhr Team](https://t.me/vikhrlabs)
- Nikolay Kompanets, [LakoMoor](https://t.me/lakomoor), [Vikhr Team](https://t.me/vikhrlabs)
- Konstantin Korolev, [Vikhr Team](https://t.me/vikhrlabs)
- Aleksandr Nikolich, [Vikhr Team](https://t.me/vikhrlabs)
```
@article{nikolich2024vikhr,
title={Vikhr: The Family of Open-Source Instruction-Tuned Large Language Models for Russian},
author={Aleksandr Nikolich and Konstantin Korolev and Sergey Bratchikov and Nikolay Kompanets and Artem Shelmanov},
journal={arXiv preprint arXiv:2405.13929},
year={2024},
url={https://arxiv.org/pdf/2405.13929}
}
```
|
QuantFactory/EuroLLM-1.7B-GGUF
|
QuantFactory
| 2024-09-27T14:17:13Z | 90 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"de",
"es",
"fr",
"it",
"pt",
"pl",
"nl",
"tr",
"sv",
"cs",
"el",
"hu",
"ro",
"fi",
"uk",
"sl",
"sk",
"da",
"lt",
"lv",
"et",
"bg",
"no",
"ca",
"hr",
"ga",
"mt",
"gl",
"zh",
"ru",
"ko",
"ja",
"ar",
"hi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-09-27T14:08:24Z |
---
license: apache-2.0
language:
- en
- de
- es
- fr
- it
- pt
- pl
- nl
- tr
- sv
- cs
- el
- hu
- ro
- fi
- uk
- sl
- sk
- da
- lt
- lv
- et
- bg
- 'no'
- ca
- hr
- ga
- mt
- gl
- zh
- ru
- ko
- ja
- ar
- hi
library_name: transformers
---
[](https://hf.co/QuantFactory)
# QuantFactory/EuroLLM-1.7B-GGUF
This is a quantized version of [utter-project/EuroLLM-1.7B](https://huggingface.co/utter-project/EuroLLM-1.7B) created using llama.cpp.
# Original Model Card
## *Model updated on September 24*
# Model Card for EuroLLM-1.7B
This is the model card for the first pre-trained model of the EuroLLM series: EuroLLM-1.7B. You can also check the instruction tuned version: [EuroLLM-1.7B-Instruct](https://huggingface.co/utter-project/EuroLLM-1.7B-Instruct).
- **Developed by:** Unbabel, Instituto Superior Técnico, University of Edinburgh, Aveni, University of Paris-Saclay, University of Amsterdam, Naver Labs, Sorbonne Université.
- **Funded by:** European Union.
- **Model type:** A 1.7B parameter multilingual transformer LLM.
- **Language(s) (NLP):** Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish, Arabic, Catalan, Chinese, Galician, Hindi, Japanese, Korean, Norwegian, Russian, Turkish, and Ukrainian.
- **License:** Apache License 2.0.
## Model Details
The EuroLLM project has the goal of creating a suite of LLMs capable of understanding and generating text in all European Union languages as well as some additional relevant languages.
EuroLLM-1.7B is a 1.7B parameter model trained on 4 trillion tokens divided across the considered languages and several data sources: Web data, parallel data (en-xx and xx-en), and high-quality datasets.
EuroLLM-1.7B-Instruct was further instruction tuned on EuroBlocks, an instruction tuning dataset with focus on general instruction-following and machine translation.
### Model Description
EuroLLM uses a standard, dense Transformer architecture:
- We use grouped query attention (GQA) with 8 key-value heads, since it has been shown to increase speed at inference time while maintaining downstream performance.
- We perform pre-layer normalization, since it improves training stability, and use RMSNorm, which is faster.
- We use the SwiGLU activation function, since it has been shown to lead to good results on downstream tasks.
- We use rotary positional embeddings (RoPE) in every layer, since these have been shown to lead to good performances while allowing the extension of the context length.
For pre-training, we use 256 Nvidia H100 GPUs of the Marenostrum 5 supercomputer, training the model with a constant batch size of 3,072 sequences, which corresponds to approximately 12 million tokens, using the Adam optimizer, and BF16 precision.
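(At a sequence length of 4,096 tokens, a batch of 3,072 sequences works out to 3,072 × 4,096 = 12,582,912 ≈ 12.6 million tokens, consistent with the figure above.)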
Here is a summary of the model hyper-parameters:
| | |
|--------------------------------------|----------------------|
| Sequence Length | 4,096 |
| Number of Layers | 24 |
| Embedding Size | 2,048 |
| FFN Hidden Size | 5,632 |
| Number of Heads | 16 |
| Number of KV Heads (GQA) | 8 |
| Activation Function | SwiGLU |
| Position Encodings | RoPE (θ = 10,000) |
| Layer Norm | RMSNorm |
| Tied Embeddings | No |
| Embedding Parameters | 0.262B |
| LM Head Parameters | 0.262B |
| Non-embedding Parameters | 1.133B |
| Total Parameters | 1.657B |
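As an illustration only, these hyper-parameters can be sketched as a Llama-style `transformers` configuration; this is a hypothetical mapping, not the official EuroLLM config, and the vocabulary size is inferred from the 0.262B embedding parameters divided by the 2,048 embedding size (≈ 128,000):

```python
from transformers import LlamaConfig

# Hypothetical mapping of EuroLLM-1.7B's hyper-parameters onto a dense
# Llama-style config, for illustration only.
config = LlamaConfig(
    vocab_size=128_000,             # assumption: ~0.262B embedding params / 2,048
    hidden_size=2_048,              # Embedding Size
    intermediate_size=5_632,        # FFN Hidden Size (SwiGLU)
    num_hidden_layers=24,
    num_attention_heads=16,
    num_key_value_heads=8,          # GQA with 8 KV heads
    max_position_embeddings=4_096,  # Sequence Length
    rope_theta=10_000.0,            # RoPE theta
    hidden_act="silu",              # SwiGLU uses the SiLU gate activation
    tie_word_embeddings=False,
)
```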
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "utter-project/EuroLLM-1.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Few-shot style translation prompt; the model continues with the Portuguese translation
text = "English: My name is EuroLLM. Portuguese:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Results
### Machine Translation
We evaluate EuroLLM-1.7B-Instruct on several machine translation benchmarks: FLORES-200, WMT-23, and WMT-24 comparing it with [Gemma-2B](https://huggingface.co/google/gemma-2b) and [Gemma-7B](https://huggingface.co/google/gemma-7b) (also instruction tuned on EuroBlocks).
The results show that EuroLLM-1.7B is substantially better than Gemma-2B in Machine Translation and competitive with Gemma-7B.
#### Flores-200
| Model | AVG | AVG en-xx | AVG xx-en | en-ar | en-bg | en-ca | en-cs | en-da | en-de | en-el | en-es-latam | en-et | en-fi | en-fr | en-ga | en-gl | en-hi | en-hr | en-hu | en-it | en-ja | en-ko | en-lt | en-lv | en-mt | en-nl | en-no | en-pl | en-pt-br | en-ro | en-ru | en-sk | en-sl | en-sv | en-tr | en-uk | en-zh-cn | ar-en | bg-en | ca-en | cs-en | da-en | de-en | el-en | es-latam-en | et-en | fi-en | fr-en | ga-en | gl-en | hi-en | hr-en | hu-en | it-en | ja-en | ko-en | lt-en | lv-en | mt-en | nl-en | no-en | pl-en | pt-br-en | ro-en | ru-en | sk-en | sl-en | sv-en | tr-en | uk-en | zh-cn-en |
|--------------------------------|------|-----------|-----------|-------|-------|-------|-------|-------|-------|-------|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|----------|-------|-------|-------|-------|-------|-------|-------|----------|-------|-------|-------|-------|-------|-------|-------|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|----------|-------|-------|-------|-------|-------|-------|-------|----------|
| EuroLLM-1.7B-Instruct |86.89 | 86.53 | 87.25 | 85.17 | 89.42 | 84.72 | 89.13 | 89.47 | 86.90 | 87.60 | 86.29 | 88.95 | 89.40 | 87.69 | 74.89 | 86.41 | 76.92 | 84.79 | 86.78 | 88.17 | 89.76 | 87.70 | 87.27 | 87.62 | 67.84 | 87.10 | 90.00 | 88.18 | 89.29 | 89.49 | 88.32 | 88.18 | 86.85 | 90.00 | 87.31 | 87.89 | 86.60 | 86.34 | 87.45 | 87.57 | 87.95 | 89.72 | 88.80 | 87.00 | 86.77 | 88.34 | 89.09 | 88.95 | 82.69 | 87.80 | 88.37 | 86.71 | 87.20 | 87.81 | 86.79 | 86.79 | 85.62 | 86.48 | 81.10 | 86.97 | 90.25 | 85.75 | 89.20 | 88.88 | 86.00 | 87.38 | 86.76 | 89.61 | 87.94 |
| Gemma-2B-EuroBlocks | 81.59 | 78.97 | 84.21 | 76.68 | 82.73 | 83.14 | 81.63 | 84.63 | 83.15 | 79.42 | 84.05 | 72.58 | 79.73 | 84.97 | 40.50 | 82.13 | 67.79 | 80.53 | 78.36 | 84.90 | 87.43 | 82.98 | 72.29 | 68.68 | 58.55 | 83.13 | 86.15 | 82.78 | 86.79 | 83.14 | 84.61 | 78.18 | 75.37 | 80.89 | 78.38 | 84.38 | 84.35 | 83.88 | 85.77 | 86.85 | 86.31 | 88.24 | 88.12 | 84.79 | 84.90 | 82.51 | 86.32 | 88.29 | 54.78 | 86.53 | 85.83 | 85.41 | 85.18 | 86.77 | 85.78 | 84.99 | 81.65 | 81.78 | 67.27 | 85.92 | 89.07 | 84.14 | 88.07 | 87.17 | 85.23 | 85.09 | 83.95 | 87.57 | 84.77 |
| Gemma-7B-EuroBlocks |85.27 | 83.90 | 86.64 | 86.38 | 87.87 | 85.74 | 84.25 | 85.69 | 81.49 | 85.52 | 86.93 | 62.83 | 84.96 | 75.34 | 84.93 | 83.91 | 86.92 | 88.19 | 86.11 | 81.73 | 80.55 | 66.85 | 85.31 | 89.36 | 85.87 | 88.62 | 88.06 | 86.67 | 84.79 | 82.71 | 86.45 | 85.19 | 86.67 | 85.77 | 86.36 | 87.21 | 88.09 | 87.17 | 89.40 | 88.26 | 86.74 | 86.73 | 87.25 | 88.87 | 88.81 | 72.45 | 87.62 | 87.86 | 87.08 | 87.01 | 87.58 | 86.92 | 86.70 | 85.10 | 85.74 | 77.81 | 86.83 | 90.40 | 85.41 | 89.04 | 88.77 | 86.13 | 86.67 | 86.32 | 89.27 | 87.92 |
#### WMT-23
| Model | AVG | AVG en-xx | AVG xx-en | AVG xx-xx | en-de | en-cs | en-uk | en-ru | en-zh-cn | de-en | uk-en | ru-en | zh-cn-en | cs-uk |
|--------------------------------|------|-----------|-----------|-----------|-------|-------|-------|-------|----------|-------|-------|-------|----------|-------|
| EuroLLM-1.7B-Instruct | 82.91 | 83.20 | 81.77 | 86.82 | 81.56 | 85.23 | 81.30 | 82.47 | 83.61 | 85.03 | 84.06 | 85.25 | 81.31 | 78.83 | 79.42 | 86.82 |
| Gemma-2B-EuroBlocks | 79.96 | 79.01 | 80.86 | 81.15 | 76.82 | 76.05 | 77.92 | 78.98 | 81.58 | 82.73 | 82.71 | 83.99 | 80.35 | 78.27 | 78.99 | 81.15 |
| Gemma-7B-EuroBlocks | 82.76 | 82.26 | 82.70 | 85.98 | 81.37 | 82.42 | 81.54 | 82.18 | 82.90 | 83.17 | 84.29 | 85.70 | 82.46 | 79.73 | 81.33 | 85.98 |
#### WMT-24
| Model | AVG | AVG en-xx | AVG xx-xx | en-de | en-es-latam | en-cs | en-ru | en-uk | en-ja | en-zh-cn | en-hi | cs-uk | ja-zh-cn |
|---------|------|------|-------|----|---|-------|-------|--------|--------|-------|-------|-------|-----|
| EuroLLM-1.7B-Instruct|79.32 | 79.32 | 79.34 | 79.42 | 80.67 | 80.55 | 78.65 | 80.12 | 82.96 | 80.60 | 71.59 | 83.48 | 75.20 |
|Gemma-2B-EuroBlocks| 74.72 | 74.41 | 75.97 | 74.93 | 78.81 | 70.54 | 74.90 | 75.84 | 79.48 | 78.06 | 62.70 | 79.87 | 72.07 |
|Gemma-7B-EuroBlocks| 78.67 | 78.34 | 80.00 | 78.88 | 80.47 | 78.55 | 78.55 | 80.12 | 80.55 | 78.90 | 70.71 | 84.33 | 75.66 |
### General Benchmarks
We also compare EuroLLM-1.7B with [TinyLlama-v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) and [Gemma-2B](https://huggingface.co/google/gemma-2b) on two general benchmarks: Arc Challenge and Hellaswag.
For the non-English languages we use the [Okapi](https://aclanthology.org/2023.emnlp-demo.28.pdf) datasets.
The results show that EuroLLM-1.7B is superior to TinyLlama-v1.1 and similar to Gemma-2B on Hellaswag, but worse on Arc Challenge. This may be due to EuroLLM-1.7B's lower number of non-embedding parameters (1.133B versus 1.981B).
#### Arc Challenge
| Model | Average | English | German | Spanish | French | Italian | Portuguese | Chinese | Russian | Dutch | Arabic | Swedish | Hindi | Hungarian | Romanian | Ukrainian | Danish | Catalan |
|--------------------|---------|---------|--------|---------|--------|---------|------------|---------|---------|-------|--------|---------|--------|-----------|----------|-----------|--------|---------|
| EuroLLM-1.7B | 0.3496 | 0.4061 | 0.3464 | 0.3684 | 0.3627 | 0.3738 | 0.3855 | 0.3521 | 0.3208 | 0.3507 | 0.3045 | 0.3605 | 0.2928 | 0.3271 | 0.3488 | 0.3516 | 0.3513 | 0.3396 |
| TinyLlama-v1.1 | 0.2650 | 0.3712 | 0.2524 | 0.2795 | 0.2883 | 0.2652 | 0.2906 | 0.2410 | 0.2669 | 0.2404 | 0.2310 | 0.2687 | 0.2354 | 0.2449 | 0.2476 | 0.2524 | 0.2494 | 0.2796 |
| Gemma-2B | 0.3617 | 0.4846 | 0.3755 | 0.3940 | 0.4080 | 0.3687 | 0.3872 | 0.3726 | 0.3456 | 0.3328 | 0.3122 | 0.3519 | 0.2851 | 0.3039 | 0.3590 | 0.3601 | 0.3565 | 0.3516 |
#### Hellaswag
| Model | Average | English | German | Spanish | French | Italian | Portuguese | Russian | Dutch | Arabic | Swedish | Hindi | Hungarian | Romanian | Ukrainian | Danish | Catalan |
|--------------------|---------|---------|--------|---------|--------|---------|------------|---------|--------|--------|---------|--------|-----------|----------|-----------|--------|---------|
| EuroLLM-1.7B | 0.4744 | 0.4760 | 0.6057 | 0.4793 | 0.5337 | 0.5298 | 0.5085 | 0.5224 | 0.4654 | 0.4949 | 0.4104 | 0.4800 | 0.3655 | 0.4097 | 0.4606 | 0.436 | 0.4702 | 0.4445 |
| TinyLlama-v1.1 |0.3674 | 0.6248 | 0.3650 | 0.4137 | 0.4010 | 0.3780 | 0.3892 | 0.3494 | 0.3588 | 0.2880 | 0.3561 | 0.2841 | 0.3073 | 0.3267 | 0.3349 | 0.3408 | 0.3613 |
| Gemma-2B |0.4666 | 0.7165 | 0.4756 | 0.5414 | 0.5180 | 0.4841 | 0.5081 | 0.4664 | 0.4655 | 0.3868 | 0.4383 | 0.3413 | 0.3710 | 0.4316 | 0.4291 | 0.4471 | 0.4448 |
## Bias, Risks, and Limitations
EuroLLM-1.7B has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
|
mradermacher/XwinXtended-20B-i1-GGUF
|
mradermacher
| 2024-09-27T14:14:14Z | 43 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Elfrino/XwinXtended-20B",
"base_model:quantized:Elfrino/XwinXtended-20B",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-09-27T11:03:20Z |
---
base_model: Elfrino/XwinXtended-20B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Elfrino/XwinXtended-20B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/XwinXtended-20B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
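As a quick start, here is a minimal sketch using the `llama-cpp-python` bindings, assuming you have already downloaded one of the files from the table below (the local file name here is just an example):

```python
from llama_cpp import Llama

# Load a downloaded imatrix quant by its local path
llm = Llama(model_path="XwinXtended-20B.i1-Q4_K_M.gguf", n_ctx=4096)

out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```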
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-IQ1_S.gguf) | i1-IQ1_S | 4.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-IQ1_M.gguf) | i1-IQ1_M | 4.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-IQ2_S.gguf) | i1-IQ2_S | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-IQ2_M.gguf) | i1-IQ2_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-Q2_K.gguf) | i1-Q2_K | 7.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-IQ3_S.gguf) | i1-IQ3_S | 8.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-IQ3_M.gguf) | i1-IQ3_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 9.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-Q4_0.gguf) | i1-Q4_0 | 11.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 13.9 | |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/XwinXtended-20B-i1-GGUF/resolve/main/XwinXtended-20B.i1-Q6_K.gguf) | i1-Q6_K | 16.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
nadavo11/actions_model5
|
nadavo11
| 2024-09-27T14:07:29Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-09-27T14:06:26Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf
|
RichardErkhov
| 2024-09-27T14:03:50Z | 5 | 0 | null |
[
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-09-27T10:20:16Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma2-mitra-bo-instruct - GGUF
- Model creator: https://huggingface.co/buddhist-nlp/
- Original model: https://huggingface.co/buddhist-nlp/gemma2-mitra-bo-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma2-mitra-bo-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q2_K.gguf) | Q2_K | 3.54GB |
| [gemma2-mitra-bo-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.IQ3_XS.gguf) | IQ3_XS | 3.86GB |
| [gemma2-mitra-bo-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.IQ3_S.gguf) | IQ3_S | 4.04GB |
| [gemma2-mitra-bo-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q3_K_S.gguf) | Q3_K_S | 4.04GB |
| [gemma2-mitra-bo-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.IQ3_M.gguf) | IQ3_M | 4.19GB |
| [gemma2-mitra-bo-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q3_K.gguf) | Q3_K | 4.43GB |
| [gemma2-mitra-bo-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q3_K_M.gguf) | Q3_K_M | 4.43GB |
| [gemma2-mitra-bo-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q3_K_L.gguf) | Q3_K_L | 4.78GB |
| [gemma2-mitra-bo-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.IQ4_XS.gguf) | IQ4_XS | 4.86GB |
| [gemma2-mitra-bo-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q4_0.gguf) | Q4_0 | 5.07GB |
| [gemma2-mitra-bo-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.IQ4_NL.gguf) | IQ4_NL | 5.1GB |
| [gemma2-mitra-bo-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q4_K_S.gguf) | Q4_K_S | 5.1GB |
| [gemma2-mitra-bo-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q4_K.gguf) | Q4_K | 5.37GB |
| [gemma2-mitra-bo-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q4_K_M.gguf) | Q4_K_M | 5.37GB |
| [gemma2-mitra-bo-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q4_1.gguf) | Q4_1 | 5.55GB |
| [gemma2-mitra-bo-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q5_0.gguf) | Q5_0 | 6.04GB |
| [gemma2-mitra-bo-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q5_K_S.gguf) | Q5_K_S | 6.04GB |
| [gemma2-mitra-bo-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q5_K.gguf) | Q5_K | 6.19GB |
| [gemma2-mitra-bo-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q5_K_M.gguf) | Q5_K_M | 6.19GB |
| [gemma2-mitra-bo-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q5_1.gguf) | Q5_1 | 6.52GB |
| [gemma2-mitra-bo-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q6_K.gguf) | Q6_K | 7.07GB |
| [gemma2-mitra-bo-instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/buddhist-nlp_-_gemma2-mitra-bo-instruct-gguf/blob/main/gemma2-mitra-bo-instruct.Q8_0.gguf) | Q8_0 | 9.15GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wzebrowski/Llama3.1-8B-Reasoner-v0_3
|
wzebrowski
| 2024-09-27T13:52:40Z | 60 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-27T13:45:24Z |
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** wzebrowski
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
huimanho/test1
|
huimanho
| 2024-09-27T13:50:26Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-27T13:50:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf
|
RichardErkhov
| 2024-09-27T13:49:23Z | 9 | 0 | null |
[
"gguf",
"arxiv:2407.10671",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-27T12:56:03Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-1.5B-Instruct - GGUF
- Model creator: https://huggingface.co/Qwen/
- Original model: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2.5-1.5B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q2_K.gguf) | Q2_K | 0.63GB |
| [Qwen2.5-1.5B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.IQ3_XS.gguf) | IQ3_XS | 0.68GB |
| [Qwen2.5-1.5B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.IQ3_S.gguf) | IQ3_S | 0.71GB |
| [Qwen2.5-1.5B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [Qwen2.5-1.5B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.IQ3_M.gguf) | IQ3_M | 0.72GB |
| [Qwen2.5-1.5B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q3_K.gguf) | Q3_K | 0.77GB |
| [Qwen2.5-1.5B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [Qwen2.5-1.5B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [Qwen2.5-1.5B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [Qwen2.5-1.5B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q4_0.gguf) | Q4_0 | 0.87GB |
| [Qwen2.5-1.5B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [Qwen2.5-1.5B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [Qwen2.5-1.5B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q4_K.gguf) | Q4_K | 0.92GB |
| [Qwen2.5-1.5B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [Qwen2.5-1.5B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q4_1.gguf) | Q4_1 | 0.95GB |
| [Qwen2.5-1.5B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q5_0.gguf) | Q5_0 | 1.02GB |
| [Qwen2.5-1.5B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [Qwen2.5-1.5B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q5_K.gguf) | Q5_K | 1.05GB |
| [Qwen2.5-1.5B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [Qwen2.5-1.5B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q5_1.gguf) | Q5_1 | 1.1GB |
| [Qwen2.5-1.5B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q6_K.gguf) | Q6_K | 1.19GB |
| [Qwen2.5-1.5B-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q8_0.gguf) | Q8_0 | 1.53GB |
Original model description:
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-1.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-1.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 1.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
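A quick way to verify that the installed version meets this requirement (a small convenience check, not part of the official instructions):

```python
import transformers
from packaging import version

# Qwen2.5 needs transformers >= 4.37.0; older versions raise KeyError: 'qwen2'
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), \
    f"transformers {transformers.__version__} is too old for Qwen2.5"
```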
## Quickstart
Below is a code snippet showing how to load the tokenizer and model and how to generate content with `apply_chat_template`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
optimum-internal-testing/tiny-random-prophetnet
|
optimum-internal-testing
| 2024-09-27T13:48:17Z | 3,944 | 0 | null |
[
"pytorch",
"prophetnet",
"license:apache-2.0",
"region:us"
] | null | 2024-09-27T13:46:02Z |
---
license: apache-2.0
---
|
djohari/test_model_upload
|
djohari
| 2024-09-27T13:48:01Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-27T13:47:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Stoneshi1985/test32
|
Stoneshi1985
| 2024-09-27T13:47:53Z | 93 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-27T13:47:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
felixwf/bert-base-uncased-emotion
|
felixwf
| 2024-09-27T13:46:32Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-27T13:46:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
spannala123/model
|
spannala123
| 2024-09-27T13:39:01Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-27T13:38:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dotyfake/Doty_viXTTS
|
dotyfake
| 2024-09-27T13:29:38Z | 5 | 0 |
transformers
|
[
"transformers",
"text-to-speech",
"vi",
"dataset:capleaf/viVoice",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2024-07-19T22:54:07Z |
---
license: other
license_name: coqui-public-model-license
license_link: https://coqui.ai/cpml
pipeline_tag: text-to-speech
datasets:
- capleaf/viVoice
language:
- vi
---
# viⓍTTS
viⓍTTS là mô hình tạo sinh giọng nói cho phép bạn sao chép giọng nói sang các ngôn ngữ khác nhau chỉ bằng cách sử dụng một đoạn âm thanh nhanh dài 6 giây. Mô hình này được tiếp tục đào tạo từ mô hình [XTTS-v2.0.3](https://huggingface.co/coqui/XTTS-v2) bằng cách mở rộng tokenizer sang tiếng Việt và huấn luyện trên tập dữ liệu [viVoice](https://huggingface.co/datasets/thinhlpg/viVoice).
viⓍTTS is a voice generation model that lets you clone voices into different languages by using just a quick 6-second audio clip. This model is fine-tuned from the [XTTS-v2.0.3](https://huggingface.co/coqui/XTTS-v2) model by expanding the tokenizer to Vietnamese and fine-tuning on the [viVoice](https://huggingface.co/datasets/thinhlpg/viVoice) dataset.
### Languages
viXTTS supports 18 languages: English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt),
Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu), Korean (ko),
Hindi (hi), **Vietnamese (vi)**.
### Known Limitations
- Incompatibility with the [original TTS library](https://github.com/coqui-ai/TTS) (a pull request will be made later).
- Subpar performance on input sentences under 10 words in Vietnamese (yielding inconsistent output and odd trailing sounds).
- This model is fine-tuned only on Vietnamese. Its effectiveness with other languages hasn't been tested and may be reduced.
### Demo
Please check out [this repo](https://github.com/thinhlpg/vixtts-demo).
### Usage
For quick usage, please check out [this notebook](https://colab.research.google.com/drive/1q9vA7mDyvK_u0ijDDNuycDoUUbryM3p3?usp=sharing).
### License
This model is licensed under [Coqui Public Model License](https://coqui.ai/cpml).
### Contact
Fine-tuned by Thinh Le at FPT University HCMC, as a component of [Non La](https://huggingface.co/capleaf)'s graduation thesis.
Contact:
- You can message me directly on Facebook: <https://fb.com/thinhlpg/> (preferred 🤗)
- GitHub: <https://github.com/thinhlpg>
- Email: <[email protected]> or <[email protected]>
|
Treza12/Biomistral-Class0-TestFull2
|
Treza12
| 2024-09-27T13:24:49Z | 60 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-09-27T13:23:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mateiaassAI/T5_MEID-new-MT-RONACC-MT-12
|
mateiaassAI
| 2024-09-27T13:24:04Z | 127 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-09-27T13:23:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Priyanka-Balivada/Russian-BERT-Finetune
|
Priyanka-Balivada
| 2024-09-27T13:17:38Z | 90 | 1 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-27T13:03:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ball0428/gemma-2b-jeonse_fraud
|
ball0428
| 2024-09-27T12:49:24Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"gemma2",
"unsloth",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2024-09-26T06:59:23Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
neelan/dummy-model
|
neelan
| 2024-09-27T12:44:09Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-09-23T10:17:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Vikhrmodels/Vikhr-Llama-3.2-1B-instruct-GGUF
|
Vikhrmodels
| 2024-09-27T12:21:27Z | 1,329 | 10 |
llamacpp
|
[
"llamacpp",
"gguf",
"instruct",
"text-generation",
"ru",
"en",
"dataset:Vikhrmodels/GrandMaster-PRO-MAX",
"arxiv:2405.13929",
"base_model:Vikhrmodels/Vikhr-Llama-3.2-1B-Instruct",
"base_model:quantized:Vikhrmodels/Vikhr-Llama-3.2-1B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-09-27T11:57:06Z |
---
library_name: llamacpp
model_name: Vikhr-Llama-3.2-1B-instruct
base_model:
- Vikhrmodels/Vikhr-Llama-3.2-1B
language:
- ru
- en
license: llama3.2
tags:
- instruct
datasets:
- Vikhrmodels/GrandMaster-PRO-MAX
pipeline_tag: text-generation
---
# 💨📱 Vikhr-Llama-3.2-1B-instruct
#### RU
Инструктивная модель на основе Llama-3.2-1B-Instruct, обученная на русскоязычном датасете GrandMaster-PRO-MAX. В 5 раз эффективнее базовой модели, и идеально подходит для запуска на слабых или мобильных устройствах.
#### EN
Instructive model based on Llama-3.2-1B-Instruct, trained on the Russian-language dataset GrandMaster-PRO-MAX. It is 5 times more efficient than the base model, making it perfect for deployment on low-power or mobile devices.
- [HF model](https://huggingface.co/Vikhrmodels/Vikhr-Llama-3.2-1B)
**Рекомендуемая температура для генерации: 0.3** / **Recommended generation temperature: 0.3**.
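Since this repository ships GGUF weights, here is a minimal chat sketch with `llama-cpp-python` at the recommended temperature (the file name and quantization level are assumptions; substitute whichever GGUF file you downloaded):
```python
from llama_cpp import Llama

# hypothetical file name; use the GGUF file you actually downloaded from this repo
llm = Llama(model_path="Vikhr-Llama-3.2-1B-instruct-Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Привет! Расскажи о себе в двух предложениях."}],  # "Hi! Tell me about yourself in two sentences."
    temperature=0.3,  # recommended generation temperature
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```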
## Метрики на ru_arena_general / Metrics on ru_arena_general
| **Model** | **Score** | **95% CI** | **Avg Tokens** | **Std Tokens** | **LC Score** |
| ------------------------------------------- | --------- | --------------- | -------------- | -------------- | ------------ |
| kolibri-vikhr-mistral-0427 | 22.41 | +1.6 / -1.6 | 489.89 | 566.29 | 46.04 |
| storm-7b | 20.62 | +2.0 / -1.6 | 419.32 | 190.85 | 45.78 |
| neural-chat-7b-v3-3 | 19.04 | +2.0 / -1.7 | 927.21 | 1211.62 | 45.56 |
| **Vikhrmodels-Vikhr-Llama-3.2-1B-instruct** | **19.04** | **+1.3 / -1.6** | **958.63** | **1297.33** | **45.56** |
| gigachat_lite | 17.2 | +1.4 / -1.4 | 276.81 | 329.66 | 45.29 |
| Vikhrmodels-vikhr-qwen-1.5b-it | 13.19 | +1.4 / -1.6 | 2495.38 | 741.45 | 44.72 |
| meta-llama-Llama-3.2-1B-Instruct | 4.04 | +0.8 / -0.6 | 1240.53 | 1783.08 | 43.42 |
### Авторы / Authors
- Sergei Bratchikov, [NLP Wanderer](https://t.me/nlpwanderer), [Vikhr Team](https://t.me/vikhrlabs)
- Nikolay Kompanets, [LakoMoor](https://t.me/lakomoor), [Vikhr Team](https://t.me/vikhrlabs)
- Konstantin Korolev, [Vikhr Team](https://t.me/vikhrlabs)
- Aleksandr Nikolich, [Vikhr Team](https://t.me/vikhrlabs)
```
@article{nikolich2024vikhr,
title={Vikhr: The Family of Open-Source Instruction-Tuned Large Language Models for Russian},
author={Aleksandr Nikolich and Konstantin Korolev and Sergey Bratchikov and Nikolay Kompanets and Artem Shelmanov},
journal={arXiv preprint arXiv:2405.13929},
year={2024},
url={https://arxiv.org/pdf/2405.13929}
}
```
|
pcuenq/Qwen2.5-0.5B-Instruct-with-new-merges-serialization
|
pcuenq
| 2024-09-27T12:12:44Z | 92 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-27T12:12:29Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** for up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: full 32,768 tokens; generation up to 8,192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
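The decoded `response` now holds the assistant's reply and can be printed or post-processed as needed:
```python
print(response)
```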
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
Othniel74/legalcase_outcomepred_model_v1
|
Othniel74
| 2024-09-27T12:01:46Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-27T11:12:52Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: legalcase_outcomepred_model_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalcase_outcomepred_model_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3580
- Accuracy: 0.3340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
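For reference, a minimal sketch of how these settings map onto `transformers.TrainingArguments` (an illustration only; the dataset, model, and `Trainer` wiring are omitted, and `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="legalcase_outcomepred_model_v1",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 64
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,  # Native AMP mixed precision
)
```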
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.4956 | 0.9981 | 132 | 2.0711 | 0.3174 |
| 1.5006 | 1.9962 | 264 | 2.0215 | 0.2848 |
| 1.4925 | 2.9943 | 396 | 2.0069 | 0.2796 |
| 1.429 | 4.0 | 529 | 1.9503 | 0.2947 |
| 1.2188 | 4.9981 | 661 | 2.1001 | 0.3240 |
| 1.0163 | 5.9962 | 793 | 2.1491 | 0.3297 |
| 0.8554 | 6.9943 | 925 | 2.2008 | 0.3236 |
| 0.7692 | 8.0 | 1058 | 2.2889 | 0.3316 |
| 0.7553 | 8.9981 | 1190 | 2.3550 | 0.3349 |
| 0.6845 | 9.9811 | 1320 | 2.3580 | 0.3340 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0
- Datasets 3.0.0
- Tokenizers 0.19.1
|
deeplife/scimilarity_model
|
deeplife
| 2024-09-27T11:48:15Z | 35 | 0 |
transformers
|
[
"transformers",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-03-22T11:58:15Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
# SCimilarity Model
## Model Details
- **Model Name**: SCimilarity
- **Version**: 1.0 [deeplife version]
- **Type**: Metric learning framework for single-cell RNA-seq data
- **Paper**: [Scalable querying of human cell atlases via a foundational model reveals commonalities across fibrosis-associated macrophages](https://www.biorxiv.org/content/10.1101/2023.07.18.549537v1)
- **Original Implementation**: [SCimilarity GitHub Repository](https://github.com/genentech/scimilarity)
## Model Description
SCimilarity is a metric learning framework that learns and searches a unified and interpretable representation of single-cell RNA-seq data. It enables annotation of cell types and instant querying for cell states across tens of millions of profiles. In the context of DeepLife ML Infra, we focus on its cell embedding capabilities.
### Abstract
Single-cell RNA-seq (scRNA-seq) studies have profiled over 100 million human cells across diseases, developmental stages, and perturbations to date. A singular view of this vast and growing expression landscape could help reveal novel associations between cell states and diseases, discover cell states in unexpected tissue contexts, and relate in vivo cells to in vitro models. However, these require a common, scalable representation of cell profiles from across the body, a general measure of their similarity, and an efficient way to query these data. Here, we present SCimilarity, a metric learning framework to learn and search a unified and interpretable representation that annotates cell types and instantaneously queries for a cell state across tens of millions of profiles. We demonstrate SCimilarity on a 22.7 million cell corpus assembled across 399 published scRNA-seq studies, showing accurate integration, annotation and querying. We experimentally validated SCimilarity by querying across tissues for a macrophage subset originally identified in interstitial lung disease, and showing that cells with similar profiles are found in other fibrotic diseases, tissues, and a 3D hydrogel system, which we then repurposed to yield this cell state in vitro. SCimilarity serves as a foundational model for single cell gene expression data and enables researchers to query for similar cellular states across the entire human body, providing a powerful tool for generating novel biological insights from the growing Human Cell Atlas.
### Key Features
- Generates unified embeddings for single-cell expression profiles
- Enables efficient querying and annotation across large-scale datasets
- Generalizes to new studies without retraining
- Supports discovery of novel cell state associations across diseases and tissues
## Intended Use
SCimilarity is designed for researchers working with single-cell RNA sequencing (scRNA-seq) data. Within the DeepLife ML Infra framework, it can be used for:
- Generating cell embeddings from scRNA-seq data
- Querying for similar cell states across large datasets
- Annotating cell types in new datasets
- Discovering novel associations between cell states and diseases
## Training Data
The model was trained on a corpus of 22.7 million cells assembled from 399 published scRNA-seq studies. For detailed information about the training data, please refer to the original paper.
## Performance
SCimilarity has demonstrated:
- Accurate integration and annotation across a large corpus of cells
- Efficient querying for similar cell states across tissues and diseases
- Ability to reveal novel biological insights, as validated experimentally
For specific performance metrics, please refer to the original paper.
## Limitations
- The model's performance may vary for cell types or states that are underrepresented in the training data
- As with any embedding model, care should be taken when interpreting similarities, especially across different experimental conditions or protocols
## Ethical Considerations
Users should be aware that while the data used to train SCimilarity is from public sources, it represents human tissue samples and should be treated with appropriate respect and consideration. Researchers using this model should adhere to ethical guidelines for human subjects research.
## Usage
To use the SCimilarity model within the DeepLife ML Infra:
1. Install the package:
```
pip install deeplife-mlinfra
```
2. Import and use the model:
```python
import anndata as ad
from huggingface_hub import hf_hub_download
from dl_models.models.scimilarity.model import SCimilarityEmbedModel
from dl_models.models.scimilarity.processor import SCimilarityProcessor
# Load the model and preprocessor
model = SCimilarityEmbedModel.from_pretrained("deeplife/scimilarity_model")
preprocessor = SCimilarityProcessor.from_pretrained("deeplife/scimilarity_model")
model.eval()
# Load your data (example using a sample dataset)
filepath = hf_hub_download(
repo_id="deeplife/h5ad_samples",
filename="GSE136831small.h5ad",
repo_type="dataset",
)
adata = ad.read_h5ad(filepath)
# Preprocess and create a dataloader
dataloader = preprocessor.transform_to_dataloader(adata, batch_size=256)
# Get embeddings
for batch in dataloader:
embed = model.get_cell_embeddings(batch)
break # This gets embeddings for the first batch
# You can now use these embeddings for downstream tasks
```
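As an example, here is a minimal sketch of a similarity query over a batch of embeddings using scikit-learn (`NearestNeighbors` is an illustrative stand-in for SCimilarity's own cell-search tooling, assuming `embed` from the snippet above):
```python
from sklearn.neighbors import NearestNeighbors

# Move the batch of embeddings to CPU as a NumPy array
embed_np = embed.detach().cpu().numpy()

# Index the batch and find the 5 most similar cells to the first cell
nn = NearestNeighbors(n_neighbors=5, metric="cosine").fit(embed_np)
distances, indices = nn.kneighbors(embed_np[:1])
print(indices)  # positions of the most similar cells within this batch
```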
For visualization of the embeddings, you can use techniques like PCA or UMAP:
```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import umap
# Convert embed to numpy
embed_np = embed.detach().cpu().numpy()
# Perform PCA
pca = PCA(n_components=2)
embed_pca = pca.fit_transform(embed_np)
# Perform UMAP
umap_reducer = umap.UMAP(n_components=2, random_state=42)
embed_umap = umap_reducer.fit_transform(embed_np)
# Plot the results
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 8))
# PCA plot
scatter1 = ax1.scatter(embed_pca[:, 0], embed_pca[:, 1], alpha=0.7)
ax1.set_title('SCimilarity Embeddings - PCA')
ax1.set_xlabel('PC1')
ax1.set_ylabel('PC2')
plt.colorbar(scatter1, ax=ax1)
# UMAP plot
scatter2 = ax2.scatter(embed_umap[:, 0], embed_umap[:, 1], alpha=0.7)
ax2.set_title('SCimilarity Embeddings - UMAP')
ax2.set_xlabel('UMAP1')
ax2.set_ylabel('UMAP2')
plt.colorbar(scatter2, ax=ax2)
plt.tight_layout()
plt.show()
```
For more detailed usage instructions, please refer to the [documentation](https://github.com/deeplifeai/deeplife-mlinfra).
## Citation
If you use this model in your research, please cite both the original SCimilarity paper and the DeepLife ML Infra package:
```
@article{yoo2023scimilarity,
title={SCimilarity: a scalable and universal cell state similarity metric for single cell RNA-sequencing data},
author={Yoo, Byungjin and Nawy, Tal and Hu, Yuanjie and Szeto, Gregory L and Wuster, Arthur},
journal={bioRxiv},
pages={2023.07.18.549537},
year={2023},
publisher={Cold Spring Harbor Laboratory}
}
@software{deeplife_mlinfra,
title={DeepLife ML Infra: Infrastructure for Biological Deep Learning Models},
author={DeepLife AI Team},
year={2023},
url={https://github.com/deeplifeai/deeplife-mlinfra},
version={1.0.0}
}
```
## License
### Code License
The SCimilarity code is licensed under the Apache License, Version 2.0. The full text of the license is as follows:
```
Copyright 2023 Genentech, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
### Model Weights License
The SCimilarity model weights are licensed under the Creative Commons Attribution Share Alike 4.0 International license. Users are free to share and adapt the material under the following terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made.
- ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
For the full text of this license, please visit: [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
## Additional Resources
- [SCimilarity Documentation](https://genentech.github.io/scimilarity/index.html)
- [Pretrained Model Weights and Data](https://zenodo.org/records/10685499)
## Contact
For questions or issues related to this model implementation in DeepLife ML Infra, please open an issue in the [repository](https://github.com/deeplifeai/deeplife-mlinfra).
For questions about the original SCimilarity model, please refer to the [original repository](https://github.com/genentech/scimilarity).
|
argearriojas/Phi-3.5-mini-instruct-Q4_0-GGUF
|
argearriojas
| 2024-09-27T11:44:52Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"multilingual",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:quantized:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-09-27T11:44:42Z |
---
base_model: microsoft/Phi-3.5-mini-instruct
language:
- multilingual
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# argearriojas/Phi-3.5-mini-instruct-Q4_0-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3.5-mini-instruct`](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo argearriojas/Phi-3.5-mini-instruct-Q4_0-GGUF --hf-file phi-3.5-mini-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo argearriojas/Phi-3.5-mini-instruct-Q4_0-GGUF --hf-file phi-3.5-mini-instruct-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo argearriojas/Phi-3.5-mini-instruct-Q4_0-GGUF --hf-file phi-3.5-mini-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo argearriojas/Phi-3.5-mini-instruct-Q4_0-GGUF --hf-file phi-3.5-mini-instruct-q4_0.gguf -c 2048
```
|
s0uL141/fine_tuned_science_gemma2b-it
|
s0uL141
| 2024-09-27T11:44:41Z | 6 | 0 | null |
[
"safetensors",
"gemma2",
"text-generation",
"conversational",
"en",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-09-16T06:17:50Z |
---
license: apache-2.0
language:
- en
base_model:
- google/gemma-2-2b-it
pipeline_tag: text-generation
---
|
Sourav1111/layoutlmv3-finetuned-invoice
|
Sourav1111
| 2024-09-27T11:12:54Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-27T11:12:30Z |
---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-invoice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-invoice
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0026
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2437 | 1.25 | 100 | 0.1687 | 0.8536 | 0.9088 | 0.8803 | 0.9675 |
| 0.006 | 2.5 | 200 | 0.0026 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0071 | 3.75 | 300 | 0.0015 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.002 | 5.0 | 400 | 0.0012 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
downtown1/google-gemma-2b-1727435394
|
downtown1
| 2024-09-27T11:10:19Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | 2024-09-27T11:09:54Z |
---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0
|
navkaggle/my_awesome_mind_model
|
navkaggle
| 2024-09-27T11:04:43Z | 145 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-09-27T10:59:10Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.035398230088495575
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6478
- Accuracy: 0.0354
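A minimal inference sketch using the `transformers` pipeline (`sample.wav` is a placeholder path; note the low evaluation accuracy reported above):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for audio classification
classifier = pipeline("audio-classification", model="navkaggle/my_awesome_mind_model")

# Classify a local recording (placeholder filename)
print(classifier("sample.wav"))
```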
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6332 | 0.0531 |
| No log | 1.8667 | 7 | 2.6388 | 0.0708 |
| 2.6365 | 2.9333 | 11 | 2.6420 | 0.0442 |
| 2.6365 | 4.0 | 15 | 2.6410 | 0.0619 |
| 2.6365 | 4.8 | 18 | 2.6405 | 0.0619 |
| 2.625 | 5.8667 | 22 | 2.6429 | 0.0619 |
| 2.625 | 6.9333 | 26 | 2.6463 | 0.0354 |
| 2.6195 | 8.0 | 30 | 2.6478 | 0.0354 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Zohaib002/LED-cnn-dataset-summarization
|
Zohaib002
| 2024-09-27T11:00:45Z | 77 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"led",
"text2text-generation",
"generated_from_trainer",
"base_model:pszemraj/led-base-book-summary",
"base_model:finetune:pszemraj/led-base-book-summary",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-09-27T09:12:55Z |
---
library_name: transformers
license: bsd-3-clause
base_model: pszemraj/led-base-book-summary
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: LED-cnn-dataset-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LED-cnn-dataset-summarization
This model is a fine-tuned version of [pszemraj/led-base-book-summary](https://huggingface.co/pszemraj/led-base-book-summary) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0098
- Rouge1: 0.4061
- Rouge2: 0.1676
- Rougel: 0.2695
- Rougelsum: 0.3756
- Gen Len: 79.036
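A minimal usage sketch with the `transformers` summarization pipeline (the article text is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned LED checkpoint
summarizer = pipeline("summarization", model="Zohaib002/LED-cnn-dataset-summarization")

article = "..."  # placeholder: a long news article to summarize
result = summarizer(article, max_length=128, min_length=32)
print(result[0]["summary_text"])
```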
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 250 | 1.8883 | 0.4074 | 0.1733 | 0.2733 | 0.3741 | 81.696 |
| 1.9196 | 2.0 | 500 | 1.8782 | 0.4105 | 0.1738 | 0.2735 | 0.3789 | 85.312 |
| 1.9196 | 3.0 | 750 | 1.8763 | 0.408 | 0.1734 | 0.2747 | 0.3754 | 84.348 |
| 1.4188 | 4.0 | 1000 | 1.9043 | 0.4086 | 0.1716 | 0.273 | 0.3795 | 79.842 |
| 1.4188 | 5.0 | 1250 | 1.9344 | 0.4084 | 0.1686 | 0.2713 | 0.377 | 79.926 |
| 1.168 | 6.0 | 1500 | 1.9623 | 0.4121 | 0.1733 | 0.2749 | 0.3813 | 77.228 |
| 1.168 | 7.0 | 1750 | 2.0004 | 0.4092 | 0.1711 | 0.273 | 0.3794 | 77.102 |
| 1.0279 | 8.0 | 2000 | 2.0098 | 0.4061 | 0.1676 | 0.2695 | 0.3756 | 79.036 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Othniel74/legalcase_outcomepred_model
|
Othniel74
| 2024-09-27T11:00:28Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-27T09:48:40Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: legalcase_outcomepred_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalcase_outcomepred_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0116
- Accuracy: 0.3307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.7752 | 0.9981 | 132 | 1.8412 | 0.2640 |
| 1.6453 | 1.9962 | 264 | 1.8323 | 0.2867 |
| 1.6322 | 2.9943 | 396 | 1.7919 | 0.2985 |
| 1.4239 | 4.0 | 529 | 1.8052 | 0.3188 |
| 1.3082 | 4.9981 | 661 | 1.8625 | 0.3217 |
| 1.2395 | 5.9962 | 793 | 1.8780 | 0.3382 |
| 1.103 | 6.9943 | 925 | 1.9332 | 0.3302 |
| 1.0687 | 8.0 | 1058 | 1.9723 | 0.3382 |
| 1.0303 | 8.9981 | 1190 | 2.0012 | 0.3363 |
| 0.9643 | 9.9811 | 1320 | 2.0116 | 0.3307 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0
- Datasets 3.0.0
- Tokenizers 0.19.1
|
athuldev/layoutlmv3-financial-document-classification-dc
|
athuldev
| 2024-09-27T10:51:58Z | 117 | 0 |
transformers
|
[
"transformers",
"safetensors",
"layoutlmv3",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-27T10:50:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kshitizrimal/Gemma-2-2b-it-ne-detector-v2_full
|
kshitizrimal
| 2024-09-27T10:50:16Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-27T10:45:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lgk03/ACROSSAPPS_NDD-pagekit_test-content_tags
|
lgk03
| 2024-09-27T10:45:01Z | 91 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-27T09:27:26Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: ACROSSAPPS_NDD-pagekit_test-content_tags
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ACROSSAPPS_NDD-pagekit_test-content_tags
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3844
- Accuracy: 0.6554
- F1: 0.6119
- Precision: 0.6638
- Recall: 0.6554
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0449 | 0.9993 | 684 | 1.2269 | 0.6554 | 0.6119 | 0.6638 | 0.6554 |
| 0.0303 | 1.9985 | 1368 | 1.3844 | 0.6554 | 0.6119 | 0.6638 | 0.6554 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
FreedomIntelligence/DiagnosisGPT-34B
|
FreedomIntelligence
| 2024-09-27T10:36:14Z | 16 | 7 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:2407.13301",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-17T07:52:44Z |
---
license: apache-2.0
---
## Citation
```
@misc{chen2024codinterpretablemedicalagent,
title={CoD, Towards an Interpretable Medical Agent using Chain of Diagnosis},
author={Junying Chen and Chi Gui and Anningzhe Gao and Ke Ji and Xidong Wang and Xiang Wan and Benyou Wang},
year={2024},
eprint={2407.13301},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.13301},
}
```
|
FreedomIntelligence/DiagnosisGPT-6B
|
FreedomIntelligence
| 2024-09-27T10:35:45Z | 48 | 3 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:2407.13301",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-17T07:53:05Z |
---
license: apache-2.0
---
## Citation
```
@misc{chen2024codinterpretablemedicalagent,
title={CoD, Towards an Interpretable Medical Agent using Chain of Diagnosis},
author={Junying Chen and Chi Gui and Anningzhe Gao and Ke Ji and Xidong Wang and Xiang Wan and Benyou Wang},
year={2024},
eprint={2407.13301},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.13301},
}
```
|
Khoa/sentiment-analysis-all-category
|
Khoa
| 2024-09-27T10:33:48Z | 100 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-27T10:33:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ShayanV3/AntModel-7B-XLLM-Demo
|
ShayanV3
| 2024-09-27T10:07:42Z | 60 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-09-27T10:03:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Heralax/Mistrilitary-7b
|
Heralax
| 2024-09-27T09:59:38Z | 127 | 19 |
transformers
|
[
"transformers",
"pytorch",
"gguf",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Heralax/army-pretrain-1",
"base_model:quantized:Heralax/army-pretrain-1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-27T06:27:56Z |
---
library_name: transformers
license: apache-2.0
base_model: Heralax/army-pretrain-1
tags:
- generated_from_trainer
model-index:
- name: us-army-finetune-1
results: []
---
Was torn between calling it MiLLM and Mistrillitary. *Sigh* naming is one of the two great problems in computer science...
This is a domain-expert finetune based on the US Army field manuals (the ones that are published and available for civvies like me). It's focused on factual question answer only, but seems to be able to answer slightly deeper questions in a pinch.
## Model Quirks
- I had to focus on the army field manuals, because the armed forces publish a truly massive amount of text.
- No generalist assistant data was included, which means this is very very very focused on QA, and may be inflexible.
- Experimental change: data was mostly generated by a smaller model, Mistral NeMo. Quality seems unaffected, costs are much lower. Had problems with the open-ended questions not being in the right format.
- Low temperature recommended. Screenshots use 0.
- ChatML
- No special tokens added.
Examples:




## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 5
- gradient_accumulation_steps: 6
- total_train_batch_size: 60
- total_eval_batch_size: 5
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 48
- num_epochs: 6
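(The total train batch size above follows from train_batch_size × num_devices × gradient_accumulation_steps = 2 × 5 × 6 = 60.)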
### Training results
It answers questions alright.
### Framework versions
- Transformers 4.45.0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.20.0
|
RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf
|
RichardErkhov
| 2024-09-27T09:58:30Z | 69 | 0 | null |
[
"gguf",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-27T09:33:42Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-Coder-1.5B-Instruct - GGUF
- Model creator: https://huggingface.co/Qwen/
- Original model: https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2.5-Coder-1.5B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q2_K.gguf) | Q2_K | 0.63GB |
| [Qwen2.5-Coder-1.5B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.IQ3_XS.gguf) | IQ3_XS | 0.68GB |
| [Qwen2.5-Coder-1.5B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.IQ3_S.gguf) | IQ3_S | 0.71GB |
| [Qwen2.5-Coder-1.5B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [Qwen2.5-Coder-1.5B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.IQ3_M.gguf) | IQ3_M | 0.72GB |
| [Qwen2.5-Coder-1.5B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q3_K.gguf) | Q3_K | 0.77GB |
| [Qwen2.5-Coder-1.5B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [Qwen2.5-Coder-1.5B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [Qwen2.5-Coder-1.5B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [Qwen2.5-Coder-1.5B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q4_0.gguf) | Q4_0 | 0.87GB |
| [Qwen2.5-Coder-1.5B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [Qwen2.5-Coder-1.5B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [Qwen2.5-Coder-1.5B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q4_K.gguf) | Q4_K | 0.92GB |
| [Qwen2.5-Coder-1.5B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [Qwen2.5-Coder-1.5B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q4_1.gguf) | Q4_1 | 0.95GB |
| [Qwen2.5-Coder-1.5B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q5_0.gguf) | Q5_0 | 1.02GB |
| [Qwen2.5-Coder-1.5B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [Qwen2.5-Coder-1.5B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q5_K.gguf) | Q5_K | 1.05GB |
| [Qwen2.5-Coder-1.5B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [Qwen2.5-Coder-1.5B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q5_1.gguf) | Q5_1 | 1.1GB |
| [Qwen2.5-Coder-1.5B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q6_K.gguf) | Q6_K | 1.19GB |
| [Qwen2.5-Coder-1.5B-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Coder-1.5B-Instruct-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct.Q8_0.gguf) | Q8_0 | 1.53GB |
Original model description:
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-1.5B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
# Qwen2.5-Coder-1.5B-Instruct
## Introduction
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). For Qwen2.5-Coder, we release base and instruction-tuned language models at three sizes: 1.5, 7, and 32 (coming soon) billion parameters. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Building on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding data, synthetic data, etc.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
**This repo contains the instruction-tuned 1.5B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code of Qwen2.5-Coder has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet showing how to load the tokenizer and model and how to generate content using `apply_chat_template`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-Coder-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "write a quick sort algorithm."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
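Alternatively — a minimal sketch, assuming your `transformers` version forwards config overrides passed to `from_pretrained` (worth verifying before relying on it) — the same setting can be applied without editing `config.json`:

```python
from transformers import AutoModelForCausalLM

# rope_scaling passed as a config override instead of editing config.json
# (assumption: recent transformers forwards unrecognized kwargs to the config).
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B-Instruct",
    torch_dtype="auto",
    device_map="auto",
    rope_scaling={
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
        "type": "yarn",
    },
)
```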
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{hui2024qwen2,
title={Qwen2.5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
RohiniPS/Qwen1B-QnA-1
|
RohiniPS
| 2024-09-27T09:51:12Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-27T08:22:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AI-ML-Research/Qwen2.5-0.5b-unsloth_q8_k
|
AI-ML-Research
| 2024-09-27T09:40:57Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:quantized:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-27T09:40:47Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
---
# Uploaded model
- **Developed by:** AiisNothing
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-0.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
devcnn5/sql-training-1727428870
|
devcnn5
| 2024-09-27T09:39:12Z | 188 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-09-27T09:39:03Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: sql-training-1727428870
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sql-training-1727428870
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0138
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0259 | 0.5086 | 500 | 0.0138 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0
- Datasets 3.0.0
- Tokenizers 0.19.1
|
RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf
|
RichardErkhov
| 2024-09-27T09:36:05Z | 13 | 0 | null |
[
"gguf",
"arxiv:2407.10671",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-27T09:13:22Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-1.5B-Instruct - GGUF
- Model creator: https://huggingface.co/unsloth/
- Original model: https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2.5-1.5B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q2_K.gguf) | Q2_K | 0.63GB |
| [Qwen2.5-1.5B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.IQ3_XS.gguf) | IQ3_XS | 0.68GB |
| [Qwen2.5-1.5B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.IQ3_S.gguf) | IQ3_S | 0.71GB |
| [Qwen2.5-1.5B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [Qwen2.5-1.5B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.IQ3_M.gguf) | IQ3_M | 0.72GB |
| [Qwen2.5-1.5B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q3_K.gguf) | Q3_K | 0.77GB |
| [Qwen2.5-1.5B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [Qwen2.5-1.5B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [Qwen2.5-1.5B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [Qwen2.5-1.5B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q4_0.gguf) | Q4_0 | 0.87GB |
| [Qwen2.5-1.5B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [Qwen2.5-1.5B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [Qwen2.5-1.5B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q4_K.gguf) | Q4_K | 0.92GB |
| [Qwen2.5-1.5B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [Qwen2.5-1.5B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q4_1.gguf) | Q4_1 | 0.95GB |
| [Qwen2.5-1.5B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q5_0.gguf) | Q5_0 | 1.02GB |
| [Qwen2.5-1.5B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [Qwen2.5-1.5B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q5_K.gguf) | Q5_K | 1.05GB |
| [Qwen2.5-1.5B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [Qwen2.5-1.5B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q5_1.gguf) | Q5_1 | 1.1GB |
| [Qwen2.5-1.5B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q6_K.gguf) | Q6_K | 1.19GB |
| [Qwen2.5-1.5B-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-1.5B-Instruct-gguf/blob/main/Qwen2.5-1.5B-Instruct.Q8_0.gguf) | Q8_0 | 1.53GB |
Original model description:
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
tags:
- unsloth
- transformers
---
# Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing).
Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing).
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use only 1; due to multi-GPU overhead, a single T4 is 5x faster.
# Qwen2.5-1.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 1.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet showing how to load the tokenizer and model and how to generate content using `apply_chat_template`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
mradermacher/Yi-Ko-6B-dpo-v5-GGUF
|
mradermacher
| 2024-09-27T09:23:11Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"ko",
"base_model:GAI-LLM/Yi-Ko-6B-dpo-v5",
"base_model:quantized:GAI-LLM/Yi-Ko-6B-dpo-v5",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-09-26T22:54:57Z |
---
base_model: GAI-LLM/Yi-Ko-6B-dpo-v5
language:
- ko
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/GAI-LLM/Yi-Ko-6B-dpo-v5
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Yi-Ko-6B-dpo-v5-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
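As a minimal sketch of loading one of these files with `llama-cpp-python` (assumptions: the package is installed, the quant file — e.g. Q4_K_S from the table below — was downloaded locally, and it is single-part; multi-part files must be concatenated first, as the READMEs above explain):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Yi-Ko-6B-dpo-v5.Q4_K_S.gguf",  # path to the downloaded quant
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)
out = llm("Briefly introduce yourself.", max_tokens=64)
print(out["choices"][0]["text"])
```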
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-dpo-v5-GGUF/resolve/main/Yi-Ko-6B-dpo-v5.Q2_K.gguf) | Q2_K | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-dpo-v5-GGUF/resolve/main/Yi-Ko-6B-dpo-v5.IQ3_XS.gguf) | IQ3_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-dpo-v5-GGUF/resolve/main/Yi-Ko-6B-dpo-v5.Q3_K_S.gguf) | Q3_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-dpo-v5-GGUF/resolve/main/Yi-Ko-6B-dpo-v5.IQ3_S.gguf) | IQ3_S | 2.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-dpo-v5-GGUF/resolve/main/Yi-Ko-6B-dpo-v5.IQ3_M.gguf) | IQ3_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-dpo-v5-GGUF/resolve/main/Yi-Ko-6B-dpo-v5.Q3_K_M.gguf) | Q3_K_M | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-dpo-v5-GGUF/resolve/main/Yi-Ko-6B-dpo-v5.Q3_K_L.gguf) | Q3_K_L | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-dpo-v5-GGUF/resolve/main/Yi-Ko-6B-dpo-v5.IQ4_XS.gguf) | IQ4_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-dpo-v5-GGUF/resolve/main/Yi-Ko-6B-dpo-v5.Q4_K_S.gguf) | Q4_K_S | 3.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-dpo-v5-GGUF/resolve/main/Yi-Ko-6B-dpo-v5.Q4_K_M.gguf) | Q4_K_M | 3.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-dpo-v5-GGUF/resolve/main/Yi-Ko-6B-dpo-v5.Q5_K_S.gguf) | Q5_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-dpo-v5-GGUF/resolve/main/Yi-Ko-6B-dpo-v5.Q5_K_M.gguf) | Q5_K_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-dpo-v5-GGUF/resolve/main/Yi-Ko-6B-dpo-v5.Q6_K.gguf) | Q6_K | 5.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-dpo-v5-GGUF/resolve/main/Yi-Ko-6B-dpo-v5.Q8_0.gguf) | Q8_0 | 6.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-dpo-v5-GGUF/resolve/main/Yi-Ko-6B-dpo-v5.f16.gguf) | f16 | 12.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Maltokar/GOT_OCR_MP
|
Maltokar
| 2024-09-27T09:21:00Z | 33 | 0 |
transformers
|
[
"transformers",
"safetensors",
"GOT",
"feature-extraction",
"got",
"vision-language",
"ocr2.0",
"custom_code",
"image-text-to-text",
"multilingual",
"arxiv:2409.01704",
"arxiv:2405.14295",
"arxiv:2312.06109",
"license:apache-2.0",
"region:us"
] |
image-text-to-text
| 2024-09-27T08:18:35Z |
---
pipeline_tag: image-text-to-text
library_name: transformers
language:
- multilingual
tags:
- got
- vision-language
- ocr2.0
- custom_code
license: apache-2.0
---
<h1>General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model
</h1>
[🔋Online Demo](https://huggingface.co/spaces/ucaslcl/GOT_online) | [🌟GitHub](https://github.com/Ucas-HaoranWei/GOT-OCR2.0/) | [📜Paper](https://arxiv.org/abs/2409.01704)</a>
[Haoran Wei*](https://scholar.google.com/citations?user=J4naK0MAAAAJ&hl=en), Chenglong Liu*, Jinyue Chen, Jia Wang, Lingyu Kong, Yanming Xu, [Zheng Ge](https://joker316701882.github.io/), Liang Zhao, [Jianjian Sun](https://scholar.google.com/citations?user=MVZrGkYAAAAJ&hl=en), [Yuang Peng](https://scholar.google.com.hk/citations?user=J0ko04IAAAAJ&hl=zh-CN&oi=ao), Chunrui Han, [Xiangyu Zhang](https://scholar.google.com/citations?user=yuB-cfoAAAAJ&hl=en)

## Usage
Inference using Hugging Face Transformers on CPU. Requirements (tested on Python 3.10):
```
torch==2.0.1
torchvision==0.15.2
transformers==4.37.2
tiktoken==0.6.0
verovio==4.3.1
accelerate==0.28.0
```
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('srimanth-d/GOT_CPU', trust_remote_code=True)
model = AutoModel.from_pretrained('srimanth-d/GOT_CPU', trust_remote_code=True, low_cpu_mem_usage=True, use_safetensors=True, pad_token_id=tokenizer.eos_token_id)
model = model.eval()
# input your test image
image_file = 'xxx.jpg'
# plain texts OCR
res = model.chat(tokenizer, image_file, ocr_type='ocr')
# format texts OCR:
# res = model.chat(tokenizer, image_file, ocr_type='format')
# fine-grained OCR:
# res = model.chat(tokenizer, image_file, ocr_type='ocr', ocr_box='')
# res = model.chat(tokenizer, image_file, ocr_type='format', ocr_box='')
# res = model.chat(tokenizer, image_file, ocr_type='ocr', ocr_color='')
# res = model.chat(tokenizer, image_file, ocr_type='format', ocr_color='')
# multi-crop OCR:
# res = model.chat_crop(tokenizer, image_file, ocr_type='ocr')
# res = model.chat_crop(tokenizer, image_file, ocr_type='format')
# render the formatted OCR results:
# res = model.chat(tokenizer, image_file, ocr_type='format', render=True, save_render_file = './demo.html')
print(res)
```
More details about 'ocr_type', 'ocr_box', 'ocr_color', and 'render' can be found on our GitHub.
Our training codes are available at our [GitHub](https://github.com/Ucas-HaoranWei/GOT-OCR2.0/).
## More Multimodal Projects
👏 Welcome to explore more multimodal projects of our team:
[Vary](https://github.com/Ucas-HaoranWei/Vary) | [Fox](https://github.com/ucaslcl/Fox) | [OneChart](https://github.com/LingyvKong/OneChart)
## Citation
If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️!
```bib
@article{wei2024general,
title={General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model},
author={Wei, Haoran and Liu, Chenglong and Chen, Jinyue and Wang, Jia and Kong, Lingyu and Xu, Yanming and Ge, Zheng and Zhao, Liang and Sun, Jianjian and Peng, Yuang and others},
journal={arXiv preprint arXiv:2409.01704},
year={2024}
}
@article{liu2024focus,
title={Focus Anywhere for Fine-grained Multi-page Document Understanding},
author={Liu, Chenglong and Wei, Haoran and Chen, Jinyue and Kong, Lingyu and Ge, Zheng and Zhu, Zining and Zhao, Liang and Sun, Jianjian and Han, Chunrui and Zhang, Xiangyu},
journal={arXiv preprint arXiv:2405.14295},
year={2024}
}
@article{wei2023vary,
title={Vary: Scaling up the Vision Vocabulary for Large Vision-Language Models},
author={Wei, Haoran and Kong, Lingyu and Chen, Jinyue and Zhao, Liang and Ge, Zheng and Yang, Jinrong and Sun, Jianjian and Han, Chunrui and Zhang, Xiangyu},
journal={arXiv preprint arXiv:2312.06109},
year={2023}
}
```
|
AI-ML-Research/Qwen2.5-0.5b-unsloth_q4_k_m
|
AI-ML-Research
| 2024-09-27T09:19:52Z | 27 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:quantized:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-27T09:19:30Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
---
# Uploaded model
- **Developed by:** AiisNothing
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-0.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf
|
RichardErkhov
| 2024-09-27T09:19:15Z | 11 | 0 | null |
[
"gguf",
"arxiv:2407.10671",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-27T09:11:07Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-0.5B-Instruct - GGUF
- Model creator: https://huggingface.co/unsloth/
- Original model: https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2.5-0.5B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q2_K.gguf) | Q2_K | 0.32GB |
| [Qwen2.5-0.5B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.IQ3_XS.gguf) | IQ3_XS | 0.32GB |
| [Qwen2.5-0.5B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.IQ3_S.gguf) | IQ3_S | 0.32GB |
| [Qwen2.5-0.5B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.32GB |
| [Qwen2.5-0.5B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.IQ3_M.gguf) | IQ3_M | 0.32GB |
| [Qwen2.5-0.5B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q3_K.gguf) | Q3_K | 0.33GB |
| [Qwen2.5-0.5B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.33GB |
| [Qwen2.5-0.5B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.34GB |
| [Qwen2.5-0.5B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.33GB |
| [Qwen2.5-0.5B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q4_0.gguf) | Q4_0 | 0.33GB |
| [Qwen2.5-0.5B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.IQ4_NL.gguf) | IQ4_NL | 0.33GB |
| [Qwen2.5-0.5B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.36GB |
| [Qwen2.5-0.5B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q4_K.gguf) | Q4_K | 0.37GB |
| [Qwen2.5-0.5B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.37GB |
| [Qwen2.5-0.5B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q4_1.gguf) | Q4_1 | 0.35GB |
| [Qwen2.5-0.5B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q5_0.gguf) | Q5_0 | 0.37GB |
| [Qwen2.5-0.5B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q5_K_S.gguf) | Q5_K_S | 0.38GB |
| [Qwen2.5-0.5B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q5_K.gguf) | Q5_K | 0.39GB |
| [Qwen2.5-0.5B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q5_K_M.gguf) | Q5_K_M | 0.39GB |
| [Qwen2.5-0.5B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q5_1.gguf) | Q5_1 | 0.39GB |
| [Qwen2.5-0.5B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q6_K.gguf) | Q6_K | 0.47GB |
| [Qwen2.5-0.5B-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-0.5B-Instruct-gguf/blob/main/Qwen2.5-0.5B-Instruct.Q8_0.gguf) | Q8_0 | 0.49GB |
Original model description:
---
base_model: Qwen/Qwen2.5-0.5B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
tags:
- unsloth
- transformers
---
# Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing).
Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing).
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use only 1; due to multi-GPU overhead, a single T4 is 5x faster.
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet showing how to load the tokenizer and model and how to generate content using `apply_chat_template`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
mateiaassAI/T5_MEID-new-MT-RONACC-nonMT-16
|
mateiaassAI
| 2024-09-27T09:15:10Z | 125 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-09-27T09:14:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| FariqF/VITS_TTS_HIN | FariqF | 2024-09-27T09:14:33Z | 20 | 0 | transformers | ["transformers", "safetensors", "vits", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-09-27T09:14:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
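The card leaves this blank; a minimal sketch, assuming the checkpoint follows the standard transformers VITS text-to-speech API (the Hindi test sentence is illustrative):

```python
# Hypothetical quickstart for a VITS TTS checkpoint; only the repo id comes
# from this card, the API usage is the generic transformers VITS pattern.
import torch
from transformers import VitsModel, AutoTokenizer

model_id = "FariqF/VITS_TTS_HIN"
model = VitsModel.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("नमस्ते, आप कैसे हैं?", return_tensors="pt")
with torch.no_grad():
    waveform = model(**inputs).waveform  # shape: (batch, num_samples)
print(waveform.shape, model.config.sampling_rate)
```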
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf | RichardErkhov | 2024-09-27T09:09:33Z | 10 | 0 | null | ["gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational"] | null | 2024-09-27T06:34:21Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918 - GGUF
- Model creator: https://huggingface.co/KONIexp/
- Original model: https://huggingface.co/KONIexp/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q2_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q2_K.gguf) | Q2_K | 2.96GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.IQ3_S.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.IQ3_M.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q3_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q3_K.gguf) | Q3_K | 3.74GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q4_0.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q4_0.gguf) | Q4_0 | 4.34GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q4_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q4_K.gguf) | Q4_K | 4.58GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q4_1.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q4_1.gguf) | Q4_1 | 4.78GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q5_0.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q5_0.gguf) | Q5_0 | 5.21GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q5_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q5_K.gguf) | Q5_K | 5.34GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q5_1.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q5_1.gguf) | Q5_1 | 5.65GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q6_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q6_K.gguf) | Q6_K | 6.14GB |
| [v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q8_0.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf/blob/main/v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q8_0.gguf) | Q8_0 | 7.95GB |
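To fetch a single file rather than the whole repo, `huggingface_hub` works well; a minimal sketch (the repo id and filename are taken verbatim from the table above):

```python
# Download one quantization from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/KONIexp_-_v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918-gguf",
    filename="v3_1_pt_ep1_sft_5_based_on_llama3_1_8b_50_per_data_20240918.Q4_K_M.gguf",
)
print(path)  # local path to the GGUF file, ready for llama.cpp and friends
```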
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| FourOhFour/Virgil_9B | FourOhFour | 2024-09-27T09:09:09Z | 5 | 4 | transformers | ["transformers", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "conversational", "base_model:FourOhFour/Dante_9B", "base_model:finetune:FourOhFour/Dante_9B", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-09-27T09:08:13Z |
---
library_name: transformers
license: gemma
base_model: jeiku/Dante_9B
tags:
- generated_from_trainer
model-index:
- name: outputs/out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: jeiku/Dante_9B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: FourOhFour/RP_Phase
type: sharegpt
conversation: chatml
chat_template: chatml
val_set_size: 0.0025
output_dir: ./outputs/out
adapter:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: false
liger_swiglu: true
liger_fused_linear_cross_entropy: false
wandb_project: chatml9B
wandb_entity:
wandb_watch:
wandb_name: chatml9B
wandb_log_model:
gradient_accumulation_steps: 32
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000008
weight_decay: 0.05
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_ratio: 0.1
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
fsdp:
fsdp_config:
special_tokens:
pad_token: <pad>
```
</details><br>
# outputs/out
This model is a fine-tuned version of [jeiku/Dante_9B](https://huggingface.co/jeiku/Dante_9B) on the FourOhFour/RP_Phase dataset (per the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 0.7075
## Model description
More information needed
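One concrete detail follows from the axolotl config above: the model was trained on ChatML-formatted conversations, so prompts should be rendered the same way. A minimal sketch, assuming the repo's tokenizer ships the ChatML chat template:

```python
# Hypothetical prompt-formatting example; relies on the tokenizer carrying
# the chatml template set during training (see config above).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("FourOhFour/Virgil_9B")
messages = [{"role": "user", "content": "Hello!"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # <|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n
```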
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 14
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7474 | 0.0135 | 1 | 1.7996 |
| 1.6968 | 0.2570 | 19 | 0.9551 |
| 1.6583 | 0.5139 | 38 | 0.8805 |
| 1.5418 | 0.7709 | 57 | 0.7926 |
| 1.3997 | 1.0271 | 76 | 0.7500 |
| 1.3921 | 1.2847 | 95 | 0.7168 |
| 1.4141 | 1.5424 | 114 | 0.7155 |
| 1.4139 | 1.8 | 133 | 0.7075 |
### Framework versions
- Transformers 4.46.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.20.0
| lgk03/ACROSSAPPS_NDD-dimeshift_test-content_tags | lgk03 | 2024-09-27T09:05:21Z | 91 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-09-27T07:48:49Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: ACROSSAPPS_NDD-dimeshift_test-content_tags
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ACROSSAPPS_NDD-dimeshift_test-content_tags
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9161
- Accuracy: 0.8718
- F1: 0.8761
- Precision: 0.8807
- Recall: 0.8718
## Model description
More information needed
## Intended uses & limitations
More information needed
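In the absence of documented usage, a minimal sketch with the standard text-classification pipeline (the input string is a placeholder; the label semantics are defined by the repo's config, not this card):

```python
# Hypothetical usage; only the repo id comes from this card.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="lgk03/ACROSSAPPS_NDD-dimeshift_test-content_tags",
)
print(clf("example page content and tags"))  # e.g. [{'label': ..., 'score': ...}]
```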
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2688 | 0.9989 | 669 | 1.7662 | 0.8718 | 0.8763 | 0.8812 | 0.8718 |
| 0.1742 | 1.9978 | 1338 | 1.9161 | 0.8718 | 0.8761 | 0.8807 | 0.8718 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
| klcsp/gemma7b-gpt4o_1k_summarize-fft | klcsp | 2024-09-27T09:03:39Z | 6 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "gemma", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "dataset:generator", "base_model:google/gemma-7b", "base_model:finetune:google/gemma-7b", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-09-27T04:24:48Z |
---
library_name: transformers
license: gemma
base_model: google/gemma-7b
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: gemma7b-gpt4o_1k_summarize-fft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma7b-gpt4o_1k_summarize-fft
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4970
## Model description
More information needed
## Intended uses & limitations
More information needed
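A minimal, untested sketch: this is a causal LM fine-tuned for summarization, so it is driven through ordinary text generation (the prompt format below is an assumption, not documented by the card):

```python
# Hypothetical usage; repo id from this card, prompt format assumed.
from transformers import pipeline

generator = pipeline("text-generation", model="klcsp/gemma7b-gpt4o_1k_summarize-fft")
out = generator("Summarize the following text:\n\nSome long article ...", max_new_tokens=128)
print(out[0]["generated_text"])
```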
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9687 | 1.0 | 392 | 6.4970 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
| herisan/Llama-3.1-8B | herisan | 2024-09-27T08:59:21Z | 5 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-09-27T08:56:34Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** herisan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
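A minimal sketch following Unsloth's usual loading pattern; `max_seq_length` and `load_in_4bit` below are illustrative assumptions, not settings taken from this card:

```python
# Hypothetical Unsloth quickstart; only the repo id comes from this card.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="herisan/Llama-3.1-8B",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,    # assumption
)
FastLanguageModel.for_inference(model)  # switch to inference-optimized mode
```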
| mukel/Qwen2.5-Math-7B-Instruct-GGUF | mukel | 2024-09-27T08:58:19Z | 24 | 1 | null | ["gguf", "chat", "text-generation", "en", "base_model:Qwen/Qwen2.5-Math-7B-Instruct", "base_model:quantized:Qwen/Qwen2.5-Math-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2024-09-24T10:26:18Z |
---
base_model: Qwen/Qwen2.5-Math-7B-Instruct
language:
- en
pipeline_tag: text-generation
tags:
- chat
quantized_by: mukel
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct/blob/main/LICENSE
---
> [!Warning]
> <div align="center">
> <b>
> 🚨 Qwen2.5-Math mainly supports solving English and Chinese math problems through CoT and TIR. We do not recommend using this series of models for other tasks.
> </b>
> </div>
# GGUF models for qwen2.java
Pure .gguf Q4_0 and Q8_0 quantizations of Qwen 2.5 models, ready to be consumed by `qwen2.java`.
In the wild, Q8_0 quantizations are generally fine, but Q4_0 quantizations are rarely pure: the token embeddings, for example, are usually quantized with Q6_K instead of Q4_0.
A pure Q4_0 quantization can be generated from a high precision (F32, F16, BFLOAT16) .gguf source with the llama-quantize utility from llama.cpp as follows:
```
./llama-quantize --pure ./Qwen-2.5-7B-Instruct-BF16.gguf ./Qwen-2.5-7B-Instruct-Q4_0.gguf Q4_0
```
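Beyond `qwen2.java`, these pure quantizations load in any GGUF runtime; a minimal sketch with `llama-cpp-python` (the filename follows the naming used above and is an assumption):

```python
# Hypothetical usage via llama-cpp-python; model_path is assumed.
from llama_cpp import Llama

llm = Llama(model_path="./Qwen2.5-Math-7B-Instruct-Q4_0.gguf", n_ctx=4096)
out = llm("Solve step by step: what is 12 * 17?", max_tokens=128)
print(out["choices"][0]["text"])
```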
## Introduction
In August 2024, we released the first series of mathematical LLMs - [Qwen2-Math](https://qwenlm.github.io/blog/qwen2-math/) - of our Qwen family. A month later, we have upgraded it and open-sourced **Qwen2.5-Math** series, including base models **Qwen2.5-Math-1.5B/7B/72B**, instruction-tuned models **Qwen2.5-Math-1.5B/7B/72B-Instruct**, and mathematical reward model **Qwen2.5-Math-RM-72B**.
Unlike the Qwen2-Math series, which only supports using Chain-of-Thought (CoT) to solve English math problems, the Qwen2.5-Math series is expanded to support using both CoT and Tool-integrated Reasoning (TIR) to solve math problems in both Chinese and English. The Qwen2.5-Math series models have achieved significant performance improvements compared to the Qwen2-Math series models on Chinese and English mathematics benchmarks with CoT.

While CoT plays a vital role in enhancing the reasoning capabilities of LLMs, it faces challenges in achieving computational accuracy and handling complex mathematical or algorithmic reasoning tasks, such as finding the roots of a quadratic equation or computing the eigenvalues of a matrix. TIR can further improve the model's proficiency in precise computation, symbolic manipulation, and algorithmic manipulation. Qwen2.5-Math-1.5B/7B/72B-Instruct achieve 79.7, 85.3, and 87.8 respectively on the MATH benchmark using TIR.
## Model Details
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen2.5-math/) and [GitHub repo](https://github.com/QwenLM/Qwen2.5-Math).
| CCTD/rosborg_sentiment | CCTD | 2024-09-27T08:48:46Z | 102 | 0 | transformers | ["transformers", "safetensors", "xlm-roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-09-27T08:43:13Z |
---
library_name: transformers
---
| nicolinho/QRM-Llama3.1-8B | nicolinho | 2024-09-27T08:48:06Z | 281 | 1 | null | ["safetensors", "llama", "custom_code", "arxiv:2409.10164", "license:llama3", "region:us"] | null | 2024-09-25T07:50:46Z |
---
license: llama3
---
# Quantile Regression for Distributional Reward Models in RLHF
+ **Author:** Nicolai Dorka
+ **Tech Report**: https://arxiv.org/abs/2409.10164
+ **Code Repository:** https://github.com/Nicolinho/QRM
+ **Method Overview:** QRM generates a distribution over rewards by aggregating individual distributions over attribute scores like helpfulness and harmlessness.
<p align="left">
<img width="800" alt="image" src="https://github.com/Nicolinho/QRM/blob/main/assets/method_vis.png?raw=true">
</p>
This model uses [Skywork/Skywork-Reward-Llama-3.1-8B](https://huggingface.co/Skywork/Skywork-Reward-Llama-3.1-8B) as its backbone and [Skywork/Skywork-Reward-Preference-80K-v0.1](https://huggingface.co/datasets/Skywork/Skywork-Reward-Preference-80K-v0.1) for training the gating network.
Apart from this, it has been trained exactly as described in the tech report.
## Demo Code
```python
# export ACCELERATE_MIXED_PRECISION=bf16
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
device = "cuda"
path = "nicolinho/QRM-Llama3.1-8B"
model = AutoModelForSequenceClassification.from_pretrained(path, device_map=device, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(path, use_fast=True)
# An example prompt/response pair to score (hardcoded here; nothing is loaded from a dataset)
prompt = 'Does pineapple belong on a Pizza?'
response = "There are different opinions on this. Some people like pineapple on a Pizza while others condemn this."
messages = [{"role": "user", "content": prompt},
{"role": "assistant", "content": response}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
with torch.no_grad():
output = model(input_ids)
# Expectation of the reward distribution
reward = output.score.cpu().float()
# Quantile estimates for the quantiles 0.05, 0.1, ..., 0.9, 0.95 representing the distribution over rewards
reward_quantiles = output.reward_quantiles.cpu().float()
# The attributes of the 19 reward objectives
attributes = ['helpsteer-helpfulness','helpsteer-correctness','helpsteer-coherence',
'helpsteer-complexity','helpsteer-verbosity','ultrafeedback-overall_score',
'ultrafeedback-instruction_following', 'ultrafeedback-truthfulness',
'ultrafeedback-honesty','ultrafeedback-helpfulness','beavertails-is_safe',
'prometheus-score','argilla-overall_quality','argilla-judge_lm','code-complexity',
'code-style','code-explanation','code-instruction-following','code-readability']
```
## Citation
If you find this work useful for your research, please consider citing:
```
@article{dorka2024quantile,
title={Quantile Regression for Distributional Reward Models in RLHF},
author={Dorka, Nicolai},
journal={arXiv preprint arXiv:2409.10164},
year={2024}
}
```
|