Update README.md
README.md CHANGED

@@ -1,5 +1,4 @@
 ---
-base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
 language:
 - en
 - it
@@ -13,9 +12,9 @@ tags:
 - sft
 ---
 
-# Meta LLaMA 3.1 8B
+# Meta LLaMA 3.1 8B 4-bit Finetuned Model
 
-This model is a fine-tuned version of `
+This model is a fine-tuned version of `Meta-Llama-3.1-8B`, developed by **ruslanmv** for text generation tasks. It leverages 4-bit quantization, making it more efficient for inference while maintaining strong performance in natural language generation.
 
 ---
 
@@ -54,7 +53,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 import torch
 
 # Load the model and tokenizer
-model_name = "ruslanmv/
+model_name = "ruslanmv/Meta-Llama-3.1-8B-Text-to-SQL"
 
 # Ensure you have the right device setup
 device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
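The updated README describes the checkpoint as 4-bit quantized (its former `base_model` was `unsloth/meta-llama-3.1-8b-bnb-4bit`). For context, models derived from bnb-4bit checkpoints are typically loaded with a bitsandbytes quantization config along these lines; this is a sketch, and the specific `nf4` quant type, compute dtype, and double-quant settings are common defaults assumed here, not values taken from this commit:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit (bitsandbytes) settings commonly paired with bnb-4bit checkpoints;
# the exact values below are illustrative assumptions.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store the weights in 4-bit
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # dtype used for the actual matmuls
    bnb_4bit_use_double_quant=True,        # also quantize the quantization constants
)

# Passing the config at load time keeps the 8B weights at roughly a quarter
# of their fp16 memory footprint (shown here but not executed):
# model = AutoModelForCausalLM.from_pretrained(
#     "ruslanmv/Meta-Llama-3.1-8B-Text-to-SQL",
#     quantization_config=bnb_config,
#     device_map="auto",
# )
```

With this config, the `device` selection shown in the README snippet is handled by `device_map="auto"`, which places the quantized weights on the available GPU.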