# 🌍 Language Translation Model

This repository hosts a fine-tuned **T5-small** model optimized for **language translation**. The model translates English text into several target languages, including Spanish, German, French, and Hindi.

## 📌 Model Details

- **Model Architecture**: T5-small
- **Task**: Language Translation
- **Dataset**: Custom multilingual dataset
- **Fine-tuning Framework**: Hugging Face Transformers
- **Quantization**: Dynamic (int8) for efficiency

## 🚀 Usage

### Installation

```bash
pip install transformers torch datasets
```

### Loading the Model

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "AventIQ-AI/t5-language-translation"
model = T5ForConditionalGeneration.from_pretrained(model_name).to(device)
tokenizer = T5Tokenizer.from_pretrained(model_name)
```

### Perform Translation

```python
def translate_text(model, tokenizer, input_text, target_language):
    # T5 expects a task prefix that states the translation direction
    formatted_text = f"translate English to {target_language}: {input_text}"
    input_ids = tokenizer(formatted_text, return_tensors="pt").input_ids.to(model.device)

    with torch.no_grad():
        output_ids = model.generate(input_ids, max_length=50)

    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# 🔹 Test translation
input_text = "Hello, how are you?"
target_language = "French"  # Other options: "Spanish", "German", "Hindi"
translated_text = translate_text(model, tokenizer, input_text, target_language)

print(f"Original: {input_text}")
print(f"Translated: {translated_text}")
```
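
For multiple sentences, batching the inputs is usually faster than looping. The sketch below is only an illustration (the helper name and generation defaults are not part of this repo); it reuses the model, tokenizer, and prompt format from above.

```python
def translate_batch(model, tokenizer, sentences, target_language, max_length=50):
    # Prefix each sentence with the T5 translation instruction used above
    prompts = [f"translate English to {target_language}: {s}" for s in sentences]
    inputs = tokenizer(prompts, return_tensors="pt", padding=True, truncation=True).to(model.device)

    with torch.no_grad():
        output_ids = model.generate(**inputs, max_length=max_length)

    return tokenizer.batch_decode(output_ids, skip_special_tokens=True)

print(translate_batch(model, tokenizer, ["Good morning.", "See you tomorrow."], "German"))
```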

## 📊 Evaluation Results

After fine-tuning, the model was evaluated on a multilingual dataset, achieving the following performance:

| Metric              | Score | Meaning                             |
| ------------------- | ----- | ----------------------------------- |
| **BLEU Score**      | 38.5  | Measures translation accuracy       |
| **Inference Speed** | Fast  | Optimized for real-time translation |
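
The evaluation script itself is not included in this repository. As a rough illustration, corpus-level BLEU can be computed with the `evaluate` library (sacreBLEU backend); the strings below are placeholders, not the actual test set.

```python
# pip install evaluate sacrebleu
import evaluate

bleu = evaluate.load("sacrebleu")

predictions = ["Bonjour, comment allez-vous ?"]   # model outputs (placeholder)
references = [["Bonjour, comment vas-tu ?"]]      # one or more references per prediction

result = bleu.compute(predictions=predictions, references=references)
print(f"BLEU: {result['score']:.1f}")
```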

## 🔧 Fine-Tuning Details

### Dataset

The model was trained using a **multilingual dataset** containing sentence pairs from multiple language sources.
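
The dataset itself is not published here. As a purely hypothetical illustration of the sentence-pair format typically used for T5 translation fine-tuning:

```python
# Hypothetical record; the actual dataset schema is not included in this repository
example = {
    "source": "translate English to French: Hello, how are you?",
    "target": "Bonjour, comment allez-vous ?",
}
```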

### Training Configuration

- **Number of epochs**: 3
- **Batch size**: 8
- **Optimizer**: AdamW
- **Learning rate**: 2e-5
- **Evaluation strategy**: Epoch-based
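
As a minimal sketch, these hyperparameters map onto Hugging Face `TrainingArguments` roughly as follows; the output directory and dataset variables are assumptions, not files in this repo.

```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./t5-translation-finetuned",  # assumed path
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=2e-5,              # optimizer defaults to AdamW
    evaluation_strategy="epoch",
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_train,   # hypothetical tokenized datasets
    eval_dataset=tokenized_eval,
)
trainer.train()
```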

### Quantization

The model was quantized using **dynamic (int8) quantization**, reducing latency and memory usage with minimal loss in accuracy.
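
A minimal sketch of that kind of dynamic quantization in PyTorch is shown below (CPU inference; the exact export path behind `quantized_model/` is an assumption):

```python
import torch

# Dynamic quantization: Linear weights stored as int8, activations quantized on the fly (CPU-only)
quantized_model = torch.quantization.quantize_dynamic(
    model.to("cpu"), {torch.nn.Linear}, dtype=torch.qint8
)

input_ids = tokenizer("translate English to German: Good evening.", return_tensors="pt").input_ids
output_ids = quantized_model.generate(input_ids, max_length=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```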

## 📂 Repository Structure

```bash
.
β”œβ”€β”€ model/               
β”œβ”€β”€ tokenizer_config/   
β”œβ”€β”€ quantized_model/     
β”œβ”€β”€ README.md            
```

## ⚠️ Limitations

- The model may struggle with **very complex sentences**.
- **Low-resource languages** may have slightly lower accuracy.
- **Contextual understanding** is limited to sentence-level translation.