# Next Word Prediction With GPT2

## πŸ“Œ Overview

This repository hosts a quantized version of the GPT2 model fine-tuned for next word prediction. The model was fine-tuned on the `bookcorpus` dataset from Hugging Face and then quantized to Float16 (FP16) to speed up inference and reduce memory use while retaining most of the full-precision model's accuracy.

## πŸ— Model Details

- **Model Architecture:** GPT2
- **Task:** Next Word Prediction  
- **Dataset:** Hugging Face's `bookcorpus`  
- **Quantization:** Float16 (FP16) for optimized inference  
- **Fine-tuning Framework:** Hugging Face Transformers  

## πŸš€ Usage

### Installation

```bash
pip install transformers torch
```

### Loading the Model

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "AventIQ-AI/gpt2-next-word-prediction"
model = GPT2LMHeadModel.from_pretrained(model_name).to(device)
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
```

### Next Word Prediction Example

```python
# Input text
text = "Hi! How are"

# Tokenize input text
input_ids = tokenizer.encode(text, return_tensors="pt").to(device)

# Generate next word (max_length ensures we get only the next token)
output = model.generate(input_ids, max_length=input_ids.shape[1] + 1, do_sample=False)

# Decode output
generated_text = tokenizer.decode(output[0])

print("Generated Sentence:", generated_text)
```
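
If more than one continuation token is wanted, `max_new_tokens` is a simpler knob than computing `max_length` by hand. The sketch below reuses `model`, `tokenizer`, and `input_ids` from the snippets above; the token count is just an illustration.

```python
# Greedy-decode a short continuation instead of a single next word
output = model.generate(
    input_ids,
    max_new_tokens=5,                      # number of tokens to append to the prompt
    do_sample=False,                       # greedy decoding, as in the example above
    pad_token_id=tokenizer.eos_token_id,   # GPT2 has no pad token; this silences a warning
)
print("Generated Sentence:", tokenizer.decode(output[0], skip_special_tokens=True))
```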

## ⚑ Quantization Details

Post-training quantization was applied by converting the fine-tuned model's weights to Float16 (FP16) using PyTorch's native half-precision support, reducing model size and improving inference efficiency with only a small impact on accuracy.
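
The exact quantization script is not included in this repository; as a rough illustration, an FP16 conversion in PyTorch can be as simple as casting the weights to half precision. The paths below are placeholders, not actual repository paths.

```python
from transformers import GPT2LMHeadModel

# Load the fine-tuned full-precision checkpoint (path is a placeholder)
model = GPT2LMHeadModel.from_pretrained("path/to/finetuned-gpt2")

# Cast all floating-point parameters to half precision (FP16)
model = model.half()

# Save the FP16 checkpoint; recent transformers versions write model.safetensors by default
model.save_pretrained("path/to/gpt2-fp16")
```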

## πŸ“Š Evaluation Metrics

A well-trained language model typically reaches a perplexity of roughly 10–50, depending on the dataset and domain. This model achieves a perplexity of **32.4**.
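
Perplexity is the exponential of the average cross-entropy loss over held-out text. The sketch below shows one way it could be computed with this model, reusing `model`, `tokenizer`, and `device` from the loading snippet above; the evaluation sentences are placeholders, not the actual evaluation set.

```python
import math
import torch

# Placeholder held-out sentences; the real evaluation used bookcorpus text
eval_texts = [
    "The sun was setting over the hills.",
    "She opened the book and began to read.",
]

model.eval()
total_loss, total_tokens = 0.0, 0
with torch.no_grad():
    for text in eval_texts:
        enc = tokenizer(text, return_tensors="pt").to(device)
        # Passing labels=input_ids makes the model return the mean cross-entropy loss
        out = model(**enc, labels=enc["input_ids"])
        # Labels are shifted internally, so there are n-1 predicted positions
        n_predicted = enc["input_ids"].shape[1] - 1
        total_loss += out.loss.item() * n_predicted
        total_tokens += n_predicted

perplexity = math.exp(total_loss / total_tokens)
print(f"Perplexity: {perplexity:.1f}")
```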

## πŸ”§ Fine-Tuning Details

### Dataset
The **bookcorpus** dataset from Hugging Face was used for training and evaluation. It consists of raw English text passages drawn from books.

### Training Configuration
- **Number of epochs**: 3
- **Batch size**: 8  
- **Learning rate**: 5e-5  
- **Evaluation strategy**: steps
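
For reference, here is a minimal sketch of how this configuration could be expressed with the Hugging Face `Trainer` API. The sequence length, evaluation interval, output directory, and train/eval split are assumptions for illustration, not the exact training script.

```python
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    Trainer,
    TrainingArguments,
)

# Base model and tokenizer; GPT2 has no pad token, so reuse EOS for padding
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A small bookcorpus slice for illustration (newer `datasets` versions may need trust_remote_code=True)
dataset = load_dataset("bookcorpus", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)  # max_length is an assumption

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
split = tokenized.train_test_split(test_size=0.01, seed=42)

# Causal LM objective (mlm=False): labels are the inputs shifted by one position
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

training_args = TrainingArguments(
    output_dir="gpt2-next-word",     # placeholder output directory
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=5e-5,
    evaluation_strategy="steps",     # renamed to `eval_strategy` in newer transformers releases
    eval_steps=500,                  # assumption; the card does not state the interval
    logging_steps=100,
    save_total_limit=2,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=split["train"],
    eval_dataset=split["test"],
    data_collator=collator,
)

trainer.train()
```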


## πŸ“‚ Repository Structure

```
.
β”œβ”€β”€ model/               # Contains the quantized model files
β”œβ”€β”€ tokenizer_config/    # Tokenizer configuration and vocabulary files
β”œβ”€β”€ model.safetensors    # Quantized model weights
β”œβ”€β”€ README.md            # Model documentation
```

## ⚠️ Limitations

- The model may struggle with tasks outside its next-word-prediction fine-tuning scope.
- Quantization may lead to slight degradation in accuracy compared to full-precision models.
- Performance may vary across different writing styles and sentence structures.

## 🀝 Contributing

Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.