# 🧠 Beans-Image-Classification-AI-Model

A fine-tuned image classification model trained on the Beans dataset, which contains three classes: angular_leaf_spot, bean_rust, and healthy. The model is built with Hugging Face Transformers on the ViT (Vision Transformer) architecture and is suited to educational use, plant disease classification tasks, and image classification experiments.

---


## ✨ Model Highlights

- 📌 Base Model: google/vit-base-patch16-224-in21k
- 📚 Fine-tuned on: the Beans dataset
- 🌿 Classes: angular_leaf_spot, bean_rust, healthy
- 🔧 Framework: Hugging Face Transformers + PyTorch
- 📦 Preprocessing: AutoImageProcessor from Transformers

---

## 🧠 Intended Uses

- ✅ Educational tools for training and evaluation in agriculture and plant disease detection
- ✅ Benchmarking vision transformer models on small datasets
- ✅ Demonstration of fine-tuning workflows with Hugging Face

---

## 🚫 Limitations

- ❌ Not suitable for real-world diagnosis in agriculture without further domain validation
- ❌ Not robust to significant background noise or occlusion in images
- ❌ Trained on a small dataset; it may not generalize beyond bean leaf diseases

---

πŸ“ Input & Output

- Input: RGB image of a bean leaf (resized to 224 × 224 by the processor)
- Output: Predicted class label, one of angular_leaf_spot, bean_rust, or healthy
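
To make this input/output contract concrete, the short sketch below (reusing the repository id from the Usage section) prints the tensor shape produced by the processor and the label mapping stored in the model config; the image path is a placeholder.

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image

model_name = "AventIQ-AI/Beans-Image-Classification-AI-Model"
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)

# The processor resizes and normalizes an arbitrary RGB image to 224 x 224.
image = Image.open("example_leaf.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])

# The classification head maps each image to one of the three classes;
# the exact id-to-label order is stored in the model config.
print(model.config.id2label)
```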

---

## πŸ‹οΈβ€β™‚οΈ Training Details

| Attribute      | Value                               |
|----------------|-------------------------------------|
| Base Model     | `google/vit-base-patch16-224-in21k` |
| Dataset        | Beans dataset (train/val/test)      |
| Task Type      | Image Classification                |
| Image Size     | 224 × 224                           |
| Epochs         | 3                                   |
| Batch Size     | 16                                  |
| Optimizer      | AdamW                               |
| Loss Function  | CrossEntropyLoss                    |
| Framework      | PyTorch + Transformers              |
| Hardware       | CUDA-enabled GPU                    |

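For context, here is a minimal fine-tuning sketch consistent with the table above, using the Hugging Face `beans` dataset and the `Trainer` API. It is an illustration, not the exact training script; the output directory and the learning rate are assumptions (the table does not state them).

```python
import torch
from datasets import load_dataset
from transformers import (AutoImageProcessor, AutoModelForImageClassification,
                          Trainer, TrainingArguments)

base = "google/vit-base-patch16-224-in21k"
ds = load_dataset("beans")                     # train / validation / test splits
labels = ds["train"].features["labels"].names  # the three bean leaf classes

processor = AutoImageProcessor.from_pretrained(base)
model = AutoModelForImageClassification.from_pretrained(
    base,
    num_labels=len(labels),
    id2label={i: l for i, l in enumerate(labels)},
    label2id={l: i for i, l in enumerate(labels)},
)

def transform(batch):
    # Resize and normalize each PIL image to the 224 x 224 input the ViT expects.
    inputs = processor(images=batch["image"], return_tensors="pt")
    inputs["labels"] = batch["labels"]
    return inputs

ds = ds.with_transform(transform)

def collate(examples):
    return {
        "pixel_values": torch.stack([e["pixel_values"] for e in examples]),
        "labels": torch.tensor([e["labels"] for e in examples]),
    }

args = TrainingArguments(
    output_dir="beans-vit-finetuned",  # placeholder path
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=5e-5,                # assumed; not stated in the table
    remove_unused_columns=False,       # keep the raw "image" column for the transform
)

trainer = Trainer(model=model, args=args, data_collator=collate,
                  train_dataset=ds["train"], eval_dataset=ds["validation"])
trainer.train()
```

The `AdamW` optimizer and cross-entropy loss listed in the table are the defaults used by `Trainer` for this model class, so they are not set explicitly here.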
---

## 📊 Evaluation Metrics


| Metric                                          | Score |
| ----------------------------------------------- | ----- |
| Accuracy                                        | 0.98  |
| F1-Score                                        | 0.99  |
| Precision                                       | 0.98  |
| Recall                                          | 0.99  |
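
These scores are not reproduced here, but an evaluation along the following lines could be used to check them. The sketch assumes the Hugging Face `beans` test split, weighted averaging for the multi-class metrics, and that the model's label ids match the dataset's ClassLabel order; all three are assumptions, not facts from this card.

```python
import torch
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_name = "AventIQ-AI/Beans-Image-Classification-AI-Model"
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name).eval()

test = load_dataset("beans", split="test")

preds, refs = [], []
for example in test:
    inputs = processor(images=example["image"], return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    preds.append(logits.argmax(-1).item())   # assumes model ids match dataset label ids
    refs.append(example["labels"])

print("Accuracy :", accuracy_score(refs, preds))
print("F1       :", f1_score(refs, preds, average="weighted"))
print("Precision:", precision_score(refs, preds, average="weighted"))
print("Recall   :", recall_score(refs, preds, average="weighted"))
```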


---

## 🚀 Usage
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

model_name = "AventIQ-AI/Beans-Image-Classification-AI-Model"

# Load the image processor and the fine-tuned classification model.
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)
model.eval()

def predict(image_path):
    # Preprocess: resize and normalize the image to the 224 x 224 input the ViT expects.
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt").to(model.device)
    # Forward pass without gradient tracking, then pick the highest-scoring class.
    with torch.no_grad():
        outputs = model(**inputs)
    preds = torch.argmax(outputs.logits, dim=1)
    return model.config.id2label[preds.item()]

# Example
print(predict("example_leaf.jpg"))
```
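
For more than a handful of images, batching the forward pass is usually faster. Below is a hedged variant that reuses the `processor`, `model`, and `Image` objects from the snippet above; the batch size of 16 is an arbitrary choice.

```python
def predict_batch(image_paths, batch_size=16):
    # Classify a list of image files in fixed-size batches.
    labels = []
    for start in range(0, len(image_paths), batch_size):
        images = [Image.open(p).convert("RGB")
                  for p in image_paths[start:start + batch_size]]
        inputs = processor(images=images, return_tensors="pt").to(model.device)
        with torch.no_grad():
            logits = model(**inputs).logits
        labels += [model.config.id2label[idx] for idx in logits.argmax(-1).tolist()]
    return labels
```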
---

## 🧩 Quantization

Post-training static quantization was applied using PyTorch to reduce model size and accelerate inference on edge devices.
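
The quantization script is not part of this card. As an illustration only, the sketch below applies PyTorch post-training dynamic quantization to the model's Linear layers, a simpler variant than full static quantization (which additionally needs calibration data and observer insertion); the output path is a placeholder.

```python
import torch
from transformers import AutoModelForImageClassification

model_name = "AventIQ-AI/Beans-Image-Classification-AI-Model"
model = AutoModelForImageClassification.from_pretrained(model_name).eval()

# Dynamic post-training quantization: nn.Linear weights are stored in int8
# and dequantized on the fly during inference (CPU only).
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Save the quantized state dict; loading it back requires re-applying the same
# quantize_dynamic call to a freshly loaded model.
torch.save(quantized.state_dict(), "beans_vit_quantized.pt")  # placeholder path
```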

---

## 🗂 Repository Structure
```
beans-vit-finetuned/
├── config.json               ✅ Model architecture & config
├── pytorch_model.bin         ✅ Model weights
├── preprocessor_config.json  ✅ Image processor config
├── special_tokens_map.json   ✅ (Auto-generated, not critical for ViT)
├── training_args.bin         ✅ Training metadata
└── README.md                 ✅ Model card
```
---
## 🤝 Contributing

Open to improvements and feedback! Feel free to submit a pull request or open an issue if you find any bugs or want to enhance the model.