---
license: mit
tags:
- machine-unlearning
- unlearning
- resnet18
pipeline_tag: image-classification
library_name: pytorch
---
# Model Card for jaeunglee/resnet18-cifar10-unlearning
This repository contains ResNet18 models retrained on the CIFAR-10 dataset with specific classes excluded during training. Each model is trained to study the impact of class exclusion on model performance and generalization.
**Paper:** [Unlearning Comparator: A Visual Analytics System for Comparative Evaluation of Machine Unlearning Methods](https://huggingface.co/papers/2508.12730)
**Project Page:** [https://gnueaj.github.io/Machine-Unlearning-Comparator/](https://gnueaj.github.io/Machine-Unlearning-Comparator/)
**GitHub Repository:** [https://github.com/gnueaj/Machine-Unlearning-Comparator](https://github.com/gnueaj/Machine-Unlearning-Comparator)
---
## Evaluation
- **Testing Data:** CIFAR-10 test set
- **Metrics:** Top-1 accuracy
### Results
| Model | Excluded Class | CIFAR-10 Accuracy |
|-------------------------------------|----------------|--------------------|
| `resnet18_cifar10_full.pth` | **None** | **95.4%** |
| `resnet18_cifar10_no_airplane.pth` | Airplane | 95.3% |
| `resnet18_cifar10_no_automobile.pth`| Automobile | 95.4% |
| `resnet18_cifar10_no_bird.pth` | Bird | 95.6% |
| `resnet18_cifar10_no_cat.pth` | Cat | 96.6% |
| `resnet18_cifar10_no_deer.pth` | Deer | 95.2% |
| `resnet18_cifar10_no_dog.pth` | Dog | 96.6% |
| `resnet18_cifar10_no_frog.pth` | Frog | 95.2% |
| `resnet18_cifar10_no_horse.pth` | Horse | 95.3% |
| `resnet18_cifar10_no_ship.pth` | Ship | 95.4% |
| `resnet18_cifar10_no_truck.pth` | Truck | 95.3% |
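The numbers above are top-1 accuracies on the CIFAR-10 test set. A generic evaluation loop of this kind is sketched below; the batch size, device, and data directory are illustrative assumptions, `model` is expected to be loaded as in the usage section further down, and how the excluded class is handled at evaluation time is not specified here, so treat this as a reference loop rather than the exact protocol behind the table.
```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Same test-time preprocessing as described under "Data Preprocessing"
test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])

def top1_accuracy(model, device="cuda", batch_size=256):
    """Compute top-1 accuracy on the CIFAR-10 test set."""
    test_set = datasets.CIFAR10(root="./data", train=False, download=True,
                                transform=test_transform)
    loader = DataLoader(test_set, batch_size=batch_size, shuffle=False)
    model.to(device).eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    return correct / total
```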
## Training Details
### Training Procedure
- **Base Model:** ResNet18
- **Dataset:** CIFAR-10
- **Excluded Class:** Varies by model
- **Loss Function:** CrossEntropyLoss
- **Optimizer:** SGD with:
- Learning rate: `0.1`
- Momentum: `0.9`
- Weight decay: `5e-4`
- Nesterov: `True`
- **Scheduler:** CosineAnnealingLR (T_max: `200`)
- **Training Epochs:** `200`
- **Batch Size:** `128`
- **Hardware:** Single GPU
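For reference, these hyperparameters translate into a PyTorch setup roughly like the following. This is a minimal sketch, not the original training script: `model` and `train_loader` are assumed to be defined, and device handling and logging are omitted.
```python
import torch
import torch.nn as nn

# Loss, optimizer, and scheduler matching the hyperparameters listed above
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=5e-4, nesterov=True)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)

for epoch in range(200):
    model.train()
    for images, labels in train_loader:  # CIFAR-10 loader with batch size 128
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # one cosine annealing step per epoch
```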
### Notes on Training
The training recipe is adapted from the paper **"Benchopt: Reproducible, efficient and collaborative optimization benchmarks"**, which provides a reproducible and optimized setup for training ResNet18 on the CIFAR-10 dataset. This ensures that the training methodology aligns with established benchmarks for reproducibility and comparability.
### Data Preprocessing
The following transformations were applied to the CIFAR-10 dataset:
- **Base Transformations (applied to both training and test sets):**
- Conversion to PyTorch tensors using `ToTensor()`.
- Normalization using mean `(0.4914, 0.4822, 0.4465)` and standard deviation `(0.2023, 0.1994, 0.2010)`.
- **Training Set Augmentation (only for training data):**
- **RandomCrop(32, padding=4):** Randomly crops images with padding for spatial variation.
- **RandomHorizontalFlip():** Randomly flips images horizontally with a 50% probability.
These augmentations help improve the model's ability to generalize by introducing variability in the training data.
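In torchvision, the preprocessing described above corresponds to pipelines like the following (a sketch based on the description; the exact transform ordering in the original script is assumed):
```python
from torchvision import transforms

# Normalization statistics listed above
mean = (0.4914, 0.4822, 0.4465)
std = (0.2023, 0.1994, 0.2010)

# Training set: augmentation + base transformations
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean, std),
])

# Test set: base transformations only
test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean, std),
])
```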
### Model Description
- **Developed by:** Jaeung Lee
- **Model type:** Image Classification
- **License:** MIT
### Related Work
This model is part of the research conducted using the [Machine Unlearning Comparator](https://github.com/gnueaj/Machine-Unlearning-Comparator). The tool was developed to compare various machine unlearning methods and their effects on models.
## Uses
### Direct Use
These models can be directly used for evaluating the effect of excluding specific classes from the CIFAR-10 dataset during training.
### Out-of-Scope Use
The models are not suitable for tasks requiring general-purpose image classification beyond the CIFAR-10 dataset.
## How to Get Started with the Model
Use the code below to load the models with the appropriate architecture and weights:
```python
import torch
import torch.nn as nn
from torchvision import models
def get_resnet18(num_classes=10):
    """ResNet18 adapted for 32x32 CIFAR-10 inputs."""
    model = models.resnet18(weights=None)
    # Replace the 7x7 stem and remove the max-pool, as is standard for CIFAR-sized images
    model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
    model.maxpool = nn.Identity()
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Load a pretrained checkpoint from this repository
def load_model(model_path, num_classes=10):
    model = get_resnet18(num_classes=num_classes)
    # map_location="cpu" lets the checkpoint load on machines without a GPU
    model.load_state_dict(torch.load(model_path, map_location="cpu"))
    return model

# Example usage
model = load_model("resnet18_cifar10_no_airplane.pth", num_classes=10)
model.eval()
```
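Once loaded, the model can be run on a normalized 32x32 image, for example from the CIFAR-10 test set. The snippet below is a small usage sketch that assumes `model` comes from the code above; the data directory is an illustrative assumption.
```python
import torch
from torchvision import datasets, transforms

test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
test_set = datasets.CIFAR10(root="./data", train=False, download=True,
                            transform=test_transform)

image, label = test_set[0]
with torch.no_grad():
    logits = model(image.unsqueeze(0))  # add a batch dimension
predicted_class = logits.argmax(dim=1).item()
print(f"predicted: {predicted_class}, ground truth: {label}")
```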
## Citation
If you use this repository or its models in your work, please consider citing the paper and the repository:
**Paper:** [Unlearning Comparator: A Visual Analytics System for Comparative Evaluation of Machine Unlearning Methods](https://arxiv.org/abs/2508.12730)
**APA:**
Jaeung Lee. (2024). ResNet18 Models Trained on CIFAR-10 with Class Exclusion. Retrieved from https://huggingface.co/jaeunglee/resnet18-cifar10-unlearning
## License
This repository is shared under the [MIT License](https://opensource.org/licenses/MIT).