---
language:
- en
- id
tags:
- bert
- text-classification
- token-classification
- cybersecurity
- fill-mask
- named-entity-recognition
- transformers
- tensorflow
- pytorch
- masked-language-modeling
base_model: boltuix/bert-micro
library_name: transformers
pipeline_tag: fill-mask
---
# bert-micro-cybersecurity
## 1. Model Details
**Model description**
"bert-micro-cybersecurity" is a compact transformer model adapted for cybersecurity text classification tasks (e.g., threat detection, incident reports, malicious vs benign content).
- Model type: fine-tuned lightweight BERT variant
- Languages: English & Indonesian
- Finetuned from: `boltuix/bert-micro`
- Status: **Early version**, trained on **99.45%** of the planned data.
**Model sources**
- Base model: [boltuix/bert-micro](https://huggingface.co/boltuix/bert-micro)
- Data: Cybersecurity Data
## 2. Uses
### Direct use
You can use this model to classify cybersecurity-related text, for example to judge whether a given message, report, or log entry indicates malicious intent, abnormal behaviour, or the presence of a threat.
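A minimal usage sketch with the `transformers` pipeline. The Hub id `your-org/bert-micro-cybersecurity` is a placeholder, and this assumes the published checkpoint carries a sequence-classification head; the label set depends on the fine-tuning setup.

```python
from transformers import pipeline

# Placeholder Hub id -- replace with the actual repository name.
# Assumes the checkpoint ships a sequence-classification head.
clf = pipeline("text-classification", model="your-org/bert-micro-cybersecurity")

result = clf("Multiple failed SSH logins from 203.0.113.7 followed by a successful root login.")
print(result)  # e.g. [{'label': ..., 'score': ...}] -- labels depend on the fine-tuning head
```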
### Downstream use
- Embedding extraction for clustering (see the sketch after this list).
- Named entity recognition on logs or other security data.
- Classification of security data.
- Anomaly detection in security logs.
- As part of a pipeline for phishing detection, malicious email filtering, or incident triage.
- As a feature extractor feeding a downstream system (e.g., alert-generation, SOC dashboard).
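As a rough illustration of the embedding-extraction and feature-extractor uses, a hedged sketch that mean-pools the encoder's last hidden states (the repository id is again a placeholder):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder Hub id -- substitute the actual repository name.
model_id = "your-org/bert-micro-cybersecurity"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

def embed(texts):
    """Mean-pooled sentence embeddings, ignoring padding tokens."""
    enc = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state            # (batch, seq, dim)
    mask = enc["attention_mask"].unsqueeze(-1).float()     # (batch, seq, 1)
    return (hidden * mask).sum(1) / mask.sum(1)            # (batch, dim)

vectors = embed(["Suspicious PowerShell spawned by winword.exe",
                 "Routine scheduled backup completed"])
print(vectors.shape)  # torch.Size([2, hidden_size])
```

These vectors can then feed clustering, anomaly detection, or a downstream alerting system.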
### Out-of-scope use
- Not meant for high-stakes automated blocking decisions without human review.
- Not optimized for languages other than English and Indonesian.
- Not tested for non-cybersecurity domains or out-of-distribution data.
### Downstream use cases in development
- NER on security logs, botnet data, and JSON data.
- Early classification of SIEM alerts and events.
## 3. Bias, Risks, and Limitations
Because the model is an early version trained on 99.45% of the planned data, performance is preliminary and may degrade on unseen or specialized domains (industrial control systems, IoT logs, languages other than English and Indonesian).
- Inherits any biases present in the base model (`boltuix/bert-micro`) and in the fine-tuning data — e.g., over-representation of certain threat types, vendor or tooling-specific vocabulary.
- **Should not be used as sole authority for incident decisions; only as an aid to human analysts.**
## 4. Training Details
### Text Processing & Chunking
Since cybersecurity data often contains lengthy alert descriptions and execution logs that exceed BERT's 512-token limit, we implement an overlapping chunking strategy (sketched after this list):
- **Max sequence length**: 512 tokens
- **Stride**: 32 tokens (overlap between consecutive chunks)
- **Chunking behavior**: Long texts are split into overlapping segments. For example, with `max_length=512` and `stride=32`, a 1,000-token document becomes ~3 chunks whose neighbouring segments share 32 tokens, preserving context across chunk boundaries.
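A minimal sketch of this chunking using the fast tokenizer's `return_overflowing_tokens` option with the stride above; the exact preprocessing code is not published, so treat the details as illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("boltuix/bert-micro")

long_log = " ".join(["event"] * 1500)  # stand-in for a lengthy alert or execution log

enc = tokenizer(
    long_log,
    max_length=512,
    stride=32,                       # 32-token overlap between consecutive chunks
    truncation=True,
    return_overflowing_tokens=True,  # emit every chunk instead of discarding the tail
)

print(len(enc["input_ids"]))               # number of chunks produced
print([len(ids) for ids in enc["input_ids"]])  # each chunk is at most 512 tokens
```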
### Training Hyperparameters
- **Base model**: `boltuix/bert-micro`
- **Training epochs**: 3
- **Learning rate**: 5e-05
- **Batch size**: 16
- **Weight decay**: 0.01
- **Warmup ratio**: 0.06
- **Gradient accumulation steps**: 1
- **Optimizer**: AdamW
- **LR scheduler**: Linear with warmup
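For reference, a hedged `TrainingArguments` sketch that mirrors the hyperparameters above; the actual training script is not part of this repository, so this is illustrative rather than a reproduction recipe.

```python
from transformers import TrainingArguments

# Mirrors the listed hyperparameters; output_dir is an arbitrary choice.
args = TrainingArguments(
    output_dir="bert-micro-cybersecurity",
    num_train_epochs=3,
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    weight_decay=0.01,
    warmup_ratio=0.06,
    gradient_accumulation_steps=1,
    optim="adamw_torch",             # AdamW optimizer
    lr_scheduler_type="linear",      # linear decay with warmup
)
```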
### Training Data
- **Total database rows**: 164,092
- **Rows processed (cumulative)**: 163,191 (99.45%)
- **Training date**: 2025-11-13 07:05:35
### Post-Training Metrics
- **Final training loss**:
- **Rows→Samples ratio**: