codechrl committed on
Commit def1364 · verified · 1 Parent(s): c117ac4

Training update: 163,113/164,092 rows (99.40%) | +10 new @ 2025-11-12 17:58:58
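
The row counts and percentage in this commit message follow from the counters updated in training_metadata.json below (trained_rows_total, total_db_rows, trained_at). As a minimal sanity-check sketch, the rounded percentage and the date string can be reproduced like this; the only assumption added here is that trained_at_readable renders the Unix timestamp in UTC:

```python
from datetime import datetime, timezone

# Values taken from the training_metadata.json diff in this commit.
trained_rows_total = 163_113
total_db_rows = 164_092
trained_at = 1762970338.5018234

# 163113 / 164092 * 100 = 99.40338..., which rounds to the 99.40% above.
percentage = trained_rows_total / total_db_rows * 100
print(f"{percentage:.2f}%")  # -> 99.40%

# Assumption: trained_at_readable is the Unix timestamp formatted in UTC.
readable = datetime.fromtimestamp(trained_at, tz=timezone.utc)
print(readable.strftime("%Y-%m-%d %H:%M:%S"))  # -> 2025-11-12 17:58:58
```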

Files changed (4)
  1. README.md +5 -5
  2. model.safetensors +1 -1
  3. training_args.bin +1 -1
  4. training_metadata.json +7 -7
README.md CHANGED
@@ -25,7 +25,7 @@ pipeline_tag: fill-mask
  - Model type: fine-tuned lightweight BERT variant
  - Languages: English & Indonesia
  - Finetuned from: `boltuix/bert-micro`
- - Status: **Early version** — trained on **99.39%** of planned data.
+ - Status: **Early version** — trained on **99.40%** of planned data.

  **Model sources**
  - Base model: [boltuix/bert-micro](https://huggingface.co/boltuix/bert-micro)
@@ -51,7 +51,7 @@ You can use this model to classify cybersecurity-related text — for example, w
  - Early classification of SIEM alert & events.

  ## 3. Bias, Risks, and Limitations
- Because the model is based on a small subset (99.39%) of planned data, performance is preliminary and may degrade on unseen or specialized domains (industrial control, IoT logs, foreign language).
+ Because the model is based on a small subset (99.40%) of planned data, performance is preliminary and may degrade on unseen or specialized domains (industrial control, IoT logs, foreign language).
  - Inherits any biases present in the base model (`boltuix/bert-micro`) and in the fine-tuning data — e.g., over-representation of certain threat types, vendor or tooling-specific vocabulary.
  - **Should not be used as sole authority for incident decisions; only as an aid to human analysts.**

@@ -75,9 +75,9 @@ Since cybersecurity data often contains lengthy alert descriptions and execution
  - **LR scheduler**: Linear with warmup

  ### Training Data
- - **Total database rows**: 164,090
- - **Rows processed (cumulative)**: 163,094 (99.39%)
- - **Training date**: 2025-11-12 17:12:09
+ - **Total database rows**: 164,092
+ - **Rows processed (cumulative)**: 163,113 (99.40%)
+ - **Training date**: 2025-11-12 17:58:58

  ### Post-Training Metrics
  - **Final training loss**:
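
The hunk headers above show the card's `pipeline_tag: fill-mask` front matter and the usage note about classifying cybersecurity text. For orientation only, a minimal loading sketch with the `transformers` fill-mask pipeline is given below; the repository id is a placeholder, since the commit view does not spell out this model's full repo name.

```python
# Sketch only: MODEL_ID is a placeholder for this repository's actual id,
# which is not shown in the commit view above.
from transformers import pipeline

MODEL_ID = "codechrl/your-model-name"  # placeholder, replace with the real repo id

# The README front matter tags the model as fill-mask, so it can be loaded
# with the standard fill-mask pipeline and queried with a [MASK] token.
fill_mask = pipeline("fill-mask", model=MODEL_ID)
print(fill_mask("Repeated failed SSH logins may indicate a brute-force [MASK]."))
```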
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:81ee68a1c1f05bbe05c0f6beb186ae4f3b78257645bd7af8c689d8ada92e1ef9
+ oid sha256:244932c700dfd7d0ca65b01241e42a8a4374e93bdf43d364d280802cfc374c13
  size 17671560
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:859c3dbd8cca4f14f8f63c3226638f0c68bdef8f2bdd9b222b20e7ef2c5dc0be
+ oid sha256:77e3f3566c79d1af5accd9e4dba71ec4b4981512af32006bf8d6d4f0dad6f434
  size 5905
training_metadata.json CHANGED
@@ -1,11 +1,11 @@
  {
- "trained_at": 1762967529.6049545,
- "trained_at_readable": "2025-11-12 17:12:09",
- "samples_this_session": 1342,
- "new_rows_this_session": 19,
- "trained_rows_total": 163094,
- "total_db_rows": 164090,
- "percentage": 99.39301602778963,
+ "trained_at": 1762970338.5018234,
+ "trained_at_readable": "2025-11-12 17:58:58",
+ "samples_this_session": 1500,
+ "new_rows_this_session": 10,
+ "trained_rows_total": 163113,
+ "total_db_rows": 164092,
+ "percentage": 99.40338346781074,
  "final_loss": 0,
  "epochs": 3,
  "learning_rate": 5e-05,