codechrl committed
Commit 05f0253 · verified · 1 Parent(s): e6779ba

Training update: 7,754/238,469 rows (3.25%) | +4922 new @ 2025-10-23 04:53:41

Files changed (5)
  1. README.md +7 -7
  2. config.json +1 -1
  3. model.safetensors +1 -1
  4. training_args.bin +1 -1
  5. training_metadata.json +7 -7
README.md CHANGED
@@ -24,7 +24,7 @@ pipeline_tag: fill-mask
 - Model type: fine-tuned lightweight BERT variant
 - Languages: English & Indonesian
 - Finetuned from: `boltuix/bert-micro`
-- Status: **Early version** — trained on **3.10%** of planned data.
+- Status: **Early version** — trained on **3.25%** of planned data.
 **Model sources**
 - Base model: [boltuix/bert-micro](https://huggingface.co/boltuix/bert-micro)
 - Data: Cybersecurity Data
@@ -40,7 +40,7 @@ You can use this model to classify cybersecurity-related text — for example, w
 - Not optimized for languages other than English and Indonesian.
 - Not tested for non-cybersecurity domains or out-of-distribution data.
 ## 3. Bias, Risks, and Limitations
-- Because the model is based on a small subset (3.10%) of planned data, performance is preliminary and may degrade on unseen or specialized domains (industrial control, IoT logs, foreign language).
+- Because the model is based on a small subset (3.25%) of planned data, performance is preliminary and may degrade on unseen or specialized domains (industrial control, IoT logs, foreign language).
 - Inherits any biases present in the base model (`boltuix/bert-micro`) and in the fine-tuning data — e.g., over-representation of certain threat types, vendor or tooling-specific vocabulary.
 - Should not be used as sole authority for incident decisions; only as an aid to human analysts.
 ## 4. How to Get Started with the Model
@@ -74,11 +74,11 @@ Since cybersecurity data often contains lengthy alert descriptions and execution
 - **LR scheduler**: Linear with warmup
 
 ### Training Data
-- **Total database rows**: 238,461
-- **Rows processed (cumulative)**: 7,396 (3.10%)
-- **Rows in this session**: 358
-- **Training samples (after chunking)**: 5,575
-- **Training date**: 2025-10-23 04:06:52
+- **Total database rows**: 238,469
+- **Rows processed (cumulative)**: 7,754 (3.25%)
+- **Rows in this session**: 4,922
+- **Training samples (after chunking)**: 5,000
+- **Training date**: 2025-10-23 04:53:41
 
 ### Post-Training Metrics
 - **Final training loss**:
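
The hunks above reference §4, "How to Get Started with the Model", whose body is not shown in this diff. As illustration only, a minimal fill-mask sketch follows, since the README declares `pipeline_tag: fill-mask`; the repo ID below is a hypothetical placeholder, not a path confirmed by this commit.

```python
from transformers import pipeline

# Hypothetical repo ID for illustration; the diff does not show the actual model path.
MODEL_ID = "codechrl/bert-micro-cybersecurity"

# The README declares pipeline_tag: fill-mask, so a fill-mask pipeline applies.
fill = pipeline("fill-mask", model=MODEL_ID)

# [MASK] is BERT's mask token; the model ranks plausible completions by score.
for pred in fill("The firewall blocked a suspicious [MASK] attempt."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```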
config.json CHANGED
@@ -17,7 +17,7 @@
   "num_hidden_layers": 2,
   "pad_token_id": 0,
   "position_embedding_type": "absolute",
-  "transformers_version": "4.57.0",
+  "transformers_version": "4.57.1",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 30522
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5d944ab1003df47a72c98e63c340a856db9daf0f5ce5e9ff7f9b1cd3a78be54b
+oid sha256:e34d67e8d356f460943f98143843d64ac1d706bd3d53a639e1776fafddce67af
 size 17671560
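
As a rough sanity check (assuming float32 weights, which the diff does not state), the unchanged 17,671,560-byte payload implies about 4.4M parameters, consistent with the 2-layer "micro" BERT in config.json:

```python
# Assumption: 4 bytes per parameter (float32); only the file size is given.
size_bytes = 17_671_560
print(f"~{size_bytes / 4 / 1e6:.2f}M parameters")  # ~4.42M
```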
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:36fd20f724c8e961c678c7f1c4b979cd8088a71c668665d75605ee21e8212f0b
+oid sha256:1b6723809aae4d6e81df5bc3af0fef812aef6e4e443ec397ab0eca959b1da657
 size 5905
training_metadata.json CHANGED
@@ -1,11 +1,11 @@
 {
-  "trained_at": 1761192412.0134265,
-  "trained_at_readable": "2025-10-23 04:06:52",
-  "samples_this_session": 5575,
-  "new_rows_this_session": 358,
-  "trained_rows_total": 7396,
-  "total_db_rows": 238461,
-  "percentage": 3.1015553906089464,
+  "trained_at": 1761195221.323231,
+  "trained_at_readable": "2025-10-23 04:53:41",
+  "samples_this_session": 5000,
+  "new_rows_this_session": 4922,
+  "trained_rows_total": 7754,
+  "total_db_rows": 238469,
+  "percentage": 3.2515756765030255,
   "final_loss": 0,
   "epochs": 3,
   "learning_rate": 5e-05,