codechrl committed (verified)
Commit 7cc2d5c · 1 Parent(s): 4300aab

Training update: 2,116/237,881 rows (0.89%) | +500 new @ 2025-10-21 06:00:02

Files changed (4):
  1. README.md +4 -4
  2. model.safetensors +1 -1
  3. training_args.bin +1 -1
  4. training_metadata.json +6 -6
README.md CHANGED
@@ -21,7 +21,7 @@ library_name: transformers
 - Model type: fine-tuned lightweight BERT variant
 - Languages: English & Indonesian
 - Finetuned from: `boltuix/bert-micro`
-- Status: **Early version** — trained on **0.68%** of planned data.
+- Status: **Early version** — trained on **0.89%** of planned data.
 
 **Model sources**
 - Base model: [boltuix/bert-micro](https://huggingface.co/boltuix/bert-micro)
@@ -42,7 +42,7 @@ You can use this model to classify cybersecurity-related text — for example, w
 - Not tested for non-cybersecurity domains or out-of-distribution data.
 
 ## 3. Bias, Risks, and Limitations
-Because the model is trained on a small subset (0.68%) of the planned data, performance is preliminary and may degrade on unseen or specialized domains (industrial control, IoT logs, foreign languages).
+Because the model is trained on a small subset (0.89%) of the planned data, performance is preliminary and may degrade on unseen or specialized domains (industrial control, IoT logs, foreign languages).
 
 - Inherits any biases present in the base model (`boltuix/bert-micro`) and in the fine-tuning data — e.g., over-representation of certain threat types, vendor- or tooling-specific vocabulary.
 - Should not be used as the sole authority for incident decisions; only as an aid to human analysts.
@@ -62,8 +62,8 @@ predicted_class = logits.argmax(dim=-1).item()
 ```
 
 ## 5. Training Details
-- **Trained records**: 1,616 / 237,718 (0.68%)
+- **Trained records**: 2,116 / 237,881 (0.89%)
 - **Learning rate**: 5e-05
 - **Epochs**: 3
-- **Batch size**: 8
+- **Batch size**: 16
 - **Max sequence length**: 512
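The README's usage block is only partially visible in this diff (its last line, `predicted_class = logits.argmax(dim=-1).item()`, appears as hunk context). For orientation, here is a minimal inference sketch, assuming the checkpoint exposes a standard transformers sequence-classification head; the Hub repo id is not shown in this commit view, so `<user>/<repo>` below is a placeholder:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder repo id: the actual Hub id is not shown in this commit view.
repo_id = "<user>/<repo>"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

text = "Multiple failed SSH logins followed by a successful root login."
# Max sequence length 512, matching the README's training details.
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

# Same final step as the README's usage block: pick the argmax class id.
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```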
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:77bacaf8a88c4d1a2ec9600b44d7c70b6f0a0779ba39204c66e7560f2822fa01
+oid sha256:a7ce222f46cef58ed19978bf1a4b88f5447d4d114d7ad81f61ed702f8f3c9d9c
 size 17671560
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:85b878d7ad871ddbb6534d459d90aa4c48bd16f59636bcec6c7495740a5d8ccf
+oid sha256:930fff7680df5b08cacf7b90bbbc38051edff1aea340c88857c47f5238ec8dc4
 size 5905
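Both binary files are tracked with Git LFS, so the diffs above only touch pointer files: three lines recording the spec version, the blob's SHA-256 (`oid`), and its byte size. A minimal sketch for checking a downloaded artifact against its pointer, using the `model.safetensors` values from this commit (it assumes the file has already been fetched locally):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks and return its hex SHA-256."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected values copied from the new LFS pointer in this commit.
path = Path("model.safetensors")
expected_oid = "a7ce222f46cef58ed19978bf1a4b88f5447d4d114d7ad81f61ed702f8f3c9d9c"
expected_size = 17671560

assert path.stat().st_size == expected_size, "size mismatch"
assert sha256_of(path) == expected_oid, "sha256 mismatch"
print("model.safetensors matches its LFS pointer")
```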
training_metadata.json CHANGED
@@ -1,11 +1,11 @@
 {
-  "trained_at": 1761021667.0032144,
-  "trained_at_readable": "2025-10-21 04:41:07",
-  "samples_this_session": 4634,
+  "trained_at": 1761026402.3035963,
+  "trained_at_readable": "2025-10-21 06:00:02",
+  "samples_this_session": 960,
   "new_rows_this_session": 500,
-  "trained_rows_total": 1616,
-  "total_db_rows": 237718,
-  "percentage": 0.679797070478466,
+  "trained_rows_total": 2116,
+  "total_db_rows": 237881,
+  "percentage": 0.8895203904473246,
   "final_loss": 0,
   "epochs": 3,
   "learning_rate": 5e-05