codechrl committed · Commit d0eba91 · verified · 1 Parent(s): efa612d

Training update: 1,598/237,655 rows (0.67%) | +1 new @ 2025-10-20 09:29:54

Files changed (4):
  1. README.md +8 -9
  2. model.safetensors +1 -1
  3. training_args.bin +2 -2
  4. training_metadata.json +7 -7
README.md CHANGED
@@ -3,29 +3,31 @@ language:
  - en
  - id
  tags:
+ - bert
  - text-classification
+ - token-classification
  - cybersecurity
+ - fill-mask
+ - named-entity-recognition
  base_model: boltuix/bert-micro
+ library_name: transformers
  ---
-
  # bert-micro-cybersecurity

  ## 1. Model Details
-
  **Model description**
  "bert-micro-cybersecurity" is a compact transformer model adapted for cybersecurity text classification tasks (e.g., threat detection, incident reports, malicious vs. benign content).

  - Model type: fine-tuned lightweight BERT variant
  - Languages: English & Indonesian
  - Finetuned from: `boltuix/bert-micro`
- - Status: **Early version** — trained on **0.59%** of planned data.
+ - Status: **Early version** — trained on **0.67%** of planned data.

  **Model sources**
  - Base model: [boltuix/bert-micro](https://huggingface.co/boltuix/bert-micro)
  - Data: Cybersecurity Data

  ## 2. Uses
-
  ### Direct use
  You can use this model to classify cybersecurity-related text — for example, whether a given message, report, or log entry indicates malicious intent, abnormal behaviour, or threat presence.

@@ -40,14 +42,12 @@ You can use this model to classify cybersecurity-related text — for example, w
  - Not tested for non-cybersecurity domains or out-of-distribution data.

  ## 3. Bias, Risks, and Limitations
-
- Because the model is based on a small subset (0.59%) of planned data, performance is preliminary and may degrade on unseen or specialized domains (industrial control, IoT logs, foreign languages).
+ Because the model is based on a small subset (0.67%) of planned data, performance is preliminary and may degrade on unseen or specialized domains (industrial control, IoT logs, foreign languages).

  - Inherits any biases present in the base model (`boltuix/bert-micro`) and in the fine-tuning data — e.g., over-representation of certain threat types, vendor- or tooling-specific vocabulary.
  - Should not be used as the sole authority for incident decisions; only as an aid to human analysts.

  ## 4. How to Get Started with the Model
-
  ```python
  from transformers import AutoTokenizer, AutoModelForSequenceClassification

@@ -62,8 +62,7 @@ predicted_class = logits.argmax(dim=-1).item()
  ```

  ## 5. Training Details
-
- - **Trained records**: 1,398 / 237,619 (0.59%)
+ - **Trained records**: 1,598 / 237,655 (0.67%)
  - **Learning rate**: 5e-05
  - **Epochs**: 3
  - **Batch size**: 1
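
The README's quickstart snippet is truncated by the hunk boundary: only the import and the final `predicted_class` line are visible. A minimal end-to-end sketch consistent with those two lines; the repo id `codechrl/bert-micro-cybersecurity` and the sample input are assumptions, not confirmed by the diff:

```python
# Minimal inference sketch. The repo id and sample text are assumptions;
# only the import and the argmax line appear in the diff above.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "codechrl/bert-micro-cybersecurity"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Multiple failed SSH logins followed by a successful root login"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()  # matches the visible line
print(predicted_class)
```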
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c2cdb393c590e97126f470f40c54d65ffefdb8bbe59e24272f3416693d1a4986
+ oid sha256:c8256fb20d6b61fe0310f53219033d25608c25418739f63cc54efbe6e6a2f833
  size 17671560
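
Both binary files are stored as Git LFS pointers: the `oid sha256:` value is the SHA-256 digest of the actual file contents, so a downloaded weight file can be checked against the pointer. A quick sketch, assuming the file has been fetched to the current directory:

```python
# Verify a downloaded LFS object against the pointer's oid
# (the oid is the sha256 of the file contents).
import hashlib

expected = "c8256fb20d6b61fe0310f53219033d25608c25418739f63cc54efbe6e6a2f833"
with open("model.safetensors", "rb") as f:  # assumed local path
    digest = hashlib.sha256(f.read()).hexdigest()
print(digest == expected)
```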
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:141535d1c3044e612f8853b14d81e1fa4aaa4e2439becfdb8741a6cb1f511a76
- size 5841
+ oid sha256:944823f3e9b0060cf3527486bbe82b1c34c2c3c5b3e9cefe53fa6360de9c5a72
+ size 5905
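
`training_args.bin` is typically the serialized `transformers` `TrainingArguments` object saved by `Trainer`. A sketch reconstructing just the hyperparameters the README documents; `output_dir` and every field not listed there are assumed placeholders or defaults:

```python
# Reconstruction of the documented hyperparameters only; all other
# fields (including output_dir) are assumptions/defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bert-micro-cybersecurity",  # assumed
    learning_rate=5e-5,
    num_train_epochs=3,
    per_device_train_batch_size=1,
)
```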
training_metadata.json CHANGED
@@ -1,11 +1,11 @@
  {
- "trained_at": 1760940612.8562534,
- "trained_at_readable": "2025-10-20 06:10:12",
- "samples_this_session": 100,
- "new_rows_this_session": 100,
- "trained_rows_total": 1398,
- "total_db_rows": 237619,
- "percentage": 0.5883367912498579,
+ "trained_at": 1760952594.7217977,
+ "trained_at_readable": "2025-10-20 09:29:54",
+ "samples_this_session": 1,
+ "new_rows_this_session": 1,
+ "trained_rows_total": 1598,
+ "total_db_rows": 237655,
+ "percentage": 0.6724032736529844,
  "final_loss": 0,
  "epochs": 3,
  "learning_rate": 5e-05