# ModernBERT-base_en-tr_jobs_sentence-transformer_isco08-clf_part-02
This model is a fine-tuned version of ai4jobs/ModernBERT-base_en-tr_jobs_sentence-transformer on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.0692
- F1 Weighted: 0.7669
- F1 Macro: 0.5431
- Accuracy: 0.7708
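The gap between weighted F1 (0.7669) and macro F1 (0.5431) suggests the model performs noticeably worse on low-support ISCO-08 classes: weighted F1 averages per-class F1 scores weighted by class support, while macro F1 averages them equally. A minimal pure-Python sketch of the two averages, on toy labels (not the model's actual outputs):

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Per-class F1, plus macro (unweighted) and weighted (by support) averages."""
    classes = sorted(set(y_true) | set(y_pred))
    support = Counter(y_true)
    per_class = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        denom = 2 * tp + fp + fn
        per_class[c] = 2 * tp / denom if denom else 0.0
    macro = sum(per_class.values()) / len(classes)
    weighted = sum(per_class[c] * support[c] for c in classes) / len(y_true)
    return per_class, macro, weighted

# Toy example: class 2 has little support and is never predicted correctly,
# so macro F1 (0.6) drops well below weighted F1 (0.72).
y_true = [0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 1]
per_class, macro, weighted = f1_scores(y_true, y_pred)
```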
## Model description
More information needed
## Intended uses & limitations
More information needed
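The card gives no usage details, but the model name suggests classification of English/Turkish job postings into ISCO-08 occupation codes. Assuming the predicted labels are 4-digit ISCO-08 unit-group codes (an assumption; the actual label scheme is not documented here), they can be rolled up to the ten standard ISCO-08 major groups via their leading digit:

```python
# Standard ISCO-08 major groups, keyed by the leading digit of a unit-group code.
ISCO08_MAJOR_GROUPS = {
    "0": "Armed forces occupations",
    "1": "Managers",
    "2": "Professionals",
    "3": "Technicians and associate professionals",
    "4": "Clerical support workers",
    "5": "Service and sales workers",
    "6": "Skilled agricultural, forestry and fishery workers",
    "7": "Craft and related trades workers",
    "8": "Plant and machine operators and assemblers",
    "9": "Elementary occupations",
}

def major_group(unit_code: str) -> str:
    """Map a 4-digit ISCO-08 unit-group code (e.g. '2512') to its major group."""
    return ISCO08_MAJOR_GROUPS[unit_code.strip()[0]]
```

For example, `major_group("2512")` (ISCO-08 unit group 2512, "Software developers") returns "Professionals".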
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: adamw_bnb_8bit (8-bit AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
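The total train batch size above follows from the per-device batch size times the gradient-accumulation steps. A small sketch restating the configuration as a plain dict (illustrative only, not the actual training script; key names mirror Hugging Face `TrainingArguments`):

```python
# Hyperparameters as reported above; names mirror Hugging Face TrainingArguments.
hparams = {
    "learning_rate": 5e-05,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 64,
    "seed": 42,
    "gradient_accumulation_steps": 2,
    "optim": "adamw_bnb_8bit",  # 8-bit AdamW from bitsandbytes
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,
    "lr_scheduler_type": "linear",
    "warmup_ratio": 0.1,
    "num_train_epochs": 6,
}

# Effective (total) train batch size = per-device batch size x accumulation steps.
total_train_batch_size = (
    hparams["per_device_train_batch_size"] * hparams["gradient_accumulation_steps"]
)
```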
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Weighted | F1 Macro | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:-----------:|:--------:|:--------:|
| 6.4904 | 0.1290 | 1000 | 1.4772 | 0.6040 | 0.2802 | 0.6462 |
| 2.5573 | 0.2580 | 2000 | 1.2522 | 0.6395 | 0.3673 | 0.6638 |
| 2.3433 | 0.3870 | 3000 | 1.1454 | 0.6578 | 0.4047 | 0.6806 |
| 2.2062 | 0.5160 | 4000 | 1.1468 | 0.6624 | 0.4110 | 0.6792 |
| 2.1015 | 0.6450 | 5000 | 1.0693 | 0.6828 | 0.4313 | 0.7034 |
| 2.0108 | 0.7740 | 6000 | 0.9765 | 0.6891 | 0.4446 | 0.7142 |
| 1.8167 | 0.9030 | 7000 | 0.9834 | 0.6957 | 0.4598 | 0.7114 |
| 1.6552 | 1.0320 | 8000 | 0.9275 | 0.7244 | 0.4871 | 0.7326 |
| 1.3873 | 1.1610 | 9000 | 0.9227 | 0.7174 | 0.4808 | 0.7296 |
| 1.3760 | 1.2900 | 10000 | 0.9020 | 0.7297 | 0.5029 | 0.7420 |
| 1.3524 | 1.4190 | 11000 | 0.8717 | 0.7351 | 0.5013 | 0.7428 |
| 1.3172 | 1.5480 | 12000 | 0.8563 | 0.7437 | 0.5106 | 0.7530 |
| 1.2506 | 1.6770 | 13000 | 0.8629 | 0.7350 | 0.5051 | 0.7410 |
| 1.2296 | 1.8060 | 14000 | 0.8507 | 0.7440 | 0.5156 | 0.7520 |
| 1.1538 | 1.9350 | 15000 | 0.8372 | 0.7488 | 0.5253 | 0.7584 |
| 0.9544 | 2.0640 | 16000 | 0.8336 | 0.7563 | 0.5409 | 0.7652 |
| 0.7021 | 2.1930 | 17000 | 0.9057 | 0.7460 | 0.5122 | 0.7546 |
| 0.7581 | 2.3220 | 18000 | 0.8897 | 0.7562 | 0.5457 | 0.7626 |
| 0.7315 | 2.4510 | 19000 | 0.8816 | 0.7607 | 0.5276 | 0.7656 |
| 0.7082 | 2.5800 | 20000 | 0.8700 | 0.7638 | 0.5396 | 0.7674 |
| 0.6794 | 2.7090 | 21000 | 0.8733 | 0.7675 | 0.5465 | 0.7732 |
| 0.7045 | 2.8380 | 22000 | 0.8698 | 0.7630 | 0.5367 | 0.7666 |
| 0.6592 | 2.9670 | 23000 | 0.8903 | 0.7574 | 0.5402 | 0.7634 |
| 0.3938 | 3.0960 | 24000 | 0.9457 | 0.7647 | 0.5518 | 0.7674 |
| 0.3166 | 3.2250 | 25000 | 1.0020 | 0.7602 | 0.5373 | 0.7656 |
| 0.3188 | 3.3540 | 26000 | 0.9943 | 0.7666 | 0.5538 | 0.7696 |
| 0.3073 | 3.4830 | 27000 | 1.0299 | 0.7599 | 0.5498 | 0.7644 |
| 0.3004 | 3.6120 | 28000 | 1.0391 | 0.7656 | 0.5561 | 0.7682 |
| 0.2886 | 3.7410 | 29000 | 1.0656 | 0.7648 | 0.5504 | 0.7688 |
| 0.2990 | 3.8700 | 30000 | 1.0814 | 0.7556 | 0.5294 | 0.7600 |
| 0.2841 | 3.9990 | 31000 | 1.0692 | 0.7669 | 0.5431 | 0.7708 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.3.0
- Tokenizers 0.21.0