jbcs2025_phi4-balanced-C5 / run_experiment.log
Commit 0b7f5ff (verified, by abarbosa): "Pushing fine-tuned model to Hugging Face Hub"
[2025-03-23 23:16:55,398][__main__][INFO] - cache_dir: /media/data/tmp
dataset:
  name: kamel-usp/aes_enem_dataset
  split: JBCS2025
training_params:
  seed: 42
  num_train_epochs: 20
  logging_steps: 100
  metric_for_best_model: QWK
  bf16: true
post_training_results:
  model_path: /workspace/jbcs2025/outputs/2025-03-23/20-41-58
experiments:
  model:
    name: microsoft/phi-4
    type: phi4_classification_lora
    num_labels: 6
    output_dir: ./results/phi4-balanced/C5
    logging_dir: ./logs/phi4-balanced/C5
    best_model_dir: ./results/phi4-balanced/C5/best_model
    lora_r: 8
    lora_dropout: 0.05
    lora_alpha: 16
    lora_target_modules: all-linear
  dataset:
    grade_index: 4
  training_id: phi4-balanced-C5
  training_params:
    weight_decay: 0.01
    warmup_ratio: 0.1
    learning_rate: 5.0e-05
    train_batch_size: 1
    eval_batch_size: 16
    gradient_accumulation_steps: 16
    gradient_checkpointing: false
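
For reference, a minimal sketch of how the adapter described by the hyperparameters above can be assembled with the peft library. This is not the project's actual code; the imports and calls are standard peft/transformers API applied to the logged values.

from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

# LoRA settings taken from the experiment config above.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,   # classification head on top of phi-4
    r=8,                          # lora_r
    lora_alpha=16,                # lora_alpha
    lora_dropout=0.05,            # lora_dropout
    target_modules="all-linear",  # lora_target_modules
)

base_model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/phi-4", num_labels=6
)
model = get_peft_model(base_model, lora_config)
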
[2025-03-23 23:16:55,400][__main__][INFO] - Starting the Fine Tuning training process.
[2025-03-23 23:16:59,427][transformers.tokenization_utils_base][INFO] - loading file vocab.json from cache at /media/data/tmp/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/vocab.json
[2025-03-23 23:16:59,427][transformers.tokenization_utils_base][INFO] - loading file merges.txt from cache at /media/data/tmp/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/merges.txt
[2025-03-23 23:16:59,427][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at /media/data/tmp/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/tokenizer.json
[2025-03-23 23:16:59,428][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /media/data/tmp/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/added_tokens.json
[2025-03-23 23:16:59,428][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /media/data/tmp/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/special_tokens_map.json
[2025-03-23 23:16:59,428][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /media/data/tmp/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/tokenizer_config.json
[2025-03-23 23:16:59,428][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
[2025-03-23 23:16:59,643][__main__][INFO] - Tokenizer function parameters - Padding: longest; Truncation: False
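
The logged padding/truncation settings correspond to a tokenizer call along these lines; a sketch, under the assumption that essays are batch-tokenized as plain text.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")

# Padding: longest; Truncation: False, as logged above.
batch = tokenizer(
    ["first essay ...", "a somewhat longer second essay ..."],
    padding="longest",  # pad each batch to its longest member
    truncation=False,   # keep essays whole; phi-4 allows 16384 positions
    return_tensors="pt",
)
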
[2025-03-23 23:17:00,914][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /media/data/tmp/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 23:17:00,915][transformers.configuration_utils][INFO] - Model config Phi3Config {
  "architectures": [
    "Phi3ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 100257,
  "embd_pdrop": 0.0,
  "eos_token_id": 100265,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "id2label": {
    "0": 0,
    "1": 40,
    "2": 80,
    "3": 120,
    "4": 160,
    "5": 200
  },
  "initializer_range": 0.02,
  "intermediate_size": 17920,
  "label2id": {
    "0": 0,
    "40": 1,
    "80": 2,
    "120": 3,
    "160": 4,
    "200": 5
  },
  "max_position_embeddings": 16384,
  "model_type": "phi3",
  "num_attention_heads": 40,
  "num_hidden_layers": 40,
  "num_key_value_heads": 10,
  "original_max_position_embeddings": 16384,
  "pad_token_id": 100349,
  "partial_rotary_factor": 1.0,
  "resid_pdrop": 0.0,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 250000,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.50.0",
  "use_cache": true,
  "vocab_size": 100352
}
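
The id2label/label2id maps encode the six ENEM competency grades (0, 40, ..., 200) as class indices 0-5, so a model prediction is converted back to a grade by a simple lookup; illustrative snippet below.

# Mirrors the id2label map in the config dump above (illustrative).
id2label = {0: 0, 1: 40, 2: 80, 3: 120, 4: 160, 5: 200}

predicted_class = 4                          # argmax over the six logits
predicted_grade = id2label[predicted_class]  # -> 160
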
[2025-03-23 23:17:00,937][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /media/data/tmp/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/model.safetensors.index.json
[2025-03-23 23:17:00,938][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.bfloat16 as defined in model's config object
[2025-03-23 23:17:00,938][transformers.modeling_utils][INFO] - Instantiating Phi3ForSequenceClassification model under default dtype torch.bfloat16.
[2025-03-23 23:17:22,309][transformers.modeling_utils][INFO] - Some weights of the model checkpoint at microsoft/phi-4 were not used when initializing Phi3ForSequenceClassification: ['lm_head.weight']
- This IS expected if you are initializing Phi3ForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing Phi3ForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[2025-03-23 23:17:22,309][transformers.modeling_utils][WARNING] - Some weights of Phi3ForSequenceClassification were not initialized from the model checkpoint at microsoft/phi-4 and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[2025-03-23 23:17:24,119][__main__][INFO] - None
[2025-03-23 23:17:24,121][transformers.training_args][INFO] - PyTorch: setting up devices
[2025-03-23 23:17:24,160][__main__][INFO] - Total steps: 620. Number of warmup steps: 62
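
These counts follow from the config: 500 training examples, per-device batch size 1, and 16 gradient-accumulation steps give 500 // 16 = 31 optimizer steps per epoch, hence 31 × 20 epochs = 620 total steps, and warmup_ratio 0.1 yields 62 warmup steps. A sketch of the arithmetic:

# Reproduces the logged step counts (integer division as in the Trainer).
num_examples, batch_size, grad_accum, epochs = 500, 1, 16, 20

steps_per_epoch = num_examples // (batch_size * grad_accum)  # 31
total_steps = steps_per_epoch * epochs                       # 620
warmup_steps = int(total_steps * 0.1)                        # 62
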
[2025-03-23 23:17:24,166][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
[2025-03-23 23:17:24,188][transformers.trainer][INFO] - Using auto half precision backend
[2025-03-23 23:17:24,189][transformers.trainer][WARNING] - No label_names provided for model class `PeftModelForSequenceClassification`. Since `PeftModel` hides base models input arguments, if label_names is not given, label_names can't be set automatically within `Trainer`. Note that empty label_names list will be used instead.
[2025-03-23 23:17:24,244][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt. If id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 23:17:24,258][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-03-23 23:17:24,258][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 23:17:24,258][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 23:17:52,247][transformers][INFO] - {'accuracy': 0.18181818181818182, 'RMSE': 83.4847109936722, 'QWK': -0.005937549700471889, 'HDIV': 0.28787878787878785, 'Macro_F1': 0.08773946360153256, 'Micro_F1': 0.18181818181818182, 'Weighted_F1': 0.08864507140369209, 'Macro_F1_(ignoring_nan)': np.float64(0.2632183908045977)}
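
QWK, the model-selection metric named in the config, is quadratic weighted kappa; the slightly negative value in this pre-training evaluation means the freshly initialized score head is no better than chance. It can be computed with scikit-learn as below; a sketch with toy labels, not the project's metric implementation.

from sklearn.metrics import cohen_kappa_score

# Quadratic weighted kappa over the six grade classes (toy data).
y_true = [0, 1, 2, 3, 4, 5, 2, 3]
y_pred = [0, 2, 2, 3, 3, 5, 1, 3]
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
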
[2025-03-23 23:17:52,250][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 23:17:52,477][transformers.trainer][INFO] - The following columns in the training set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt. If id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 23:17:52,509][transformers.trainer][INFO] - ***** Running training *****
[2025-03-23 23:17:52,509][transformers.trainer][INFO] - Num examples = 500
[2025-03-23 23:17:52,509][transformers.trainer][INFO] - Num Epochs = 20
[2025-03-23 23:17:52,509][transformers.trainer][INFO] - Instantaneous batch size per device = 1
[2025-03-23 23:17:52,509][transformers.trainer][INFO] - Total train batch size (w. parallel, distributed & accumulation) = 16
[2025-03-23 23:17:52,509][transformers.trainer][INFO] - Gradient Accumulation steps = 16
[2025-03-23 23:17:52,509][transformers.trainer][INFO] - Total optimization steps = 620
[2025-03-23 23:17:52,511][transformers.trainer][INFO] - Number of trainable parameters = 27,883,520
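
The 27,883,520 trainable parameters are the LoRA A/B matrices injected into the linear layers plus the newly initialized score head; the rest of phi-4's roughly 14B weights stay frozen. A peft-wrapped model reports this directly (sketch):

# Inspect trainable vs. frozen parameters on the peft-wrapped model.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"{trainable:,}")             # 27,883,520 for this run

model.print_trainable_parameters()  # peft's built-in summary line
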
[2025-03-23 23:25:14,205][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt. If id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 23:25:14,207][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-03-23 23:25:14,207][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 23:25:14,207][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 23:25:42,038][transformers][INFO] - {'accuracy': 0.12121212121212122, 'RMSE': 59.39084716749482, 'QWK': -0.05031171387947064, 'HDIV': 0.045454545454545414, 'Macro_F1': 0.04404761904761905, 'Micro_F1': 0.12121212121212122, 'Weighted_F1': 0.0461038961038961, 'Macro_F1_(ignoring_nan)': np.float64(0.06607142857142857)}
[2025-03-23 23:25:42,038][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 23:25:42,041][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-32
[2025-03-23 23:25:42,738][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 23:25:42,739][transformers.configuration_utils][INFO] - Model config Phi3Config {
  "architectures": [
    "Phi3ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 100257,
  "embd_pdrop": 0.0,
  "eos_token_id": 100265,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 17920,
  "max_position_embeddings": 16384,
  "model_type": "phi3",
  "num_attention_heads": 40,
  "num_hidden_layers": 40,
  "num_key_value_heads": 10,
  "original_max_position_embeddings": 16384,
  "pad_token_id": 100349,
  "partial_rotary_factor": 1.0,
  "resid_pdrop": 0.0,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 250000,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.50.0",
  "use_cache": true,
  "vocab_size": 100352
}
[2025-03-23 23:33:04,853][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt. If id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 23:33:04,856][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-03-23 23:33:04,856][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 23:33:04,856][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 23:33:32,550][transformers][INFO] - {'accuracy': 0.3181818181818182, 'RMSE': 58.77538136452586, 'QWK': 0.2504781638508129, 'HDIV': 0.09848484848484851, 'Macro_F1': 0.1403307622819818, 'Micro_F1': 0.3181818181818182, 'Weighted_F1': 0.21991964674891504, 'Macro_F1_(ignoring_nan)': np.float64(0.2806615245639636)}
[2025-03-23 23:33:32,551][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 23:33:32,554][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-64
[2025-03-23 23:33:32,940][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 23:33:32,941][transformers.configuration_utils][INFO] - Model config Phi3Config (identical to the dump above; repeated block omitted)
[2025-03-23 23:33:33,641][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-32] due to args.save_total_limit
[2025-03-23 23:40:54,983][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt. If id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 23:40:54,985][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-03-23 23:40:54,985][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 23:40:54,985][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 23:41:22,758][transformers][INFO] - {'accuracy': 0.18181818181818182, 'RMSE': 54.27204202399745, 'QWK': 0.19317838816782373, 'HDIV': 0.022727272727272707, 'Macro_F1': 0.1194578636607622, 'Micro_F1': 0.18181818181818182, 'Weighted_F1': 0.11230176157712389, 'Macro_F1_(ignoring_nan)': np.float64(0.2389157273215244)}
[2025-03-23 23:41:22,758][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 23:41:22,761][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-96
[2025-03-23 23:41:23,054][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 23:41:23,055][transformers.configuration_utils][INFO] - Model config Phi3Config (identical to the dump above; repeated block omitted)
[2025-03-23 23:48:45,477][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt. If id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 23:48:45,480][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-03-23 23:48:45,480][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 23:48:45,480][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 23:49:13,271][transformers][INFO] - {'accuracy': 0.29545454545454547, 'RMSE': 62.957417066111944, 'QWK': 0.264367032517554, 'HDIV': 0.13636363636363635, 'Macro_F1': 0.17574013949013947, 'Micro_F1': 0.29545454545454547, 'Weighted_F1': 0.24230158730158732, 'Macro_F1_(ignoring_nan)': np.float64(0.21088816738816737)}
[2025-03-23 23:49:13,272][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 23:49:13,274][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-128
[2025-03-23 23:49:13,582][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 23:49:13,583][transformers.configuration_utils][INFO] - Model config Phi3Config (identical to the dump above; repeated block omitted)
[2025-03-23 23:49:14,345][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-64] due to args.save_total_limit
[2025-03-23 23:49:14,390][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-96] due to args.save_total_limit
[2025-03-23 23:56:36,123][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt. If id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-23 23:56:36,126][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-03-23 23:56:36,126][transformers.trainer][INFO] - Num examples = 132
[2025-03-23 23:56:36,126][transformers.trainer][INFO] - Batch size = 16
[2025-03-23 23:57:03,856][transformers][INFO] - {'accuracy': 0.38636363636363635, 'RMSE': 50.81159495448044, 'QWK': 0.6200335153251527, 'HDIV': 0.0757575757575758, 'Macro_F1': 0.2967917478882391, 'Micro_F1': 0.38636363636363635, 'Weighted_F1': 0.3230093330970524, 'Macro_F1_(ignoring_nan)': np.float64(0.44518762183235866)}
[2025-03-23 23:57:03,857][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-23 23:57:03,860][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-160
[2025-03-23 23:57:04,170][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-23 23:57:04,171][transformers.configuration_utils][INFO] - Model config Phi3Config (identical to the dump above; repeated block omitted)
[2025-03-23 23:57:04,948][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-128] due to args.save_total_limit
[2025-03-24 00:04:26,242][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt. If id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-24 00:04:26,244][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-03-24 00:04:26,244][transformers.trainer][INFO] - Num examples = 132
[2025-03-24 00:04:26,244][transformers.trainer][INFO] - Batch size = 16
[2025-03-24 00:04:54,063][transformers][INFO] - {'accuracy': 0.3409090909090909, 'RMSE': 63.3413073130103, 'QWK': 0.5011417610522471, 'HDIV': 0.16666666666666663, 'Macro_F1': 0.2601190476190476, 'Micro_F1': 0.3409090909090909, 'Weighted_F1': 0.2519480519480519, 'Macro_F1_(ignoring_nan)': np.float64(0.3121428571428571)}
[2025-03-24 00:04:54,064][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-24 00:04:54,067][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-192
[2025-03-24 00:05:05,042][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-24 00:05:05,043][transformers.configuration_utils][INFO] - Model config Phi3Config (identical to the dump above; repeated block omitted)
[2025-03-24 00:12:26,477][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt. If id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-24 00:12:26,479][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-03-24 00:12:26,479][transformers.trainer][INFO] - Num examples = 132
[2025-03-24 00:12:26,479][transformers.trainer][INFO] - Batch size = 16
[2025-03-24 00:12:54,193][transformers][INFO] - {'accuracy': 0.3409090909090909, 'RMSE': 61.987290975039734, 'QWK': 0.517926267281106, 'HDIV': 0.15909090909090906, 'Macro_F1': 0.24659673516262667, 'Micro_F1': 0.3409090909090909, 'Weighted_F1': 0.2650430103918476, 'Macro_F1_(ignoring_nan)': np.float64(0.36989510274394)}
[2025-03-24 00:12:54,194][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-24 00:12:54,197][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-224
[2025-03-24 00:12:54,509][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-24 00:12:54,510][transformers.configuration_utils][INFO] - Model config Phi3Config (identical to the dump above; repeated block omitted)
[2025-03-24 00:12:55,227][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-192] due to args.save_total_limit
[2025-03-24 00:20:16,667][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt. If id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-24 00:20:16,669][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-03-24 00:20:16,669][transformers.trainer][INFO] - Num examples = 132
[2025-03-24 00:20:16,669][transformers.trainer][INFO] - Batch size = 16
[2025-03-24 00:20:44,402][transformers][INFO] - {'accuracy': 0.4090909090909091, 'RMSE': 47.60952285695233, 'QWK': 0.5631150442477876, 'HDIV': 0.045454545454545414, 'Macro_F1': 0.3140257192676547, 'Micro_F1': 0.4090909090909091, 'Weighted_F1': 0.36155693816984147, 'Macro_F1_(ignoring_nan)': np.float64(0.3768308631211857)}
[2025-03-24 00:20:44,403][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-24 00:20:44,406][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-256
[2025-03-24 00:20:44,706][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-24 00:20:44,707][transformers.configuration_utils][INFO] - Model config Phi3Config (identical to the dump above; repeated block omitted)
[2025-03-24 00:20:45,443][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-224] due to args.save_total_limit
[2025-03-24 00:28:06,290][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt. If id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-24 00:28:06,292][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-03-24 00:28:06,292][transformers.trainer][INFO] - Num examples = 132
[2025-03-24 00:28:06,292][transformers.trainer][INFO] - Batch size = 16
[2025-03-24 00:28:34,075][transformers][INFO] - {'accuracy': 0.3333333333333333, 'RMSE': 57.52469825293227, 'QWK': 0.5462832393231265, 'HDIV': 0.11363636363636365, 'Macro_F1': 0.2698364410228817, 'Micro_F1': 0.3333333333333333, 'Weighted_F1': 0.29051889729855834, 'Macro_F1_(ignoring_nan)': np.float64(0.32380372922745804)}
[2025-03-24 00:28:34,075][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-24 00:28:34,077][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-288
[2025-03-24 00:28:34,585][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-24 00:28:34,586][transformers.configuration_utils][INFO] - Model config Phi3Config (identical to the dump above; repeated block omitted)
[2025-03-24 00:28:35,556][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-256] due to args.save_total_limit
[2025-03-24 00:35:57,242][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt. If id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-24 00:35:57,244][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-03-24 00:35:57,244][transformers.trainer][INFO] - Num examples = 132
[2025-03-24 00:35:57,244][transformers.trainer][INFO] - Batch size = 16
[2025-03-24 00:36:24,886][transformers][INFO] - {'accuracy': 0.3560606060606061, 'RMSE': 55.37749241945383, 'QWK': 0.5304010349288486, 'HDIV': 0.0757575757575758, 'Macro_F1': 0.3200910987796234, 'Micro_F1': 0.3560606060606061, 'Weighted_F1': 0.36146696131793, 'Macro_F1_(ignoring_nan)': np.float64(0.3200910987796234)}
[2025-03-24 00:36:24,887][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-24 00:36:24,890][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-320
[2025-03-24 00:36:25,182][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-24 00:36:25,182][transformers.configuration_utils][INFO] - Model config Phi3Config (identical to the dump above; repeated block omitted)
[2025-03-24 00:36:25,925][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-288] due to args.save_total_limit
[2025-03-24 00:36:25,966][transformers.trainer][INFO] -
Training completed. Do not forget to share your model on huggingface.co/models =)
[2025-03-24 00:36:25,966][transformers.trainer][INFO] - Loading best model from /workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-160 (score: 0.6200335153251527).
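
Checkpoint-160's QWK of 0.6200 was the best evaluation score of the run, so the trainer reloads it before the final evaluations. This behavior is driven by TrainingArguments along these lines; a sketch consistent with the logged values, not the run's exact arguments.

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./results/phi4-balanced/C5",
    num_train_epochs=20,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=5e-5,
    weight_decay=0.01,
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="epoch",        # evaluate once per epoch, as logged
    save_strategy="epoch",
    metric_for_best_model="QWK",  # from the experiment config
    greater_is_better=True,
    load_best_model_at_end=True,  # restores checkpoint-160 here
    save_total_limit=1,           # inferred from the checkpoint deletions
)
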
[2025-03-24 00:36:26,125][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-03-23/23-16-55/results/phi4-balanced/C5/checkpoint-320] due to args.save_total_limit
[2025-03-24 00:36:26,228][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt. If id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-24 00:36:26,230][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-03-24 00:36:26,230][transformers.trainer][INFO] - Num examples = 132
[2025-03-24 00:36:26,230][transformers.trainer][INFO] - Batch size = 16
[2025-03-24 00:36:53,892][transformers][INFO] - {'accuracy': 0.38636363636363635, 'RMSE': 50.81159495448044, 'QWK': 0.6200335153251527, 'HDIV': 0.0757575757575758, 'Macro_F1': 0.2967917478882391, 'Micro_F1': 0.38636363636363635, 'Weighted_F1': 0.3230093330970524, 'Macro_F1_(ignoring_nan)': np.float64(0.44518762183235866)}
[2025-03-24 00:36:53,894][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-24 00:36:53,896][__main__][INFO] - Training completed successfully.
[2025-03-24 00:36:53,896][__main__][INFO] - Running on Test
[2025-03-24 00:36:53,896][transformers.trainer][INFO] - The following columns in the evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt. If id, reference, essay_year, essay_text, supporting_text, prompt, grades, id_prompt are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-03-24 00:36:53,898][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-03-24 00:36:53,898][transformers.trainer][INFO] - Num examples = 138
[2025-03-24 00:36:53,898][transformers.trainer][INFO] - Batch size = 16
[2025-03-24 00:37:23,490][transformers][INFO] - {'accuracy': 0.3115942028985507, 'RMSE': 60.91095901015048, 'QWK': 0.45744053469628465, 'HDIV': 0.1376811594202898, 'Macro_F1': 0.18965878221692178, 'Micro_F1': 0.3115942028985507, 'Weighted_F1': 0.24393006835878425, 'Macro_F1_(ignoring_nan)': np.float64(0.2844881733253827)}
[2025-03-24 00:37:23,491][tensorboardX.summary][INFO] - Summary name eval/Macro_F1_(ignoring_nan) is illegal; using eval/Macro_F1__ignoring_nan_ instead.
[2025-03-24 00:37:23,493][transformers.trainer][INFO] - Saving model checkpoint to ./results/phi4-balanced/C5/best_model
[2025-03-24 00:37:23,784][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--microsoft--phi-4/snapshots/187ef0342fff0eb3333be9f00389385e95ef0b61/config.json
[2025-03-24 00:37:23,785][transformers.configuration_utils][INFO] - Model config Phi3Config (identical to the dump above; repeated block omitted)
[2025-03-24 00:37:24,325][__main__][INFO] - Fine Tuning Finished.
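
The Hub commit carrying this log, "Pushing fine-tuned model to Hugging Face Hub", corresponds to an upload along these lines; the repo id below is a placeholder, while the push_to_hub calls are standard transformers/peft API.

# Sketch: publish the best adapter and tokenizer to the Hub.
model.push_to_hub("<namespace>/jbcs2025_phi4-balanced-C5")
tokenizer.push_to_hub("<namespace>/jbcs2025_phi4-balanced-C5")
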