TajaKuzman committed · Commit 75cf051 · verified · 1 Parent(s): 6a3d7a8

Update README.md

Files changed (1): README.md (+3 −2)
README.md CHANGED

@@ -25,7 +25,7 @@ following the [LLM teacher-student framework](https://ieeexplore.ieee.org/docume
 Evaluation of the GPT model has shown that its annotation performance is
 comparable to those of human annotators.
 
-The fine-tuned ParlaCAP model achieves 0.808 in macro-F1 on an English test set (440 instances from ParlaMint-GB 4.1, balanced by labels)
+The fine-tuned ParlaCAP model achieves 0.752 in macro-F1 on an English test set (440 instances from ParlaMint-GB 4.1, balanced by labels)
 and 0.656 in macro-F1 on a Croatian test set (440 instances from ParlaMint-HR 4.1, balanced by labels).
 
 An additional evaluation on smaller samples from Czech ParlaMint-CZ, Bulgarian ParlaMint-BG and Ukrainian ParlaMint-UA datasets shows

@@ -43,9 +43,10 @@ Performance of the model on the remaining instances (all instances not annotated
 
 | | micro-F1 | macro-F1 | accuracy |
 |:---|-----------:|-----------:|-----------:|
-| EN | 0.838 | 0.838 | 0.838 |
+| EN | 0.780 | 0.779 | 0.779 |
 | HR | 0.724 | 0.726 | 0.724 |
 
+
 ## Use
 
 To use the model:
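The edited README reports both micro-F1 and macro-F1 for the same test sets. As a side note on how those two averages differ (macro-F1 averages per-class F1 equally, so it is more sensitive to rare labels, while micro-F1 pools all counts and, for single-label classification, equals accuracy), here is a minimal pure-Python sketch with made-up labels, not the actual ParlaCAP test data:

```python
def f1_scores(y_true, y_pred):
    """Per-class F1, macro-F1 (unweighted mean over classes) and
    micro-F1 (pooled counts) for single-label classification."""
    labels = sorted(set(y_true) | set(y_pred))
    per_class = {}
    tp_all = fp_all = fn_all = 0
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        per_class[lab] = 2 * tp / denom if denom else 0.0
        tp_all += tp
        fp_all += fp
        fn_all += fn
    macro = sum(per_class.values()) / len(labels)
    micro = 2 * tp_all / (2 * tp_all + fp_all + fn_all)
    return per_class, macro, micro

# Toy example: one "a" misclassified as "b"
per_class, macro, micro = f1_scores(["a", "a", "b", "b"],
                                    ["a", "b", "b", "b"])
# micro-F1 here equals accuracy (3 of 4 correct = 0.75)
```

This is why the two columns in the table can diverge: on a label-balanced test set they stay close, but on skewed data macro-F1 drops whenever a minority class is predicted poorly.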