Update README.md

README.md

@@ -25,7 +25,7 @@ following the [LLM teacher-student framework](https://ieeexplore.ieee.org/docume
 Evaluation of the GPT model has shown that its annotation performance is
 comparable to those of human annotators.
 
-The fine-tuned ParlaCAP model achieves 0.
+The fine-tuned ParlaCAP model achieves 0.752 in macro-F1 on an English test set (440 instances from ParlaMint-GB 4.1, balanced by labels)
 and 0.656 in macro-F1 on a Croatian test set (440 instances from ParlaMint-HR 4.1, balanced by labels).
 
 An additional evaluation on smaller samples from Czech ParlaMint-CZ, Bulgarian ParlaMint-BG and Ukrainian ParlaMint-UA datasets shows
@@ -43,9 +43,10 @@ Performance of the model on the remaining instances (all instances not annotated
 
 | | micro-F1 | macro-F1 | accuracy |
 |:---|-----------:|-----------:|-----------:|
-| EN | 0.
+| EN | 0.780 | 0.779 | 0.779 |
 | HR | 0.724 | 0.726 | 0.724 |
 
+
 ## Use
 
 To use the model: