Update README.md
README.md
@@ -26,7 +26,7 @@ Evaluation of the GPT model has shown that its annotation performance is
 comparable to those of human annotators.
 
 The fine-tuned ParlaCAP model achieves 0.752 in macro-F1 on an English test set (440 instances from ParlaMint-GB 4.1, balanced by labels)
-and 0.
+and 0.694 in macro-F1 on a Croatian test set (440 instances from ParlaMint-HR 4.1, balanced by labels).
 
 An additional evaluation on smaller samples from Czech ParlaMint-CZ, Bulgarian ParlaMint-BG and Ukrainian ParlaMint-UA datasets shows
 that the model achieves macro-F1 scores of 0.736, 0.75 and 0.805 on these three test datasets, respectively.
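The README reports macro-F1 scores (e.g. 0.752 on the English test set). As a reminder of what that metric means, here is a minimal pure-Python sketch of macro-F1: per-class F1 averaged over all classes with equal weight, which is why it pairs naturally with label-balanced test sets. The labels in the usage example are illustrative, not the actual ParlaCAP (CAP) label set.

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then take the unweighted mean.

    Each class contributes equally regardless of its frequency, unlike
    micro- or weighted-averaging.
    """
    labels = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for label in labels:
        # Per-class confusion counts for a one-vs-rest view of this label.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        denom = precision + recall
        f1_scores.append(2 * precision * recall / denom if denom else 0.0)
    return sum(f1_scores) / len(f1_scores)


# Illustrative two-class example (hypothetical labels, not CAP categories):
gold = ["health", "health", "economy", "economy"]
pred = ["health", "economy", "economy", "economy"]
score = macro_f1(gold, pred)  # mean of F1("health")=2/3 and F1("economy")=0.8
```

This matches what `sklearn.metrics.f1_score(..., average="macro")` computes, without the scikit-learn dependency.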