Update README.md
README.md CHANGED
@@ -57,6 +57,8 @@ MAMUT-MathBERT is intended for downstream tasks that require improved mathematic
 
 **Note: This model was saved without the MLM or NSP heads and requires fine-tuning before use in downstream tasks.**
 
+Similarly trained models are [MAMUT-BERT based on `bert-base-cased`](https://huggingface.co/aieng-lab/bert-base-cased-mamut) and [MAMUT-MPBERT based on `AnReu/math_structure_bert`](https://huggingface.co/ddrg/math_structure_bert) (best of the three models according to our evaluation).
+
 ## Training Details
 
 Training configurations are described in [Appendix C of the MAMUT paper](https://arxiv.org/abs/2502.20855).