
> ⚠️ **Note:** This model has been re-uploaded to a new organization as part of a consolidated collection.  
> The updated version is available at [https://huggingface.co/aieng-lab/bert-base-cased-mamut](https://huggingface.co/aieng-lab/bert-base-cased-mamut).  
> Please refer to the new repository for future updates, documentation, and related models.


# MAMUT BERT (Mathematical Structure-Aware BERT)

<!-- Provide a quick summary of what the model is/does. -->

A pretrained model based on [bert-base-cased](https://huggingface.co/bert-base-cased) with further mathematical pretraining, introduced in [MAMUT: A Novel Framework for Modifying Mathematical Formulas for the Generation of Specialized Datasets for Language Model Training](https://arxiv.org/abs/2502.20855).

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This model has been mathematically pretrained on four tasks/datasets:

- **[Mathematical Formulas (MF)](https://huggingface.co/datasets/ddrg/math_formulas):** Masked Language Modeling (MLM) on math formulas written in LaTeX
- **[Mathematical Texts (MT)](https://huggingface.co/datasets/ddrg/math_text):** MLM on mathematical texts (i.e., texts containing LaTeX formulas), where the masked tokens are more likely to be formula tokens or *mathematical words* (e.g., *sum*, *one*, ...)
- **[Named Math Formulas (NMF)](https://huggingface.co/datasets/ddrg/named_math_formulas):** Next-Sentence-Prediction (NSP)-like task pairing the name of a well-known mathematical identity (e.g., the Pythagorean Theorem) with a formula representation; the task is to classify whether the formula matches the identity described by the name
- **[Math Formula Retrieval (MFR)](https://huggingface.co/datasets/ddrg/math_formula_retrieval):** NSP-like task pairing two formulas; the task is to decide whether both formulas describe the same mathematical concept (identity). The sentence-pair encoding used for both NSP-like tasks is sketched below.
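
For the two NSP-like tasks, inputs are presumably encoded as standard BERT sentence pairs. A minimal sketch, assuming the consolidated hub id `aieng-lab/bert-base-cased-mamut` from the note above (the example pair is illustrative, not taken from the datasets):

```python
from transformers import AutoTokenizer

# Tokenizer including the 300 added mathematical LaTeX tokens
tokenizer = AutoTokenizer.from_pretrained("aieng-lab/bert-base-cased-mamut")

# NMF-style pair: identity name vs. candidate formula,
# encoded as [CLS] name [SEP] formula [SEP]
encoded = tokenizer("Pythagorean Theorem", r"a^2 + b^2 = c^2")
print(tokenizer.decode(encoded["input_ids"]))
```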

![Training Overview](mamutbert-training.png)

Compared to bert-base-cased, 300 additional mathematical [LaTeX tokens](added_tokens.json) were added to the vocabulary before the mathematical pretraining.
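
The effect of the added tokens can be inspected by comparing tokenizations against the base model. A minimal sketch, again assuming the hub id from the note above; which LaTeX commands are among the 300 added tokens is not listed here, so the exact outputs are an assumption:

```python
from transformers import AutoTokenizer

base = AutoTokenizer.from_pretrained("bert-base-cased")
mamut = AutoTokenizer.from_pretrained("aieng-lab/bert-base-cased-mamut")

formula = r"\frac{a}{b} \cdot \sqrt{x}"
print(base.tokenize(formula))   # LaTeX commands split into many subword pieces
print(mamut.tokenize(formula))  # added tokens may keep commands like \frac intact
```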



- **Further pretrained from model:** [bert-base-cased](https://huggingface.co/google-bert/bert-base-cased)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [aieng-lab/transformer-math-pretraining](https://github.com/aieng-lab/transformer-math-pretraining)
- **Paper:** [MAMUT: A Novel Framework for Modifying Mathematical Formulas for the Generation of Specialized Datasets for Language Model Training](https://arxiv.org/abs/2502.20855)

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

This model is intended primarily as a base model for fine-tuning on math-aware downstream tasks (e.g., formula understanding or formula retrieval); it is not fine-tuned for a specific downstream task out of the box.
## How to Get Started with the Model

Use the code below to get started with the model.

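A minimal sketch, assuming the consolidated hub id from the note above and that the checkpoint exposes an MLM head (the model was pretrained with MLM objectives):

```python
from transformers import pipeline

# Fill-mask with the mathematically pretrained model
fill = pipeline("fill-mask", model="aieng-lab/bert-base-cased-mamut")

# [MASK] is the BERT mask token
for pred in fill(r"The Pythagorean Theorem states that $a^2 + b^2 = [MASK]^2$."):
    print(pred["token_str"], round(pred["score"], 3))
```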

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The model was trained on the four MAMUT-generated datasets listed under [Model Description](#model-description): [Mathematical Formulas](https://huggingface.co/datasets/ddrg/math_formulas), [Mathematical Texts](https://huggingface.co/datasets/ddrg/math_text), [Named Math Formulas](https://huggingface.co/datasets/ddrg/named_math_formulas), and [Math Formula Retrieval](https://huggingface.co/datasets/ddrg/math_formula_retrieval).
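
The datasets can be loaded with the `datasets` library; a minimal sketch (the `train` split name is an assumption):

```python
from datasets import load_dataset

# One of the four pretraining datasets linked above
mf = load_dataset("ddrg/math_formulas", split="train")
print(mf[0])
```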

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The model was further pretrained on the four tasks described under [Model Description](#model-description), after extending the vocabulary with the 300 mathematical LaTeX tokens. Training ran on 8x A100 GPUs for 48 hours (see [Environmental Impact](#environmental-impact)).
## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary



## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

- **Hardware Type:** 8x NVIDIA A100
- **Hours used:** 48
- **Compute Region:** Germany


## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]