jdrechsel committed · Commit 623d5c6 · verified · 1 Parent(s): acc1c60

Update README.md

Files changed (1)
  1. README.md +111 -3
README.md CHANGED
@@ -1,4 +1,9 @@
 ---
+ # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
+ # Doc / guide: https://huggingface.co/docs/hub/model-cards
+ {}
+ ---
+ ---
 datasets:
 - ddrg/math_formulas
 - ddrg/math_formula_retrieval
@@ -6,10 +11,113 @@ datasets:
 - ddrg/named_math_formulas
 ---

- # Mathematical Structure Aware BERT
+ # MAMUT BERT (Mathematical Structure Aware BERT)

 <!-- Provide a quick summary of what the model is/does. -->

- Pretrained model based on [bert-base-cased](https://huggingface.co/bert-base-cased) with further mathematical pre-training.
-
- Compared to bert-base-cased, 300 additional mathematical [LaTeX tokens](added_tokens.json) have been added before the mathematical pre-training.
+ Pretrained model based on [bert-base-cased](https://huggingface.co/bert-base-cased) with further mathematical pre-training, introduced in [MAMUT: A Novel Framework for Modifying Mathematical Formulas for the Generation of Specialized Datasets for Language Model Training](https://arxiv.org/abs/2502.20855).
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This model has been mathematically pretrained on four tasks/datasets:
+
+ - **[Mathematical Formulas (MF)](https://huggingface.co/datasets/ddrg/math_formulas):** Masked Language Modeling (MLM) task on math formulas written in LaTeX
+ - **[Mathematical Texts (MT)](https://huggingface.co/datasets/ddrg/math_text):** MLM task on mathematical texts (i.e., texts containing LaTeX formulas). The masked tokens are more likely to be one of the formula tokens or *mathematical words* (e.g., *sum*, *one*, ...)
+ - **[Named Math Formulas (NMF)](https://huggingface.co/datasets/ddrg/named_math_formulas):** Next-Sentence-Prediction (NSP)-like task associating the name of a well-known mathematical identity (e.g., Pythagorean Theorem) with a formula representation; the task is to classify whether the formula matches the identity described by the name
+ - **[Math Formula Retrieval (MFR)](https://huggingface.co/datasets/ddrg/math_formula_retrieval):** NSP-like task associating two formulas; the task is to decide whether both describe the same mathematical concept (identity). A minimal encoding sketch for these two pair tasks follows below.
+
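To make the NSP-like pair format concrete, here is a minimal sketch that encodes a name/formula pair the way BERT encodes sentence pairs and scores it with a binary classification head. The repository id is hypothetical (this card does not state the model's Hub id), and if the released checkpoint contains only the MLM weights, the classification head below starts out randomly initialized:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repository id -- replace with this model's actual Hub id.
model_id = "aieng-lab/math-pretrained-bert"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Binary head: does segment B match segment A? If the checkpoint ships only
# MLM weights, this head is freshly initialized and needs fine-tuning first.
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# NMF-style pair: identity name as segment A, candidate formula as segment B.
# An MFR-style pair would instead put a second formula in segment A.
inputs = tokenizer("Pythagorean Theorem", r"a^2 + b^2 = c^2", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # [P(no match), P(match)]
```

The same two-segment encoding covers both NMF (name vs. formula) and MFR (formula vs. formula).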
+ ![Training Overview](img/mamutbert-training.png)
+
+ Compared to bert-base-cased, 300 additional mathematical [LaTeX tokens](added_tokens.json) have been added before the mathematical pre-training.
+
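To see the effect of the extended vocabulary, a small sketch (again with a hypothetical repository id) compares this model's tokenizer against plain bert-base-cased:

```python
from transformers import AutoTokenizer

# Hypothetical repository id -- replace with this model's actual Hub id.
tok_math = AutoTokenizer.from_pretrained("aieng-lab/math-pretrained-bert")
tok_base = AutoTokenizer.from_pretrained("bert-base-cased")

# Per this card, 300 mathematical LaTeX tokens were added on top of bert-base-cased.
print(len(tok_math) - len(tok_base))

# The added tokens should let LaTeX split into fewer pieces:
print(tok_base.tokenize(r"\frac{a}{b}"))
print(tok_math.tokenize(r"\frac{a}{b}"))
```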
+
+ - **Further pretrained from model:** [bert-base-cased](https://huggingface.co/google-bert/bert-base-cased)
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [aieng-lab/transformer-math-pretraining](https://github.com/aieng-lab/transformer-math-pretraining)
+ - **Paper:** [MAMUT: A Novel Framework for Modifying Mathematical Formulas for the Generation of Specialized Datasets for Language Model Training](https://arxiv.org/abs/2502.20855)
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
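Until the snippet above is filled in, a minimal sketch (hypothetical repository id, as in the earlier examples) that exercises the pretrained MLM head through the `fill-mask` pipeline:

```python
from transformers import pipeline

# Hypothetical repository id -- replace with this model's actual Hub id.
fill = pipeline("fill-mask", model="aieng-lab/math-pretrained-bert")

# The model was pretrained with MLM on LaTeX formulas, so a masked formula
# token is a natural smoke test.
for pred in fill(r"a^2 + b^2 = [MASK]^2"):
    print(pred["token_str"], round(pred["score"], 3))
```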
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ - **Hardware Type:** 8xA100
+ - **Hours used:** 48
+ - **Compute Region:** Germany
+
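For rough context only, electricity use and emissions can be estimated from these figures; the per-GPU power draw and the grid carbon intensity below are assumptions, not values reported here:

```python
# Back-of-envelope estimate -- assumed values, not from the model card.
gpus = 8                   # 8xA100, per the card
hours = 48                 # assumed wall-clock hours for all GPUs together
avg_power_kw = 0.4         # assumed ~400 W average draw per A100
grid_gco2_per_kwh = 380    # assumed average German grid intensity (gCO2eq/kWh)

energy_kwh = gpus * hours * avg_power_kw
emissions_kg = energy_kwh * grid_gco2_per_kwh / 1000
print(f"~{energy_kwh:.0f} kWh, ~{emissions_kg:.0f} kg CO2eq")
```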
+
+ ## Citation
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
123