---
library_name: transformers
license: mit
base_model: sumitD/table-transformer-structure-recognition-v1.1-all-finetuned
tags:
- generated_from_trainer
model-index:
- name: table-transformer-structure-recognition-v1.1-all-finetuned
  results: []
---

# table-transformer-structure-recognition-v1.1-all-finetuned

This model is a fine-tuned version of [sumitD/table-transformer-structure-recognition-v1.1-all-finetuned](https://huggingface.co/sumitD/table-transformer-structure-recognition-v1.1-all-finetuned) on an unspecified dataset.
It achieves the following results on the evaluation set (COCO-style mAP/mAR; a value of -1.0 means that size bucket contains no ground-truth objects):
- Loss: 0.1419
- Map: 0.9219
- Map 50: 0.966
- Map 75: 0.9496
- Map Small: -1.0
- Map Medium: 0.8782
- Map Large: 0.921
- Mar 1: 0.5537
- Mar 10: 0.9407
- Mar 100: 0.9694
- Mar Small: -1.0
- Mar Medium: 0.9079
- Mar Large: 0.9693
- Map Table: 0.9882
- Mar 100 Table: 0.9964
- Map Table column: 0.9732
- Mar 100 Table column: 0.9892
- Map Table column header: 0.9543
- Mar 100 Table column header: 0.9847
- Map Table projected row header: 0.8673
- Mar 100 Table projected row header: 0.964
- Map Table row: 0.9584
- Mar 100 Table row: 0.9838
- Map Table spanning cell: 0.7903
- Mar 100 Table spanning cell: 0.8983
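
The per-class rows above follow the COCO detection protocol. For reference only (the card does not state which library produced these numbers), here is a minimal sketch of how such per-class mAP/mAR values are commonly computed with `torchmetrics` (requires `pycocotools`); the boxes, scores, and labels below are illustrative:

```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

# class_metrics=True adds the per-class AP/AR entries
# (table, table column, table row, ...) like those reported above.
metric = MeanAveragePrecision(box_format="xyxy", class_metrics=True)

preds = [{
    "boxes": torch.tensor([[10.0, 10.0, 200.0, 300.0]]),  # illustrative detection
    "scores": torch.tensor([0.97]),
    "labels": torch.tensor([0]),  # e.g. 0 = "table"
}]
targets = [{
    "boxes": torch.tensor([[12.0, 11.0, 198.0, 299.0]]),  # illustrative ground truth
    "labels": torch.tensor([0]),
}]

metric.update(preds, targets)
print(metric.compute())  # map, map_50, map_75, mar_1, mar_10, mar_100, per-class values, ...
```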

## Model description

This is a DETR-style Table Transformer checkpoint for table structure recognition: given an image of a table, it predicts bounding boxes for six structure classes, namely table, table column, table row, table column header, table projected row header, and table spanning cell (the classes reported in the evaluation above). Beyond what the evaluation reveals, more information is needed.
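
A minimal usage sketch, assuming the repository ships an image-processor config alongside the weights; the image path and the 0.6 score threshold are illustrative assumptions, not values from this card:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

checkpoint = "sumitD/table-transformer-structure-recognition-v1.1-all-finetuned"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = TableTransformerForObjectDetection.from_pretrained(checkpoint)

image = Image.open("table_crop.png").convert("RGB")  # hypothetical cropped-table image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw DETR logits/boxes into thresholded detections in pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.6, target_sizes=target_sizes
)[0]

for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score:.3f} {[round(v, 1) for v in box.tolist()]}")
```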

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 5
- mixed_precision_training: Native AMP
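
As a sketch only: the list above maps onto `transformers.TrainingArguments` roughly as below. The model, dataset, and collator wiring are not documented in this card, and `output_dir` is an assumed name:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="table-transformer-structure-recognition-v1.1-all-finetuned",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed precision
)
```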

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Map    | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1  | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Table | Mar 100 Table | Map Table column | Mar 100 Table column | Map Table column header | Mar 100 Table column header | Map Table projected row header | Mar 100 Table projected row header | Map Table row | Mar 100 Table row | Map Table spanning cell | Mar 100 Table spanning cell |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:---------:|:-------------:|:----------------:|:--------------------:|:-----------------------:|:---------------------------:|:------------------------------:|:----------------------------------:|:-------------:|:-----------------:|:-----------------------:|:---------------------------:|
| 0.2338        | 1.0   | 23715  | 0.1991          | 0.8756 | 0.9505 | 0.9307 | -1.0      | 0.7912     | 0.8748    | 0.5395 | 0.9175 | 0.9471  | -1.0      | 0.8518     | 0.947     | 0.9844    | 0.9935        | 0.9582           | 0.981                | 0.9111                  | 0.9647                      | 0.7701                         | 0.9364                             | 0.9147        | 0.9577            | 0.7149                  | 0.8496                      |
| 0.2048        | 2.0   | 47430  | 0.1915          | 0.8827 | 0.9567 | 0.9384 | -1.0      | 0.8103     | 0.8823    | 0.54   | 0.9197 | 0.9498  | -1.0      | 0.8538     | 0.9499    | 0.9855    | 0.9944        | 0.9564           | 0.9819               | 0.9047                  | 0.9527                      | 0.7905                         | 0.9437                             | 0.9222        | 0.9651            | 0.7371                  | 0.8607                      |
| 0.1841        | 3.0   | 71145  | 0.1605          | 0.9087 | 0.9636 | 0.9467 | -1.0      | 0.8373     | 0.9077    | 0.548  | 0.933  | 0.9616  | -1.0      | 0.8868     | 0.9613    | 0.9836    | 0.9935        | 0.9703           | 0.9888               | 0.94                    | 0.9771                      | 0.8468                         | 0.9545                             | 0.9466        | 0.9781            | 0.765                   | 0.8778                      |
| 0.1914        | 4.0   | 94860  | 0.1496          | 0.9181 | 0.9652 | 0.9496 | -1.0      | 0.8741     | 0.917     | 0.552  | 0.9387 | 0.9678  | -1.0      | 0.9024     | 0.9676    | 0.9886    | 0.9968        | 0.9724           | 0.9886               | 0.9508                  | 0.9824                      | 0.8561                         | 0.9628                             | 0.9574        | 0.9829            | 0.7836                  | 0.8934                      |
| 0.1739        | 5.0   | 118575 | 0.1419          | 0.9219 | 0.966  | 0.9496 | -1.0      | 0.8782     | 0.921     | 0.5537 | 0.9407 | 0.9694  | -1.0      | 0.9079     | 0.9693    | 0.9882    | 0.9964        | 0.9732           | 0.9892               | 0.9543                  | 0.9847                      | 0.8673                         | 0.964                              | 0.9584        | 0.9838            | 0.7903                  | 0.8983                      |


### Framework versions

- Transformers 4.48.2
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0