rubentito committed on
Commit 2596f14
1 Parent(s): 45d0fc5

Update README.md

Files changed (1)
  1. README.md +13 -14
README.md CHANGED
@@ -17,19 +17,6 @@ This is BERT trained on [SinglePage DocVQA](https://arxiv.org/abs/2007.00398) an
  This model was used as a baseline in [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/pdf/2212.05935.pdf).
  - Training hyperparameters can be found in Table 8 of Appendix D.
 
- ## Model results
-
- Extended experimentation can be found in Table 2 of [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/pdf/2212.05935.pdf).
- You can also check the live leaderboard at the [RRC Portal](https://rrc.cvc.uab.es/?ch=17&com=evaluation&task=4).
- | Model | HF name | ANLS | APPA |
- |-----------------------------------------------------------------------------------|:--------------------------------------|:-------------:|:---------:|
- | [**Bert-large**](https://huggingface.co/rubentito/bert-large-mpdocvqa) | rubentito/bert-large-mpdocvqa | 0.4183 | 51.6177 |
- | [Longformer-base](https://huggingface.co/rubentito/longformer-base-mpdocvqa) | rubentito/longformer-base-mpdocvqa | 0.5287 | 71.1696 |
- | [BigBird ITC base](https://huggingface.co/rubentito/bigbird-base-itc-mpdocvqa) | rubentito/bigbird-base-itc-mpdocvqa | 0.4929 | 67.5433 |
- | [LayoutLMv3 base](https://huggingface.co/rubentito/layoutlmv3-base-mpdocvqa) | rubentito/layoutlmv3-base-mpdocvqa | 0.4538 | 51.9426 |
- | [T5 base](https://huggingface.co/rubentito/t5-base-mpdocvqa) | rubentito/t5-base-mpdocvqa | 0.5050 | 0.0000 |
- | Hi-VT5
-
  ## How to use
 
  Here is how to use this model to get the features of a given text in PyTorch:
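
Note for review: the usage snippet this change touches is only partially visible in the diff (its last two lines appear as context in the hunk below). As a reference, here is a minimal, self-contained sketch of what the full snippet amounts to, assuming the rubentito/bert-large-mpdocvqa checkpoint from the results table and a standard extractive QA head (BertForQuestionAnswering); only the tokenizer call and the forward pass are taken verbatim from the diff, the rest is illustrative.

```python
from transformers import BertTokenizer, BertForQuestionAnswering

# Assumed checkpoint: taken from the "HF name" column of the results table.
tokenizer = BertTokenizer.from_pretrained("rubentito/bert-large-mpdocvqa")
model = BertForQuestionAnswering.from_pretrained("rubentito/bert-large-mpdocvqa")

question = "What is the invoice total?"                 # illustrative placeholder
context = "OCR text of the document page goes here."    # illustrative placeholder

# These two lines appear verbatim as context in the diff.
encoded_input = tokenizer(question, context, return_tensors='pt')
output = model(**encoded_input)

# Illustrative decoding of the predicted answer span from the start/end logits;
# this part is not shown in the diff.
start = int(output.start_logits.argmax())
end = int(output.end_logits.argmax())
answer = tokenizer.decode(encoded_input["input_ids"][0][start:end + 1])
print(answer)
```
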
@@ -45,7 +32,19 @@ encoded_input = tokenizer(question, context, return_tensors='pt')
  output = model(**encoded_input)
  ```
 
- | TBA | 0.6201 | 79.23
+ ## Model results
+
+ Extended experimentation can be found in Table 2 of [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/pdf/2212.05935.pdf).
+ You can also check the live leaderboard at the [RRC Portal](https://rrc.cvc.uab.es/?ch=17&com=evaluation&task=4).
+ | Model | HF name | ANLS | APPA |
+ |-----------------------------------------------------------------------------------|:--------------------------------------|:-------------:|:---------:|
+ | [**Bert-large**](https://huggingface.co/rubentito/bert-large-mpdocvqa) | rubentito/bert-large-mpdocvqa | 0.4183 | 51.6177 |
+ | [Longformer-base](https://huggingface.co/rubentito/longformer-base-mpdocvqa) | rubentito/longformer-base-mpdocvqa | 0.5287 | 71.1696 |
+ | [BigBird ITC base](https://huggingface.co/rubentito/bigbird-base-itc-mpdocvqa) | rubentito/bigbird-base-itc-mpdocvqa | 0.4929 | 67.5433 |
+ | [LayoutLMv3 base](https://huggingface.co/rubentito/layoutlmv3-base-mpdocvqa) | rubentito/layoutlmv3-base-mpdocvqa | 0.4538 | 51.9426 |
+ | [T5 base](https://huggingface.co/rubentito/t5-base-mpdocvqa) | rubentito/t5-base-mpdocvqa | 0.5050 | 0.0000 |
+ | Hi-VT5 | TBA | 0.6201 | 79.23 |
+
  ## BibTeX entry
 
  ```tex