patrickvonplaten committed
Commit 6f56197 · 1 Parent(s): b60465c

Update README.md

Files changed (1):
  1. README.md +41 -13
README.md CHANGED
@@ -38,7 +38,7 @@ A sequence of word embeddings is therefore processed sequentially by each transf
 
 The *conventional* T5 architectures are summarized in the following table:
 
- | Model | nl | ff | dm | kv | nh | #Params|
+ | Model | nl (el/dl) | ff | dm | kv | nh | #Params|
 | ----| ---- | ---- | ---- | ---- | ---- | ----|
 | Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
 | Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
@@ -52,29 +52,57 @@ with the following definitions:
 
 | Abbreviation | Definition |
 | ----| ---- |
- | NL | Number of transformer blocks (depth) |
- | EL | Number of transformer blocks in the encoder (encoder depth) |
- | DL | Number of transformer blocks in the decoder (decoder depth) |
- | DM | Dimension of embedding vector (output vector of transformers block) |
- | KV | Dimension of key/value projection matrix |
- | NH | Number of attention heads |
- | FF | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
- | SH | Signifies that attention heads are shared |
- | SKV | Signifies that key-values projection matrices are tied |
+ | nl | Number of transformer blocks (depth) |
+ | dm | Dimension of embedding vector (output vector of transformer block) |
+ | kv | Dimension of key/value projection matrix |
+ | nh | Number of attention heads |
+ | ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
+ | el | Number of transformer blocks in the encoder (encoder depth) |
+ | dl | Number of transformer blocks in the decoder (decoder depth) |
+ | sh | Signifies that attention heads are shared |
+ | skv | Signifies that key-value projection matrices are tied |
+
+ If a model checkpoint has no specific *el* or *dl*, then the number of encoder and decoder layers both correspond to *nl*.
+
+ This model checkpoint - **t5-efficient-xl** - is of model type **XL** with **no** variations.
+ It has **2852** million parameters and thus requires **11406** MB of memory in full precision (*fp32*)
+ or **5703** MB of memory in half precision (*fp16* or *bf16*).
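The abbreviations and the memory figures added above can be sanity-checked with a short sketch. This is only an illustration: it assumes the checkpoint is published on the Hub as `google/t5-efficient-xl`, and it maps the abbreviations onto the standard `T5Config` fields (`num_layers`, `num_decoder_layers`, `d_model`, `d_kv`, `num_heads`, `d_ff`).

```python
from transformers import T5ForConditionalGeneration

# Repo id assumed to be google/t5-efficient-xl on the Hugging Face Hub.
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-xl")

cfg = model.config
# Mapping to the abbreviations above:
# nl/el -> num_layers, dl -> num_decoder_layers, dm -> d_model,
# kv -> d_kv, nh -> num_heads, ff -> d_ff
print(cfg.num_layers, cfg.num_decoder_layers, cfg.d_model,
      cfg.d_kv, cfg.num_heads, cfg.d_ff)

# Rough memory estimate for the weights alone, roughly reproducing the
# figures quoted above (1 MB = 10^6 bytes, 4 bytes/param in fp32,
# 2 bytes/param in fp16/bf16).
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:,.0f}M params -> "
      f"~{n_params * 4 / 1e6:,.0f} MB fp32, "
      f"~{n_params * 2 / 1e6:,.0f} MB fp16/bf16")
```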
 
 ## Pre-Training
 
 The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
 the span-based masked language modeling (MLM) objective.
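To make the objective concrete, here is a simplified sketch of the span-corruption format: random spans of the input are replaced by sentinel tokens and the decoder is trained to reconstruct the dropped spans in order. The example strings follow the format used in the T5 paper; the sentinel tokens (`<extra_id_0>`, `<extra_id_1>`, ...) are part of the standard T5 vocabulary, and the repo id below is an assumption.

```python
from transformers import T5TokenizerFast

# Original sentence (example from the T5 paper):
#   "Thank you for inviting me to your party last week ."

# Corrupted encoder input: two spans replaced by sentinel tokens.
inputs = "Thank you <extra_id_0> me to your party <extra_id_1> week ."
# Decoder target: the dropped spans, each introduced by its sentinel.
targets = "<extra_id_0> for inviting <extra_id_1> last <extra_id_2>"

# Any T5 tokenizer shares the same sentinel tokens; repo id assumed.
tok = T5TokenizerFast.from_pretrained("google/t5-efficient-xl")
input_ids = tok(inputs, return_tensors="pt").input_ids
labels = tok(targets, return_tensors="pt").input_ids
```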
 
- ## Downstream Performance
-
- TODO:
-
- Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
+ ## Fine-Tuning
+
+ **Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
+ The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
+ You can follow one of the following examples on how to fine-tune the model (a minimal PyTorch sketch follows these lists):
+
+ *PyTorch*:
+
+ - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
+ - [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
+ - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
 
+ *TensorFlow*:
 
+ - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
+ - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
 
+ *JAX/Flax*:
+
+ - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
+ - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
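Independently of the example scripts linked above, the following is a minimal, self-contained PyTorch sketch of a single fine-tuning step on one text-to-text pair. The repo id and the toy summarization example are assumptions; a real run needs a proper dataset, batching, an evaluation loop, and (for a checkpoint of this size) mixed precision or model parallelism.

```python
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

# Repo id assumed to be google/t5-efficient-xl.
model_id = "google/t5-efficient-xl"
tok = T5TokenizerFast.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Toy text-to-text pair; with T5 every task is cast as text-to-text.
batch = tok(["summarize: The quick brown fox jumped over the lazy dog."],
            return_tensors="pt")
labels = tok(["A fox jumped over a dog."], return_tensors="pt").input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

model.train()
outputs = model(input_ids=batch.input_ids,
                attention_mask=batch.attention_mask,
                labels=labels)  # cross-entropy loss over the target tokens
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```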
+
+ ## Downstream Performance
 
+ TODO: Add table of full downstream performances if possible.
 
+ ## More information
 
+ We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
+ As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
+ model architecture variations have *not* been ported to Transformers as they are probably of limited practical use and lack a more detailed description.