AIEngineerYvar committed
Commit a2b504f · verified · 1 Parent(s): fb77826

Upload TFT5ForConditionalGeneration

Files changed (4):
  1. README.md +57 -0
  2. config.json +58 -0
  3. generation_config.json +7 -0
  4. tf_model.h5 +3 -0
README.md ADDED
@@ -0,0 +1,57 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: t5-small
+ tags:
+ - generated_from_keras_callback
+ model-index:
+ - name: transformers-med-summarizer
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->
+
+ # transformers-med-summarizer
+
+ This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Train Loss: 2.4154
+ - Validation Loss: 2.2092
+ - Train Rougel: tf.Tensor(0.12209402, shape=(), dtype=float32)
+ - Epoch: 1
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': np.float32(2e-05), 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
+ - training_precision: float32
+
+ ### Training results
+
+ | Train Loss | Validation Loss | Train Rougel | Epoch |
+ |:----------:|:---------------:|:----------------------------------------------:|:-----:|
+ | 2.6063 | 2.2850 | tf.Tensor(0.12259579, shape=(), dtype=float32) | 0 |
+ | 2.4154 | 2.2092 | tf.Tensor(0.12209402, shape=(), dtype=float32) | 1 |
+
+
+ ### Framework versions
+
+ - Transformers 4.51.3
+ - TensorFlow 2.18.0
+ - Datasets 3.6.0
+ - Tokenizers 0.21.1
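The optimizer entry in the model card above is a raw Keras serialization, which mixes constructor hyperparameters with bookkeeping flags. A minimal sketch (plain Python, no TensorFlow required; the key whitelist is an assumption based on the standard Keras `Adam` signature) of extracting just the reusable constructor kwargs:

```python
# Logged Keras optimizer config, as it appears in the model card above.
logged = {
    "name": "Adam", "weight_decay": None, "clipnorm": None,
    "global_clipnorm": None, "clipvalue": None, "use_ema": False,
    "ema_momentum": 0.99, "ema_overwrite_frequency": None,
    "jit_compile": True, "is_legacy_optimizer": False,
    "learning_rate": 2e-05, "beta_1": 0.9, "beta_2": 0.999,
    "epsilon": 1e-07, "amsgrad": False,
}

# Keys accepted by the tf.keras.optimizers.Adam constructor (assumed);
# bookkeeping entries like 'name' and 'is_legacy_optimizer' are dropped,
# as are unset (None) clipping/weight-decay options.
CTOR_KEYS = {"learning_rate", "beta_1", "beta_2", "epsilon", "amsgrad",
             "weight_decay", "clipnorm", "clipvalue", "global_clipnorm"}

adam_kwargs = {k: v for k, v in logged.items()
               if k in CTOR_KEYS and v is not None}
print(adam_kwargs["learning_rate"])  # 2e-05
```

One could then rebuild the optimizer with `tf.keras.optimizers.Adam(**adam_kwargs)` to reproduce the training setup.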
config.json ADDED
@@ -0,0 +1,58 @@
+ {
+   "architectures": [
+     "T5ForConditionalGeneration"
+   ],
+   "classifier_dropout": 0.0,
+   "d_ff": 2048,
+   "d_kv": 64,
+   "d_model": 512,
+   "decoder_start_token_id": 0,
+   "dense_act_fn": "relu",
+   "dropout_rate": 0.1,
+   "eos_token_id": 1,
+   "feed_forward_proj": "relu",
+   "initializer_factor": 1.0,
+   "is_encoder_decoder": true,
+   "is_gated_act": false,
+   "layer_norm_epsilon": 1e-06,
+   "model_type": "t5",
+   "n_positions": 512,
+   "num_decoder_layers": 6,
+   "num_heads": 8,
+   "num_layers": 6,
+   "output_past": true,
+   "pad_token_id": 0,
+   "relative_attention_max_distance": 128,
+   "relative_attention_num_buckets": 32,
+   "task_specific_params": {
+     "summarization": {
+       "early_stopping": true,
+       "length_penalty": 2.0,
+       "max_length": 200,
+       "min_length": 30,
+       "no_repeat_ngram_size": 3,
+       "num_beams": 4
+     },
+     "translation_en_to_de": {
+       "early_stopping": true,
+       "max_length": 300,
+       "num_beams": 4,
+       "prefix": "translate English to German: "
+     },
+     "translation_en_to_fr": {
+       "early_stopping": true,
+       "max_length": 300,
+       "num_beams": 4,
+       "prefix": "translate English to French: "
+     },
+     "translation_en_to_ro": {
+       "early_stopping": true,
+       "max_length": 300,
+       "num_beams": 4,
+       "prefix": "translate English to Romanian: "
+     }
+   },
+   "transformers_version": "4.51.3",
+   "use_cache": true,
+   "vocab_size": 32128
+ }
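The `task_specific_params.summarization` block above is the set of per-task decoding defaults inherited from t5-small; each key maps directly onto a `generate()` keyword argument. A minimal sketch of pulling those defaults out of the config (the JSON fragment is copied from the file above, trimmed to the summarization task):

```python
import json

# Trimmed copy of the task_specific_params section from config.json above.
config_text = """
{
  "task_specific_params": {
    "summarization": {
      "early_stopping": true,
      "length_penalty": 2.0,
      "max_length": 200,
      "min_length": 30,
      "no_repeat_ngram_size": 3,
      "num_beams": 4
    }
  }
}
"""

config = json.loads(config_text)
gen_kwargs = config["task_specific_params"]["summarization"]

# These kwargs could be passed as model.generate(**inputs, **gen_kwargs):
# 4-beam search, 30-200 output tokens, length penalty 2.0, no 3-gram repeats.
print(gen_kwargs["num_beams"], gen_kwargs["max_length"])  # 4 200
```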
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "decoder_start_token_id": 0,
+   "eos_token_id": 1,
+   "pad_token_id": 0,
+   "transformers_version": "4.51.3"
+ }
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e8c435125e9a5b9604551f0e4e00df51c2550ca2554653f85cdeb6436f08587e
+ size 373902664
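The tf_model.h5 entry above is not the model weights themselves but a Git LFS pointer file (spec v1): three `key value` lines naming the pointer version, the SHA-256 of the real object, and its byte size. A small sketch of parsing such a pointer, using the exact pointer committed here:

```python
# Git LFS pointer file contents, copied verbatim from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:e8c435125e9a5b9604551f0e4e00df51c2550ca2554653f85cdeb6436f08587e
size 373902664
"""

# Each line is "key value"; split on the first space only,
# since the value may itself contain spaces (it doesn't here).
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
algo, digest = fields["oid"].split(":", 1)

print(algo, fields["size"])  # sha256 373902664
```

The 373,902,664-byte (~374 MB) object is what Git LFS fetches on checkout; the repository itself stores only this pointer.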