Commit e6c192a
Parent(s): 795a3c6

Update README.md

Updated README.md with some examples
README.md CHANGED
@@ -1,10 +1,28 @@
 ---
 license: apache-2.0
+language: en
 tags:
 - generated_from_keras_callback
 model-index:
 - name: pull_request_comments_model
   results: []
+widget:
+- text: >-
+    Please add a detailed comment explaining what this config parameter means.
+  example_title: Code example
+- text: >-
+    Because if non-chief workers don't restore the model from the checkpoint, training can't continue.
+    For example, I am using the tf.estimator.Estimator API and tf.distribute.CollectiveAllReduceStrategy for distributed training.
+    Firstly, I train for 1000 rounds (train_steps=1000) and save the checkpoint; it works normally.
+    Then I set train_steps to 2000; only the is_chief role can restore the model from the checkpoint without any error.
+  example_title: ML example
+- text: >-
+    Hi, could you please review this PR when you have some time? Thank you very much!
+  example_title: Management example
+- text: >-
+    Hello, sorry for the delay; we are working on this PR internally. We will let you know if help is required from your end. Thank you!
+  example_title: Other example
+pipeline_tag: text-classification
 ---
 
 # pull_request_comments_model
@@ -66,4 +84,4 @@ The following hyperparameters were used during training:
 - Transformers 4.26.1
 - TensorFlow 2.9.0
 - Datasets 2.10.1
-- Tokenizers 0.13.2
+- Tokenizers 0.13.2