Update README.md
README.md CHANGED
@@ -35,6 +35,8 @@ T5ForConditionalGeneration(
   (lm_head): Linear(in_features=768, out_features=32100, bias=False)
 )
 
+```
+
 ```bash
 pip install -U transformers torch datasets
 #Then, load the model and run inference:
@@ -43,7 +45,7 @@ from transformers import T5ForConditionalGeneration, RobertaTokenizer
 
 # Download from the 🤗 Hub
 ```python
-model_name = "
+model_name = "AventIQ-AI/t5_code_summarizer"  # Update with your HF model ID
 tokenizer = RobertaTokenizer.from_pretrained(model_name)
 model = T5ForConditionalGeneration.from_pretrained(model_name)
 
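Note on this hunk: the diff stops at loading the tokenizer and model, so the actual inference code in the card is not shown here. A minimal sketch of how the loaded model might be run, assuming raw snippets are fed directly as input (the example snippet and the generation settings are illustrative, not taken from the card):

```python
from transformers import T5ForConditionalGeneration, RobertaTokenizer

model_name = "AventIQ-AI/t5_code_summarizer"  # model ID from the diff above
tokenizer = RobertaTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Illustrative input: a code snippet in the CoNaLa-style format shown later in the card
snippet = "int(''.join(map(str, x)))"

# Tokenize, generate, and decode; beam-search settings are assumptions
inputs = tokenizer(snippet, return_tensors="pt", truncation=True, max_length=256)
output_ids = model.generate(**inputs, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```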
@@ -91,11 +93,11 @@ snippet: int(''.join(map(str, x))), rewritten_intent: "Convert a list of integer
 snippet: datetime.strptime('2010-11-13 10:33:54.227806', '%Y-%m-%d %H:%M:%S.%f'), rewritten_intent: "Convert a DateTime string back to a DateTime object of format '%Y-%m-%d %H:%M:%S.%f'"
 ```
 # Training Hyperparameters
-Non-Default Hyperparameters:
-**per_device_train_batch_size:** 4
-**per_device_eval_batch_size:** 4
-**gradient_accumulation_steps:** 2 (effective batch size = 8)
-**num_train_epochs:** 10
-**learning_rate:** 1e-4
-**fp16:** True
+### Non-Default Hyperparameters:
+- **per_device_train_batch_size:** 4
+- **per_device_eval_batch_size:** 4
+- **gradient_accumulation_steps:** 2 (effective batch size = 8)
+- **num_train_epochs:** 10
+- **learning_rate:** 1e-4
+- **fp16:** True
 
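For reference, the non-default hyperparameters listed in this hunk map onto a `transformers` `TrainingArguments` configuration roughly as sketched below; `output_dir` and every argument not listed in the card are assumptions:

```python
from transformers import TrainingArguments

# Sketch of the listed non-default hyperparameters; output_dir and all
# unlisted arguments are assumptions, not taken from the card.
training_args = TrainingArguments(
    output_dir="./t5-code-summarizer",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,  # effective batch size = 4 * 2 = 8
    num_train_epochs=10,
    learning_rate=1e-4,
    fp16=True,  # mixed precision; requires a CUDA device at runtime
)
```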