# Optimizer

2 main options to consider:

1. AdamW - robust, but memory-hungry
2. Adafactor - leaner, but harder to get to converge - more likely to be used if the model is t5-like (see the sketch below)
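
Roughly how the two compare in code - a minimal sketch on a toy model; the hyperparameters are illustrative, not recommendations:

```python
import torch
from transformers.optimization import Adafactor

model = torch.nn.Linear(1024, 1024)

# AdamW keeps two fp32 moment tensors per parameter (exp_avg, exp_avg_sq),
# i.e. roughly 8 extra bytes per fp32 parameter - robust, but memory-hungry.
adamw = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)

# Adafactor factorizes the second moment into row/column statistics, so its
# state is much smaller, but getting it to converge usually takes more tuning.
adafactor = Adafactor(
    model.parameters(),
    lr=1e-3,
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)
```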


## HF

default AdamW
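
A minimal sketch of what this means in practice (assuming a reasonably recent `transformers`; `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

# Default: the Trainer builds AdamW for you.
args = TrainingArguments(output_dir="out")

# Opt into Adafactor instead (newer versions also accept optim="adafactor").
args_adafactor = TrainingArguments(output_dir="out", adafactor=True)
```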

## Deepspeed

default AdamW
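
A minimal sketch of selecting AdamW explicitly in the DeepSpeed config, written here as the Python dict passed to `deepspeed.initialize()`; the hyperparameter values are placeholders:

```python
ds_config = {
    "train_batch_size": 32,
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": 1e-4,
            "betas": [0.9, 0.999],
            "eps": 1e-8,
            "weight_decay": 0.01,
        },
    },
}
# model_engine, optimizer, _, _ = deepspeed.initialize(
#     model=model, model_parameters=model.parameters(), config=ds_config)
```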


## Megatron

Has `--optimizer adam` via `apex`

To add a new optimizer, add a new option [here](https://github.com/NVIDIA/Megatron-LM/blob/aed2f75e209e525c842aec7c044af7acae2a4614/megatron/optimizer/__init__.py#L50) and import that new optimizer.
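
Roughly the shape of that change - a hedged sketch only: the real function in Megatron takes the model and pulls `args` from global state, `adafactor` is just an example of a new option, and if `--optimizer` restricts its choices in the argument parser, the new value has to be allowed there too:

```python
# Illustrative sketch, not the actual Megatron code.
from transformers.optimization import Adafactor  # example optimizer being added


def build_optimizer(param_groups, args):
    if args.optimizer == 'adam':
        # Existing path: apex's fused Adam (requires apex to be installed).
        from apex.optimizers import FusedAdam as Adam
        optimizer = Adam(param_groups, lr=args.lr, weight_decay=args.weight_decay)
    elif args.optimizer == 'adafactor':
        # New branch using the newly imported optimizer.
        optimizer = Adafactor(param_groups, lr=args.lr,
                              scale_parameter=False, relative_step=False)
    else:
        raise Exception(f'{args.optimizer} optimizer is not supported.')
    return optimizer
```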