Curious to try this with DiLoCo: you could still use bs=1 on the inner optimizer and still get the benefits of data parallelism (rough sketch below).
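A minimal sketch of what that comment suggests, under stated assumptions: each worker runs local steps of vanilla bs=1 SGD (the inner optimizer), and the averaged parameter delta is applied as a pseudo-gradient by an outer optimizer, which the DiLoCo paper implements as Nesterov-momentum SGD. The function and variable names (`diloco_round`, `workers_data`) are hypothetical, and the sequential loop stands in for genuinely parallel workers.

```python
import copy
import torch

def diloco_round(model, workers_data, outer_opt, inner_lr=0.01, inner_steps=50):
    """One DiLoCo-style round: each worker takes local bs=1 SGD steps,
    then the averaged parameter delta is applied as an outer update."""
    global_params = [p.detach().clone() for p in model.parameters()]
    deltas = [torch.zeros_like(p) for p in global_params]

    for shard in workers_data:  # sequential stand-in for parallel workers
        local = copy.deepcopy(model)
        opt = torch.optim.SGD(local.parameters(), lr=inner_lr, momentum=0.0)
        for x, y in shard[:inner_steps]:  # bs=1 micro-batches
            opt.zero_grad()
            loss = torch.nn.functional.cross_entropy(local(x), y)
            loss.backward()
            opt.step()
        # Pseudo-gradient convention: old params minus new params,
        # averaged over workers.
        for d, p_new, p_old in zip(deltas, local.parameters(), global_params):
            d += (p_old - p_new.detach()) / len(workers_data)

    # Outer step: treat the averaged delta as a gradient.
    for p, d in zip(model.parameters(), deltas):
        p.grad = d
    outer_opt.step()
    outer_opt.zero_grad()
```

For the outer optimizer one would construct something like `torch.optim.SGD(model.parameters(), lr=0.7, momentum=0.9, nesterov=True)`, matching DiLoCo's choice; the specific values here are illustrative.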
Micah Goldblum · Jul 10 at 22:12
🚨 Did you know that small-batch vanilla SGD without momentum (i.e. the first optimizer you learn about in intro ML) is virtually as fast as AdamW for LLM pretraining on a per-FLOP basis? 📜 1/n
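For concreteness, here is a toy sketch of the setup the tweet describes: vanilla SGD with no momentum, batch size 1, as the only optimizer in a pretraining-style next-token loop. The tiny model, random tokens, and learning rate are illustrative stand-ins, not the architecture or hyperparameters from the paper the thread refers to.

```python
import torch

vocab, dim, seq = 1000, 64, 32
model = torch.nn.Sequential(              # stand-in for a transformer LM
    torch.nn.Embedding(vocab, dim),
    torch.nn.Linear(dim, vocab),
)
# "Intro ML" optimizer: plain SGD, momentum explicitly zero.
opt = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.0)

for step in range(100):
    tokens = torch.randint(0, vocab, (1, seq + 1))  # one bs=1 sequence
    x, y = tokens[:, :-1], tokens[:, 1:]            # next-token targets
    logits = model(x)                               # (1, seq, vocab)
    loss = torch.nn.functional.cross_entropy(
        logits.reshape(-1, vocab), y.reshape(-1)
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
```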