What is: Chimera?
Source | Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines |
Year | 2021 |
Data Source | CC BY-SA - https://paperswithcode.com |
Chimera is a pipeline model parallelism scheme that uses bidirectional pipelines for efficiently training large-scale models. The key idea of Chimera is to combine two pipelines running in opposite directions (a down pipeline and an up pipeline).
Denote by $N$ the number of micro-batches executed by each worker within a training iteration, by $D$ the number of pipeline stages (depth), and by $W$ the number of workers.
The Figure shows an example with four pipeline stages (i.e., $D=4$). Here we assume there are $D$ micro-batches executed by each worker within a training iteration, namely $N=D$, which is the minimum to keep all the stages active.
In the down pipeline, stage$_0$∼stage$_3$ are mapped to workers $P_0$∼$P_3$ linearly, while in the up pipeline the stages are mapped in the completely opposite order. The $N$ micro-batches (assuming an even number) are equally partitioned among the two pipelines. Each pipeline schedules its $N/2$ micro-batches using the 1F1B (one-forward-one-backward) strategy, as shown in the left part of the Figure. Then, by merging these two pipelines together, we obtain the pipeline schedule of Chimera. Given an even number of stages (which can be easily satisfied in practice), it is guaranteed that there is no conflict during merging, i.e., at most one micro-batch occupies any given time slot on each worker.
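The stage placement above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the names `down_map`, `up_map`, and the batch-split convention are assumptions, and $D = W = N = 4$ is taken from the Figure.

```python
# Sketch of Chimera's bidirectional stage-to-worker mapping and
# micro-batch partitioning (illustrative only, not the paper's code).
D = 4            # number of pipeline stages (depth)
W = D            # number of workers
N = D            # micro-batches per worker per iteration (the minimum)

# Down pipeline: stage i is mapped linearly to worker P_i.
down_map = {stage: stage for stage in range(D)}
# Up pipeline: stages are mapped in the completely opposite order.
up_map = {stage: W - 1 - stage for stage in range(D)}

# The N micro-batches are split equally between the two pipelines.
down_batches = list(range(N // 2))    # first half goes down
up_batches = list(range(N // 2, N))   # second half goes up

# Every worker hosts exactly one stage from each direction.
for w in range(W):
    d = [s for s, wk in down_map.items() if wk == w]
    u = [s for s, wk in up_map.items() if wk == w]
    assert len(d) == 1 and len(u) == 1
    print(f"P{w}: down stage {d[0]}, up stage {u[0]}")
```

Note that under this mapping each worker ends up holding the weights of two stages, one per direction, which is what lets the merged schedule keep all workers busy.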