Unified FP8: Beyond Mixed Precision, Achieving Stable and Accelerated MoE RL Training
We implemented an end-to-end FP8 rollout (sampling) and training pipeline for RL. Experiments show that for MoE models, BF16 training combined with FP8 rollout leads to increasingly severe train-inference inconsistency as model size grows. Unified FP8 for both training and rollout effectively eliminates this quantization-induced inconsistency, improving both the speed and the stability of RL training.
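To make the source of the inconsistency concrete, below is a minimal, hypothetical sketch (not the actual pipeline) of why a BF16 training path and an FP8 rollout path disagree: the rollout policy sees FP8-quantized weights while the training policy sees full-precision BF16 weights, so the two assign different log-probabilities to the same sampled tokens, which destabilizes the RL update. It assumes a recent PyTorch build with the `torch.float8_e4m3fn` dtype and simulates per-tensor FP8 quantization by quantize-dequantize ("fake quantization"); the function and variable names are illustrative only.

```python
import torch

def fp8_quant_dequant(w: torch.Tensor) -> torch.Tensor:
    """Simulate per-tensor FP8 (E4M3) quantization: scale into FP8 range, cast, cast back."""
    fp8_max = torch.finfo(torch.float8_e4m3fn).max
    scale = w.abs().max().clamp(min=1e-12) / fp8_max
    w_fp8 = (w / scale).to(torch.float8_e4m3fn)      # lossy FP8 representation
    return w_fp8.to(w.dtype) * scale                 # back to the original dtype

torch.manual_seed(0)
hidden, vocab, batch = 1024, 4096, 8
w = torch.randn(vocab, hidden, dtype=torch.bfloat16)  # toy output-projection weight
x = torch.randn(batch, hidden, dtype=torch.bfloat16)  # toy hidden states

# Mismatched setup: BF16 weights in training, FP8-quantized weights in rollout.
logp_train = torch.log_softmax(x @ w.T, dim=-1)
logp_rollout = torch.log_softmax(x @ fp8_quant_dequant(w).T, dim=-1)
mismatch = (logp_train - logp_rollout).abs().max()

# Unified setup: training and rollout share the same FP8-quantized weights,
# so the quantization-induced gap between the two policies vanishes.
w_unified = fp8_quant_dequant(w)
logp_unified_train = torch.log_softmax(x @ w_unified.T, dim=-1)
logp_unified_rollout = torch.log_softmax(x @ w_unified.T, dim=-1)
unified_gap = (logp_unified_train - logp_unified_rollout).abs().max()

print(f"BF16-train / FP8-rollout max |delta logp|: {mismatch.item():.4f}")
print(f"Unified FP8 max |delta logp|:              {unified_gap.item():.4f}")
```

In a real MoE model this per-layer quantization error compounds across experts and layers, which is consistent with the observation above that the train-inference gap worsens as model size increases.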