SGLang Diffusion: Accelerating Video and Image Generation
SGLang Diffusion brings SGLang's top-tier performance to image and video generation with diffusion models, supporting mainstream open-source models with 1.2x to 5.9x speedups across diverse workloads.
We are excited to announce the official collaboration between SGLang and AutoRound, bringing low-bit quantization to efficient LLM inference. This integration lets developers quantize large models with AutoRound's signed-gradient optimization and deploy them directly in SGLang's runtime, minimizing accuracy loss while significantly reducing latency.
Today we release Miles, an enterprise-grade reinforcement learning framework designed for large-scale MoE training and production workloads, built on the proven foundation of slime.
LMSYS announces its Fellowship Program offering up to $50,000 in funding for U.S. PhD students who have made significant contributions to open-source AI infrastructure.
We implemented an end-to-end FP8 sampling and training pipeline for RL. Experiments show that for MoE models, using BF16 training with FP8 rollout leads to severe train-inference inconsistency as model size increases. Unified FP8 for both training and rollout effectively eliminates quantization-induced inconsistency, improving RL training speed and stability.
This article details how EAGLE-3 (Extrapolation Algorithm for Greater Language-model Efficiency) was productionized on Vertex AI, achieving 2-3x speedups for LLM inference through lightweight draft heads instead of separate draft models, along with the engineering challenges and lessons learned.
SGLang now features native integration with NVIDIA Model Optimizer, enabling direct quantization and deployment within the SGLang ecosystem, achieving up to 2x single-GPU throughput improvements.
We introduce Tensor R-Fork, a novel weight loading method that leverages efficient cross-node device-to-device interconnects to achieve zero-copy tensor loading from running SGLang instances to new instances, reducing loading time from minutes to seconds.
SGLang announces same-day support for NVIDIA's new Nemotron 3 Nano model, a compact MoE language model offering industry-leading computational efficiency and accuracy for building specialized agentic AI systems.
SGLang now supports the MiMo-V2-Flash model, a 309B-parameter model optimized for inference with sliding window attention and multi-layer MTP, achieving balanced throughput and latency on H200 GPUs.
We introduce Mini-SGLang, a lightweight yet high-performance large language model (LLM) inference framework that preserves core state-of-the-art features in just 5k lines of Python code, serving as both a reliable inference engine and a transparent reference implementation for researchers and developers.
SGLang introduces a seamless integration framework for Diffusion Large Language Models (dLLMs), enabling LLaDA 2.0 support through existing ChunkedPrefill mechanisms without core architecture changes, while maintaining full performance benefits and allowing customizable diffusion decoding algorithms.
The SpecForge team, in collaboration with industry partners including Ant, Meituan, Nex-AGI, and EigenAI, releases SpecBundle (Phase 1), a collection of production-grade EAGLE-3 model checkpoints trained on large-scale datasets. Alongside it, SpecForge v0.2 brings major system upgrades, including a comprehensive refactor for improved usability and multi-backend support.
SGLang introduces an Encoder-Prefill-Decode (EPD) disaggregation architecture that separates vision encoding from language processing in VLMs, enabling independent scaling and reducing TTFT by 6-8x in image-intensive scenarios.
The SGLang RL team achieves major breakthroughs in RL training stability and efficiency, implementing end-to-end INT4 QAT that enables ~1TB-scale model deployment on a single H200 node while maintaining training-inference consistency.
Novita AI developed production-proven optimizations for deploying GLM4-MoE models on SGLang, achieving up to 65% TTFT reduction and 22% TPOT improvement through Shared Experts Fusion and Suffix Decoding techniques.