🚀 AutoRound Partners with SGLang: A New Era of Efficient Quantized Model Inference

We are excited to announce the official collaboration between SGLang and AutoRound, bringing low-bit quantization to efficient LLM inference. This integration lets developers quantize large models with AutoRound's signed-gradient optimization techniques and deploy them directly in SGLang's efficient runtime, enabling low-bit inference that minimizes accuracy loss while significantly reducing latency.
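In practice, the workflow is quantize-then-serve. The following is a minimal sketch, not an official recipe: the AutoRound CLI flags, the SGLang launch arguments, and the model paths are assumptions that may differ across versions, so check the respective documentation before running.

```shell
# Step 1: quantize a model to 4 bits with AutoRound
# (flags illustrative; model name is a placeholder)
auto-round \
  --model meta-llama/Llama-3.1-8B-Instruct \
  --bits 4 \
  --group_size 128 \
  --output_dir ./Llama-3.1-8B-Instruct-int4

# Step 2: serve the quantized checkpoint with SGLang
python3 -m sglang.launch_server \
  --model-path ./Llama-3.1-8B-Instruct-int4 \
  --port 30000
```

Once the server is up, the quantized model is queried like any other SGLang endpoint; no client-side changes are needed.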

Tags: LMSYS, AutoRound, SGLang

SpecBundle & SpecForge v0.2: Production-Ready Speculative Decoding Models and Framework Released

The SpecForge team, in collaboration with industry partners including Ant, Meituan, Nex-AGI, and EigenAI, has released SpecBundle (Phase 1), a collection of production-grade EAGLE-3 model checkpoints trained on large-scale datasets. Alongside it, SpecForge v0.2 brings major system upgrades, including a comprehensive refactor for improved usability and multi-backend support.
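To give a concrete sense of how such checkpoints are consumed, here is a hedged sketch of launching SGLang with an EAGLE-3 draft model. The flag names reflect SGLang's speculative decoding options but may differ by version, and the draft-model path is a placeholder, not an actual SpecBundle artifact.

```shell
# Launch SGLang with EAGLE-3 speculative decoding
# (flags and draft-model path illustrative; verify against your SGLang version)
python3 -m sglang.launch_server \
  --model-path meta-llama/Llama-3.1-8B-Instruct \
  --speculative-algorithm EAGLE3 \
  --speculative-draft-model-path <path-to-eagle3-checkpoint> \
  --speculative-num-steps 5 \
  --speculative-eagle-topk 8 \
  --speculative-num-draft-tokens 32 \
  --port 30000
```

The draft model proposes several tokens per step, which the target model then verifies in a single forward pass, so acceptance rate of the draft checkpoint largely determines the speedup.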

Tags: LMSYS, Speculative Decoding, SpecForge