AI Performance Benchmarks (3 articles)

Meta Releases Llama 3.1 405B: Strongest Open-Source Model Achieves 88.6% on MMLU, Developer Community Celebrates

Meta officially launched the Llama 3.1 series on July 24th Beijing time. The 405B-parameter version scored 88.6% on the MMLU benchmark, making it the top-performing open-source large language model to date. Beyond strong multilingual support and long-context processing, the model ships with a free commercial license, and its release quickly ignited enthusiasm in the developer community.

Llama 3.1 Meta Open-Source AI

DeepSeek-V2 Open-Source Release: 236B Parameters Run on Just 16GB VRAM, Math Capabilities Surpass Llama 3, Igniting Developer Community

The DeepSeek team has released its open-source LLM DeepSeek-V2, a 236B-parameter model that requires only 16GB of VRAM for inference and outperforms Meta's Llama 3 on mathematical benchmarks. The release has garnered over 150,000 reposts in Chinese developer communities, marking a major breakthrough for domestic AI in efficient large-model development.