Industry News | 6/17/2025

MiniMax Launches Open-Source AI Model with Million-Token Context

Chinese startup MiniMax has unveiled its open-source language model, MiniMax-M1, featuring a one-million-token context window and high efficiency. The release is positioned to intensify competition in the AI sector, particularly with models such as DeepSeek's R1.

Chinese AI startup MiniMax has entered the global AI market with its new open-source language model, MiniMax-M1. The model is designed to compete directly with other notable AI models, particularly DeepSeek's R1, and stands out for its efficient handling of very long context windows, a capability essential for advanced AI applications.

Key Features of MiniMax-M1

  • One-Million-Token Context Window: MiniMax-M1 can process and recall information from very long inputs, roughly several novels' worth of text, in a single pass. This capability is essential for tasks requiring long-range reasoning (see the API sketch after this list).
  • Thinking Budget: The model supports complex reasoning with an output (thinking) budget of up to 80,000 tokens.
  • Comparison with Competitors: In terms of context window size, MiniMax-M1 matches Google's Gemini 2.5 Pro and significantly exceeds the 128,000-token capacity of OpenAI's GPT-4o.
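
To make the long-context feature concrete, here is a minimal sketch of how a book-length document might be sent to such a model through an OpenAI-compatible chat API using the `openai` Python client. The base URL, model identifier, and file name below are placeholders (assumptions), not confirmed MiniMax values; MiniMax's own documentation would specify the actual endpoint and model name.

```python
# Minimal sketch: querying a long-context model over an OpenAI-compatible API.
# The base_url and model name are placeholders (assumptions), not official values.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-minimax-endpoint/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

# Read a book-length document; roughly 750,000 English words fit within a
# one-million-token context window, so no chunking or retrieval is needed.
with open("novel.txt", encoding="utf-8") as f:
    document = f.read()

response = client.chat.completions.create(
    model="MiniMax-M1",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "Answer questions using only the supplied text."},
        {"role": "user", "content": document + "\n\nQuestion: Summarize how the main conflict is resolved."},
    ],
)
print(response.choices[0].message.content)
```

Because the entire document fits in one request, this kind of workflow avoids the chunking and retrieval pipelines that shorter-context models typically require.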

Efficiency and Cost-Effectiveness

MiniMax-M1 is built on a hybrid Mixture-of-Experts (MoE) architecture, which activates only a subset of the model's parameters for each token and thereby reduces computational demands (a toy sketch of the routing idea follows below). It also features a lightning attention mechanism that accelerates training and reduces memory usage. Notably, when generating 100,000 tokens, MiniMax-M1 reportedly uses only about 25% of the computational resources required by DeepSeek R1. The reported training cost of approximately $534,700 is a fraction of the investments made in competitors such as DeepSeek R1 and OpenAI's GPT-4.
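
The efficiency claim rests on the MoE idea that each token is routed to only a few expert sub-networks, so per-token compute tracks the active parameters rather than the total parameter count. The following is a toy, self-contained PyTorch sketch of top-k expert routing; it illustrates the general mechanism only and is not MiniMax-M1's actual architecture or code (the dimensions, expert count, and routing details are made up for illustration).

```python
# Toy sketch of Mixture-of-Experts (MoE) top-k routing, for illustration only.
# Each token is processed by just top_k of num_experts feed-forward experts,
# so per-token compute tracks "active" parameters, not total parameters.
# This is NOT MiniMax-M1's architecture; all sizes here are arbitrary.
import torch
import torch.nn as nn


class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)   # learns which experts to pick
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                               # x: (tokens, d_model)
        scores = self.router(x)                         # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep only top_k experts per token
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                # tokens routed to expert e in this slot
                if mask.any():
                    w = weights[mask, slot].unsqueeze(-1)
                    out[mask] += w * expert(x[mask])    # only the selected experts run
        return out


if __name__ == "__main__":
    layer = ToyMoELayer()
    tokens = torch.randn(10, 64)
    print(layer(tokens).shape)                          # torch.Size([10, 64])
    total = sum(p.numel() for p in layer.experts.parameters())
    print(f"total expert params: {total}, active per token: {total * layer.top_k // len(layer.experts)}")
```

In M1's case, MiniMax pairs this kind of sparse activation with its lightning attention mechanism, which is what the company credits for keeping memory and compute manageable at 100,000-token generation lengths.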

Performance Benchmarks

In benchmark tests, MiniMax-M1 has shown strong performance across various domains:

  • Mathematical Reasoning: Achieved a score of 86.0% on the AIME 2024 benchmark.
  • Coding: Scored 65.0% on LiveCodeBench and 56.0% on SWE-bench Verified.
  • Complex Reasoning: Performance on long-context reasoning tests is competitive with leading models, closely matching Google's Gemini 2.5 Pro.

Implications for the AI Industry

The launch of MiniMax-M1 is significant for the AI landscape, as it democratizes access to advanced AI capabilities. The model's combination of a large context window, high efficiency, and strong reasoning positions it as a viable option for applications ranging from AI-powered agents to in-depth data analysis. In addition, MiniMax's challenge to DeepSeek R1 is expected to spur further innovation and competition within the open-source AI community, sharpening the dynamics of the global AI market.