MiniMax launches cutting‑edge AI agent model, then revises its license while touting 2026 roadmap
While MiniMax touted a state‑of‑the‑art AI agent model and a 2026 roadmap, reports indicate it quietly revised the model’s license shortly after launch, tempering the initial fanfare.
Key Facts
- Key company: MiniMax
MiniMax’s initial fanfare around its new AI‑agent model was quickly muted by a licensing shift that many developers only discovered after the model went live. According to a Decrypt report, the company rolled out the “state‑of‑the‑art” agent and then, within days, altered the model’s license to impose stricter usage terms, effectively curbing the open‑access promise that had been a key selling point. The report notes that the change was made quietly, without a public announcement, prompting concern among early adopters who had begun integrating the model into proprietary workflows.
The technical breakthrough behind MiniMax’s offering is “lightning attention,” a hybrid linear/softmax architecture that reduces computational complexity from quadratic to linear. Remote OpenClaw’s analysis ranks MiniMax M2.7 as the top model for 2026, scoring first out of 136 on the Artificial Analysis Intelligence Index with a 50‑point rating while charging only $0.30 per million input tokens (Remote OpenClaw, Apr 13). This efficiency enables a 4‑million‑token inference context window—far larger than any competitor’s offering at a comparable price point.
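MiniMax has not published implementation details for "lightning attention," but the quadratic‑to‑linear claim matches the well‑known kernelized linear‑attention trick: replace the softmax over an n×n score matrix with a positive feature map and reassociate the matrix products. A minimal NumPy sketch of that general idea (the `elu(x) + 1` feature map is a common illustrative choice, not MiniMax's disclosed design):

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    # Feature map phi(x) = elu(x) + 1 keeps attention weights positive.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Qp, Kp = phi(Q), phi(K)
    # Reassociating Qp @ (Kp.T @ V) costs O(n * d^2),
    # versus O(n^2 * d) for the softmax score matrix (Q @ K.T).
    kv = Kp.T @ V                              # (d, d_v) summary, size independent of n
    z = Qp @ Kp.sum(axis=0, keepdims=True).T   # (n, 1) per-row normalizer
    return (Qp @ kv) / (z + eps)

rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = rng.normal(size=(3, n, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (8, 4)
```

Because the `(d, d_v)` summary does not grow with sequence length, memory and compute scale linearly in n, which is what makes multi‑million‑token context windows economically plausible.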
MiniMax has positioned the M2.7 model as the flagship for its Hermes Agent platform, which demands long‑context handling for memory‑intensive, multi‑turn tasks. The same Remote OpenClaw piece highlights a 205K‑token context window and a 131K‑token output limit for Hermes Agent, priced at $0.30 per million input tokens and $1.20 per million output tokens. The near‑linear cost scaling of lightning attention makes it feasible to run ultra‑long session workflows, such as full‑codebase analysis or multi‑day project collaborations, without the degradation typical of standard 128K‑200K context models.
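At the quoted rates, the per‑session economics are easy to check. A quick sketch (token counts below are the article's published limits used as a hypothetical worst case, not measured usage):

```python
# Quoted rates: $0.30 per 1M input tokens, $1.20 per 1M output tokens.
INPUT_RATE = 0.30 / 1_000_000
OUTPUT_RATE = 1.20 / 1_000_000

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the quoted per-token rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Filling the 205K-token Hermes Agent window and emitting the 131K-token max:
cost = session_cost(205_000, 131_000)
print(f"${cost:.4f}")  # $0.2187
```

Even a maxed‑out Hermes Agent turn lands around 22 cents, which is the cost profile that makes the ultra‑long sessions described above defensible.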
Beyond Hermes, MiniMax’s Text‑01 variant leverages the same 4‑million‑token window to support OpenClaw operators, delivering strong cost‑to‑performance metrics. Remote OpenClaw reports that M2.7 achieves 56.22% on the SWE‑Pro benchmark and 57.0% on Terminal Bench 2, while maintaining the $0.30 per million input token price. The model also scores 97% on skill adherence for complex tasks, underscoring its versatility across both coding and general‑purpose workloads.
The licensing revision, however, introduces an element of uncertainty for enterprises that had counted on unrestricted API access. While MiniMax’s pricing and performance remain attractive, the tighter terms may compel customers to reassess risk and compliance implications, especially for developers building self‑evolving agents that rely on extensive context windows. As the market watches MiniMax’s next moves, the company’s ability to balance technical leadership with transparent licensing will likely determine whether its 2026 roadmap translates into sustained adoption or a cautionary tale of overpromised openness.
Sources
- Decrypt
- Dev.to AI Tag
Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.