According to a recent LinkedIn post from Together AI, the company is now offering deployment of MiniMax's new M2.5 reasoning and coding model on its infrastructure. The post highlights capabilities such as architect-level planning across the software development lifecycle, including specification writing and feature decomposition prior to coding.
The post suggests that M2.5 targets software engineering use cases with what is described as state-of-the-art agentic coding performance, citing a score of 80.2% on SWE-Bench Verified and operation 37% faster than the prior M2.1 model while matching the speed of Opus 4.6. It also notes support for office-oriented deliverables such as Word, PowerPoint, and Excel outputs, with the model reportedly trained alongside industry experts and achieving a 59.0% win rate against unspecified mainstream models.
According to the post, the model is positioned as production-ready on Together AI's AI Native Cloud, with a referenced 99.9% SLA and availability on both serverless and dedicated infrastructure. The post further indicates a partnership with MiniMax, framing the integration as enabling production-scale agentic workflows for AI-native customers on Together AI.
From an investor perspective, the integration points to an expansion of Together AI's model portfolio toward higher-value reasoning and coding workloads, which could broaden the platform's appeal to enterprise software and productivity customers. If customer adoption follows the performance claims referenced in the post, Together AI could see increased infrastructure usage, deeper enterprise integration, and a stronger competitive position in the AI infrastructure and agentic automation segment.

