According to a recent LinkedIn post from Together AI, MiniMax’s new M2.5 reasoning and coding model is now available on the company’s platform. The post suggests the model targets “AI native” customers who need production-scale, agentic workflows, with a focus on software development and office productivity tasks.
The LinkedIn post describes capabilities such as architect-level planning, specification writing, and feature decomposition across the development lifecycle, alongside what is presented as state-of-the-art agentic coding performance. It cites metrics including 80.2% on SWE-Bench Verified and speed comparable to leading models, as well as office-document outputs in Word, PowerPoint, and Excel with a reported 59.0% win rate versus mainstream alternatives.
According to the post, MiniMax M2.5 is being offered on Together AI’s “AI Native Cloud,” with a cited 99.9% service-level target and availability on both serverless and dedicated infrastructure. This positioning suggests Together AI is aiming to deepen its role as an infrastructure provider for advanced AI agents and workflow automation, potentially supporting higher usage-based revenues if adoption scales.
For investors, the partnership with MiniMax and the emphasis on production-ready, agentic workloads could signal a strategy to compete more directly in the high-performance model hosting and inference segment. If the performance claims and reliability targets attract enterprise and developer demand, this may enhance Together AI’s competitive positioning within the broader AI infrastructure market and support longer-term platform stickiness.

