According to a recent LinkedIn post from Together AI, the company is now hosting MiniMax’s M2.7 large language model, a 229B-parameter system designed for software engineering and agentic workflows. The post highlights that the model was refined via its own reinforcement learning loop and achieved the top open-source score on the MLE Bench Lite benchmark.
The post suggests that M2.7 delivers competitive performance against leading proprietary models, including a 66.6% medal rate across 22 ML competitions and parity or near-parity with GPT and Opus variants on several software engineering metrics. It also cites built-in multi-agent collaboration, high reported skill compliance, and advanced tool-calling capabilities, all accessible via Together AI’s serverless and dedicated AI Native Cloud infrastructure.
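To make the tool-calling claim concrete, here is a minimal sketch of what a request to the hosted model might look like. Together AI serves models through an OpenAI-compatible chat-completions API; the endpoint path, model slug (`MiniMaxAI/MiniMax-M2.7`), and the `run_shell` tool are assumptions for illustration, not values confirmed by the post.

```python
import json

# Assumed values for illustration only: the endpoint path and model slug
# are not confirmed by the post.
TOGETHER_ENDPOINT = "https://api.together.xyz/v1/chat/completions"
MODEL_SLUG = "MiniMaxAI/MiniMax-M2.7"

# An OpenAI-compatible tool-calling request body: one user message plus
# one hypothetical tool the model may choose to invoke.
payload = {
    "model": MODEL_SLUG,
    "messages": [
        {"role": "user", "content": "List the failing tests in this repo."}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "run_shell",  # hypothetical tool name
                "description": "Run a shell command in the project sandbox.",
                "parameters": {
                    "type": "object",
                    "properties": {"command": {"type": "string"}},
                    "required": ["command"],
                },
            },
        }
    ],
}

# Serialize the request body; actually sending it would require an API key,
# e.g. requests.post(TOGETHER_ENDPOINT, json=payload,
#                    headers={"Authorization": "Bearer <key>"}).
body = json.dumps(payload)
```

If the model decides to use the tool, the response would carry a `tool_calls` entry rather than plain text, which the calling agent executes and feeds back as a `tool` message; that loop is what "agentic workflows" refers to in practice.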
From an investor perspective, this integration may enhance Together AI’s value proposition as a model-agnostic infrastructure provider, potentially attracting enterprise customers seeking high-end open-source alternatives for coding and automation. Strong benchmark positioning in software engineering and agentic tasks could support higher-margin workloads and reinforce Together AI’s competitive stance against other AI infrastructure and model-serving platforms.
If customer adoption follows the technical benchmarks cited in the post, Together AI could see increased compute utilization and revenue tied to both serverless and dedicated deployments. More broadly, the move underscores ongoing demand for powerful open-source or partner models as complements or substitutes to closed commercial systems, which may influence pricing dynamics and bargaining power across the AI infrastructure ecosystem.

