
Together AI Deepens AI Native Cloud With Advanced Coding and Agentic Models

Together AI continued to build out its AI Native Cloud this week, adding two production-ready third-party models aimed at advanced coding and agentic workloads. The company emphasized reliability, citing 99.9% service-level agreements (SLAs) and serverless deployment options designed for AI-native developers and enterprises.

Together AI introduced DeepSeek V4 Pro, a long-context reasoning and coding model optimized for complex software tasks and agentic workflows. The model supports up to a 512K token context window and offers three reasoning modes that allow users to balance speed against depth of analysis.
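For developers, selecting between the reasoning modes would typically come down to a single request parameter. Below is a minimal sketch of what a chat-completions request body for such a model might look like, assuming the model is exposed through Together AI's OpenAI-compatible endpoint; the model slug `deepseek-ai/DeepSeek-V4-Pro` and the `reasoning_mode` field are hypothetical placeholders, not confirmed identifiers.

```python
import json

def build_request(prompt: str, reasoning_mode: str = "balanced") -> dict:
    """Assemble a chat-completions-style request body for a coding task.

    The model slug and the "reasoning_mode" knob are hypothetical
    stand-ins for the three speed-vs-depth modes described in the
    announcement; consult Together AI's model catalog for the actual
    identifiers and supported parameters.
    """
    return {
        "model": "deepseek-ai/DeepSeek-V4-Pro",  # hypothetical slug
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 1024,
        "reasoning_mode": reasoning_mode,  # hypothetical parameter
    }

body = build_request("Refactor this recursive parser to be iterative.")
print(json.dumps(body, indent=2))
```

In practice the same body would be POSTed to the provider's serverless inference endpoint with an API key; only the structure of the payload is illustrated here.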

Benchmark results highlighted in the company’s materials include 93.5% on LiveCodeBench, a 3206 Codeforces rating, and 80.6% on SWE-Bench Verified. Together AI also cited efficiency gains over the prior V3.2 version, including reduced FLOPs and KV cache usage for long-context inference, which could help customers manage costs.

A related DeepSeek V4 Flash variant is described as coming soon, signaling an expanding roadmap around this model family. The broader DeepSeek offering is positioned to capture workloads that require long-horizon coding, high-context reasoning, and production-grade reliability on Together AI’s infrastructure.

In parallel, Together AI rolled out access to Moonshot AI’s Kimi K2.6, a multimodal and agentic model designed for autonomous workflows. The model can coordinate up to 300 sub-agents executing as many as 4,000 steps, targeting complex orchestration scenarios and long-horizon coding tasks.

Kimi K2.6 supports text, image, and video understanding and is offered via both serverless and dedicated deployments on the AI Native Cloud. Performance benchmarks, including SWE-Bench Verified, LiveCodeBench v6, and MMMU-Pro, are used to position the model for demanding enterprise use cases.

These additions underscore Together AI’s strategy of curating advanced third-party foundation models rather than focusing solely on in-house model development. By emphasizing reliability, efficiency, and support for agentic and multimodal workflows, the company aims to deepen engagement with AI-native developers.

From a financial perspective, the expanded model catalog could increase usage-based revenue and improve customer retention if adoption scales. Overall, the week marked a strengthening of Together AI’s competitive stance in the AI infrastructure and model-hosting market, with a clear focus on high-value, production-ready workloads.
