According to a recent LinkedIn post from Together AI, the company is now offering deployment of DeepSeek V4 Pro, a long-context reasoning and coding model, on its platform. The post highlights performance metrics such as 93.5% on LiveCodeBench, a 3206 Codeforces rating, and 80.6% on SWE-Bench Verified, along with efficiency improvements over the prior V3.2 model.
The post suggests that DeepSeek V4 Pro is available in multiple reasoning modes and is positioned as production-ready on Together AI’s AI-native cloud, with a 99.9% SLA on serverless and 512K context support. For investors, this could signal an effort to deepen Together AI’s value proposition for advanced coding, agentic workflows, and long-context inference, potentially supporting higher usage, customer retention, and competitive differentiation in the AI infrastructure market.
The post also mentions a forthcoming DeepSeek V4 Flash version, indicating an expanding product roadmap around this model family. That roadmap may strengthen Together AI's ability to attract AI-native developers and enterprises seeking scalable, high-performance inference, which could in turn reinforce its positioning against other AI cloud and model-serving platforms.