According to a recent LinkedIn post from Together AI, the company is integrating Deepgram’s speech and voice models into its AI Native Cloud infrastructure. The post indicates that AI-focused customers can deploy Deepgram’s speech-to-text and text-to-speech offerings on Together AI’s Dedicated Model Inference platform to support real-time voice applications.
The LinkedIn post highlights models including Flux, Nova-3, Nova-3 Multilingual, and Aura-2, which are described as optimized for low-latency, production-grade voice agents. It emphasizes capabilities such as fast conversational turn-taking, handling of noisy and multilingual audio, and enterprise-grade SLAs, security, and compliance options.
For investors, the post suggests Together AI is broadening its ecosystem by adding specialized voice infrastructure capabilities that may increase its appeal to enterprise and AI-native developers. This type of integration could support higher usage of Together AI’s cloud platform, potentially improving revenue scale and competitive positioning in the AI infrastructure and model-serving market.
The collaboration with Deepgram also points to a strategy of partnering rather than building every vertical capability in-house, which may accelerate time-to-market in fast-moving AI segments like voice agents. If adopted by customers for production workloads, the combined offering could deepen Together AI’s role in mission-critical applications, which may drive stickier, higher-margin enterprise relationships over time.