According to a recent LinkedIn post from Together AI, the company is featuring Deepgram’s speech-to-text and text-to-speech models on its AI Native Cloud via Dedicated Model Inference. The post highlights support for real-time voice workflows, spanning transcription, reasoning, and synthesis for production-grade voice agents.
The post notes that Deepgram models such as Flux, Nova-3, Nova-3 Multilingual, and Aura-2 are positioned for low-latency, turn-taking conversations and for noisy, multilingual environments. It suggests that this type of integration may enhance Together AI’s appeal to AI-native customers building enterprise voice applications.
According to the post, the offering emphasizes enterprise-focused infrastructure, including dedicated workloads, a 99.9% uptime SLA, zero data retention, SOC 2 Type II alignment, HIPAA-ready support, and data residency options. For investors, these elements point to an effort to target regulated and compliance-sensitive sectors, potentially supporting higher-value, recurring cloud workloads.
The LinkedIn content also indicates that Together AI is seeking to strengthen its position in the real-time AI and voice-agent segment by partnering with a specialized speech and voice provider. If adopted at scale, such capabilities could deepen Together AI’s role in customers’ production systems, which may contribute to improved platform stickiness and longer-term revenue visibility.