In a recent LinkedIn post, Together AI highlighted its role as the infrastructure partner behind Decagon’s AI-driven concierge and customer support offering. The post describes Decagon as requiring highly responsive, low-latency, multi-modal conversational capabilities to deliver “unforgettable” customer experiences at scale.
The post indicates that Together AI provides production inference for Decagon’s multi-modal stack, along with access to high-end compute such as NVIDIA Blackwell GPUs. It also notes the use of research-oriented optimizations such as speculative decoding to accelerate responses and meet the strict latency constraints of voice-based AI interactions.
For investors, the collaboration indicates that Together AI is positioning its AI Native Cloud as critical infrastructure for latency-sensitive, real-time customer engagement applications. Partnering with a specialized voice AI provider could support usage-based revenue growth, strengthen Together AI’s credibility in enterprise-grade customer support workloads, and showcase demand for advanced GPU capacity.
More broadly, the post underscores growing enterprise interest in sub‑second voice AI and multi-modal support solutions, areas where infrastructure performance and reliability can be key differentiators. If Together AI continues to secure similar production deployments, it may enhance its competitive standing among AI cloud providers and deepen relationships with high-value, customer-facing software platforms.