According to a recent LinkedIn post from Together AI, the company is expanding its fine-tuning platform to support tool calling, reasoning, and vision-language models. The post suggests this is aimed at teams shifting from single-turn prompts to more complex agentic workflows, where reliability issues often emerge in tool use, long-horizon reasoning, and domain-specific visual interpretation.
The post highlights new capabilities such as tool-call fine-tuning with OpenAI-compatible schema validation, reasoning fine-tuning with native thinking-token support, and vision-language fine-tuning for specialized visual data. It also notes infrastructure upgrades, including throughput gains of up to 6x on mixture-of-experts models via SonicMoE kernel integration, along with cost estimation and live training ETAs.
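To make the first of these concrete: in the OpenAI-compatible format the post refers to, a tool-calling training example is a conversation in which the assistant emits a structured function call and later incorporates the tool's result. The sketch below is illustrative only; the `get_weather` tool and the `check_tool_calls` helper are hypothetical, and Together AI's actual validation rules are not documented in the post.

```python
import json

# A minimal sketch of one fine-tuning example in the OpenAI-compatible
# tool-calling message format. Field names follow the public OpenAI chat
# schema; this is illustrative, not Together AI's exact specification.
example = {
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"},
        {
            "role": "assistant",
            "content": None,
            "tool_calls": [
                {
                    "id": "call_001",
                    "type": "function",
                    "function": {
                        "name": "get_weather",  # hypothetical tool
                        "arguments": json.dumps({"city": "Paris"}),
                    },
                }
            ],
        },
        {
            "role": "tool",
            "tool_call_id": "call_001",
            "content": json.dumps({"temp_c": 18, "conditions": "cloudy"}),
        },
        {"role": "assistant", "content": "It's 18°C and cloudy in Paris."},
    ]
}


def check_tool_calls(sample: dict) -> bool:
    """Toy schema check: every tool call must carry a function name and
    JSON-parseable arguments. Real platforms validate far more than this."""
    for msg in sample["messages"]:
        for call in msg.get("tool_calls", []):
            fn = call.get("function", {})
            if not fn.get("name"):
                return False
            try:
                json.loads(fn.get("arguments", ""))
            except json.JSONDecodeError:
                return False
    return True


assert check_tool_calls(example)
print(json.dumps(example))  # one JSONL line of training data
```

The practical point of this kind of validation is that malformed examples, such as tool calls with missing function names or arguments that are not valid JSON, get caught before training rather than silently degrading the fine-tuned model.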
According to the post, Together AI now supports fine-tuning on more than 40 models, including large-scale systems like Qwen 3.5-397B, Kimi K2.5, and GLM-4.7, and can handle datasets up to 100GB. For investors, this broadened model and data support may position the platform more competitively for enterprise-scale workloads, potentially increasing usage-based revenue opportunities as customers seek to deploy customized AI agents.
The enhancements around reliability, schema validation, and long-context reasoning suggest a focus on production-grade use cases rather than experimentation alone. This may strengthen Together AI’s appeal to developers building revenue-critical applications, while performance gains on MoE models could improve cost-efficiency, an increasingly important factor in the highly competitive AI infrastructure and model-serving market.

