According to a recent LinkedIn post from Together AI, the company is making Qwen3.5, a new native multimodal model from Qwen, available on its platform. The post highlights support for text, images, and video, and emphasizes capabilities in reasoning, coding, and visual understanding aimed at production-scale deployments.
The company’s LinkedIn post describes architectural features such as Gated Delta Networks and a sparse mixture-of-experts (MoE) design, which it suggests enable significantly faster decoding with only 17 billion parameters activated per token. It also notes coverage of 201 languages, positioning the model for global use cases such as customer support, content generation, and analytics.
As shared in the post, Qwen3.5 is presented as “production-ready” on Together AI’s AI Native Cloud, with a referenced 99.9% SLA and availability on both serverless and dedicated infrastructure. For investors, this suggests Together AI is deepening its role as an infrastructure provider for state-of-the-art multimodal models, which could enhance platform stickiness and usage-based revenue if adoption materializes.
The partnership with Qwen, as described in the post, may strengthen Together AI’s competitive positioning versus other AI infrastructure and model-hosting providers by expanding its catalog of advanced models. If Qwen3.5 gains traction among AI-native customers, Together AI could benefit from higher compute consumption and an improved value proposition to developers building globally deployed, multimodal applications.

