In a recent LinkedIn post, Gradient highlighted Echo-2, a system aimed at reducing the cost of post-training large AI models. The post suggests that by decoupling rollouts from training and distributing workloads across heterogeneous GPUs, Echo-2 could cut post-training costs for a 30B-parameter model by more than 90%, from about $4,490 to $425.
The LinkedIn post describes an architecture with three independent planes, learning, rollout, and data, presented as avoiding the bottlenecks of synchronous reinforcement learning, where training stalls while waiting for rollouts to complete. Gradient also cites testing with its Parallax setup, reporting that training a Qwen3-8B model on distributed RTX 5090s came in 36% cheaper than centralized alternatives.
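To illustrate the decoupling idea the post describes, here is a minimal sketch (hypothetical, not Gradient's code) of separating the rollout plane from the learning plane: rollout workers push trajectories into a shared queue while the learner consumes them asynchronously, so neither side blocks the other the way a synchronous loop would.

```python
# Hypothetical sketch of decoupled rollout and learning planes.
# Rollout workers (stand-ins for inference-only GPUs) generate
# trajectories concurrently; the learner (stand-in for the training
# plane) consumes whatever has arrived, independent of rollout timing.
import queue
import threading
import random

trajectory_queue = queue.Queue(maxsize=8)  # buffer between the two planes

def rollout_worker(worker_id, n_episodes):
    # Generate fake trajectories; a real worker would run model inference.
    for _ in range(n_episodes):
        trajectory = [random.random() for _ in range(4)]  # fake episode data
        trajectory_queue.put((worker_id, trajectory))
    trajectory_queue.put((worker_id, None))  # sentinel: this worker is done

def learner(n_workers):
    # Apply one "update" per trajectory received; a real learner would
    # take a gradient step here.
    finished = 0
    updates = 0
    while finished < n_workers:
        _, trajectory = trajectory_queue.get()
        if trajectory is None:
            finished += 1
        else:
            updates += 1
    return updates

workers = [threading.Thread(target=rollout_worker, args=(i, 5)) for i in range(3)]
for w in workers:
    w.start()
total_updates = learner(3)
for w in workers:
    w.join()
print(total_updates)  # 3 workers x 5 episodes = 15 updates
```

This only shows the scheduling pattern; the cost savings Gradient claims would come from running the rollout side on cheaper, heterogeneous hardware rather than reserving training-grade GPUs for both roles.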
As shared in the post, Gradient is now building Logits, described as an RL-as-a-Service platform built on top of Echo-2, with a waitlist open for researchers and teams facing cost-prohibitive training workloads. For investors, this emphasis on cost efficiency in post-training and a service-layer platform could signal an attempt to position Gradient competitively in AI infrastructure, potentially tapping demand from enterprises and research groups seeking to lower model training and deployment costs.

