ScaleOps has secured $130 million in Series C funding at an $800 million valuation to accelerate its autonomous cloud and AI infrastructure management platform, underscoring growing demand for tools that cut GPU and cloud waste. Led by Insight Partners with participation from existing backers, the round lifts ScaleOps’ total funding to about $210 million as enterprises seek to curb over-provisioning, idle GPUs, and escalating AI compute costs.
Founded in 2022 by CEO Yodar Shafrir, formerly of GPU orchestration startup Run:ai, ScaleOps offers software that dynamically allocates compute, memory, storage, and networking resources in real time, claiming cost reductions of up to 80% for cloud and AI infrastructure. The platform is built specifically for production Kubernetes environments, operates fully autonomously with application-level context awareness, and aims to replace the static configurations and manual tuning that have strained DevOps teams as inference workloads scale.
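To illustrate the general class of problem such platforms automate, the sketch below shows a simple container right-sizing heuristic: derive a CPU request from observed usage instead of a static, over-provisioned value. The percentile-plus-headroom rule here is an assumption chosen for illustration, not ScaleOps' actual algorithm.

```python
# Illustrative right-sizing heuristic (not ScaleOps' proprietary method):
# recommend a CPU request from observed usage samples by taking a high
# percentile of usage and adding fixed headroom.

def recommend_cpu_request(usage_millicores, percentile=0.95, headroom_pct=115):
    """Return a recommended CPU request in millicores.

    usage_millicores: observed per-interval CPU usage samples.
    percentile: usage percentile to cover (default p95).
    headroom_pct: safety margin as a percentage (115 = +15% headroom).
    """
    if not usage_millicores:
        raise ValueError("no usage samples")
    samples = sorted(usage_millicores)
    idx = min(len(samples) - 1, int(percentile * len(samples)))
    # Integer math keeps the result exact and Kubernetes-friendly (millicores).
    return samples[idx] * headroom_pct // 100

# A workload statically provisioned at 2000m that mostly uses ~300m:
observed = [250, 280, 300, 310, 320, 350, 400, 420, 450, 500]
print(recommend_cpu_request(observed))  # → 575, far below the 2000m static request
```

In a real autonomous system this kind of recommendation would be recomputed continuously and applied to live workloads; the gap between the static request and the recommendation is the over-provisioning waste the article describes.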
The company positions itself against rivals such as Cast AI, Kubecost, and Spot by emphasizing end-to-end automation and application-aware decisioning, arguing that many existing visibility and optimization tools lack sufficient context and can introduce performance risk or downtime. ScaleOps reports more than 450% year-over-year growth and says it has tripled headcount in the past year, with plans to more than triple again by year-end to support product expansion and global go-to-market.
Headquartered in New York and serving large enterprises in the U.S., Europe, and India, ScaleOps counts Adobe, Wiz, DocuSign, Salesforce, and Coupa among its customers, reflecting early penetration into high-spend, cloud-native organizations. Management plans to deploy the new capital to roll out additional products and extend the platform toward fully autonomous infrastructure, betting that AI-driven demand for compute will make real-time, automated resource management a critical layer in the enterprise stack.

