In a recent LinkedIn post, Nscale highlighted three developments aimed at expanding its role in AI infrastructure. The post describes the acquisition of American Intelligence & Power Corporation and the Monarch Compute Campus in West Virginia, giving Nscale control of what is described as a state-certified AI microgrid with a scalable power capacity exceeding eight gigawatts.
The post further indicates that Nscale is creating a new global division, Nscale Energy & Power, which is presented as completing vertical integration from energy generation to compute delivery. This move suggests an effort to manage both power and compute assets directly, which could improve cost control, resilience, and margins over the long term in capital-intensive AI data center operations.
The post also refers to a collaboration with Microsoft and Caterpillar Inc. under a signed agreement focused on the Monarch campus. Nscale is set to design, construct, and operate infrastructure to host NVIDIA Vera Rubin NVL72 GPUs and future technologies, with a stated goal of delivering up to 1.35 gigawatts of dedicated AI compute capacity at the site.
The post characterizes this installation as among the largest dedicated AI compute deployments globally and notes that Nscale would be the first European company to deploy NVIDIA Vera Rubin at scale. It also mentions agreements to deploy more than 100,000 NVIDIA Vera Rubin GPUs from early 2027 across locations in the U.K., Norway, and the U.S., pointing to a multi-region build-out strategy that, if executed, could materially increase the company’s asset base and revenue potential.
For investors, the LinkedIn post suggests that Nscale is positioning itself as a vertically integrated AI infrastructure provider, with ownership of energy assets and large-scale GPU capacity potentially creating a differentiated competitive position. The scale of the planned deployments implies significant capital requirements and execution risk, but also exposure to rising demand for high-performance AI compute from hyperscalers, enterprises, and AI developers over the next several years.

