d-Matrix Unveils SquadRack for Enhanced AI Inference Efficiency

d-Matrix has introduced SquadRack™, a groundbreaking solution designed to revolutionize AI inference infrastructure. In collaboration with industry leaders Arista, Broadcom, and Supermicro, d-Matrix showcased this innovative blueprint at the Open Compute Project Global Summit. SquadRack offers a disaggregated, standards-based rack-scale solution that significantly enhances the efficiency of generative AI inference, addressing the growing demands of cloud providers and enterprises. The architecture promises up to three times better cost-performance and energy efficiency, along with a tenfold increase in token generation speed compared to traditional accelerators.

SquadRack’s configuration allows for the deployment of generative AI models with up to 100 billion parameters on eight nodes within a single rack. For larger-scale operations, the system can expand to hundreds of nodes across multiple racks over industry-standard Ethernet. d-Matrix’s Corsair accelerators and JetStream I/O accelerators are central to the solution, providing ultra-low latency and high throughput. The integration of Supermicro’s AI servers, Arista’s Ethernet switches, and Broadcom’s PCIe and Ethernet switch chips further enhances performance and scalability. According to Sid Sheth, CEO of d-Matrix, SquadRack represents a significant advancement in making AI infrastructure commercially viable at scale. The solution will be available for purchase through Supermicro in the first quarter of 2026.

Disclaimer & DisclosureReport an Issue

1