
Multiverse Computing Launches Ultra-Compressed LittleLamb Models to Drive Edge and Mobile AI

Multiverse Computing has expanded its compressed AI model portfolio with the LittleLamb family, three 0.3B-parameter open-source models built on a compressed version of Qwen3-0.6B and released for free on Hugging Face. Using the company’s CompactifAI technology, the models cut the base architecture size by roughly 50%, targeting cost- and compute-sensitive deployments across edge, mobile, and offline environments.

The three variants—LittleLamb 0.3B, 0.3B Tool-Calling, and 0.3B Mobile—share a ~0.3B footprint, support English and Spanish, and offer dual inference modes to trade off deeper reasoning against lower latency. Multiverse reports that the general and tool-calling versions outperform the original Qwen3-0.6B and models in the Gemma 270M class on HLE benchmarks, while the mobile variant improves accuracy on mobile action tasks.

CompactifAI, based on quantum-inspired tensor network techniques, can shrink model size by up to 95% with an estimated 2–3% precision loss, a sharp contrast with conventional compression approaches that often forfeit 20–30% accuracy at similar ratios. This positions Multiverse to serve enterprises needing on-device intelligence where privacy, latency, or limited connectivity rule out large cloud-based models, potentially broadening its addressable market in industrial, financial, and telecom use cases.
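The practical effect of the reported ~50% parameter reduction can be illustrated with back-of-envelope arithmetic on weight storage. The byte counts below are illustrative assumptions (16-bit weights, decimal gigabytes), not figures from Multiverse or Hugging Face:

```python
# Illustrative arithmetic: approximate weight-storage footprint at 16-bit
# precision, before and after a ~50% parameter reduction. All numbers are
# back-of-envelope assumptions, not vendor measurements.

def weight_footprint_gb(params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return params * bytes_per_param / 1e9

base = weight_footprint_gb(0.6e9)        # Qwen3-0.6B at fp16: ~1.2 GB
compressed = weight_footprint_gb(0.3e9)  # LittleLamb 0.3B at fp16: ~0.6 GB
savings = 1 - compressed / base          # ~50% reduction

print(f"base: {base:.1f} GB, compressed: {compressed:.1f} GB, savings: {savings:.0%}")
```

At these sizes the compressed weights fit comfortably in the memory budget of a typical mid-range phone, which is what makes the edge and mobile positioning plausible.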

CEO Enrique Lizaso Olmos framed LittleLamb as proof that compact models can deliver more than simple chat, enabling advanced reasoning in constrained settings without major performance sacrifices. Strategically, the launch deepens Multiverse’s push into edge-native AI and strengthens its open-source footprint, which can accelerate developer adoption and funnel enterprise demand for commercial support and customized compression.

All three models, along with technical documentation, benchmarks, and integration guides, are available on the company’s Hugging Face page, signaling a deliberate move to lower integration friction for developers and system integrators. With more than 100 global customers already, the LittleLamb release is likely to serve both as a showcase for CompactifAI’s capabilities and as a lead generator for broader AI compression and deployment projects.
