Deccan AI continued sharpening its focus on production-grade and reliability-centric AI this week, highlighted by the hire of senior machine learning specialist Ankit Khedia. With more than a decade at Meta, Google, and Amazon Web Services, he brings experience in vision, multimodal diffusion, and low-latency on-device LLMs for voice.
Khedia will lead efforts around data pipelines, curation, and evaluation loops, reflecting Deccan AI’s view that training data quality and constraint design are key drivers of real-world robustness. The company is also expanding its team, with new roles centered on training and fine-tuning data for LLMs and diffusion models.
Deccan AI also published new research on constraint-driven risks to production LLM reliability, introducing an instruction-following benchmark built from 278 expert-crafted prompts. Tests on GPT 5.2 High and Gemini 3.0 Pro showed failure rates of 28–40% on sentence-counting tasks, versus 1–6% for prose and list outputs, suggesting that constraint type can matter more than model choice.
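To make the sentence-counting constraint concrete, a minimal checker of the kind such a benchmark might use can be sketched as follows. This is an illustrative example only, not Deccan AI's actual evaluation code: it naively splits a model response on sentence-ending punctuation and compares the count against an "exactly N sentences" requirement.

```python
import re

def count_sentences(text: str) -> int:
    """Naive sentence count: split on runs of ., !, or ? followed by
    whitespace or end of string, discarding empty fragments."""
    parts = re.split(r"[.!?]+(?:\s+|$)", text.strip())
    return len([p for p in parts if p.strip()])

def check_sentence_constraint(response: str, required: int) -> bool:
    """Return True if the response satisfies an 'exactly N sentences' constraint."""
    return count_sentences(response) == required

# Example: a response that should contain exactly three sentences.
response = "The system is stable. Latency is low. No errors were logged."
print(check_sentence_constraint(response, 3))  # True
print(check_sentence_constraint(response, 4))  # False
```

Even this toy version hints at why such constraints are hard for models: the checker's pass/fail rule is exact, so an output that is fluent but one sentence long or short still counts as a failure.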
This benchmark targets stacked and sometimes contradictory constraints common in enterprise prompts and could underpin tooling for safety, compliance, and workflow orchestration. If adopted, it may position Deccan AI as a specialist in evaluation and risk management for high-stakes AI deployments across regulated industries.
The company further highlighted its presence at ICLR 2026 in Rio de Janeiro, emphasizing work on agentic systems, post-training alignment, physical AI, and rigorous evaluation under its “Superaccuracy” thesis. Recent R&D has centered on post-training methods, reinforcement learning environments, and physical AI datasets, indicating ongoing investment in advanced applied research.
Deccan AI also convened practitioners and global capability center leaders to discuss governance, trust, and gaps between AI expectations and outcomes. By engaging directly with enterprise stakeholders on system design trade-offs and governance challenges, the firm is reinforcing its positioning around robust, trustworthy, production-grade AI solutions.
Collectively, the week’s developments underscore Deccan AI’s strategy of combining data-centric engineering, reliability research, and thought leadership to deepen its role in enterprise AI infrastructure. This integrated approach may strengthen its competitive standing as organizations prioritize dependable, evaluable AI over experimental deployments.

