UpGuard has published new research warning that widespread misconfiguration of AI coding agents is creating material cyber, supply chain, and data exposure risks for enterprises. Analyzing more than 18,000 AI agent configuration files on GitHub, the company found that roughly 20% of developers allow these tools to read, write, and delete local files and download web content without human review, effectively granting high-risk system-level permissions to opaque automation.
UpGuard reports that one in five developers permits unrestricted file deletion and nearly 20% let AI agents commit code directly to main repositories, opening a path for prompt injection attacks, silent code tampering, and large-scale compromise of both proprietary and open-source software. The firm also identified widespread arbitrary code execution permissions for Python and Node.js agents, and extensive typosquatting in the Model Context Protocol ecosystem, where up to 15 unverified lookalike servers shadow a single verified vendor, amplifying impersonation risk and slowing incident response.
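The risky permission patterns described above can be illustrated with a minimal audit sketch. The flag names below are hypothetical, since real agent configuration schemas vary by vendor; the point is that a simple scan over configuration files can surface the kinds of overly broad grants UpGuard flags.

```python
import json

# Hypothetical permission keys for illustration only; actual AI agent
# config schemas differ across tools and vendors.
RISKY_FLAGS = {
    "allow_file_delete": "unrestricted local file deletion",
    "allow_web_fetch": "downloads web content without human review",
    "auto_commit_to_main": "commits directly to the main branch",
    "arbitrary_code_execution": "runs arbitrary Python/Node.js code",
}

def audit_agent_config(config: dict) -> list[str]:
    """Return a human-readable finding for each risky flag enabled."""
    return [desc for flag, desc in RISKY_FLAGS.items() if config.get(flag)]

# Example: a config granting deletion and direct-to-main commit rights.
sample = json.loads('{"allow_file_delete": true, "auto_commit_to_main": true}')
for finding in audit_agent_config(sample):
    print("RISK:", finding)
```

A scanner along these lines, run across a repository estate, is essentially what turns scattered developer config files into the aggregate exposure statistics the research reports.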
Greg Pollock, UpGuard’s Director of Research and Insights, said security teams currently lack visibility into which systems and data these agents touch, turning developer productivity shortcuts into potential breach vectors and credential exposure events. The findings directly reinforce the strategic positioning of UpGuard’s Breach Risk solution, which is pitched as a governance and monitoring layer for AI-driven workflows, translating misconfigurations and overly broad permissions into actionable signals that CISOs and risk leaders can track and remediate.
For executives, the research underscores that AI-assisted development should now be treated as a first-order supply chain and insider risk domain, not just a tooling choice left to engineering teams. UpGuard is leveraging this dataset to frame a broader market message: organizations will need centralized oversight of AI agents, their permissions, and their code changes across vendors and internal environments, creating a clear demand tailwind for platforms that provide continuous cyber risk posture management and AI-aware breach detection.

